Feature #6983
Add option to compress log files
0%
Description
The <entry>.info.log files compress from 100MB down to about 5MB. We run on SAN based VM's where disk isn't free so it would be really nice if glideinwms would compress the log file when it rotates them. We'd like to keep a long period of log files for debugging. We have about a dozen entry points for various reasons so when everything gets multiplied out compression would be helpful.
History
#1 Updated by Joe Boyd almost 6 years ago
This request started from ticket INC000000445637 where Gerard logged:
I'd be happy to see how GWMS relies on standard logrotate daemon and
provides a /etc/logrotate.d/gwms* file where Ops teams can setup the
desired rotation policy.
It would simplify both GWMS code and its management. After looking a bit
this morning I believe Condor allows to do that if one disables condor
rotation and uses copytruncate option in logrotate. Perhaps we could do the
same with GWMS.
Gerard
I don't care how it's done but I think compression of log files is a reasonable request and we shouldn't roll our own.
#2 Updated by Parag Mhashilkar almost 6 years ago
- Target version set to v3_2_x
#3 Updated by Parag Mhashilkar almost 6 years ago
- Assignee set to Parag Mhashilkar
- Target version changed from v3_2_x to v3_2_8
#4 Updated by Marco Mambelli almost 6 years ago
- Assignee changed from Parag Mhashilkar to Marco Mambelli
#5 Updated by Marco Mambelli over 5 years ago
Possible alternatives:
1. support an encoding for GlideinHandler. BaseRotatingHandler supports bz2 (encoding='bz2-codec' - only Python2, python3 has custom logrotators):
class GlideinHandler(BaseRotatingHandler):
    ...
    def __init__(self, filename, maxDays=1, minDays=0, maxMBytes=10, backupCount=5, encoding=None):
        ...
        BaseRotatingHandler.__init__(self, filename, mode, encoding)
    ...

def add_processlog_handler(logger_name, log_dir, msg_types, extension, maxDays, minDays, maxMBytes, backupCount=5, encoding=None):
    ...
    handler = GlideinHandler(logfile, maxDays, minDays, maxMBytes, backupCount, encoding)
2. add manual compression for the files importing gzip or zipfile, e.g.:
def doRollover(self):
    ...
    if self.doCompress:
        if os.path.exists(dfn + ".zip"):
            os.remove(dfn + ".zip")
        # ZipFile takes "w", not "wb"
        zf = zipfile.ZipFile(dfn + ".zip", "w")
        zf.write(dfn, os.path.basename(dfn), zipfile.ZIP_DEFLATED)
        zf.close()
        os.remove(dfn)
This would still require the same parameter passing as 1.
Then, in both 1 and 2, calls to logSupport.add_processlog_handler should access the configuration to decide whether to use compression or not (similar to the other logging parameters).
3. let logrotate handle log rotation. It is standard and will make RHEL/SL sysadmins happy, but not all systems will have it.
Check how the logging works to see if log files are opened and closed each time or some trigger command is needed.
Options to handle this: copytruncate, prescript, postscript, delaycompress
I need some feedback on which alternative would be better.
Thanks,
Marco
#6 Updated by Marco Mambelli over 5 years ago
Committed a first version (needs to be tested).
In the unit test, compressed files are verified by checking the file name (extension), not the actual content.
Marco
#7 Updated by Marco Mambelli over 5 years ago
For testing I created a big file:
And I added the following lines.
On a RH5 factory: # to print a lot #MMDB
logSupport.log.info("################### Printing filling file:")
#with open('/opt/file-1m', 'r') as f:
f = open('/opt/file-1m', 'r')
s = f.read()
logSupport.log.info(s)
f.close()
Before the log update in the loop:
# Aggregate Monitoring data periodically
logSupport.log.info("Aggregate monitoring data")
aggregate_stats(factory_downtimes.checkDowntime())
And in a RH6 frontend:
# to print a lot #MMDB
logSupport.log.info("################### Printing filling file:")
with open('/opt/file-1m', 'r') as f:
s = f.read()
logSupport.log.info(s)
Before the log update in the loop:
logSupport.log.info("Aggregate monitoring data") # KEL - can we just call the monitor aggregator method directly? see above
aggregate_stats()
I added a note to the documentation that the max_byte value is truncated (whenever read). Any value <1 will cause no rotation.
Marco
#8 Updated by Marco Mambelli over 5 years ago
- Status changed from New to Feedback
- Assignee changed from Marco Mambelli to Parag Mhashilkar
ready for review
#9 Updated by Marco Mambelli over 5 years ago
I forgot to add that, to make sure no strange chars were used, I generated the test files used to fill the log files with:
< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c 1M > /opt/file-1m
#10 Updated by Parag Mhashilkar over 5 years ago
- Assignee changed from Parag Mhashilkar to Marco Mambelli
sent feedback separately.
#11 Updated by Marco Mambelli over 5 years ago
- Status changed from Feedback to Resolved
merged
#12 Updated by Parag Mhashilkar over 5 years ago
- Status changed from Resolved to Closed
Cargo is the Rust package manager. Cargo downloads your Rust package’s dependencies, compiles your packages, makes distributable packages, and uploads them to crates.io, the Rust community’s package registry. You can contribute to this book on GitHub.
Sections
Getting Started
To get started with Cargo, install Cargo (and Rust) and set up your first crate.
Cargo Guide
The guide will give you all you need to know about how to use Cargo to develop Rust packages.
Cargo Reference
Getting Started
To get started with Cargo, install Cargo (and Rust) and set up your first crate.
Installation
First steps with Cargo
Installation
The easiest way to get Cargo is to install the current stable release of Rust by using rustup . Installing Rust using rustup will also install cargo .
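On Linux and macOS this is typically a one-line shell command (a sketch, assuming the standard rustup installer at sh.rustup.rs):

$ curl https://sh.rustup.rs -sSf | sh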
It will download a script, and start the installation. If everything goes well, you’ll see this appear:
On Windows, download and run rustup-init.exe. It will start the installation in a console and present the above message on success.
After this, you can use the rustup command to also install beta or nightly channels for Rust and Cargo.
For other installation options and information, visit the install page of the Rust website.
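To start a new package (the first step the walkthrough below assumes), use cargo new:

$ cargo new hello_world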
Cargo defaults to --bin to make a binary program. To make a library, we'd pass --lib .
$ cd hello_world
$ tree .
.
├── Cargo.toml
└── src
    └── main.rs
1 directory, 2 files
This is all we need to get started. First, let’s check out Cargo.toml :
[package] name = "hello_world" version = "0.1.0" authors = ["Your Name <[email protected]>"] edition = "2018"
[dependencies]
This is called a manifest, and it contains all of the metadata that Cargo needs to compile your package.
fn main() {
    println!("Hello, world!");
}
$ cargo build Compiling hello_world v0.1.0 ()
$ ./target/debug/hello_world Hello, world!
We can also use cargo run to compile and then run it, all in one step:
$ cargo run
     Fresh hello_world v0.1.0 ()
   Running `target/hello_world`
Hello, world!
Going further
For more details on using Cargo, check out the Cargo Guide.
Cargo Guide
This guide will give you all that you need to know about how to use Cargo to develop Rust packages.
We're passing --bin because we're making a binary program: if we were making a library, we'd pass --lib . This also initializes a new git repository by default. If you don't want it to do that, pass --vcs none .
Cargo has also created a new file, Cargo.lock . It contains information about our dependencies. Since we don't have any yet, it's not very interesting.
Once you're ready for release, you can use cargo build --release to compile your files with optimizations turned on:
Compiling in debug mode is the default for development: compilation time is shorter since the compiler doesn't do optimizations, but the code will run slower. Release mode takes longer to compile, but the code will run faster.
First, get the package from somewhere. In this example, we’ll use rand cloned from its repository on GitHub:
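A sketch of that step (assuming the rust-lang-nursery/rand repository referenced in the lock file shown later in this guide):

$ git clone https://github.com/rust-lang-nursery/rand.git
$ cd rand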
$ cargo build Compiling rand v0.1.0 ()
This will fetch all of the dependencies and then build them, along with the package.
Dependencies
crates.io is the Rust community's central package registry that serves as a location to discover and download packages. cargo is configured to use it by default to find requested packages.
Adding a dependency
If your Cargo.toml doesn't already have a [dependencies] section, add that, then list the crate name and version that you would like to use. This example adds a dependency of the time crate:
[dependencies]
time = "0.1.12"

Here is what a complete Cargo.toml file would look like with dependencies on the time and regex crates:
[dependencies]
time = "0.1.12"
regex = "0.1.41"
Re-run cargo build , and Cargo will fetch the new dependencies and all of their dependencies, compile them all, and update the Cargo.lock :
$ cargo build
    Updating crates.io index
use regex::Regex;
fn main() {
    let re = Regex::new(r"^\d{4}-\d{2}-\d{2}$").unwrap();
    println!("Did our date match? {}", re.is_match("2014-01-01"));
}
$ cargo run
   Running `target/hello_world`
Did our date match? true
Package Layout
Cargo uses conventions for file placement to make it easy to dive into a new Cargo package:
Cargo.toml and Cargo.lock are stored in the root of your package (package root).
Source code goes in the src directory.
The default library file is src/lib.rs .
The default executable file is src/main.rs .
Other executables can be placed in src/bin/*.rs .
Integration tests go in the tests directory (unit tests go in each file they're testing).
Examples go in the examples directory.
Benchmarks go in the benches directory.
Cargo.toml vs Cargo.lock
Cargo.toml and Cargo.lock serve two different purposes. Before we talk about them, here's a summary:
If you're building a non-end product, such as a Rust library that other Rust packages will depend on, put Cargo.lock in your .gitignore . If you're building an end product, which are executables like a command-line tool or an application, or a system library with crate-type of staticlib or cdylib , check Cargo.lock into git . If you're curious about why that is, see "Why do binaries have Cargo.lock in version control, but not libraries?" in the FAQ.
[package] name = "hello_world" version = "0.1.0" authors = ["Your Name <[email protected]>"]
[dependencies]
rand = { git = "https://github.com/rust-lang-nursery/rand.git" }
This package has a single dependency, on the rand library. We’ve stated in this case that we’re relying on a particular Git repository that lives on GitHub. Since we haven’t speci ed any other information, Cargo assumes that we intend to use the latest commit on the master branch to build our package.
Sound good? Well, there's one problem: If you build this package today, and then you send a copy to me, and I build this package tomorrow, something bad could happen. There could be more commits to rand in the meantime, and my build would include new commits while yours would not. Therefore, we would get different builds. This would be bad because we want reproducible builds.
Cargo will take the latest commit and write that information out into our Cargo.lock when we build for the first time. That file will look like this:
[[package]]
name = "hello_world"
version = "0.1.0"
dependencies = [
 "rand 0.1.0 (git+https://github.com/rust-lang-nursery/rand.git#9f35b8e439eeedd60b9414c58f389bdc6a3284f9)",
]

[[package]]
name = "rand"
version = "0.1.0"
source = "git+https://github.com/rust-lang-nursery/rand.git#9f35b8e439eeedd60b9414c58f389bdc6a3284f9"
You can see that there’s a lot more information here, including the exact revision we used to build. Now when you give your package to someone else, they’ll use the exact same SHA, even though we didn’t specify it in our Cargo.toml .
When we’re ready to opt in to a new version of the library, Cargo can re-calculate the dependencies and update things for us:
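A typical invocation (a sketch; -p limits the update to one package):

$ cargo update -p rand    # updates just `rand`
$ cargo update            # updates everything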
This will write out a new Cargo.lock with the new version information. Note that the argument to cargo update is actually a Package ID Specification and rand is just a short specification.
Tests
Cargo can run your tests with the cargo test command. Cargo looks for tests to run in two places: in each of your src files and any tests in tests/ . Tests in your src files should be unit tests, and tests in tests/ should be integration-style tests. As such, you'll need to import your crates into the files in tests .
Here's an example of running cargo test in our package, which currently has no tests:
$ cargo test
   Compiling rand v0.1.0 ()
   Compiling hello_world v0.1.0 ()
     Running target/test/hello_world-9c2b65bbb79eabce
running 0 tests
If our package had tests, we would see more output with the correct number of tests.
cargo test runs additional checks as well. For example, it will compile any examples you’ve included and will also test the examples in your documentation. Please see the testing guide in the Rust documentation for more details.
Continuous Integration
Travis CI
language: rust
rust:
  - stable
  - beta
  - nightly
matrix:
  allow_failures:
    - rust: nightly
This will test all three release channels, but any breakage in nightly will not fail your overall build. Please see the Travis CI Rust documentation for more information.
GitLab CI
stages:
  - build

rust-latest:
  stage: build
  image: rust:latest
  script:
    - cargo build --verbose
    - cargo test --verbose

rust-nightly:
  stage: build
  image: rustlang/rust:nightly
  script:
    - cargo build --verbose
    - cargo test --verbose
  allow_failure: true
This will test on the stable channel and nightly channel, but any breakage in nightly will not fail your overall build. Please see the GitLab CI documentation for more information.
builds.sr.ht
To test your package on sr.ht, here is a sample .build.yml file. Be sure to change <your repo> and <your project> to the repo to clone and the directory where it was cloned.
image: archlinux
packages:
  - rustup
sources:
  - <your repo>
tasks:
  - setup: |
      rustup toolchain install nightly stable
      cd <your project>/
      rustup run stable cargo fetch
  - stable: |
      rustup default stable
      cd <your project>/
      cargo build --verbose
      cargo test --verbose
  - nightly: |
      rustup default nightly
      cd <your project>/
      cargo build --verbose ||:
      cargo test --verbose ||:
  - docs: |
      cd <your project>/
      rustup run stable cargo doc --no-deps
      rustup run nightly cargo doc --no-deps ||:
This will test and build documentation on the stable channel and nightly channel, but any breakage in nightly will not fail your overall build. Please see the builds.sr.ht documentation for more information.
Cargo Home
The Cargo home directory functions as a download and source cache; by default it is located at $HOME/.cargo .
Files:
config: Cargo's global configuration file; see the config entry in the reference.
credentials: Private login credentials from cargo login in order to log in to a registry.
.crates.toml: This hidden file contains package information of crates installed via cargo install . Do NOT edit by hand!
Directories:
bin: The bin directory contains executables of crates that were installed via cargo install or rustup . To be able to make these binaries accessible, add the path of the directory to your $PATH environment variable.
git: Git sources are stored here:
git/db: When a crate depends on a git repository, Cargo clones the repo as a bare repo into this directory and updates it if necessary.
git/checkouts: If a git source is used, the required commit of the repo is checked out from the bare repo inside git/db into this directory. This provides the compiler with the actual files contained in the repo of the commit specified for that dependency. Multiple checkouts of different commits of the same repo are possible.
registry: Packages and metadata of crate registries (such as crates.io) are located here.
registry/index: The index is a bare git repository which contains the metadata (versions, dependencies, etc.) of all available crates of a registry.
registry/cache: Downloaded dependencies are stored in the cache. The crates are compressed gzip archives named with a .crate extension.
registry/src: If a downloaded .crate archive is required by a package, it is unpacked into the registry/src folder where rustc will find the .rs files.
bin/
registry/index/
registry/cache/
git/db/
Alternatively, the cargo-cache crate provides a simple CLI tool to only clear selected parts of the cache or show sizes of its components in your command-line.
Build cache
Cargo shares build artifacts among all the packages of a single workspace. Today, Cargo does not share build results across different workspaces, but a similar result can be achieved by using a third-party tool such as sccache (for example by setting the RUSTC_WRAPPER environment variable to sccache before calling Cargo). Alternatively, you can set build.rustc-wrapper in the Cargo configuration. Refer to the sccache documentation for more details.
Cargo Reference
The reference covers the details of various areas of Cargo.
Specifying Dependencies
The Manifest Format
Configuration
Environment Variables
Build Scripts
Publishing on crates.io
Specifying Dependencies
Your crates can depend on other libraries from crates.io or other registries, git repositories, or subdirectories on your local file system. You can also temporarily override the location of a dependency, for example, to be able to test out a bug fix in the dependency that you are working on locally. You can have different dependencies for different platforms, and dependencies that are only used during development. Let's take a look at how to do each of these.
Cargo is configured to look for dependencies on crates.io by default. Only the name and a version string are required in this case. In the cargo guide, we specified a dependency on the time crate:

The string "0.1.12" is a semver version requirement. Since this string does not have any operators in it, it is interpreted the same way as if we had specified "^0.1.12" , which is called a caret requirement.
Caret requirements
Here are some more examples of caret requirements and the versions that would be allowed with them:
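These follow Cargo's standard caret semantics (same notation as the wildcard examples below):

^1.2.3 := >=1.2.3 <2.0.0
^1.2   := >=1.2.0 <2.0.0
^1     := >=1.0.0 <2.0.0
^0.2.3 := >=0.2.3 <0.3.0
^0.0.3 := >=0.0.3 <0.0.4
^0.0   := >=0.0.0 <0.1.0
^0     := >=0.0.0 <1.0.0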
This compatibility convention is different from SemVer in the way it treats versions before 1.0.0. While SemVer says there is no compatibility before 1.0.0, Cargo considers 0.x.y to be compatible with 0.x.z , where y ≥ z and x > 0.
Wildcard requirements
Wildcard requirements allow for any version where the wildcard is positioned.
*     := >=0.0.0
1.*   := >=1.0.0 <2.0.0
1.2.* := >=1.2.0 <1.3.0
Comparison requirements
Comparison requirements allow manually specifying a version range or an exact version to depend on. Here are some examples:
>= 1.2.0
> 1
< 2
= 1.2.3
Multiple requirements
Multiple version requirements can also be separated with a comma, e.g., >= 1.2, < 1.5 .
To specify a dependency from a registry other than crates.io, first the registry must be configured in a .cargo/config file. See the registries documentation for more information. In the dependency, set the registry key to the name of the registry to use.
[dependencies]
some-crate = { version = "1.0", registry = "my-registry" }

To depend on a library located in a git repository, specify the location with the git key; a branch , tag , or rev can also be given. The example below shows a dependency on the branch specified as next :
[dependencies]
rand = { git = "", branch = "next" }
Over time, our hello_world package from the guide has grown significantly in size! It's gotten to the point that we probably want to split out a separate crate for others to use. To
do this Cargo supports path dependencies which are typically sub-crates that live within one repository. Let's start off by making a new crate inside of our hello_world package:
# inside of hello_world/
$ cargo new hello_utils
This will create a new folder hello_utils , inside of which a Cargo.toml and src folder are ready to be configured.
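To wire the new crate into the parent package, hello_world's Cargo.toml would then gain a path dependency; a minimal sketch (assuming the layout above) is:

[dependencies]
hello_utils = { path = "hello_utils" }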
Overriding Dependencies
The desire to override a dependency usually comes down to working with a crate before a change to it has been published to crates.io. For example: you'd like to test a bug fix to a library inside of a larger application that uses it; an upstream crate you don't work on has a new feature or a bug fix you'd like to try; or you've submitted a fix to an upstream crate for a bug you found, but you'd like to immediately have your application start depending on the fixed version of the crate to avoid blocking on the bug fix getting merged.
These scenarios are currently all solved with the [patch] manifest section. Historically some of these scenarios have been solved with the [replace] section, but we'll document [patch] here.
Testing a bug fix
Let's say you're working with the uuid crate but while you're working on it you discover a bug. You are, however, quite enterprising so you decide to also try to fix the bug! Originally your manifest will look like:
[package] name = "my-library" version = "0.1.0" authors = ["..."]
[dependencies] uuid = "1.0"
[patch.crates-io]
uuid = { path = "../path/to/uuid" }
Here we declare that we're patching the source crates-io with a new dependency. This will effectively add the local checked out version of uuid to the crates.io registry for our local package.
Next up we need to ensure that our lock file is updated to use this new version of uuid so that our package uses the locally checked out copy instead of the one from crates.io. Note that the version number of the local checkout is significant and will affect whether the patch is used!
$ cargo build
   Compiling uuid v1.0.0 (.../uuid)
   Compiling my-library v0.1.0 (.../my-library)
    Finished dev [unoptimized + debuginfo] target(s) in 0.32 secs
Let's now shift gears a bit from bug fixes to adding new features.
[dependencies] uuid = "1.0.1": 20/22618/01/2020 The Cargo Book
[package] name = "my-binary" version = "0.1.0" authors = ["..."]
[dependencies]
my-library = { git = '' }
uuid = "1.0"
Remember that [patch] is applicable transitively but can only be defined at the top level.
In case the dependency you want to override isn't loaded from crates.io , you'll have to change a bit how you use [patch] :
[patch.""] my-library = { path = "../my-library/path" }
As a final scenario, let's take a look at working with a new major version of a crate, typically one accompanied by breaking changes.
Sometimes you're only temporarily working on a crate and you don't want to have to modify Cargo.toml like with the [patch] section above. For this use case Cargo offers a much more limited version of overrides called path overrides.
Path overrides are specified through .cargo/config instead of Cargo.toml , and you can find more documentation about this configuration. Inside of .cargo/config you'll specify a key called paths :
paths = ["/path/to/uuid"]
This array should be filled with directories that contain a Cargo.toml . Note that path overrides cannot change the structure of the dependency graph: the previous set of dependencies must all match the new Cargo.toml specification. For example, this means that path overrides cannot be used to test out adding a dependency to a crate; instead, [patch] must be used in that situation. As a result, usage of a path override is typically isolated to quick bug fixes rather than larger changes.
Note: using a local configuration to override paths will only work for crates that have been published to crates.io. You cannot use this feature to tell Cargo how to find local unpublished crates.
Platform-specific dependencies take the same format, but are listed under a target section. Normally Rust-like #[cfg] syntax will be used to define these sections:
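A sketch of what such target sections look like (using the same crates as the custom-target examples below):

[target.'cfg(windows)'.dependencies]
winhttp = "0.4.0"

[target.'cfg(unix)'.dependencies]
openssl = "1.0.1"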
If you want to know which cfg targets are available on your platform, run rustc --print=cfg from the command line. If you want to know which cfg targets are available for another platform, such as 64-bit Windows, run rustc --print=cfg --target=x86_64-pc-windows-msvc .
If you're using a custom target specification, quote the full path and file name:
[target."x86_64/windows.json".dependencies] winhttp = "0.4.0"
[target."i686/linux.json".dependencies] openssl = "1.0.1" native = { path = "native/i686" }
[target."x86_64/linux.json".dependencies] openssl = "1.0.1" native = { path = "native/x86_64" }
Development dependencies
A [dev-dependencies] section lists dependencies that are only used for compiling tests, examples, and benchmarks, and are not propagated to packages that depend on this one.
If a package you depend on offers conditional features, you can specify which to use:
[dependencies.awesome]
version = "1.3.5"
default-features = false # do not include the default features, and optionally
                         # cherry-pick individual features
features = ["secure-password", "civet"]
When writing a [dependencies] section in Cargo.toml the key you write for a dependency typically matches up to the name of the crate you import from in the code. For some projects, though, you may wish to reference the crate with a different name in the code regardless of how it's published on crates.io. For example you may wish to: avoid the need for use foo as bar in Rust source, depend on multiple versions of a crate, or depend on crates with the same name from different registries. An example of using the package key for this is sketched below.
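A sketch of the kind of manifest the next paragraph describes (the git URL and registry name are placeholders):

[package]
name = "mypackage"
version = "0.0.1"

[dependencies]
foo = "0.1"
bar = { git = "https://github.com/example/project", package = "foo", version = "0.1" }
baz = { version = "0.1", registry = "custom", package = "foo" }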
All three of these crates have the package name of foo in their own Cargo.toml , so we're explicitly using the package key to inform Cargo that we want the foo package even though we're calling it something else locally. The package key, if not specified, defaults to the name of the dependency being requested.
The Manifest Format
The Cargo.toml file for each package is called its manifest. The first section in a Cargo.toml is [package] :
[package] name = "hello_world" # the name of the package version = "0.1.0" # the current version, obeying semver authors = ["Alice <[email protected]>", "Bob <[email protected]>"]
The package name is an identifier used to refer to the package. It is used when listed as a dependency in another package, and as the default name of inferred lib and bin targets.
The name must not be empty, and may use only alphanumeric characters or - or _ . Note that cargo new and cargo init impose some additional restrictions on the package name, such as enforcing that it is a valid Rust identifier and not a keyword. crates.io imposes even more restrictions, such as enforcing only ASCII characters, not a reserved name, not a special Windows name such as "nul", is not too long, etc.
Cargo bakes in the concept of Semantic Versioning, so make sure you follow some basic rules:
Before you reach 1.0.0, anything goes, but if you make breaking changes, increment the minor version. In Rust, breaking changes include adding fields to structs or variants to enums.
After 1.0.0, only make breaking changes when you increment the major version. Don't break the build.
After 1.0.0, don't add any new public API (no new pub anything) in patch-level versions. Always increment the minor version if you add any new pub structs, traits, fields, types, functions, methods or anything else.
Use version numbers with three numeric parts such as 1.0.0 rather than 1.0.
The authors field lists people or organizations that are considered the "authors" of the package. The exact meaning is open to interpretation: it may list the original or primary authors, current maintainers, or owners of the package. These names will be listed on the
crate's page on crates.io. An optional email address may be included within angled brackets at the end of each author.
You can opt in to a specific Rust Edition for your package with the edition key in Cargo.toml . If you don't specify the edition, it will default to 2015.
[package]
# ...
edition = '2018'
The edition key affects which edition your package is compiled with. Cargo will always generate packages via cargo new with the edition key set to the latest edition. Setting the edition key in [package] will affect all targets/crates in the package, including test suites, benchmarks, binaries, examples, etc.
This field specifies a file in the package root which is a build script for building native code. More information can be found in the build script guide.
[package]
# ...
build = "build.rs"
This field specifies the name of a native library that is being linked to. More information can be found in the links section of the build script guide.
[package]
# ...
links = "foo"
build = "build.rs"
This field specifies a URL to a website hosting the crate's documentation. If no URL is specified in the manifest file, crates.io will automatically link your crate to the corresponding docs.rs page.
Documentation links from specific hosts are blacklisted. Hosts are added to the blacklist if they are known to not be hosting documentation and are possibly of malicious intent, e.g., ad tracking networks. URLs from the following hosts are blacklisted:
rust-ci.org
Documentation URLs from blacklisted hosts will not appear on crates.io, and may be replaced by docs.rs links.
You can explicitly specify that a set of file patterns should be ignored or included for the purposes of packaging. The patterns specified in the exclude field identify a set of files that are not included, and the patterns in include specify files that are explicitly included.
foo matches any file or directory with the name foo anywhere in the package. This is equivalent to the pattern **/foo .
/foo matches any file or directory with the name foo only in the root of the package.
foo/ matches any directory with the name foo anywhere in the package.
Common glob patterns like * , ? , and [] are supported:
* matches zero or more characters except / . For example, *.html matches any file or directory with the .html extension anywhere in the package.
? matches any character except / . For example, foo? matches food , but not foo .
[] allows for matching a range of characters. For example, [ab] matches either a or b . [a-z] matches letters a through z.
**/ prefix matches in any directory. For example, **/foo/bar matches the file or directory bar anywhere that is directly under directory foo .
/** suffix matches everything inside. For example, foo/** matches all files inside directory foo , including all files in subdirectories below foo .
/**/ matches zero or more directories. For example, a/**/b matches a/b , a/x/b , a/x/y/b , and so on.
! prefix negates a pattern. For example, a pattern of src/**.rs and !foo.rs would match all files with the .rs extension inside the src directory, except for any file named foo.rs .
If git is being used for a package, the exclude field will be seeded with the gitignore settings from the repository.
[package]
# ...
exclude = ["build/**/*.o", "doc/**/*.html"]

[package]
# ...
include = ["src/**/*", "Cargo.toml"]
The options are mutually exclusive: setting include will override an exclude . Note that include must be an exhaustive list of files, as otherwise necessary source files may not be included. The package's Cargo.toml is automatically included.
The include/exclude list is also used for change tracking in some situations. For targets built with rustdoc , it is used to determine the list of files to track to determine if the target should be rebuilt. If the package has a build script that does not emit any rerun-if-* directives, then the include/exclude list is used for tracking if the build script should be re-run if any of those files change.
The publish field can be used to prevent a package from being published to a package registry (like crates.io) by mistake, for instance to keep a package private in a company.
[package]
# ...
publish = false
The value may also be an array of strings which are registry names that are allowed to be published to.
[package]
# ...
publish = ["some-registry-name"]
The workspace field can be used to configure the workspace that this package will be a member of. If not specified this will be inferred as the first Cargo.toml with [workspace] upwards in the filesystem.
[package]
# ...
workspace = "path/to/workspace/root"
For more information, see the documentation for the workspace table below.
Package metadata
There are a number of optional metadata fields also accepted under the [package] section:
= "..." 30/22618/01/2020 The Cargo Book
# can describe why, there could be a better solution available or there could be
#   problems with the crate that the author does not want to fix).
# - `none`: Displays no badge on crates.io, since the maintainer has not chosen to
#   specify their intentions; potential crate users will need to investigate on
#   their own.
maintenance = { status = "..." }
The crates.io registry will render the description, display the license, link to the three URLs, and categorize by the keywords. These keys provide useful information to users of the registry and also influence the search ranking of a crate. It is highly discouraged to omit everything in a published crate.
SPDX 2.1 license expressions are documented here. The current version of the license list is available here, and version 3.6 is available here.
Cargo by default will warn about unused keys in Cargo.toml to assist in detecting typos and such. The package.metadata table, however, is completely ignored by Cargo and will not be warned about. This section can be used for tools which would like to store package configuration in Cargo.toml . For example:
[package] name = "..." # ...
The default-run field in the [package] section of the manifest can be used to specify a default binary picked by cargo run . For example, when there is both src/bin/a.rs and src/bin/b.rs :
[package]
default-run = "a"
Dependency sections
See the specifying dependencies page for information on the [dependencies] , [dev-dependencies] , [build-dependencies] , and target-specific [target.*.dependencies] sections.
Cargo supports custom configuration of how rustc is invoked through profiles at the top level. Any manifest may declare a profile, but only the top level package's profiles are actually read. All dependencies' profiles will be overridden. This is done so the top-level package has control over how its dependencies are compiled.
There are four currently supported profile names, all of which have the same configuration available to them. Listed below is the configuration available, along with the defaults for each profile.
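For reference, the fourth profile is the development one (used for plain `cargo build`); a sketch of its defaults, not authoritative:

# The development profile, used for `cargo build`.
[profile.dev]
opt-level = 0
debug = true
rpath = false
lto = false
debug-assertions = true
codegen-units = 16
panic = 'unwind'
incremental = true
overflow-checks = true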
# The release profile, used for `cargo build --release` (and the dependencies
# for `cargo test --release`, including the local library or binary).
[profile.release]
opt-level = 3
debug = false
rpath = false
lto = false
debug-assertions = false
codegen-units = 16
panic = 'unwind'
incremental = false
overflow-checks = false
# The testing profile, used for `cargo test` (for `cargo test --release` see
# the `release` and `bench` profiles).
[profile.test]
opt-level = 0
debug = 2
rpath = false
lto = false
debug-assertions = true
codegen-units = 16
panic = 'unwind'
incremental = true
overflow-checks = true
# The benchmarking profile, used for `cargo bench` (and the test targets and
# unit tests for `cargo test --release`).
[profile.bench]
opt-level = 3
debug = false
rpath = false
lto = false
debug-assertions = false
codegen-units = 16
panic = 'unwind'
incremental = false
overflow-checks = false
[package] name = "awesome"
[features]
# The default set of optional packages. Most people will want to use these
# packages, but they are strictly optional. Note that `session` is not a package
# but rather another feature listed in this manifest.
default = ["jquery", "uglifier", "session"]
Rules
Feature names must not conflict with other package names in the manifest. This is because they are opted into via features = [...] , which only has a single namespace.
With the exception of the default feature, all features are opt-in. To opt out of the default feature, use default-features = false and cherry-pick individual features.
Note that it is explicitly allowed for features to not actually activate any optional dependencies. This allows packages to internally enable/disable features without requiring a new dependency.
One major use-case for this feature is specifying optional features in end-products. For example, the Servo package may want to include optional features that people can enable or disable when they build it.
In that case, Servo will describe features in its Cargo.toml and they can be enabled using command-line flags:
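A sketch of such an invocation (the feature names here are illustrative):

$ cargo build --release --features "shumway pdf"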
Usage in packages:
In almost all cases, it is an antipattern to use these features outside of high-level packages that are designed for curation. If a feature is optional, it can almost certainly be expressed as a separate package.
Packages can define a workspace which is a set of crates that will all share the same Cargo.lock and output directory. The [workspace] table can be defined as:
[workspace]
Workspaces were added to Cargo as part of RFC 1525 and have a number of properties:
A workspace can contain multiple crates where one of them is the root crate.
The root crate's Cargo.toml contains the [workspace] table, but is not required to have other configuration.
Whenever any crate in the workspace is compiled, output is placed in the workspace root (i.e., next to the root crate's Cargo.toml ).
The lock file for all crates in the workspace resides in the workspace root.
The [patch] , [replace] and [profile.*] sections in Cargo.toml are only recognized in the root crate's manifest, and ignored in member crates' manifests.
The root crate of a workspace, indicated by the presence of [workspace] in its manifest, is responsible for defining the entire workspace. All path dependencies residing in the workspace directory become members. You can add additional packages to the workspace by listing them in the members key. Note that members of the workspaces listed explicitly will also have their path dependencies included in the workspace. Sometimes a package may have a lot of workspace members and it can be onerous to keep up to date; the members list also supports glob patterns to match multiple paths. A crate that is not the root locates its workspace by searching for the first crate whose manifest contains [workspace] upwards in the filesystem, and picks up that workspace's configuration automatically.
Virtual Manifest
In workspace manifests, if the package table is present, the workspace root crate will be treated as a normal package, as well as a workspace. If the package table is not present in a workspace manifest, it is called a virtual manifest.
Package selection
[workspace]
members = ["path/to/member1", "path/to/member2", "path/to/member3/*"]
default-members = ["path/to/member2", "path/to/member3/foo"]
When default-members is not specified, the default is the root manifest if it is a package, or every member manifest (as if --workspace were specified on the command-line) for virtual workspaces.
Cargo will also treat any files located in src/bin/*.rs as executables. If your executable consists of more than just one source file, you might also use a directory inside src/bin containing a main.rs file which will be treated as an executable with a name of the parent directory.
Your package can optionally contain folders named examples , tests , and benches , which Cargo will treat as containing examples, integration tests, and benchmarks respectively. Analogous to bin targets, they may be composed of single files or directories with a main.rs file.
To structure your code after you've created the files and folders for your package, you should remember to use Rust's module system, which you can read about in the book.
See Configuring a target below for more details on manually configuring target settings. See Target auto-discovery below for more information on controlling how Cargo automatically infers targets. You can run individual executable examples with the command cargo run --example <example-name> .
[[example]] name = "foo" crate-type = ["staticlib"]
You can build individual library examples with the command cargo build --example <example-name> .
Tests
When you run cargo test , Cargo will compile and run your library's unit tests, which are in the files under src . Each file in tests/*.rs is an integration test; Cargo will compile each of these files as a separate crate. The crate can link to your library by using extern crate <library-name> , like any other code that depends on it.
Cargo will not automatically compile files inside subdirectories of tests/ , but an integration test can import modules from those directories as usual in order to share code between test files.
All of the [[bin]] , [lib] , [[bench]] , [[test]] , and [[example]] sections support similar configuration for specifying how a target should be built. The example below uses [lib] ; all values listed are the defaults for that option unless otherwise specified.
[lib]
# The name of a target is the name of the library that will be generated. This
# is defaulted to the name of the package.
# If set then a target can be configured to use a different edition than the
# `[package]` is configured to use, perhaps only compiling a library with the
# 2018 edition or only compiling one unit test with the 2015 edition. By default
# all targets are compiled with the edition specified in `[package]`.
edition = '2015'
Target auto-discovery
By default, Cargo automatically determines the targets to build based on the layout of the files on the filesystem. The target configuration tables, such as [lib] , [[bin]] , [[test]] , [[bench]] , or [[example]] , can be used to add additional targets that don't follow the standard directory layout.
The automatic target discovery can be disabled so that only manually configured targets will be built. Setting the keys autobins , autoexamples , autotests , or autobenches to false in the [package] section will disable auto-discovery of the corresponding target type.
[package]
# …
autobins = false
Note: For packages with the 2015 edition, the default for auto-discovery is false if at least one target is manually defined in Cargo.toml . Beginning with the 2018 edition, the default is always true .
The required-features field specifies which features the target needs in order to be built. If any of the required features are not selected, the target will be skipped. This is only relevant for the [[bin]] , [[bench]] , [[test]] , and [[example]] sections; it has no effect on [lib] .
[features]
# ...
postgres = []
sqlite = []
tools = []

[[bin]]
# ...
required-features = ["postgres", "tools"]
You can read more about the different crate types in the Rust Reference Manual.
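The [patch] section can override dependency sources; a sketch of the kind of example the next paragraph describes (the repository URLs are placeholders), first overriding a crates.io source and then a git source:

[patch.crates-io]
foo = { git = 'https://github.com/example/foo' }
bar = { path = 'my/local/bar' }

[dependencies.baz]
git = 'https://github.com/example/baz'

[patch.'https://github.com/example/baz']
baz = { git = 'https://github.com/example/patched-baz', branch = 'my-branch' }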
This first [patch] in the example above demonstrates overriding crates.io, and the second [patch] demonstrates overriding a git source.
Each entry in these tables is a normal dependency specification, the same as found in the [dependencies] section of the manifest. The dependencies listed in the [patch] section are resolved and used to patch the source at the specified URL.
You can patch in multiple versions of the same crate with the package key used to rename dependencies. For example let's say that the serde crate has a bug fix in its 1.* series that we'd like to use, but we'd also like to prototype using a 2.0.0 version of serde that we have in our git repository. To configure this we'd do:
[patch.crates-io]
serde = { git = '' }
serde2 = { git = '', package = 'serde', branch = 'v2' }
The first serde = ... directive indicates that serde 1.* should be used from the git repository (pulling in the bug fix we need), while the second serde2 = ... directive pulls the prototype 2.0.0 version of serde from the v2 branch of its git repository. The serde2 identifier here is actually ignored; we simply need a unique name which doesn't conflict with other patched crates.
[replace] "foo:0.1.0" = { git = '' } "bar:1.0.2" = { path = 'my/local/bar' }
Each key in the [replace] table is a package ID specification, which allows arbitrarily choosing a node in the dependency graph to override. The replacement must come from a different source (e.g., git or a local path) than the original.
Configuration
This document will explain how Cargo's configuration system works, as well as the available configuration keys. For configuration of a package through its manifest, see the manifest format.
Hierarchical structure
Cargo allows local configuration for a particular package as well as global configuration, like git. Cargo extends this to a hierarchical strategy. If, for example, Cargo were invoked in /projects/foo/bar/baz , then the following configuration files would be probed for and unified in this order:
/projects/foo/bar/baz/.cargo/config
/projects/foo/bar/.cargo/config
/projects/foo/.cargo/config
/projects/.cargo/config
/.cargo/config
$CARGO_HOME/config ( $CARGO_HOME defaults to $HOME/.cargo )
With this structure, you can specify configuration per-package, and even possibly check it into version control. You can also specify personal defaults with a configuration file in your home directory.
All configuration is currently in the TOML format (like the manifest), with simple key-value pairs inside of sections (tables) which all get merged together. Relative paths in configuration values are resolved relative to the config file that the value resides within.
# change the version control system used. Valid values are `git`,
# `hg` (for Mercurial), `pijul`, `fossil`, or `none`.

# Similar for the $triple configuration, but using the `cfg` syntax.
# If one or more `cfg`s and a $triple target are candidates, then the $triple
# will be used.
# If several `cfg` are candidates, then the build will error.
runner = ".."
[http] proxy = "host:port" # HTTP proxy to use for HTTP requests (defaults to none) # in libcurl format, e.g., "socks5h://host:port" timeout = 30 # Timeout for each HTTP request, in seconds cainfo = "cert.pem" # Path to Certificate Authority (CA) bundle (optional) check-revoke = true # Indicates whether SSL certs are checked for revocation ssl-version = "tlsv1.3" # Indicates which SSL version or above to use (options are # "default", "tlsv1", "tlsv1.0", "tlsv1.1", "tlsv1.2", "tlsv1.3") # To better control SSL version, we can even use # `ssl-version.min = "..."` and `ssl-version.max = "..."` # where "..." is one of the above options. But note these two forms # ("setting `ssl-version`" and "setting both `min`/`max`) # can't co-exist. low-speed-limit = 5 # Lower threshold for bytes/sec (10 = default, 0 = disabled) multiplexing = true # whether or not to use HTTP/2 multiplexing where possible
# This setting can be used to help debug what's going on with HTTP requests made # by Cargo. When set to `true` then Cargo's normal debug logging will be filled # in with HTTP information, which you can extract with # `CARGO_LOG=cargo::ops::registry=debug` (and `trace` may print more). # # Be wary when posting these logs elsewhere though, it may be the case that a # header has an authentication token in it you don't want leaked! Be sure to # briefly review logs before posting them. debug = false
[build]
jobs = 1                    # number of parallel jobs, defaults to # of CPUs
rustc = "rustc"             # the rust compiler tool
rustc-wrapper = ".."        # run this wrapper instead of `rustc`; useful to set up a
                            # build cache tool such as `sccache`
rustdoc = "rustdoc"         # the doc generator tool
target = "triple"           # build for the target triple (ignored by `cargo install`)
target-dir = "target"       # path of where to place all generated artifacts
rustflags = ["..", ".."]    # custom flags to pass to all compiler invocations
rustdocflags = ["..", ".."] # custom flags to pass to rustdoc
incremental = true          # whether or not to enable incremental compilation
                            # If `incremental` is not set, then the value from
                            # the profile is used.
dep-info-basedir = ".."     # full path for the base directory for targets in depfiles
[term]
verbose = false # whether cargo provides verbose output
color = 'auto'  # whether cargo colorizes output
# Network configuration
[net]
retry = 2                  # number of times a network call will automatically be retried
git-fetch-with-cli = false # if `true` we'll use `git`-the-CLI to fetch git repos
offline = false            # do not access the network, but otherwise try to proceed if possible
# Alias cargo commands. The first 4 aliases are built in. If your
# command requires grouped whitespace use the list format.
[alias]
b = "build"
c = "check"
t = "test"
r = "run"
rr = "run --release"
space_example = ["run", "--release", "--", "\"command list\""]
Environment variables
Cargo can also be configured through environment variables in addition to the TOML syntax above. For each configuration key above of the form foo.bar the environment variable CARGO_FOO_BAR can also be used to define the value. For example the build.jobs key can also be defined by CARGO_BUILD_JOBS .
Environment variables will take precedence over TOML configuration, and currently only integer, boolean, and string keys are supported to be defined by environment variables. This means that source replacement, which is expressed by tables, cannot be configured through environment variables.
In addition to the system above, Cargo recognizes a few other specific environment variables.
Credentials
Configuration values with sensitive information are stored in the $CARGO_HOME/credentials file. This file is automatically created and updated by cargo login . It follows the same format as Cargo config files.
[registry]
token = "..."   # Access token for crates.io

# `$name` should be a registry name (see above for more information about
# configuring registries).
[registries.$name]
token = "..."   # Access token for the named registry
Tokens are used by some Cargo commands such as cargo publish for authenticating with remote registries. Care should be taken to protect the tokens and to keep them secret.
As with most other config values, tokens may be specified with environment variables.
Environment Variables
Cargo sets and reads a number of environment variables which your code can detect or override. Here is a list of the variables Cargo sets, organized by when it interacts with them; for more details refer to the guide.
CARGO_TARGET_DIR — Location of where to place all generated artifacts, relative to the current working directory.
RUSTC — Instead of running rustc , Cargo will execute this specified compiler instead.
RUSTC_WRAPPER — Instead of simply running rustc , Cargo will execute this specified wrapper instead, passing as its command-line arguments the rustc invocation, with the first argument being rustc . Useful to set up a build cache tool such as sccache .
RUSTDOC — Instead of running rustdoc , Cargo will execute this specified rustdoc instance instead.
RUSTDOCFLAGS — A space-separated list of custom flags to pass to all rustdoc invocations that Cargo performs. In contrast with cargo rustdoc , this is useful for passing a flag to all rustdoc instances.
RUSTFLAGS — A space-separated list of custom flags to pass to all compiler invocations that Cargo performs. In contrast with cargo rustc , this is useful for passing a flag to all compiler instances.
CARGO_INCREMENTAL — If this is set to 1 then Cargo will force incremental compilation to be enabled for the current compilation, and when set to 0 it will force disabling it. If this env var isn't present then cargo's defaults will otherwise be used.
CARGO_CACHE_RUSTC_INFO — If this is set to 0 then Cargo will not try to cache compiler version information.
Note that Cargo will also read environment variables for .cargo/config configuration values, as described in that documentation.
Cargo exposes these environment variables to your crate when it is compiled. Note that this applies for test binaries as well. To get the value of any of these variables in a Rust program, do this:
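A minimal sketch (the exact snippet is not reproduced here) using the compile-time env! macro with one of the CARGO_PKG_* variables:

fn main() {
    // CARGO_PKG_VERSION is set by Cargo while compiling this crate.
    let version = env!("CARGO_PKG_VERSION");
    println!("Version: {}", version);
}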
Cargo exposes this environment variable to 3rd party subcommands (i.e., programs named cargo-foobar placed in $PATH ):
CARGO — Path to the cargo binary performing the build.
The Rust file designated by the build command (relative to the package root) will be compiled and invoked before anything else is compiled in the package, allowing your Rust code to depend on the built or generated artifacts. By default Cargo looks for a "build.rs" file in a package root (even if you do not specify a value for build ). Use build = "custom_build_name.rs" to specify a custom build name or build = false to disable automatic detection of the build script.
Each of these use cases will be detailed in full below to give examples of how the build command works.
When the build script is run, there are a number of inputs to the build script, all passed in the form of environment variables.
In addition to environment variables, the build script’s current directory is the source directory of the build script’s package.
All the lines printed to stdout by a build script are written to a file like target/debug/build/<pkg>/output (the precise location may depend on your configuration). If you would like to see such output directly in your terminal, invoke cargo as 'very verbose' with the -vv flag. Note that if neither the build script nor package source files are modified, the build script will not be re-run.
There are a few special keys that Cargo recognizes, some affecting how the crate is built:
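For orientation, a build script emits these keys by printing lines to stdout; a sketch with a few commonly used directives (the paths and names here are placeholders, and this is not the full list):

// Inside build.rs
fn main() {
    // Link the native library `foo` statically.
    println!("cargo:rustc-link-lib=static=foo");
    // Add a directory to the native library search path.
    println!("cargo:rustc-link-search=native=/path/to/lib");
    // Enable a #[cfg(my_feature)] flag for the crate being built.
    println!("cargo:rustc-cfg=my_feature");
    // Re-run this build script only when this file changes.
    println!("cargo:rerun-if-changed=src/foo.c");
}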
Any other element is user-defined metadata that will be passed to dependent packages' build scripts; see the links section below.
In addition to the manifest key build , Cargo also supports a links manifest key to declare the name of a native library that is being linked to:
This manifest states that the package links to the libfoo native library, and it also has a build script for locating and/or building the library. Cargo requires that a build command is specified if a links entry is also specified.
The purpose of this manifest key is to give Cargo an understanding about the set of native dependencies that a package has, as well as providing a principled system of passing metadata between package build scripts.
If a manifest contains a links key, then Cargo supports overriding the build script specified with a custom library. The purpose of this functionality is to prevent running the build script in question altogether and instead supply the metadata ahead of time.
To override a build script, place the following configuration in any acceptable Cargo configuration location.
[target.x86_64-unknown-linux-gnu.foo]
rustc-link-search = ["/path/to/foo"]
rustc-link-lib = ["foo"]
root = "/path/to/foo"
key = "value"
This section states that for the target x86_64-unknown-linux-gnu the library named foo has the metadata specified. This metadata is the same as the metadata generated as if the build script had run, providing a number of key/value pairs where the rustc-flags , rustc-link-search , and rustc-link-lib keys are slightly special.
With this configuration, if a package declares that it links to foo then the build script will not be compiled or run, and the metadata specified will instead be used.
Some Cargo packages need to have code generated just before they are compiled for various reasons. Here we'll walk through a simple example which generates a library call as part of the build script.
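The build script itself is not preserved here; a minimal sketch consistent with the description below (it writes a hello.rs file defining message() into OUT_DIR) is:

// build.rs -- a sketch, not the book's exact listing
use std::env;
use std::fs;
use std::path::Path;

fn main() {
    // Cargo tells the build script where generated files should go.
    let out_dir = env::var("OUT_DIR").unwrap();
    let dest_path = Path::new(&out_dir).join("hello.rs");

    // Generate a small Rust source file that the crate will include!().
    fs::write(
        &dest_path,
        "pub fn message() -> &'static str { \"Hello, World!\" }\n",
    ).unwrap();
}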
The script uses the OUT_DIR environment variable to discover where the output files should be located. It can use the process' current working directory to find where the input files should be located, but in this case we don't have any input files. In general, build scripts should not modify any files outside of OUT_DIR. It may seem fine on the first blush, but it does cause problems when you use such crate as a dependency, because there's an implicit invariant that sources in .cargo/registry should be immutable. cargo won't allow such scripts when packaging. This script is relatively simple as it just writes out a small generated file. One could imagine that other more fanciful operations could take place such as generating a Rust module from a C header file or another language definition, for example.
// src/main.rs
include!(concat!(env!("OUT_DIR"), "/hello.rs"));
fn main() { println!("{}", message()); }
This is where the real magic happens. The library is using the rustc-defined include! macro in combination with the concat! and env! macros to include the generated file ( hello.rs ) into the crate's compilation.
Using the structure shown here, crates can include any number of generated files from the build script itself. Running this package will print "Hello, World!".
.
├── Cargo.toml
├── build.rs
└── src
    ├── hello.c
    └── main.rs
1 directory, 4 files
[package] name = "hello-world-from-c" version = "0.1.0" authors = ["[email protected]"] build = "build.rs"
For now we're not going to use any build dependencies, so let's take a look at the build script now:
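The script itself is not reproduced here; a sketch of the gcc/ar approach described in the next paragraph is:

// build.rs -- a sketch, not the book's exact listing
use std::env;
use std::process::Command;

fn main() {
    let out_dir = env::var("OUT_DIR").unwrap();

    // Compile hello.c into an object file, then archive it into libhello.a.
    Command::new("gcc")
        .args(&["src/hello.c", "-c", "-fPIC", "-o"])
        .arg(&format!("{}/hello.o", out_dir))
        .status()
        .unwrap();
    Command::new("ar")
        .args(&["crus", "libhello.a", "hello.o"])
        .current_dir(&out_dir)
        .status()
        .unwrap();

    // Tell Cargo where to find the library and to link it statically.
    println!("cargo:rustc-link-search=native={}", out_dir);
    println!("cargo:rustc-link-lib=static=hello");
}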
This build script starts out by compiling our C file into an object file (by invoking gcc ) and then converting this object file into a static library (by invoking ar ). The final step is feedback to Cargo itself to say that our output was in out_dir and the compiler should link the crate to libhello.a statically via the -l static=hello flag.
The gcc command itself is not portable across platforms. For example it’s unlikely that Windows platforms have gcc , and not even all Unix platforms may have gcc . The ar command is also in a similar situation. These commands do not take cross-compilation into account. If we’re cross compiling for a platform such as Android it’s unlikely that gcc will produce an ARM executable.
Not to fear, though, this is where a build-dependencies entry would help! The Cargo ecosystem has a number of packages to make this sort of task much easier, portable, and standardized. For example, the build script could be written as:
fn main() {
    cc::Build::new()
        .file("src/hello.c")
        .compile("hello");
}
Add a build-time dependency on the cc crate with the following addition to your Cargo.toml :
[build-dependencies] cc = "1.0"
It invokes the appropriate compiler (MSVC for Windows, gcc for MinGW, cc for Unix platforms, etc.).
It takes the TARGET variable into account by passing appropriate flags to the compiler being used.
Other environment variables, such as OPT_LEVEL , DEBUG , etc., are all handled automatically.
The stdout output and OUT_DIR locations are also handled by the cc library.
Here we can start to see some of the major benefits of farming this logic out to a build dependency. Back to the case study, the contents of src/main.rs look like this:
// Note the lack of the `#[link]` attribute. We're delegating the responsibility
// of selecting what to link to over to the build script rather than hardcoding
// it in the source file.
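// A sketch reconstructing the remainder of this example: declare and call the C
// function that the build script links in (only the comment above is verbatim).
extern "C" {
    fn hello();
}

fn main() {
    unsafe { hello(); }
}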
The final case study here will be investigating how a Cargo library links to a system library and how the build script is leveraged to support this use case. For packages that link to native libraries, crates.io has a convention of package naming and functionality: any package named foo-sys should provide two major pieces of functionality:
The library crate should link to the native library libfoo . This will often probe the current system for libfoo before resorting to building from source. The library crate should provide declarations for functions in libfoo , but not bindings or higher-level abstractions.
The set of *-sys packages provides a common set of dependencies for linking to native libraries. There are a number of benefits earned from having this convention of native-library-related packages:
Common dependencies on foo-sys alleviates the above rule about one package per value of links . A common dependency allows centralizing logic on discovering libfoo itself (or building it from source). These dependencies are easily overridable.
Building libgit2
Now that we've got libgit2's dependencies sorted out, we need to actually write the build script. We're not going to look at specific snippets of code here and instead only take a look at the high-level details of the build script of libgit2-sys . This is not recommending all packages follow this strategy, but rather just outlining one specific strategy.
The rst nd:
Most of the functionality of this build script is easily refactorable into common dependencies, so our build script isn't quite as intimidating as this description suggests! In reality it's expected that build scripts are quite succinct by farming out logic such as the above to build dependencies.
Publishing on crates.io
Once you've got a library that you'd like to share with the world, it's time to publish it on crates.io! Publishing a crate is when a specific version is uploaded to be hosted on crates.io.
Take care when publishing a crate, because a publish is permanent. The version can never be overwritten, and the code cannot be deleted. There is no limit to the number of versions which can be published, however.
First things first, you'll need an account on crates.io to acquire an API token. To do so, visit the home page and log in via a GitHub account (required for now). After this, visit your Account Settings page and run the cargo login command specified.
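For example (the token here is just a placeholder):

$ cargo login abcdefghijklmnopqrstuvwxyz012345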
This command will inform Cargo of your API token and store it locally in your ~/.cargo/credentials . Note that this token is a secret and should not be shared with anyone else. If it leaks for any reason, you should regenerate it immediately.
Keep in mind that crate names on crates.io are allocated on a first-come-first-serve basis. Once a crate name is taken, it cannot be used for another crate.
Check out the metadata you can specify in Cargo.toml to ensure your crate can be discovered more easily! Before publishing, make sure you have filled out the following fields:
authors
license or license-file
description
homepage
documentation
repository
It would also be a good idea to include some keywords and categories , though they are not required.
If you are publishing a library, you may also want to consult the Rust API Guidelines.
Packaging a crate
The next step is to package up your crate and upload it to crates.io. For this we’ll use the cargo publish subcommand. This command performs the following steps:
It is recommended that you first run cargo publish --dry-run (or cargo package which is equivalent) to ensure there aren't any warnings or errors before publishing. This will perform the first three steps listed above.
You can inspect the generated .crate file in the target/package directory. crates.io currently has a 10MB size limit on the .crate file. You may want to check the size of the .crate file to ensure you didn't accidentally package up large assets that are not required to build your package, such as test data, website documentation, or code generation. You can check which files are included with the following command:
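The command itself is omitted in this copy; it is presumably the --list flag of cargo package:

$ cargo package --list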
Cargo will automatically ignore files ignored by your version control system when packaging, but if you want to specify an extra set of files to ignore you can use the exclude key in the manifest:
[package]
# ...
exclude = [
    "public/assets/*",
    "videos/*",
]
If you'd rather explicitly list the files to include, Cargo also supports an include key, which if set, overrides the exclude key:
[package]
# ...
include = [
    "**/*.rs",
    "Cargo.toml",
]
When you are ready to publish, use the cargo publish command to upload to crates.io:
$ cargo publish
In order to release a new version, change the version value specified in your Cargo.toml manifest. Keep in mind the semver rules, and consult RFC 1105 for what constitutes a semver-breaking change. Then run cargo publish as described above to upload the new version.
Management of crates is primarily done through the command line cargo tool rather than the crates.io web interface. For this, there are a few subcommands to manage a crate.
cargo yank
Occasions may arise where you publish a version of a crate that actually ends up being broken for one reason or another (syntax error, forgot to include a file, etc.). For situations such as this, Cargo supports a “yank” of a version of a crate.
A yank does not delete any code. This feature is not intended for deleting accidentally uploaded secrets, for example. If that happens, you must reset those secrets immediately.
The semantics of a yanked version are that no new dependencies can be created against that version, but all existing dependencies continue to work. One of the major goals of crates.io is to act as a permanent archive of crates that does not change over time, and allowing deletion of a version would go against this goal. Essentially a yank means that all packages with a Cargo.lock will not break, while any future Cargo.lock files generated will not list the yanked version.
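A yank (and its reversal) is performed with the cargo yank subcommand from within the crate's directory; the version number here is a placeholder:

$ cargo yank --vers 1.0.1
$ cargo yank --vers 1.0.1 --undo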
cargo owner
A crate is often developed by more than one person, or the primary maintainer may change over time! The owner of a crate is the only person allowed to publish new versions of the crate, but an owner may designate additional owners.
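Ownership is managed with the cargo owner subcommand; for example (the user and team names below are placeholders):

$ cargo owner --add my-buddy
$ cargo owner --add github:rust-lang:owners
$ cargo owner --remove my-buddy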
The owner IDs given to these commands must be GitHub user names or GitHub teams.
If a user name is given to --add , that user is invited as a “named” owner, with full rights to the crate. In addition to being able to publish or yank versions of the crate, they have the ability to add or remove owners, including the owner that made them an owner. Needless to say, you shouldn’t make people you don’t fully trust into a named owner. In order to become a named owner, a user must have logged into crates.io previously.
If a team name is given to --add , that team is invited as a “team” owner, with restricted rights to the crate. While they have permission to publish or yank versions of the crate, they do not have the ability to add or remove owners. In addition to being more convenient for managing groups of owners, teams are just a bit more secure against owners becoming malicious.
The syntax for teams is currently github:org:team (see examples above). In order to invite a team as an owner one must be a member of that team. No such restriction applies to removing a team as an owner.
GitHub permissions
Team membership is not something GitHub provides simple public access to, and it is likely for you to encounter the following message when working with them:
It looks like you don’t have permission to query a necessary property from GitHub to complete this request. You may need to re-authenticate on crates.io to grant permission to read GitHub org memberships. Just go to.
This is basically a catch-all for “you tried to query a team, and one of the five levels of membership access control denied this”. That is not an exaggeration. GitHub's support for team access control is Enterprise Grade.
The most likely cause of this is simply that you last logged in before this feature was added. We originally requested no permissions from GitHub when authenticating users, because we didn’t actually ever use the user’s token for anything other than logging them in. However to query team membership on your behalf, we now require the read:org scope.
You are free to deny us this scope, and everything that worked before teams were introduced will keep working. However you will never be able to add a team as an owner, or publish a crate as a team owner. If you ever attempt to do this, you will get the error above. You may also see this error if you ever try to publish a crate that you don’t own at all, but otherwise happens to have a team.
If you ever change your mind, or just aren't sure if crates.io has sufficient permission, you can always go to, which will prompt you for permission if crates.io doesn't have all the scopes it would like to.
An additional barrier to querying GitHub is that the organization may be actively denying third party access. To check this, you can go to:
where :org is the name of the organization (e.g., rust-lang ). You may see something like:
Where you may choose to explicitly remove crates.io from your organization’s blacklist, or simply press the “Remove Restrictions” button to allow all third party applications to access this data.
Alternatively, when crates.io requested the read:org scope, you could have explicitly whitelisted crates.io querying the org in question by pressing the “Grant Access” button next to its name:
Package ID Specifications
pkgid := pkgname | [ proto "://" ] hostname-and-path [ "#" ( pkgname | semver ) ]
pkgname := name [ ":" semver ]
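The example specs this grammar refers to are not shown in this copy; presumably they look like the following:

foo
foo:1.2.3
https://github.com/rust-lang/crates.io-index#foo
https://github.com/rust-lang/crates.io-index#foo:1.2.3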
These could all be references to a package foo version 1.2.3 from the registry at crates.io
The goal of this is to enable both succinct and exhaustive syntaxes for referring to packages in a dependency graph. Ambiguous references may refer to one or more packages. Most commands generate an error if more than one package could be referred to with the same specification.
Source Replacement This document is about replacing the crate index. You can read about overriding dependencies in the overriding dependencies section of this documentation.
A source is a provider that contains crates that may be included as dependencies for a package. Cargo supports the ability to replace one source with another to express strategies such as:
Vendoring - custom sources can be defined which represent crates on the local filesystem. These sources are subsets of the source that they're replacing and can be checked into packages if necessary.
Mirroring - sources can be replaced with an equivalent version which acts as a cache for crates.io itself.
Cargo has a core assumption about source replacement that the source code is exactly the same from both sources. Note that this also means that a replacement source is not allowed to have crates which are not present in the original source.
Configuration of replacement sources is done through .cargo/config and the full set of available keys are:
# Under the `source` table are a number of other tables whose keys are a
# name for the relevant source. For example this section defines a new
# source, called `my-vendor-source`, which comes from a directory
# located at `vendor` relative to the directory containing this `.cargo/config`
# file
[source.my-vendor-source]
directory = "vendor"

# The crates.io default source for crates is available under the name
# "crates-io", and here we use the `replace-with` key to indicate that it's
# replaced with our source above.
[source.crates-io]
replace-with = "my-vendor-source"

# Each source has its own table where the key is the name of the source
[source.the-source-name]
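The listing of keys is cut off in this copy; a source definition can use roughly the following keys (the URLs and paths below are placeholders):

# Indicates that this source should be replaced with another of the given name
replace-with = "another-source"

# Several kinds of sources (only one kind should be specified per source)
directory = "path/to/vendor"
registry = "https://example.com/path/to/index"
local-registry = "path/to/registry"
git = "https://example.com/path/to/repo"
# `branch`, `tag`, and `rev` may additionally qualify a `git` source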
Registry Sources
A "registry source" is one that is the same as crates.io itself. That is, it has an index served in a git repository which matches the format of the crates.io index. That repository then has con guration indicating where to download crates from.
Currently there is not an already-available project for setting up a mirror of crates.io. Stay tuned though!
A "local registry source" is intended to be a subset of another registry source, but available on the local lesystem (aka vendoring). Local registries are downloaded ahead of time, typically sync'd with a Cargo.lock , and are made up of a set of *.crate les and an index like the normal registry is. 70/22618/01/2020 The Cargo Book
The primary way to manage and create local registry sources is through the cargo-local- registry subcommand, available on crates.io and can be installed with cargo install cargo-local-registry .
Local registries are contained within one directory and contain a number of *.crate les downloaded from crates.io as well as an index directory with the same format as the crates.io-index project (populated with just entries for the crates that are present).
Directory Sources
A "directory source" is similar to a local registry source where it contains a number of crates available on the local lesystem, suitable for vendoring dependencies. Directory sources are primarily managed the cargo vendor subcommand.
Directory sources are distinct from local registries though in that they contain the unpacked version of *.crate les, making it more suitable in some situations to check everything into source control. A directory source is just a directory containing a number of other directories which contain the source code for crates (the unpacked version of *.crate les). Currently no restriction is placed on the name of each directory.
Each crate in a directory source also has an associated metadata le indicating the checksum of each le in the crate to protect against accidental modi cations.
External tools One of the goals of Cargo is simple integration with third-party tools, like IDEs and other build systems. To make integration easier, Cargo has several facilities:
You can use cargo metadata command to get information about package structure and dependencies. The output of the command looks like this: 71/22618/01/2020 The Cargo Book
{
    // Integer version number of the format.
    "version": integer,

    // List of packages for this workspace, including dependencies.
    "packages": [
        {
            "name": string,

            "version": string,

            "source": SourceId,

            "targets: [ Target ],

            // Path to Cargo.toml
            "manifest_path": string,
        }
    ],

    "workspace_members": [ PackageId ],

    // Dependencies graph.
    "resolve": {
        "nodes": [
            {
                "id": PackageId,
                "dependencies": [ PackageId ]
            }
        ]
    }
}
The format is stable and versioned. When calling cargo metadata , you should pass the --format-version flag explicitly to avoid a forward incompatibility hazard.
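For example:

$ cargo metadata --format-version 1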
When passing --message-format=json , Cargo will output the following information during the build:
The output goes to stdout in the JSON object per line format. The reason field distinguishes different kinds of messages.
Information about dependencies in the Makefile-compatible format is stored in the .d files alongside the artifacts.
Custom subcommands
Cargo is designed to be extensible with new subcommands without having to modify Cargo itself. This is achieved by translating a cargo invocation of the form cargo (?<command>[^ ]+) into an invocation of an external tool cargo-${command} . The external tool must be present in one of the user's $PATH directories.
When Cargo invokes a custom subcommand, the first argument to the subcommand will be the filename of the custom subcommand, as usual. The second argument will be the subcommand name itself. For example, the second argument would be ${command} when invoking cargo-${command} . Any additional arguments on the command line will be forwarded unchanged.
Cargo can also display the help output of a custom subcommand with cargo help ${command} . Cargo assumes that the subcommand will print a help message if its third argument is --help . So, cargo help ${command} would invoke cargo-${command} ${command} --help .
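As an illustrative sketch (not an example from this book), a minimal custom subcommand could be a binary crate named cargo-foo placed on $PATH:

// src/main.rs of a hypothetical `cargo-foo` binary
use std::env;

fn main() {
    let args: Vec<String> = env::args().collect();
    // args[0] is the path to `cargo-foo`, args[1] is "foo",
    // and anything after that is forwarded unchanged by Cargo.
    let forwarded: &[String] = args.get(2..).unwrap_or(&[]);

    if forwarded.first().map(String::as_str) == Some("--help") {
        println!("cargo-foo: an example custom subcommand");
        return;
    }

    println!("cargo foo was invoked with: {:?}", forwarded);
}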
Custom subcommands may use the CARGO environment variable to call back to Cargo. Alternatively, it can link to cargo crate as a library, but this approach has drawbacks:
Registries
To use a registry other than crates.io, the name and index URL of the registry must be added to a .cargo/config file. The registries table has a key for each registry. As with other config values, the index may be specified with an environment variable instead of a config file. For example, setting the following environment variable will accomplish the same thing as defining a config file:
CARGO_REGISTRIES_MY_REGISTRY_INDEX=
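For comparison, the equivalent .cargo/config entry would look roughly like this (the registry name and URL are placeholders):

[registries]
my-registry = { index = "https://my-intranet:8080/git/index" }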
Note: crates.io does not accept packages that depend on crates from other registries.
If the registry supports web API access, then packages can be published directly to the registry from Cargo. Several of Cargo's commands such as cargo publish take a --registry command-line flag to indicate which registry to use. For example, to publish the package in the current directory:
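The command is omitted in this copy; it is presumably the following, where my-registry is the name configured above:

$ cargo publish --registry my-registry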
Instead of always passing the --registry command-line option, the default registry may be set in .cargo/config:
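The config snippet referenced here is missing from this copy; presumably it is the registry.default value, for example:

[registry]
default = "my-registry"

Separately, the publish field in a package's Cargo.toml can restrict the registries to which that package may be published: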
[package]
# ...
publish = ["my-registry"]
The publish value may also be false to restrict all publishing, which is the same as an empty list.
A minimal registry can be implemented with a git repository that contains an index and a server that hosts the compressed .crate files created by cargo package . Users won't be able to use Cargo to publish to it, but this may be sufficient for some projects. The index is stored in a git repository so that Cargo can efficiently fetch incremental updates to the index. In the root of the repository is a file named config.json which contains JSON information used by Cargo for accessing the registry. This is an example of what the crates.io config file looks like:
{ "dl": "", "api": "" }
dl : This is the URL for downloading crates listed in the index. The value may have the markers {crate} and {version} which are replaced with the name and version of the crate to download. If the markers are not present, then the value /{crate}/{version}/download is appended to the end.
api : This is the base URL for the web API. This key is optional, but if it is not specified, commands such as cargo publish will not work. The web API is described below.
The download endpoint should send the .crate file for the requested package. Cargo supports https, http, and file URLs, HTTP redirects, HTTP1 and HTTP2. The exact specifics of TLS support depend on the platform that Cargo is running on, the version of Cargo, and how it was compiled.
The rest of the index repository contains one file for each package, where the filename is the name of the package in lowercase. Each version of the package has a separate line in the file. The files are organized in a tier of directories:
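The directory tiers themselves are not listed in this copy; the crates.io index lays packages out roughly as follows, keyed on the length of the package name:

1/          - packages with 1-character names
2/          - packages with 2-character names
3/{c}/      - packages with 3-character names, where {c} is the first character of the name
{aa}/{bb}/  - all other packages, where {aa} and {bb} are the first and second two characters of the name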
Note: Although the index filenames are in lowercase, the fields that contain package names in Cargo.toml and the index JSON data are case-sensitive and may contain upper and lower case characters.
Registries should consider enforcing limitations on package names added to their index. Cargo itself allows names with any alphanumeric, - , or _ characters. crates.io imposes its own limitations, including the following:
Registries should consider incorporating similar restrictions, and consider the security implications, such as IDN homograph attacks and other concerns in UTR36 and UTS39.
Each line in a package file contains a JSON object that describes a published version of the package. The following is a pretty-printed example with comments explaining the format of the entry.
{
    // ...
    "links": null
}
The JSON objects should not be modified after they are added except for the yanked field whose value may change at any time.
Web API
A registry may host a web API at the location defined in config.json to support any of the actions listed below. If a JSON field is missing, it should be assumed to be null. The endpoints are versioned with the v1 component of the path, and Cargo is responsible for handling backwards compatibility fallbacks should any be required in the future.
Content-Type : application/json
Accept : application/json
User-Agent : The Cargo version such as cargo 1.32.0 (8610973aa 2019-01-02) . This may be modified by the user in a configuration value. Added in 1.29.
Publish
Endpoint: /api/v1/crates/new Method: PUT Authorization: Included
The publish endpoint is used to publish a new version of a crate. The server should validate the crate, make it available for download, and add it to the index.
The following is a commented example of the JSON object. Some notes of some restrictions imposed by crates.io are included only to illustrate some suggestions on types of validation that may be done, and should not be considered as an exhaustive list of restrictions crates.io imposes.
{
    // ...
    // crates.io requires at least one entry.
    "authors": ["Alice <[email protected]>"],
    // ...
}
{
    // ...
}
The yank endpoint will set the yank field of the given version of a crate to true in the index.
{ // Indicates the delete succeeded, always true. "ok": true, }
Unyank
Endpoint: /api/v1/crates/{crate_name}/{version}/unyank Method: PUT Authorization: Included
The unyank endpoint will set the yank field of the given version of a crate to false in the index.
{
    // ...
}
{ // Array of `login` strings of owners to add. "users": ["login_name"] }
{
    // ...
}
{ // Array of `login` strings of owners to remove. "users": ["login_name"] }
{ // Indicates the remove succeeded, always true. "ok": true }
Search
Endpoint: /api/v1/crates Method: GET Query Parameters: q : The search query string. per_page : Number of results, default 10, max 100.
The search request will perform a search for crates, using criteria defined on the server.
{
    // ...
}
Unstable Features
Experimental Cargo features are only available on the nightly channel. You typically use one of the -Z flags to enable them. Run cargo -Z help to see a list of flags available.
Some unstable features will require you to specify the cargo-features key in Cargo.toml .
no-index-update
The -Z no-index-update flag ensures that Cargo does not attempt to update the registry index. This is intended for tools such as Crater that issue many Cargo commands, and you want to avoid the network latency for updating the index each time.
mtime-on-use
The -Z mtime-on-use flag is an experiment to have Cargo update the mtime of used files to make it easier for tools like cargo-sweep to detect which files are stale. For many workflows this needs to be set on all invocations of cargo. To make this more practical, setting the unstable.mtime_on_use flag in .cargo/config or the corresponding ENV variable will apply the -Z mtime-on-use to all invocations of nightly cargo. (the config flag is ignored by stable)
avoid-dev-deps
When running commands such as cargo install or cargo build , Cargo currently requires dev-dependencies to be downloaded, even if they are not used. The -Z avoid-dev-deps flag allows Cargo to avoid downloading dev-dependencies if they are not needed. The Cargo.lock file will not be generated if dev-dependencies are skipped.
minimal-versions
Note: It is not recommended to use this feature. Because it enforces minimal versions for all transitive dependencies, its usefulness is limited since not all external dependencies declare proper lower version bounds. It is intended that it will be changed in the future to only enforce minimal versions for direct dependencies.
The intended use-case of this flag is to check, during continuous integration, that the versions specified in Cargo.toml are a correct reflection of the minimum versions that you are actually using.
out-dir
Locating the exact filename of a compiled artifact can be tricky since you need to parse JSON output. The --out-dir flag makes it easier to predictably access the artifacts. Note that the artifacts are copied, so the originals are still in the target directory. Example:
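The example command is omitted in this copy; it presumably looks like the following:

$ cargo +nightly build --out-dir=out -Z unstable-options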
doctest-xcompile
This flag changes cargo test 's behavior when handling doctests when a target is passed. Currently, if a target is passed that is different from the host, cargo will simply skip testing doctests. If this flag is present, cargo will continue as normal, passing the tests to doctest, while also passing it a --target option, as well as enabling -Zunstable-features --enable-per-target-ignores and passing along information from .cargo/config . See the rustc issue for more information.
Profile Overrides
Profiles can be overridden for specific packages and custom build scripts. The general format looks like this:
cargo-features = ["profile-overrides"]
[package] ...
[profile.dev]
opt-level = 0
debug = true

# All dependencies (but not this crate itself or any workspace member)
# will be compiled with -Copt-level=2 . This includes build dependencies.
[profile.dev.package."*"]
opt-level = 2
Overrides can be specified for any profile, including custom named profiles.
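Overrides can also target a single dependency by name, or the build scripts themselves; a sketch under the same feature (the package name foo is a placeholder):

# Override settings for one specific dependency.
[profile.dev.package.foo]
opt-level = 3

# Override settings for build scripts and their dependencies.
[profile.dev.build-override]
opt-level = 3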
Custom named profiles
With this feature you can define custom profiles having new names. With the custom profile enabled, build artifacts can be emitted by default to directories other than release or debug , based on the custom profile's name.
For example:
cargo-features = ["named-profiles"]
[profile.release-lto]
inherits = "release"
lto = true
An inherits key is used in order to receive attributes from other profiles, so that a new custom profile can be based on the standard dev or release profile presets. Cargo emits errors in case inherits loops are detected. When considering inheritance hierarchy, all profiles directly or indirectly inherit from either release or dev .
Valid profile names must not be empty and may use only alphanumeric characters, - , or _ .
Passing --profile with the profile's name to various Cargo commands directs operations to use the profile's attributes. Overrides that are specified in the profiles from which the custom profile inherits are inherited too.
For example, using cargo build with --profile and the manifest from above:
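The command itself is omitted in this copy; presumably something like:

$ cargo +nightly build --profile release-lto -Z unstable-options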
When a custom profile is used, build artifacts go to a different target directory by default. In the example above, you can expect to see the outputs under target/release-lto .
Some of the paths generated under target/ have resulted in a de-facto "build protocol", where cargo is invoked as a part of a larger project build. So, to preserve the existing behavior, there is also a new attribute dir-name , which when left unspecified, defaults to the name of the profile. For example:
[profile.release-lto]
inherits = "release"
# Emits to target/lto instead of target/release-lto
dir-name = "lto"
lto = true
[profile.dev] opt-level = 3
Namespaced features
Currently, it is not possible to have a feature and a dependency with the same name in the manifest. If you set namespaced-features to true , the namespaces for features and dependencies are separated. The effect of this is that, in the feature requirements, dependencies have to be prefixed with crate: . If a feature of the same name as a dependency is defined, that feature must include the dependency as a requirement, as foo = ["crate:foo"] .
Build-plan
The --build-plan argument for the build command will output JSON with information about which commands would be run without actually executing anything. This can be useful when integrating with another build tool. Example:
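The example is omitted in this copy; the flag is presumably invoked like this:

$ cargo +nightly build --build-plan -Z unstable-options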
Metabuild
install-upgrade
The install-upgrade feature changes the behavior of cargo install so that it will reinstall a package if it is not "up-to-date". If it is "up-to-date", it will do nothing and exit with success instead of failing. Example:
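The example is omitted in this copy; presumably it is simply an install run under the flag (foo is a placeholder crate name):

$ cargo +nightly install foo -Z install-upgrade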
If any of these values change, then Cargo will reinstall the package.
Installation will still fail if a different package installs a binary of the same name. --force may be used to unconditionally reinstall the package.
Installing with --path will always build and install, unless there are conflicting binaries from another package.
public-dependency
The public-dependency feature allows dependencies to be marked as either "public" or "private":
cargo-features = ["public-dependency"]
[dependencies]
my_dep = { version = "1.2.3", public = true }
private_dep = "2.0.0" # Will be 'private' by default
build-std
The build-std feature enables Cargo to compile the standard library itself as part of a crate graph compilation. This feature has also historically been known as "std-aware Cargo". This feature is still in very early stages of development, and is also a possible massive feature addition to Cargo. This is a very large feature to document, even in the minimal form that it exists in today, so if you're curious to stay up to date you'll want to follow the tracking repository and its set of issues.
It is also required today that the -Z build-std flag is combined with the --target flag. Note that you're not forced to do a cross compilation, you're just forced to pass --target in one form or another.
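The invocation that the next paragraph refers to is omitted in this copy; presumably something along these lines (the triple shown is just an illustration of passing the host's own target):

$ cargo +nightly build -Z build-std --target x86_64-unknown-linux-gnu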
Here we recompiled the standard library in debug mode with debug assertions (like src/main.rs is compiled) and everything was linked together at the end. 91/22618/01/2020 The Cargo Book
Using -Z build-std will implicitly compile the stable crates core , std , alloc , and proc_macro . If you're using cargo test it will also compile the test crate. If you're working with an environment which does not support some of these crates, then you can pass an argument to -Zbuild-std as well:
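For example, to build only core and alloc:

$ cargo +nightly build -Z build-std=core,alloc --target x86_64-unknown-linux-gnu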
Requirements
You must install libstd's source code through rustup component add rust-src
You must pass --target
You must use both a nightly Cargo and a nightly rustc
The -Z build-std flag must be passed to all cargo invocations.
The -Z build-std feature is in the very early stages of development! This feature for Cargo has an extremely long history and is very large in scope, and this is just the beginning. If you'd like to report bugs please either report them to:
Also if you'd like to see a feature that's not yet implemented and/or if something doesn't quite work the way you'd like it to, feel free to check out the issue tracker of the tracking repository, and if it's not there please file a new issue!
timings
The timings feature gives some information about how long each compilation takes, and tracks concurrency information over time.
The -Ztimings flag can optionally take a comma-separated list of the following values:
info — Displays a message to stdout after each compilation finishes with how long it took.
json — Emits some JSON information about timing information.
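A typical invocation looks like:

$ cargo +nightly build -Z timings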
There are two graphs in the output. The "unit" graph shows the duration of each unit over time. A "unit" is a single compiler invocation. There are lines that show which additional units are "unlocked" when a unit finishes. That is, it shows the new units that are now allowed to run because their dependencies are all finished. Hover the mouse over a unit to highlight the lines. This can help visualize the critical path of dependencies. This may change between runs because the units may finish in different orders.
The "codegen" times are highlighted in a lavender color. In some cases, build pipelining allows units to start when their dependencies are performing code generation. This information is not always displayed (for example, binary units do not show when code generation starts).
The "custom build" units are build.rs scripts, which when run are highlighted in orange.
The second graph shows Cargo's concurrency over time. The three lines are:
"Waiting" (red) — This is the number of units waiting for a CPU slot to open. "Inactive" (blue) — This is the number of units that are waiting for their dependencies to nish. "Active" (green) — This is the number of units currently running.
Note: This does not show the concurrency in the compiler itself. rustc coordinates with Cargo via the "job server" to stay within the concurrency limit. This currently mostly applies to the code generation phase.
binary-dep-depinfo
panic-abort-tests
The -Z panic-abort-tests flag will enable nightly support to compile test harness crates with -Cpanic=abort . Without this flag Cargo will compile tests, and everything they depend on, with -Cpanic=unwind because it's the only way the test crate knows how to operate. As of rust-lang/rust#64158, however, the test crate supports -C panic=abort with a test-per-process, and can help avoid compiling crate graphs multiple times.
It's currently unclear how this feature will be stabilized in Cargo, but we'd like to stabilize it somehow!
cargo
NAME cargo - The Rust package manager
SYNOPSIS cargo [OPTIONS] COMMAND [ARGS] cargo [OPTIONS] --version cargo [OPTIONS] --list cargo [OPTIONS] --help cargo [OPTIONS] --explain CODE
DESCRIPTION This program is a package manager and build tool for the Rust language, available at.
cargo-fix(1) Automatically fix lint warnings reported by rustc.
cargo-run(1) Run a binary or example of the local package.
cargo-rustc(1) Compile a package, and pass extra options to the compiler.
cargo-rustdoc(1) Build a package's documentation, using specified custom flags.
cargo-test(1) Execute unit and integration tests of a package.
Manifest Commands
cargo-generate-lockfile(1) Generate Cargo.lock for a project.
cargo-locate-project(1) Print a JSON representation of a Cargo.toml file's location.
cargo-metadata(1) Output the resolved dependencies of a package, the concrete used versions including overrides, in machine-readable format.
cargo-pkgid(1) Print a fully qualified package specification.
cargo-update(1) Update dependencies as recorded in the local lock file.
cargo-version(1) Show version information.
OPTIONS
Special Options
-V --version Print version info and exit. If used with --verbose , prints extra information.
--list List all installed Cargo subcommands. If used with --verbose , prints extra information.
--explain CODE Run rustc --explain CODE which will print out a detailed explanation of an error message (for example, E0004 ).
Display Options
-v --verbose Use verbose output. May be specified twice for "very verbose" output which includes extra output such as dependency warnings and build script output. May also be specified with the term.verbose config value.
-q --quiet No output printed to stdout.
--color WHEN Control when colored output is used. Valid values:
Manifest Options
--frozen --locked Either of these flags requires that the Cargo.lock file is up-to-date. If the lock file is missing, or it needs to be updated, Cargo will exit with an error. The --frozen flag also prevents Cargo from attempting to access the network to determine if it is out-of-date.
These may be used in environments where you want to assert that the Cargo.lock file is up-to-date (such as a CI build) or want to avoid network access.
--offline Prevents Cargo from accessing the network for any reason. Without this flag, Cargo will stop with an error if it needs to access the network and the network is not available. With this flag, Cargo will attempt to proceed without the network if possible.
Beware that this may result in different dependency resolution than online mode. Cargo will restrict itself to crates that are downloaded locally, even if there might be a newer version as indicated in the local copy of the index. See the cargo-fetch(1) command to download dependencies before going offline.
Common Options
-h --help Prints help information.
-Z FLAG… Unstable (nightly-only) flags to Cargo. Run cargo -Z help for details.
ENVIRONMENT See the reference for details on environment variables that Cargo reads.
Exit Status 0 Cargo succeeded.
101 Cargo failed to complete.
FILES
~/.cargo/ Default location for Cargo's "home" directory where it stores various files. The location can be changed with the CARGO_HOME environment variable.
$CARGO_HOME/bin/ Binaries installed by cargo-install(1) will be located here. If using rustup, executables distributed with Rust are also located here.
$CARGO_HOME/config The global configuration file. See the reference for more information about configuration files.
.cargo/config Cargo automatically searches for a file named .cargo/config in the current directory, and all parent directories. These configuration files will be merged with the global configuration file.
$CARGO_HOME/credentials
EXAMPLES
1. Build a local package and all of its dependencies: cargo build
BUGS See for issues.
SEE ALSO rustc(1), rustdoc(1)
cargo bench
NAME cargo-bench - Execute benchmarks of a package
SYNOPSIS cargo bench [OPTIONS] [BENCHNAME] [-- BENCH-OPTIONS]
DESCRIPTION Compile and execute benchmarks.
The benchmark filtering argument BENCHNAME and all the arguments following the two dashes ( -- ) are passed to the benchmark binaries and thus to libtest. For details about libtest's arguments, see the output of cargo bench -- --help . As an example, this will
run only the benchmark named foo (and skip other similarly named benchmarks like foobar ):
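The command itself is omitted in this copy; it is presumably:

$ cargo bench -- foo --exact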
Benchmarks are built with the --test option to rustc which creates an executable with a main function that automatically runs all functions annotated with the #[bench] attribute. Cargo passes the --bench flag to the test harness to tell it to run only benchmarks.
The libtest harness may be disabled by setting harness = false in the target manifest settings, in which case your code will need to provide its own main function to handle running benchmarks.
Benchmark Options
--no-run Compile, but don’t run benchmarks.
--no-fail-fast Run all benchmarks regardless of failure. Without this flag, Cargo will exit after the first executable fails. The Rust test harness will run all benchmarks within the executable to completion; this flag only applies to the executable as a whole.
Package Selection
By default, when no package selection options are given, the packages selected depend on the selected manifest file (based on the current working directory if --manifest-path is not given). If the manifest is the root of a workspace then the workspace's default members are selected, otherwise only the package defined by the manifest will be selected.
-p SPEC… --package SPEC… Benchmark only the specified packages. See cargo-pkgid(1) for the SPEC format. This flag may be specified multiple times.
--workspace Benchmark all members in the workspace.
--all Deprecated alias for --workspace .
--exclude SPEC… Exclude the specified packages. Must be used in conjunction with the --workspace flag. This flag may be specified multiple times.
Target Selection
When no target selection options are given, cargo bench will build the following targets of the selected packages:
The default behavior can be changed by setting the bench flag for the target in the manifest settings. Setting examples to bench = true will build and run the example as a benchmark. Setting targets to bench = false will stop them from being benchmarked by default. Target selection options that take a target by name ignore the bench flag and will always benchmark the given target.
Passing target selection flags will benchmark only the specified targets.
--lib Benchmark the package’s library.
--bin NAME… Benchmark the specified binary. This flag may be specified multiple times.
--bins Benchmark all binary targets.
--example NAME… Benchmark the specified example. This flag may be specified multiple times.
--examples Benchmark all example targets.
--test NAME… Benchmark the specified integration test. This flag may be specified multiple times.
--tests Benchmark all targets that have the test flag set (unit tests, integration tests, etc.). Targets may be enabled or disabled by setting the test flag in the manifest settings for the target.
--bench NAME… Benchmark the specified benchmark. This flag may be specified multiple times.
--benches Benchmark all benchmark targets.
--all-targets Benchmark all targets. This is equivalent to specifying --lib --bins --tests --benches --examples .
Feature Selection
When no feature options are given, the default feature is activated for every selected package.
--features FEATURES Space or comma separated list of features to activate. These features only apply to the current directory’s package. Features of direct dependencies may be enabled with <dep-name>/<feature-name> syntax.
-.
Output Options
--target-dir DIRECTORY
Directory for all generated artifacts and intermediate files. May also be specified with the CARGO_TARGET_DIR environment variable, or the build.target-dir config value. Defaults to target in the root of the workspace.
By default the Rust test harness hides output from benchmark execution to keep results readable. Benchmark output can be recovered (e.g., for debugging) by passing --nocapture to the benchmark binaries:
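The command itself is omitted in this copy; it is presumably:

$ cargo bench -- --nocapture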
--message-format FMT The output format for diagnostic messages. Can be specified multiple times and consists of comma-separated values. Valid values:
Manifest Options
--manifest-path PATH Path to the Cargo.toml file. By default, Cargo searches for the Cargo.toml file in the current directory or any parent directory.
Miscellaneous Options
The --jobs argument affects the building of the benchmark executable but does not affect how many threads are used when running the benchmarks. The Rust test harness runs benchmarks serially in a single thread.
-j N --jobs N Number of parallel jobs to run. May also be specified with the build.jobs config value. Defaults to the number of CPUs.
PROFILES
Profiles may be used to configure compiler options such as optimization levels and debug settings. See the reference for more details.
Benchmarks are always built with the bench profile. Binary and lib targets are built separately as benchmarks with the bench profile. Library targets are built with the release profile when linked to binaries and benchmarks. Dependencies use the release profile.
If you need a debug build of a benchmark, try building it with cargo-build(1) which will use the test profile which is by default unoptimized and includes debug information. You can then run the debug-enabled benchmark manually.
EXAMPLES 1. Build and execute all the benchmarks of the current package: cargo bench
SEE ALSO cargo(1), cargo-test(1)
cargo build
NAME cargo-build - Compile the current package
SYNOPSIS cargo build [OPTIONS]
DESCRIPTION Compile local packages and all of their dependencies.
-p SPEC… --package SPEC… Build only the specified packages. See cargo-pkgid(1) for the SPEC format. This flag may be specified multiple times.
--workspace Build all members in the workspace.
--all Deprecated alias for --workspace .
When no target selection options are given, cargo build will build all binary and library targets of the selected packages. Binaries are skipped if they have required-features that are missing.
Passing target selection flags will build only the specified targets.
--lib Build the package's library.
--bin NAME… Build the specified binary. This flag may be specified multiple times.
--bins Build all binary targets.
--example NAME… Build the specified example. This flag may be specified multiple times.
--examples Build all example targets.
--test NAME… Build the specified integration test. This flag may be specified multiple times.
--tests Build all targets that have the test flag set.
--bench NAME… Build the specified benchmark. This flag may be specified multiple times.
--benches Build all benchmark targets. Targets may be enabled or disabled by setting the bench flag in the manifest settings for the target.
--all-targets Build all targets. This is equivalent to specifying --lib --bins --tests --benches --examples .
--target TRIPLE Build for the given architecture. The default is the host architecture. The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi> . Run rustc --print target-list for a list of supported targets.
--release Build optimized artifacts with the release profile. See the PROFILES section for details on how this affects profile selection.
--target-dir DIRECTORY Directory for all generated artifacts and intermediate files. May also be specified with the CARGO_TARGET_DIR environment variable, or the build.target-dir config value. Defaults to target in the root of the workspace.
--out-dir DIRECTORY Copy final artifacts to this directory.
This option is unstable and available only on the nightly channel and requires the -Z unstable-options flag to enable. See for more information.
--build-plan Outputs a series of JSON messages to stdout that indicate the commands to run the build.
This option is unstable and available only on the nightly channel and requires the -Z unstable-options flag to enable. See for more information.
-j N --jobs N Number of parallel jobs to run. May also be specified with the build.jobs config value. Defaults to the number of CPUs.
Profile selection depends on the target and crate being built. By default the dev or test profiles are used. If the --release flag is given, then the release or bench profiles are used.
EXAMPLES 1. Build the local package and all of its dependencies: cargo build
SEE ALSO cargo(1), cargo-rustc(1)
cargo check
NAME cargo-check - Check the current package
SYNOPSIS cargo check [OPTIONS]
DESCRIPTION Check a local package and all of its dependencies for errors. This will essentially compile the packages without performing the final step of code generation, which is faster than running cargo build . The compiler will save metadata files to disk so that future runs will reuse them if the source has not been modified.
-p SPEC… --package SPEC…
Check only the specified packages. See cargo-pkgid(1) for the SPEC format. This flag may be specified multiple times.
--workspace Check all members in the workspace.
When no target selection options are given, cargo check will check all binary and library targets of the selected packages. Binaries are skipped if they have required-features that are missing.
Passing target selection flags will check only the specified targets.
--lib Check the package’s library.
--bin NAME… Check the specified binary. This flag may be specified multiple times.
--bins Check all binary targets.
--example NAME… Check the specified example. This flag may be specified multiple times.
--examples Check all example targets.
--test NAME… Check the specified integration test. This flag may be specified multiple times.
--tests Check all targets that have the test flag set.
--bench NAME… Check the specified benchmark. This flag may be specified multiple times.
--benches Check all benchmark targets.
--all-targets Check all targets. This is equivalent to specifying --lib --bins --tests --benches --examples .
--target TRIPLE Check for the given architecture. The default is the host architecture. The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi> . Run rustc --print target-list for a list of supported targets.
--release Check optimized artifacts with the release profile. See the PROFILES section for details on how this affects profile selection.
--profile NAME Changes check behavior. Currently only test is supported, which will check with the #[cfg(test)] attribute enabled. This is useful to have it check unit tests which are usually excluded via the cfg attribute. This does not change the actual profile used.
PROFILES
Profiles may be used to configure compiler options such as optimization levels and debug settings. See the reference for more details.
EXAMPLES 1. Check the local package for errors: cargo check
SEE ALSO
cargo(1), cargo-build(1)
cargo clean
NAME cargo-clean - Remove generated artifacts
SYNOPSIS cargo clean [OPTIONS]
DESCRIPTION Remove artifacts from the target directory that Cargo has generated in the past.
With no options, cargo clean will delete the entire target directory.
When no packages are selected, all packages and all dependencies in the workspace are cleaned.
-p SPEC… --package SPEC… Clean only the specified packages. This flag may be specified multiple times. See cargo-pkgid(1) for the SPEC format.
Clean Options
--doc This option will cause cargo clean to remove only the doc directory in the target directory.
--release Clean all artifacts that were built with the release or bench profiles.
--target TRIPLE Clean for the given architecture. The default is the host architecture.
--frozen --locked Either of these flags requires that the Cargo.lock file is up-to-date. If the lock file is missing, or it needs to be updated, Cargo will exit with an error. The --frozen flag
also prevents Cargo from attempting to access the network to determine if it is out-of- date.
EXAMPLES
SEE ALSO cargo(1), cargo-build(1)
cargo doc
NAME cargo-doc - Build a package's documentation
SYNOPSIS cargo doc [OPTIONS]
DESCRIPTION Build the documentation for the local package and all dependencies. The output is placed in target/doc in rustdoc’s usual format.
Documentation Options
--open Open the docs in a browser after building them.
--no-deps Do not build documentation for dependencies. 122/22618/01/2020 The Cargo Book
--document-private-items Include non-public items in the documentation.
-p SPEC… --package SPEC… Document only the specified packages. See cargo-pkgid(1) for the SPEC format. This flag may be specified multiple times.
--workspace Document all members in the workspace.
Target selection options that take a target by name ignore the doc flag and will always document the given target.
--lib Document the package’s library.
--bin NAME… Document the specified binary. This flag may be specified multiple times.
--bins Document all binary targets.
--target TRIPLE Document for the given architecture. The default is the host architecture. The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi> . Run rustc --print target-list for a list of supported targets.
--release Document optimized artifacts with the release profile. See the PROFILES section for details on how this affects profile selection.
-v --verbose Use verbose output. May be specified twice for "very verbose" output which includes extra output such as dependency warnings and build script output. May also be specified with the term.verbose config value.
--manifest-path PATH Path to the Cargo.toml file. By default, Cargo searches for the Cargo.toml file in the current directory or any parent directory.
--offline
Prevents Cargo from accessing the network for any reason. Without this flag, Cargo will stop with an error if it needs to access the network and the network is not available. With this flag, Cargo will attempt to proceed without the network if possible.
EXAMPLES 1. Build the local package documentation and its dependencies and output to target/doc .
cargo doc
SEE ALSO cargo(1), cargo-rustdoc(1), rustdoc(1)
cargo fetch
NAME cargo-fetch - Fetch dependencies of a package from the network
SYNOPSIS cargo fetch [OPTIONS]
DESCRIPTION If a Cargo.lock file is available, this command will ensure that all of the git dependencies and/or registry dependencies are downloaded and locally available. Subsequent Cargo commands never touch the network after a cargo fetch unless the lock file changes.
If the lock file is not available, then this command will generate the lock file before fetching the dependencies.
If --target is not specified, then all target dependencies are fetched.
See also the cargo-prefetch plugin which adds a command to download popular crates. This may be useful if you plan to use Cargo without a network with the --offline flag.
Fetch options
--target TRIPLE Fetch for the given architecture. The default is the host architecture. The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi> . Run rustc --print target-list for a list of supported targets.
EXAMPLES 1. Fetch all dependencies: cargo fetch
SEE ALSO cargo(1), cargo-update(1), cargo-generate-lockfile(1)
cargo fix
NAME cargo-fix - Automatically fix lint warnings reported by rustc
The cargo fix subcommand is also being developed for the Rust 2018 edition to provide code the ability to easily opt-in to the new edition without having to worry about any breakage.
Executing cargo fix will under the hood execute cargo-check(1). Any warnings applicable to your crate will be automatically fixed (if possible) and all remaining warnings will be displayed when the check process is finished. For example if you'd like to prepare for the 2018 edition, you can do so by executing:
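The command itself is omitted in this copy; it is presumably:

$ cargo fix --edition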
which behaves the same as cargo check --all-targets . Similarly if you'd like to fix code for different platforms you can do:
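The command is omitted in this copy; presumably it passes --target, for example (the triple shown is just an illustration):

$ cargo fix --target x86_64-pc-windows-gnu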
If you encounter any problems with cargo fix or otherwise have any questions or feature requests please don't hesitate to file an issue at
Fix options
--broken-code Fix code even if it already has compiler errors. This is useful if cargo fix fails to apply the changes. It will apply the changes and leave the broken code in the working directory for you to inspect and manually fix.
--edition Apply changes that will update the code to the latest edition. This will not update the edition in the Cargo.toml manifest, which must be updated manually.
-p SPEC… --package SPEC… Fix only the specified packages. See cargo-pkgid(1) for the SPEC format. This flag may be specified multiple times.
--workspace Fix all members in the workspace.
When no target selection options are given, cargo fix will fix all targets ( --all-targets implied). Binaries are skipped if they have required-features that are missing.
--lib Fix the package’s library.
--bin NAME… Fix the specified binary. This flag may be specified multiple times.
--bins Fix all binary targets.
--example NAME… Fix the specified example. This flag may be specified multiple times.
--examples Fix all example targets.
--test NAME… Fix the specified integration test. This flag may be specified multiple times.
--tests Fix all targets that have the test flag set.
--bench NAME… Fix the specified benchmark. This flag may be specified multiple times.
--benches Fix all benchmark targets.
--all-targets Fix all targets. This is equivalent to specifying --lib --bins --tests --benches --examples .
--target TRIPLE Fix for the given architecture. The default is the host architecture. The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi> . Run rustc --print target-list for a list of supported targets.
--release Fix optimized artifacts with the release profile. See the PROFILES section for details on how this affects profile selection.
--profile NAME Changes fix behavior. Currently only test is supported, which will fix with the #[cfg(test)] attribute enabled. This is useful to have it fix unit tests which are usually excluded via the cfg attribute. This does not change the actual profile used.
EXAMPLES
SEE ALSO cargo(1), cargo-check(1)
cargo run
Package Selection
By default, the package in the current working directory is selected. The -p flag can be used to choose a different package in a workspace.
-p SPEC --package SPEC The package to run. See cargo-pkgid(1) for the SPEC format.
When no target selection options are given, cargo run will run the binary target. If there are multiple binary targets, you must pass a target flag to choose one. Or, the default-run field may be specified in the [package] section of Cargo.toml to choose the name of the binary to run by default.
--bin NAME Run the specified binary.
--example NAME Run the specified example.
--target TRIPLE Run for the given architecture. The default is the host architecture. The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi> . Run rustc --print target-list for a list of supported targets.
--release
Run optimized artifacts with the release profile. See the PROFILES section for details on how this affects profile selection.
EXAMPLES 1. Build the local package and run its main target (assuming only one binary): cargo run
cargo rustc
NAME cargo-rustc - Compile the current package, and pass extra options to the compiler
SYNOPSIS cargo rustc [OPTIONS] [-- ARGS]
DESCRIPTION The specified target for the current package (or package specified by -p if provided) will be compiled along with all of its dependencies. The specified ARGS will all be passed to the final compiler invocation, not any of the dependencies. Note that the compiler will still unconditionally receive arguments such as -L , --extern , and --crate-type , and the specified ARGS will simply be added to the compiler invocation.
This command requires that only one target is being compiled when additional arguments are provided. If more than one target is available for the current package the filters of --lib , --bin , etc, must be used to select which target is compiled. To pass flags to all compiler processes spawned by Cargo, use the RUSTFLAGS environment variable or the build.rustflags config value.
-p SPEC --package SPEC The package to build. See cargo-pkgid(1) for the SPEC format.
When no target selection options are given, cargo rustc will build all binary and library targets of the selected package.
-v --verbose
Use verbose output. May be specified twice for "very verbose" output which includes extra output such as dependency warnings and build script output. May also be specified with the term.verbose config value.
EXAMPLES 1. Check if your package (not including dependencies) uses unsafe code: cargo rustc --lib -- -D unsafe-code
2. Try an experimental ag on the nightly compiler, such as this which prints the size of every type: cargo rustc --lib -- -Z print-type-sizes
SEE ALSO cargo(1), cargo-build(1), rustc(1)
cargo rustdoc
NAME cargo-rustdoc - Build a package's documentation, using specified custom flags
SYNOPSIS cargo rustdoc [OPTIONS] [-- ARGS]
DESCRIPTION The specified target for the current package (or package specified by -p if provided) will be documented with the specified ARGS being passed to the final rustdoc invocation. Dependencies will not be documented as part of this command. Note that rustdoc will still unconditionally receive arguments such as -L , --extern , and --crate-type , and the specified ARGS will simply be added to the rustdoc invocation.
This command requires that only one target is being compiled when additional arguments are provided. If more than one target is available for the current package the filters of --lib , --bin , etc, must be used to select which target is compiled. To pass flags to all rustdoc processes spawned by Cargo, use the RUSTDOCFLAGS environment variable or the build.rustdocflags configuration option.
-p SPEC --package SPEC The package to document. See cargo-pkgid(1) for the SPEC format. Passing target selection flags will document only the specified targets.
--bins Document all binary targets.
--example NAME… Document the specified example. This flag may be specified multiple times.
--examples Document all example targets.
--test NAME… Document the specified integration test. This flag may be specified multiple times.
--tests Document all test targets.
--bench NAME… Document the specified benchmark. This flag may be specified multiple times.
--benches Document all benchmark targets.
--all-targets Document all targets. This is equivalent to specifying --lib --bins --tests --benches --examples .
-q
--quiet No output printed to stdout.
EXAMPLES 1. Build documentation with custom CSS included from a given file: cargo rustdoc --lib -- --extend-css extra.css
SEE ALSO cargo(1), cargo-doc(1), rustdoc(1)
cargo test
NAME cargo-test - Execute unit and integration tests of a package
SYNOPSIS cargo test [OPTIONS] [TESTNAME] [-- TEST-OPTIONS]
DESCRIPTION
The test filtering argument TESTNAME and all the arguments following the two dashes ( -- ) are passed to the test binaries and thus to libtest. For details about libtest's arguments see the output of cargo test -- --help . As an example, this will run all tests with foo in their name on 3 threads in parallel: cargo test foo -- --test-threads 3
Test Options
--no-run Compile, but don’t run tests.
--no-fail-fast Run all tests regardless of failure. Without this flag, Cargo will exit after the first executable fails. The Rust test harness will run all tests within the executable to completion, this flag only applies to the executable as a whole.
-p SPEC… --package SPEC… Test only the specified packages. See cargo-pkgid(1) for the SPEC format. This flag may be specified multiple times.
--workspace Test all members in the workspace.
The default behavior can be changed by setting the test flag for the target in the manifest settings. Setting examples to test = true will build and run the example as a test. Setting targets to test = false will stop them from being tested by default. Target selection options that take a target by name ignore the test flag and will always test the given target.
Doc tests for libraries may be disabled by setting doctest = false for the library in the manifest.
Passing target selection flags will test only the specified targets.
--lib Test the package’s library.
--bin NAME… Test the specified binary. This flag may be specified multiple times.
--bins Test all binary targets.
--example NAME… Test the specified example. This flag may be specified multiple times.
--examples Test all example targets.
--test NAME… Test the specified integration test. This flag may be specified multiple times.
--tests Test all test targets.
--bench NAME… Test the specified benchmark. This flag may be specified multiple times.
--benches Test all benchmark targets.
--all-targets Test all targets. This is equivalent to specifying --lib --bins --tests --benches --examples .
--doc Test only the library’s documentation. This cannot be mixed with other target options.
--all-features Activate all available features of all selected packages.
--target TRIPLE Test for the given architecture. The default is the host architecture. The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi> . Run rustc --print target-list for a list of supported targets.
--release Test optimized artifacts with the release profile. See the PROFILES section for details on how this affects profile selection.
By default the Rust test harness hides output from test execution to keep results readable. Test output can be recovered (e.g., for debugging) by passing --nocapture to the test binaries: cargo test -- --nocapture
--color WHEN Control when colored output is used. Valid values: auto (default), always, never .
The --jobs argument affects the building of the test executable but does not affect how many threads are used when running the tests. The Rust test harness includes an option to control the number of threads used: cargo test -j 2 -- --test-threads=2
Unit tests are separate executable artifacts which use the test / bench profiles. Example targets are built the same as with cargo build (using the dev / release profiles) unless you are building them with the test harness (by setting test = true in the manifest or using the --example flag) in which case they use the test / bench profiles. Library targets are built with the dev / release profiles when linked to an integration test, binary, or doctest.
EXAMPLES 1. Execute all the unit and integration tests of the current package: cargo test
SEE ALSO cargo(1), cargo-bench(1)
cargo generate-lockfile
NAME cargo-generate-lockfile - Generate the lock file for a package
SYNOPSIS cargo generate-lockfile [OPTIONS]
DESCRIPTION This command will create the Cargo.lock lock file for the current package or workspace. If the lock file already exists, it will be rebuilt if there are any manifest changes or dependency updates.
See also cargo-update(1) which is also capable of creating a Cargo.lock lock file and has more options for controlling update behavior.
Exit Status
0 Cargo succeeded.
EXAMPLES 1. Create or update the lock file for the current package or workspace: cargo generate-lockfile
SEE ALSO cargo(1), cargo-update(1)
cargo locate-project
NAME cargo-locate-project - Print a JSON representation of a Cargo.toml file's location
SYNOPSIS cargo locate-project [OPTIONS]
DESCRIPTION This command will print a JSON object to stdout with the full path to the Cargo.toml manifest.
See also cargo-metadata(1) which is capable of returning the path to a workspace root.
OPTIONS
EXAMPLES 1. Display the path to the manifest based on the current directory: cargo locate-project
SEE ALSO cargo(1), cargo-metadata(1)
cargo metadata
NAME cargo-metadata - Machine-readable metadata about the current package
SYNOPSIS cargo metadata [OPTIONS]
DESCRIPTION Output the resolved dependencies of a package, the concrete used versions including overrides, in JSON to stdout.
See the cargo_metadata crate for a Rust API for reading the metadata.
OUTPUT FORMAT The output has the following format:
{ /* Array of all packages in the workspace. It also includes all feature-enabled dependencies unless --no-deps is used. */ "packages": [ { /* The name of the package. */ "name": "my-package", /* The version of the package. */ "version": "0.1.0", /* The Package ID, a unique identifier for referring to the package. */ "id": "my-package 0.1.0 (path+)", /* The license value from the manifest, or null. */ "license": "MIT/Apache-2.0", /* The license-file value from the manifest, or null. */ "license_file": "LICENSE", /* The description value from the manifest, or null. */ "description": "Package description.", /* The source ID of the package. This represents where a package is retrieved from. This is null for path dependencies and workspace members. For other dependencies, it is a string with the format: - "registry+URL" for registry-based dependencies. Example: "registry+- index" - "git+URL" for git-based dependencies. Example: "git+? rev=5e85ba14aaa20f8133863373404cb0af69eeef2c#5e85ba14aaa20f8133863373404cb0af6
*/ "source": null, /* Array of dependencies declared in the package's manifest. */ "dependencies": [ { /* The name of the dependency. */ "name": "bitflags", /* The source ID of the dependency. May be null, see description for the package source. */ "source": "registry+- lang/crates.io-index", /* The version requirement for the dependency. Dependencies without a version requirement have a value of "*". */ "req": "^1.0", /* The dependency kind. "dev", "build", or null for a normal dependency. */ "kind": null, /* If the dependency is renamed, this is the new name for the dependency as a string. null if it is not renamed. */ 167/22618/01/2020 The Cargo Book
"rename": null, /* Boolean of whether or not this is an optional dependency. */ "optional": false, /* Boolean of whether or not default features are enabled. */ "uses_default_features": true, /* Array of features enabled. */ "features": [], /* The target platform for the dependency. null if not a target dependency. */ "target": "cfg(windows)", /* A string of the URL of the registry this dependency is from. If not specified or null, the dependency is from the default registry (crates.io). */ "registry": null } ], /* Array of Cargo targets. */ "targets": [ { /* Array of target kinds. - lib targets list the `crate-type` values from the manifest such as "lib", "rlib", "dylib", "proc-macro", etc. (default ["lib"]) - binary is ["bin"] - example is ["example"] - integration test is ["test"] - benchmark is ["bench"] - build script is ["custom-build"] */ "kind": [ "bin" ], /* Array of crate types. - lib and example libraries list the `crate-type` values from the manifest such as "lib", "rlib", "dylib", "proc-macro", etc. (default ["lib"]) - all other target kinds are ["bin"] */ "crate_types": [ "bin" ], /* The name of the target. */ "name": "my-package", /* Absolute path to the root source file of the target. */ "src_path": "/path/to/my-package/src/main.rs", /* The Rust edition of the target. Defaults to the package edition. */ "edition": "2018", /* Array of required features. 168/22618/01/2020 The Cargo Book
"repository": "", /* The default edition of the package. Note that individual targets may have different editions. */ "edition": "2018", /* Optional string that is the name of a native library the package is linking to. */ "links": null, } ], /* Array of members of the workspace. Each entry is the Package ID for the package. */ "workspace_members": [ "my-package 0.1.0 (path+)", ], /* The resolved dependency graph, with the concrete versions and features selected. The set depends on the enabled features. This is null if --no-deps is specified. By default, this includes all dependencies for all target platforms. The `--filter-platform` flag may be used to narrow to a specific target triple. */ "resolve": { /* Array of nodes within the dependency graph. Each node is a package. */ "nodes": [ { /* The Package ID of this node. */ "id": "my-package 0.1.0 (path+)", /* The dependencies of this package, an array of Package IDs. */ "dependencies": [ "bitflags 1.0.4 (registry+- lang/crates.io-index)" ], /* The dependencies of this package. This is an alternative to "dependencies" which contains additional information. In particular, this handles renamed dependencies. */ "deps": [ { /* The name of the dependency's library target. If this is a renamed dependency, this is the new name. */ "name": "bitflags", /* The Package ID of the dependency. */ "pkg": "bitflags 1.0.4 (registry+)" } ], /* Array of features enabled on this package. */ 170/22618/01/2020 The Cargo Book
"features": [ "default" ] } ], /* The root package of the workspace. This is null if this is a virtual workspace. Otherwise it is the Package ID of the root package. */ "root": "my-package 0.1.0 (path+)" }, /* The absolute path to the build directory where Cargo places its output. */ "target_directory": "/path/to/my-package/target", /* The version of the schema for this metadata structure. This will be changed if incompatible changes are ever made. */ "version": 1, /* The absolute path to the root of the workspace. */ "workspace_root": "/path/to/my-package" }
--no-deps Output information only about the workspace members and don’t fetch dependencies.
--format-version VERSION Specify the version of the output format to use. Currently 1 is the only possible value.
--filter-platform TRIPLE This filters the resolve output to only include dependencies for the given target triple. Without this flag, the resolve includes all targets.
Note that the dependencies listed in the "packages" array still includes all dependencies. Each package definition is intended to be an unaltered reproduction of the information within Cargo.toml .
--features FEATURES
Space or comma separated list of features to activate. These features only apply to the current directory’s package. Features of direct dependencies may be enabled with <dep-name>/<feature-name> syntax. 172/22618/01/2020 The Cargo Book
EXAMPLES 1. Output JSON about the current package: cargo metadata --format-version=1
SEE ALSO cargo(1)
cargo pkgid
NAME cargo-pkgid - Print a fully qualified package specification
SYNOPSIS cargo pkgid [OPTIONS] [SPEC]
DESCRIPTION Given a SPEC argument, print out the fully qualified package ID specifier for a package or dependency in the current workspace. This command will generate an error if SPEC is ambiguous as to which package it refers to in the dependency graph. If no SPEC is given, then the specifier for the local package is printed.
This command requires that a lock file is available and dependencies have been fetched.
A package specifier consists of a name, version, and source URL. You are allowed to use partial specifiers to succinctly match a specific package as long as it matches only one package. The format of a SPEC can be one of the following:
NAME bitflags
URL#NAME index#bitflags
-p SPEC --package SPEC Get the package ID for the given package instead of the current package.
--manifest-path PATH Path to the Cargo.toml file. By default, Cargo searches for the Cargo.toml file in the current directory or any parent directory.
EXAMPLES 1. Retrieve package specification for foo package: cargo pkgid foo
SEE ALSO cargo(1), cargo-generate-lockfile(1), cargo-metadata(1)
cargo update
NAME cargo-update - Update dependencies as recorded in the local lock file
SYNOPSIS cargo update [OPTIONS]
DESCRIPTION This command will update dependencies in the Cargo.lock file to the latest version. It requires that the Cargo.lock file already exists as generated by commands such as cargo-build(1) or cargo-generate-lockfile(1).
Update Options
-p SPEC… --package SPEC… Update only the specified packages. This flag may be specified multiple times. See cargo-pkgid(1) for the SPEC format.
If packages are specified with the -p flag, then a conservative update of the lock file will be performed. This means that only the dependency specified by SPEC will be updated. Its transitive dependencies will be updated only if SPEC cannot be updated without updating dependencies. All other dependencies will remain locked at their currently recorded versions.
--aggressive When used with -p , dependencies of SPEC are forced to update as well. Cannot be used with --precise .
--precise PRECISE When used with -p , allows you to specify a specific version number to set the package to. If the package comes from a git repository, this can be a git revision (such as a SHA hash or tag).
--dry-run Displays what would be updated, but doesn’t actually write the lock file.
EXAMPLES 1. Update all dependencies in the lock file: cargo update
SEE ALSO cargo(1), cargo-generate-lockfile(1)
cargo vendor
NAME cargo-vendor - Vendor all dependencies locally
SYNOPSIS cargo vendor [OPTIONS] [PATH]
DESCRIPTION This cargo subcommand will vendor all crates.io and git dependencies for a project into the specified directory at <path> . After this command completes the vendor directory specified by <path> will contain all remote sources from dependencies specified. Additional manifests beyond the default one can be specified with the -s option.
The cargo vendor command will also print out the configuration necessary to use the vendored sources, which you will need to add to .cargo/config .
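The printed configuration typically looks something like the following (a sketch; the exact output can differ between Cargo versions). It redirects the crates.io source to the local vendor directory:

[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"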
Vendor Options
-s MANIFEST --sync MANIFEST Specify extra Cargo.toml manifests to workspaces which should also be vendored and synced to the output.
--no-delete Don’t delete the "vendor" directory when vendoring, but rather keep all existing contents of the vendor directory
--respect-source-config Instead of ignoring [source] configuration by default in .cargo/config read it and use it when downloading crates from crates.io, for example
--manifest-path PATH Path to the Cargo.toml file. By default, Cargo searches for the Cargo.toml file in the current directory or any parent directory.
--color WHEN Control when colored output is used. Valid values: auto (default), always, never .
Exit Status 0
Cargo succeeded.
EXAMPLES 1. Vendor all dependencies into a local "vendor" folder cargo vendor
cargo verify-project
NAME cargo-verify-project - Check correctness of crate manifest
SYNOPSIS cargo verify-project [OPTIONS]
DESCRIPTION This command will parse the local manifest and check its validity. It emits a JSON object with the result. A successful validation will display: {"success":"true"}
These may be used in environments where you want to assert that the Cargo.lock file is up-to-date (such as a CI build) or want to avoid network access.
--offline Prevents Cargo from accessing the network for any reason. Without this flag, Cargo will stop with an error if it needs to access the network and the network is not available. With this flag, Cargo will attempt to proceed without the network if possible.
Exit Status 0 The workspace is OK.
1 The workspace is invalid.
EXAMPLES 1. Check the current workspace for errors: cargo verify-project
SEE ALSO cargo(1), cargo-package(1)
cargo init
NAME cargo-init - Create a new Cargo package in an existing directory
SYNOPSIS cargo init [OPTIONS] [PATH]
DESCRIPTION This command will create a new Cargo manifest in the current directory. Give a path as an argument to create in the given directory.
If there are typically-named Rust source files already in the directory, those will be used. If not, then a sample src/main.rs file will be created, or src/lib.rs if --lib is passed.
If the directory is not already in a VCS repository, then a new repository is created (see -- vcs below).
The "authors" field in the manifest is determined from the environment or configuration settings. A name is required and is determined from (first match wins):
See the reference for more information about configuration files.
See cargo-new(1) for a similar command which will create a new package in a new directory.
Init Options
--vcs VCS Initialize a new VCS repository for the given version control system (git, hg, pijul, or fossil) or do not initialize any version control at all (none). If not specified, defaults to git or the configuration value cargo-new.vcs , or none if already inside a VCS repository.
--registry REGISTRY This sets the publish field in Cargo.toml to the given registry name which will restrict publishing only to that registry.
Registry names are defined in Cargo config files. If not specified, the default registry defined by the registry.default config key is used. If the default registry is not set and --registry is not used, the publish field will not be set which means that publishing will not be restricted.
-v
--verbose Use verbose output. May be specified twice for "very verbose" output which includes extra output such as dependency warnings and build script output. May also be specified with the term.verbose config value.
SEE ALSO cargo(1), cargo-new(1)
cargo install
The installation root is determined, in order of precedence: the --root option, the CARGO_INSTALL_ROOT environment variable, the install.root Cargo config value, the CARGO_HOME environment variable, $HOME/.cargo .
There are multiple sources from which a crate can be installed. The default location is crates.io but the --git , --path , and --registry flags can change this source. If the source contains more than one package (such as crates.io or a git repository with multiple crates) the CRATE argument is required to indicate which crate should be installed.
Crates from crates.io can optionally specify the version they wish to install via the --version flag. If the source is crates.io or --git then by default the crate will be built in a temporary target directory. To avoid this, the target directory can be specified by setting the CARGO_TARGET_DIR environment variable to a relative path. In particular, this can be useful for caching build artifacts on continuous integration systems.
By default, the Cargo.lock file that is included with the package will be ignored. This means that Cargo will recompute which versions of dependencies to use, possibly using newer versions that have been released since the package was published. The --locked flag can be used to force Cargo to use the packaged Cargo.lock file if it is available. The downside to this approach is that it will not receive any fixes or updates to any dependency. Note that Cargo did not start publishing Cargo.lock files until version 1.37, which means packages published with prior versions will not have a Cargo.lock file available.
Install Options
--vers VERSION --version VERSION Specify a version to install.
--git URL Git URL to install the specified crate from.
--branch BRANCH Branch to use when installing from git.
--tag TAG Tag to use when installing from git.
--rev SHA Specific commit to use when installing from git.
--path PATH Filesystem path to local crate to install.
--list List all installed packages and their versions.
-f --force Force overwriting existing crates or binaries. This can be used to reinstall or upgrade a crate.
--bin NAME… Install only the specified binary.
--bins Install all binaries.
--example NAME… Install only the specified example.
--examples Install all examples.
--root DIR Directory to install packages into.
--registry REGISTRY Name of the registry to use. Registry names are defined in Cargo config files. If not specified, the default registry is used, which is defined by the registry.default config key which defaults to crates-io .
--target TRIPLE
Install for the given architecture. The default is the host architecture. The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi> . Run rustc --print target-list for a list of supported targets.
--debug Build with the dev profile instead of the release profile.
EXAMPLES 1. Install a package from crates.io: cargo install ripgrep
SEE ALSO cargo(1), cargo-uninstall(1), cargo-search(1), cargo-publish(1)
cargo new
NAME cargo-new - Create a new Cargo package
SYNOPSIS cargo new [OPTIONS] PATH
DESCRIPTION This command will create a new Cargo package in the given directory. This includes a simple template with a Cargo.toml manifest, sample source file, and a VCS ignore file. If the directory is not already in a VCS repository, then a new repository is created (see --vcs below).
See cargo-init(1) for a similar command which will create a new manifest in an existing directory.
New Options
EXAMPLES 1. Create a binary Cargo package in the given directory: cargo new foo
SEE ALSO cargo(1), cargo-init(1)
cargo search
NAME cargo-search - Search packages in crates.io
SYNOPSIS cargo search [OPTIONS] [QUERY…]
DESCRIPTION This performs a textual search for crates on crates.io. The matching crates will be displayed along with their description in TOML format suitable for copying into a Cargo.toml manifest.
Search Options
--limit LIMIT Limit the number of results (default: 10, max: 100).
--index INDEX The URL of the registry index to use.
Exit Status
EXAMPLES 1. Search for a package from crates.io: cargo search serde
SEE ALSO cargo(1), cargo-install(1), cargo-publish(1)
cargo uninstall
NAME cargo-uninstall - Remove a Rust binary
SYNOPSIS cargo uninstall [OPTIONS] [SPEC…]
DESCRIPTION This command removes a package installed with cargo-install(1). The SPEC argument is a package ID specification of the package to remove (see cargo-pkgid(1)).
By default all binaries are removed for a crate but the --bin and --example flags can be used to only remove particular binaries.
The installation root is determined, in order of precedence: the --root option, the CARGO_INSTALL_ROOT environment variable, the install.root Cargo config value, the CARGO_HOME environment variable, $HOME/.cargo .
-p --package SPEC… Package to uninstall.
--bin NAME… Only uninstall the binary NAME.
--root DIR Directory to uninstall packages from.
-h
--help Prints help information.
EXAMPLES 1. Uninstall a previously installed package. cargo uninstall ripgrep
SEE ALSO cargo(1), cargo-install(1)
cargo login
NAME cargo-login - Save an API token from the registry locally
SYNOPSIS cargo login [OPTIONS] [TOKEN]
DESCRIPTION This command will save the API token to disk so that commands that require authentication, such as cargo-publish(1), will be automatically authenticated. The token is saved in $CARGO_HOME/credentials . CARGO_HOME defaults to .cargo in your home directory.
If the TOKEN argument is not specified, it will be read from stdin.
Take care to keep the token secret, it should not be shared with anyone else.
Login Options
EXAMPLES 1. Save the API token to disk: cargo login
SEE ALSO cargo(1), cargo-publish(1)
cargo owner
NAME cargo-owner - Manage the owners of a crate on the registry
SYNOPSIS cargo owner [OPTIONS] --add LOGIN [CRATE] cargo owner [OPTIONS] --remove LOGIN [CRATE] cargo owner [OPTIONS] --list [CRATE]
DESCRIPTION This command will modify the owners for a crate on the registry. Owners of a crate can upload new versions and yank old versions. Non-team owners can also modify the set of owners, so take care!
This command requires you to be authenticated with either the --token option or using cargo-login(1).
If the crate name is not specified, it will use the package name from the current directory.
See the reference for more information about owners and publishing.
-a --add LOGIN… Invite the given user or team as an owner.
-r --remove LOGIN… Remove the given user or team as an owner.
-l --list List owners of a crate.
--token TOKEN
API token to use when authenticating. This overrides the token stored in the credentials file (which is created by cargo-login(1)).
Cargo config environment variables can be used to override the tokens stored in the credentials file.
--index INDEX The URL of the registry index to use.
EXAMPLES 1. List owners of a package: cargo owner --list foo
SEE ALSO cargo(1), cargo-login(1), cargo-publish(1)
cargo package
NAME cargo-package - Assemble the local package into a distributable tarball
SYNOPSIS cargo package [OPTIONS]
DESCRIPTION This command will create a distributable, compressed .crate file with the source code of the package in the current directory. The resulting file will be stored in the target/package directory. This performs the following steps:
1. Load and check the current workspace, performing some basic checks. Path dependencies are not allowed unless they have a version key. Cargo will ignore the path key for dependencies in published packages. dev-dependencies do not have this restriction. 2. Create the compressed .crate file. The original Cargo.toml file is rewritten and normalized. [patch] , [replace] , and [workspace] sections are removed from the manifest. Cargo.lock is automatically included if the package contains an executable binary or example target. cargo-install(1) will use the packaged lock file if the --locked flag is used. A .cargo_vcs_info.json file is included that contains information about the current VCS checkout hash if available (not included with --allow-dirty ). 3. Extract the .crate file and build it to verify it can build. 4. Check that build scripts did not modify any source files.
The list of files included can be controlled with the include and exclude fields in the manifest.
See the reference for more details about packaging and publishing.
Package Options
-l --list Print files included in a package without making one.
--no-verify Don’t verify the contents by building them.
--no-metadata Ignore warnings about a lack of human-usable metadata (such as the description or the license).
--allow-dirty Allow working directories with uncommitted VCS changes to be packaged.
--target TRIPLE Package for the given architecture. The default is the host architecture.
-h
EXAMPLES 1. Create a compressed .crate file of the current package: cargo package
cargo publish
NAME cargo-publish - Upload a package to the registry
SYNOPSIS cargo publish [OPTIONS]
DESCRIPTION This command will create a distributable, compressed .crate file with the source code of the package in the current directory and upload it to a registry. The default registry is crates.io. This performs the following steps:
Publish Options
--dry-run Perform all checks without uploading.
--token TOKEN API token to use when authenticating. This overrides the token stored in the credentials file (which is created by cargo-login(1)).
--allow-dirty Allow working directories with uncommitted VCS changes to be packaged.
--target TRIPLE Publish for the given architecture. The default is the host architecture.
EXAMPLES 1. Publish the current package: cargo publish
SEE ALSO cargo(1), cargo-package(1), cargo-login(1)
cargo yank
NAME cargo-yank - Remove a pushed crate from the index
SYNOPSIS cargo yank [OPTIONS] --vers VERSION [CRATE]
DESCRIPTION The yank command removes a previously published crate's version from the server's index. This command does not delete any data, and the crate will still be available for download via the registry's download link. Cargo will, however, not allow any new crates to be locked to any yanked version.
--vers VERSION The version to yank or un-yank.
--undo Undo a yank, putting a version back into the index.
Cargo config environment variables can be used to override the tokens stored in the credentials file. The token for crates.io may be specified with the CARGO_REGISTRY_TOKEN environment variable. Tokens for other registries may be specified with the CARGO_REGISTRIES_NAME_TOKEN environment variable, where NAME is the name of the registry in all capital letters.
EXAMPLES 1. Yank a crate from the index: cargo yank --vers 1.0.7 foo
cargo help
NAME cargo-help - Get help for a Cargo command
SYNOPSIS cargo help [SUBCOMMAND]
DESCRIPTION Prints a help message for the given command. 217/22618/01/2020 The Cargo Book
EXAMPLES 1. Get help for a command: cargo help build
cargo version
NAME cargo-version - Show version information
SYNOPSIS cargo version [OPTIONS]
DESCRIPTION Displays the version of Cargo.
OPTIONS -v --verbose Display additional version information.
EXAMPLES 1. Display the version: cargo version
We think that it’s very important to support multiple ways to download packages, including downloading from GitHub and copying packages into your package itself.
That said, we think that crates.io offers a number of important benefits: it allows Cargo to resolve dependencies efficiently, and then to efficiently download just the published package, and not other bloat that happens to exist in the repository. This adds up to a significant improvement in the speed of dependency resolution and fetching. As dependency graphs scale up, downloading all of the git repositories bogs down fast. Also remember that not everybody has a high-speed, low-latency Internet connection.
Will Cargo work with C code (or other languages)? Yes!
Cargo handles compiling Rust code, but we know that many Rust packages link against C code. We also know that there are decades of tooling built up around compiling languages other than Rust.
Our solution: Cargo allows a package to specify a script (written in Rust) to run before invoking rustc . Rust is leveraged to implement platform-specific configuration and refactor out common build functionality among packages.
Can Cargo be used inside of make (or ninja, or similar)? Indeed. While we intend Cargo to be useful as a standalone way to compile Rust packages end-to-end, we know that some people will want to invoke Cargo from other build tools.
Rust itself provides facilities for configuring sections of code based on the platform. Cargo also supports platform-specific dependencies, and we plan to support more per-platform configuration in Cargo.toml in the future.
All commits to Cargo are required to pass the local test suite on Windows. If, however, you find a Windows issue, we consider it a bug, so please file an issue.
The purpose of a Cargo.lock is to describe the state of the world at the time of a successful build. It is then used to provide deterministic builds across whatever machine is building the package by ensuring that the exact same dependencies are being compiled.
This property is most desirable from applications and packages which are at the very end of the dependency chain (binaries). As a result, it is recommended that all binaries check in their Cargo.lock .
For libraries the situation is somewhat different. If Cargo used all of the dependencies' Cargo.lock files, then multiple copies of the library could be used, and perhaps even a version conflict.
In other words, libraries specify semver requirements for their dependencies but cannot see the full picture. Only end products like binaries have a full picture to decide what versions of dependencies should be used.
specify the range that they do work with, even if it’s something as general as “every 1.x.y version.”
Why Cargo.toml ?
As one of the most frequent interactions with Cargo, the question of why the configuration file is named Cargo.toml arises from time to time. The leading capital- C was chosen to ensure that the manifest was grouped with other similar configuration files in directory listings. Sorting files often puts capital letters before lowercase letters, ensuring files like Makefile and Cargo.toml are placed together. The trailing .toml was chosen to emphasize the fact that the file is in the TOML configuration format.
Cargo does not allow other names such as cargo.toml or Cargofile to emphasize the ease of how a Cargo repository can be identified. An option of many possible names has historically led to confusion where one case was handled but others were accidentally forgotten.
Cargo is often used in situations with limited or no network access such as airplanes, CI environments, or embedded in large production deployments. Users are often surprised when Cargo attempts to fetch resources from the network, and hence the request for Cargo to work offline comes up frequently. Once a cargo build has run to completion, the next build is guaranteed not to touch the network so long as Cargo.toml has not been modified in the meantime. This avoidance of the network boils down to a Cargo.lock existing and a populated cache of the crates reflected in the lock file. If either of these components are missing, then they're required for the build to succeed and must be fetched remotely.
As of Rust 1.11.0 Cargo understands a new flag, --frozen , which is an assertion that it shouldn't touch the network. When passed, Cargo will immediately return an error if it would otherwise attempt a network request. The error should include contextual information about why the network request is being made in the first place to help debug as well. Note that this flag does not change the behavior of Cargo, it simply asserts that Cargo shouldn't touch the network as a previous command has been run to ensure that network activity shouldn't be necessary.
Glossary
Artifact
An artifact is the file or set of files created as a result of the compilation process. This includes linkable libraries and executable binaries.
Crate
Every target in a package is a crate. Crates are either libraries or executable binaries. It may loosely refer to either the source code of the target, or the compiled artifact that the target produces. A crate may also refer to a compressed package fetched from a registry.
Edition
A Rust edition is a developmental landmark of the Rust language. The edition of a package is specified in the Cargo.toml manifest, and individual targets can specify which edition they use. See the Edition Guide for more information.
Feature
A feature is a named flag which allows for conditional compilation. A feature can refer to an optional dependency, or an arbitrary name defined in a Cargo.toml manifest that can be checked within source code. Cargo has unstable feature flags which can be used to enable experimental behavior of Cargo itself. The Rust compiler and Rustdoc have their own unstable feature flags (see The Unstable Book and The Rustdoc Book). CPU targets have target features which specify capabilities of a CPU.
Index
The index is the searchable list of crates in a registry.
Lock file
The Cargo.lock lock file is a file that captures the exact version of every dependency used in a workspace or package. It is automatically generated by Cargo. See Cargo.toml vs
Cargo.lock.
Manifest
A manifest is a description of a package or a workspace in a file named Cargo.toml . A virtual manifest is a Cargo.toml file that only describes a workspace, and does not include a package.
Member
Package
A package is a collection of source files and a Cargo.toml manifest which describes the package. A package has a name and version which is used for specifying dependencies between packages. A package contains multiple targets, which are either libraries or executable binaries.
The package root is the directory where the package's Cargo.toml manifest is located.
The package ID specification, or SPEC, is a string used to uniquely reference a specific version of a package from a specific source.
Project
Another name for a package.
Registry
A registry is a service that contains a collection of downloadable crates that can be installed or used as dependencies for a package. The default registry is crates.io. The registry has an index which contains a list of all crates, and tells Cargo how to download the crates that are needed.
Source
A source is a provider that contains crates that may be included as dependencies for a package. There are several kinds of sources: registry sources, local registry sources, directory sources, path sources, and git sources.
Spec
See package ID specification.
Target
Cargo Target — Cargo packages consist of targets which correspond to artifacts that will be produced. Packages can have library, binary, example, test, and benchmark targets. The list of targets are configured in the Cargo.toml manifest, often inferred automatically by the directory layout of the source files. Target Directory — Cargo places all built artifacts and intermediate files in the target directory. By default this is a directory named target at the workspace root, or the package root if not using a workspace. The directory may be changed with the --target-dir command-line option, the CARGO_TARGET_DIR environment variable, or the build.target-dir config option. Target Architecture — The OS and machine architecture for the built artifacts are typically referred to as a target. Target Triple — A triple is a specific format for specifying a target architecture. Triples may be referred to as a target triple which is the architecture for the artifact produced, and the host triple which is the architecture that the compiler is running on. The target triple can be specified with the --target command-line option or the build.target config option. The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi> where: arch = The base CPU architecture, for example x86_64 , i686 , arm , thumb , mips , etc. sub = The CPU sub-architecture, for example arm has v7 , v7s , v5te , etc. vendor = The vendor, for example unknown , apple , pc , linux , etc. sys = The system name, for example linux , windows , etc. none is typically used for bare-metal without an OS. abi = The ABI, for example gnu , android , eabi , etc.
Some parameters may be omitted. Run rustc --print target-list for a list of supported targets.
Test Targets
Cargo test targets generate binaries which help verify proper operation and correctness of code. There are two types of test artifacts:
Unit test — A unit test is an executable binary compiled directly from a library or a binary target. It contains the entire contents of the library or binary code, and runs #[test] annotated functions, intended to verify individual units of code. Integration test target — An integration test target is an executable binary compiled from a test target which is a distinct crate whose source is located in the tests directory or specified by the [[test]] table in the Cargo.toml manifest. It is intended to only test the public API of a library, or execute a binary to verify its operation.
Workspace
A virtual workspace is a workspace where the root Cargo.toml manifest does not define a package, and only lists the workspace members.
The workspace root is the directory where the workspace's Cargo.toml manifest is located.
Alright, time to have some fun exploring efficient negative sampling implementations in NumPy…
Negative sampling is a technique used to train machine learning models that generally have several orders of magnitude more negative observations compared to positive ones. And in most cases, these negative observations are not given to us explicitly and instead, must be generated somehow. Today, I think the most prevalent usages of negative sampling are in training Word2Vec (or similar) and in training implicit recommendation systems (BPR). In this post, I'm going to frame the problem under the recommendation system setting — sorry NLP fans.
Problem
For a given user, we have the indices of positive items corresponding to that user. These are items that the user has consumed in the past. We also know the fixed size of the entire item catalog. Oh, we will also assume that the given positive indices are ordered. This is quite a reasonable assumption because positive items are often stored in CSR interaction matrices (err… at least in the world of recommender systems).
And from this information, we would like to sample from the other (non-positive) items with equal probability.
n_items = 10
pos_inds = [3, 7]
Bad Ideas
We could enumerate all the possible choices of negative items and then use
np.random.choice (or similar). However, as there are usually orders of magnitude more negative items than positive items, this is not memory friendly.
Incremental Guess and Check
As a trivial (but feasible) solution, we are going to continually sample a random item from our catalog, and keep items if they are not positive. This will continue until we have enough negative samples.
def negsamp_incr(pos_check, pos_inds, n_items, n_samp=32):
    """ Guess and check with arbitrary positivity check """
    neg_inds = []
    while len(neg_inds) < n_samp:
        raw_samp = np.random.randint(0, n_items)
        if not pos_check(raw_samp, pos_inds):
            neg_inds.append(raw_samp)
    return neg_inds
A major downside here is that we are sampling a single value many times — rather than sampling many values once. And although it will be infrequent, we have to re-sample if we get unlucky and randomly choose a positive item.
This family of strategies will pretty much only differ by how item positivity is checked. We will go through a couple of ways to tinker with the complexity of the positivity check, but keep in mind that the number of positive items is generally small, so these modifications are actually not super-duper important.
Using
in operator on the raw list:
With a
list, the item positivity check is O(n) as it checks every element of the list.
def negsamp_incr_naive(pos_inds, n_items, n_samp=32):
    """ Guess and check with list membership """
    pos_check = lambda raw_samp, pos_inds: raw_samp in pos_inds
    return negsamp_incr(pos_check, pos_inds, n_items, n_samp)
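To make the calling convention concrete, here is a quick usage sketch of my own on the toy problem above (the exact values returned will vary from run to run, since the sampler is random):

import numpy as np

n_items = 10
pos_inds = [3, 7]
samples = negsamp_incr_naive(pos_inds, n_items, n_samp=5)
print(samples)  # e.g. [0, 9, 4, 0, 6]: never 3 or 7; repeats are allowed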
Using
in operator on a set created from the list:
Here, we’re going to first convert our
list into a python
set which is implemented as a hashtable. Insertion is O(1), so the conversion itself is O(n). However, once the set is created, our item positivity check (set membership) will be O(1) thereon after. So we can expect this to be a nicer strategy if
n_samp is large.
def negsamp_incr_set(pos_inds, n_items, n_samp=32):
    """ Guess and check with hashtable membership """
    pos_inds = set(pos_inds)
    pos_check = lambda raw_samp, pos_inds: raw_samp in pos_inds
    return negsamp_incr(pos_check, pos_inds, n_items, n_samp)
Using a binary search on the list (assuming it’s sorted):
One of the best things you can do to exploit the sortedness of a list is to use binary search. All this does is change our item positivity check to O(log(n)).
from bisect import bisect_left

def bsearch_in(search_val, val_arr):
    i = bisect_left(val_arr, search_val)
    return i != len(val_arr) and val_arr[i] == search_val

def negsamp_incr_bsearch(pos_inds, n_items, n_samp=32):
    """ Guess and check with binary search
    `pos_inds` is assumed to be ordered
    """
    pos_check = bsearch_in
    return negsamp_incr(pos_check, pos_inds, n_items, n_samp)
(Aside: LightFM, a popular recommendation system implements this in Cython. They also have a good reason to implement this in a sequential fashion — but we won’t go into that.)
Vectorized Binary Search
Here we are going to address the issue of incremental generation. All random samples will now be generated and verified in vectorized manners. The upside here is that we will reap the benefits of NumPy’s underlying optimized vector processing. Any positives found during this check will then be masked off. A new problem arises in that if we hit any positives, we will end up returning less samples than prescribed by the
n_samp parameter. Yeah, we could fill in the holes with the previously discussed strategies, but let’s just leave it at that.
def negsamp_vectorized_bsearch(pos_inds, n_items, n_samp=32):
    """ Guess and check vectorized
    Assumes that we are allowed to potentially return less than n_samp samples
    """
    raw_samps = np.random.randint(0, n_items, size=n_samp)
    ss = np.searchsorted(pos_inds, raw_samps)
    pos_mask = raw_samps == np.take(pos_inds, ss, mode='clip')
    neg_inds = raw_samps[~pos_mask]
    return neg_inds
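If returning fewer than n_samp samples is not acceptable, one option (a sketch of my own, not from the original post; the name negsamp_vectorized_bsearch_topup is made up) is to keep re-running the vectorized sampler above until the batch is full:

def negsamp_vectorized_bsearch_topup(pos_inds, n_items, n_samp=32):
    # Collect vectorized batches until we have n_samp negative samples
    neg_inds = np.empty(0, dtype=int)
    while len(neg_inds) < n_samp:
        batch = negsamp_vectorized_bsearch(pos_inds, n_items,
                                           n_samp - len(neg_inds))
        neg_inds = np.concatenate([neg_inds, batch])
    return neg_inds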
Vectorized Pre-verified Binary Search
Finally, we are going to address both main pitfalls of the guess-and-check strategies.
Vectorize: generate all our random samples at once
Pre-verify: no need for an item positivity check
We know how many negative items are available to be sampled since we have the size of our item catalog, and the number of positive items (
len(pos_inds) is just O(1) ) to subtract off. So let’s sample uniformly over a range of imaginary negative indices with 1–1 correspondence with our negative items. This gives us the correct distribution since we have the correct number of negative item slots to sample from; however, the indices now need to be adjusted.
To fix our imaginary index, we must add the number of positive items that precede each position. Assuming our positive indices are sorted, this is just a binary search (compliments of np.searchsorted). But keep in mind that in our search, for each positive index, we also need to subtract the number of positive items that precede each position.
def negsamp_vectorized_bsearch(pos_inds, n_items, n_samp=32):
    """ Pre-verified with binary search
    `pos_inds` is assumed to be ordered
    """
    raw_samp = np.random.randint(0, n_items - len(pos_inds), size=n_samp)
    pos_inds_adj = pos_inds - np.arange(len(pos_inds))
    ss = np.searchsorted(pos_inds_adj, raw_samp, side='right')
    neg_inds = raw_samp + ss
    return neg_inds
Briefly, let’s look at how this works for all possible raw sampled values.
n_items = 10
pos_inds = [3, 7]

# raw_samp = np.random.randint(0, n_items - len(pos_inds), size=n_samp)
# Instead of sampling, see what happens to each possible sampled value
raw_samp = np.arange(0, n_items - len(pos_inds))
raw_samp: array([0, 1, 2, 3, 4, 5, 6, 7])

# Subtract the number of positive items preceding
pos_inds_adj = pos_inds - np.arange(len(pos_inds))
pos_inds_adj: array([3, 6])

# Find where each raw sample fits in our adjusted positive indices
ss = np.searchsorted(pos_inds_adj, raw_samp, side='right')
ss: array([0, 0, 0, 1, 1, 1, 2, 2])

# Adjust our raw samples
neg_inds = raw_samp + ss
neg_inds: array([0, 1, 2, 4, 5, 6, 8, 9])
As desired, each of our sampled values has a 1–1 mapping to a negative item.
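As a rough sanity check (my own sketch, not part of the original post), we can draw many batches and confirm that the positive items are never emitted and that the negative items appear at roughly equal rates:

counts = np.zeros(n_items, dtype=int)
for _ in range(10_000):
    draws = negsamp_vectorized_bsearch(np.array(pos_inds), n_items, n_samp=4)
    counts += np.bincount(draws, minlength=n_items)

# counts[3] and counts[7] should be exactly 0; the rest roughly equal
print(counts)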
Summary Notebook with Results
The notebook linked below compares the implementations discussed in this post in some example scenarios. The previously discussed “Vectorized Pre-verified Binary Search” strategy seems to be the most performant except in the edge case where
n_samp=1 where vectorization no longer pays off (in that case, all strategies are very close).
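For a quick timing comparison outside the notebook, something like timeit works fine (again a sketch; the catalog size, positive-index spacing, and repeat count are arbitrary choices of mine, and note that the pre-verified implementation above reuses the name negsamp_vectorized_bsearch, so whichever definition ran last is the one being timed):

import timeit

n_items = 100_000
pos_inds = list(range(0, n_items, 97))  # ~1000 sorted positive indices

for fn in (negsamp_incr_naive, negsamp_incr_set,
           negsamp_incr_bsearch, negsamp_vectorized_bsearch):
    t = timeit.timeit(lambda f=fn: f(pos_inds, n_items, n_samp=32), number=200)
    print(f'{fn.__name__:30s} {t:.3f}s')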
Concluding Remarks
In models that require negative sampling, the sampling stage is often a bottleneck in the training process. So even little optimizations like this are pretty helpful.
Some further thinking:
- how to efficiently sample for many users at a time (variable length number of positive items)
- at what point (sparsity of our interaction matrix) does our assumption that
n_neg_items >> n_pos_itemswreck each implementation
- how easy is it to modify each implementation to accommodate for custom probability distributions — if we wanted to take item frequency or expose into account | https://tech.hbc.com/2018-03-23-negative-sampling-in-numpy.html | CC-MAIN-2019-35 | en | refinedweb |
import updates server 2016
Please fix the broken link when importing updates into WSUS in Server 2016 from the update catalog.
The issue is that the link contains a protocol version 1.20 at the end but the correct version is 1.80!
Karl Wester-Ebbinghaus (@tweet_alqamar) commented
still not fixed
Set free your inner geek! Make fridge magnets from old keyboard keys!
My!
Step 1: Obtain the Necessary Tools and Supplies
Here are the things you'll need...
Step 2: Disassemble Your Keyboard
To begin, flip your 'board over. You'll likely find at least two screws. Remove them and set them aside. (You won't be needing them, unless you like to save them) Now try to pull the top and bottom of your keyboard apart. If it's like mine, it'll probably open by a sort of hinge at the top, which slides out once the case opens. If it seems to be stuck, make sure you got _all_ of the screws out. Sometimes manufacturers like to hide them under little rubber feet or stickers. If it still doesn't want to come open, you can always try using brute strength... We won't be needing the case later anyway.
Inside, you'll probably find a small circuit board in one corner (probably the corner where the cord comes out) and the rest will be concealed by a large metal sheet, held in place by one or more screws. Again, remove them and set them aside. Then lift off the metal sheet. (Your keyboard might not have a metal sheet, I've only tried this on one keyboard which used the sheet as a ground.) You should see one or more layers of plastic with circuit traces on them, and some kind of rubber sheet with little bumps on it. Either that, or there will be little rubber buttons on the sheet itself. The rubber things are what make your keys springy! None of this is important, unless you want to save the sheets to make a cool wallet. Anyway, you should now be looking at the back of the keys. On to the next step!
Step 3: Removing the Keys
The keys consist of: a) the actual key and, b) a little plastic plunger thing. The best way I've found to remove the keys from the keyboard is to take a small flathead screwdriver, insert it into the key, then lift the keyboard and push. The key should pop away from the plunger and onto your desk. As of yet, I haven't found a use for the plungers... They look kinda cool though. Remove all of the keys from the board using this method, or any other methods that might be more suited to your individual board. Note that removing the larger keys, such as return (enter) and the spacebar is exactly the same as removing the smaller ones, because they still only have one actual contact point. Note that they may have little guiding tabs that also stick through the board. Be sure you're pushing on the point that actually connects the key to the plunger, because you might risk damaging the key if you push anywhere else.
Step 4: Cleaning the Keys
Since the keyboard keys are one of the dirtiest surfaces in your home, and your kitchen should be one of the cleanest surfaces in your home, I think it's a good idea to clean the keyboard keys before putting them on your fridge.
I just soaked them in warm soapy water and scrubbed the dirtiest ones with a toothbrush. Boiling them would probably be easier, but I didn't feel like using the stove.
Step 5: Making the Magnets
And now, the moment you've all been waiting for.... Now we get to make the magnets!!!
Alright, I know you're psyched, but first we have to let your hot glue gun warm up. (Or if you're really patient, you can use other glue.) Start by picking the key you're going to back with magnet. Cut a strip (or a chunk if you're not using the stuff on a roll like me) a little bigger than the key you're going to back. Then, apply glue all around the edge of the key, and stick it onto the magnet. (If you're using the store bought stuff, be sure to peel off the backing first if you're using the stick on kind. Don't just stick the magnet on to the key because the glue that they back the magnet with isn't worth beans.) Wait for the glue to fully dry and cure, and then trim off the excess magnet around the edge. Congratulations! Your first key is done. (Don't celebrate too much, because you've still got a lot more to do.)
One little tip: If you want to do the keyboard the same way I did, try grouping the keys into sections. This way, your kitchen won't be filled with keyboard keys that fall off the fridge and trip you up. The one downside to this is that the keys tend to fall off the magnet backing if you grab them the wrong way. The solution? I think you could just use stronger glue....
The enter and spacebar (and possibly other) keys are different than the rest. They have little plastic tabs that get in the way of the magnet strip...
Step 6: The Tricky Keys
For the enter and spacebar keys, you'll first need to remove a few little plastic tabs before you can glue the magnet on. To do this, you'll need your cutting pliers. Simply snip off the extruding tabs using brute strength. For this step, wear safety goggles (or shield your eyes.) I had one plastic tab that ricocheted off of several objects in my kitchen before coming to rest on the floor. Then just sand off any little bits of the tab that are left. Then you can glue the magnet on.
Also, if you're grouping the arrow keys like I did, make sure you got them into the correct configuration or they'll look odd. To make sure that they're in the correct alignment, make sure that all of the pictures on the keys are in the same place (all arrows in upper-left corner.) Also, make sure that all of the keys slant the same direction. If you are using a standard desktop keyboard, this usually means that the keys have a lower top than bottom when lying flat.
If you are making a group of keys that has more than one row, and the keys stick off of the magnetic strip a bit, make sure you glue them really well so that they don't fall off. Some of my arrow keys came off the strip after I was finished, so I had to use a stronger glue to repair them.
Step 7: Finishing Up
Yep, you're done. How long did it take you to glue all the keys? It took me two and a half hours. Yeesh. Anyway, if the keys start to fall off of the magnet base over time, it's probably time to re-glue with something stronger. You wouldn't want to throw them away after all that hard work you did....
That's it! Enjoy your new magnets. Stick em' to your fridge, or anything else that they'll stick to.... And have fun!
12 Discussions
4 years ago on Introduction
Why not fill the keys with plaster of Paris, Bondo, clay or some other moldable stuff to give the key more heft and more gluing surface area?
10 years ago on Step 4
I used a quicker, easier method for cleaning when I made the keyboard thumb tacks. Put the keyboard in the dish washer before you remove the keys. Clean as new.
Reply 6 years ago on Step 4
I second this method. I often use it to clean my desktop's keyboard after removing the electronics. Put it in the top rack and it will look like new when it's done. Takes a while to dry fully but well worth it.
9 years ago on Introduction
wow this is amazing, I would have never thought of this
10 years ago on Step 7
Great instructions! I'll have to make sure I make some time to do this... my 2 year old will love having these on the fridge! (so will my rude friends... i think i'll separate most of the keys to have some fun.. i've got a couple of spare keyboards hanging around :) +++
10 years ago on Introduction
What brand/make of keyboards are you guys/gals using? I ask because I have opened up 4 different brands of keyboards and none of them have a separate plunger and key. All of the keys are molded with the plunger as one piece. Thanks for your help.
12 years ago on Introduction
I guess this isn't a new idea... I didn't see this before I wrote mine... Ugh, there goes an entire day of making magnets. I feel dumb. Oh well, my way of doing it is a bit different than his, so technically it's a project with different methods and ideas.
Reply 10 years ago on Introduction
Echoing what other posts are saying, ideas don't exist in a vacuum. We create things based on what we've seen and our experiences. Truly original and unique ideas are very rare. New methods, different methods , or even just improved methods can often move whole industries forward. Sharing your methods and ideas may inspire others. So please share!
Reply 12 years ago on Introduction
There are multiple ways to do the same thing, and they are all valid Instructables.
Reply 12 years ago on Introduction
No worries - there's always room for more than one ;)
11 years ago on Introduction
ha ha ha cool I want to do it ,I have an old keyboard with all the buttons ripped out.
12 years ago on Introduction
Hiya,. | https://www.instructables.com/id/Keyboard-Refrigerator-Magnets---New-Method/ | CC-MAIN-2019-35 | en | refinedweb |
.[Read More]
March 23, 2017Marion Desmazieres
By Sam Morgan, Head of Education at Makers Academy
Editor’s note: This is part one of our Makers Academy series for Ruby developers. Learn more about this free training on the Alexa Skills Kit in this blog post.
Welcome to the first module of Makers Academy's short course on building Alexa skills using Ruby. Amazon's Alexa Skills Kit allows developers to extend existing applications with deep voice integration and construct entirely new applications that leverage the cutting-edge voice-controlled technology.
This course will cover all the terminology and techniques required to get fully-functional skills pushed live to owners of Alexa-enabled devices all around the world using Ruby and Sinatra.
This module contains a basic introduction to scaffolding a skill and interacting with Alexa. This module introduces:
During this module, you will construct a simple skill called “Hello World.” While building this skill, you will come to understand how the above concepts work and play together. This module uses:
Let's get started![Read More] 17, 2017Jeff Blankenburg
We all hold interesting data in our heads. Maybe it's a list of all the action figures we played with as a kid, specific details about the 50 U.S. states, or a historical list of the starting quarterbacks for our favorite football team. When we're with friends, sometimes we'll even quiz each other on these nuanced categories of information. It's a fun, interactive way to share our knowledge and learn more about our favorite topics.
You can now bring that experience to Alexa using our new quiz skill template. You provide the data and the number of properties in that data, and Alexa will dynamically build a quiz game for you.]
March.[Read More]
March 15, 2017David Isbitski
Amazon today announced a new program that will make it free for tens of thousands of Alexa developers to build and host most Alexa skills charges each month.
Now, developers with a live Alexa skill can apply to receive a $100 AWS promotional credit and can also receive. 03, 2017Michael Palermo
Today we are happy to announce lock control and query, a new feature in the Smart Home Skill API now available in the US, with support for the UK and Germany coming soon. This feature is supported with locks from August, Yale, Kwikset, and Schlage as well as hub support from SmartThings and Wink. Now any developer targeting devices with locking behavior can enable customers to issue a voice command such as, “Alexa, lock the front door.” In addition, developers can build in support for customers asking for the status of a smart locking device with a voice command such as, “Alexa, is the front door locked?”
Much like the recently announced thermostat query feature, the lock query feature simplifies development efforts by enabling specific voice interactive experiences straight from the Smart Home Skill API. This is accomplished under the new Alexa.ConnectedHome.Query namespace.
Developers can report errors using the same namespace. These errors are then used to guide the customer with the proper corrective actions. It’s crucial that developers return meaningful and correct errors so that customers can feel confident about the status of their locks. For example, if the smart locking device is unable to provide a stateful value because a door is open, developers should report this in their directive response as shown below.] | https://developer.amazon.com/ja/blogs/alexa/?page=60 | CC-MAIN-2019-35 | en | refinedweb |
You need to sign in to do that
Don't have an account?
Trigger to update lead status when activity is logged
trigger changeLeadStatus on Task (before insert, before update) {
String desiredNewLeadStatus = 'Working';
List<Id> leadIds=new List<Id>();
for(Task t:trigger.new){
if(t.Status=='Completed'){
if(String.valueOf(t.whoId).startsWith('00Q')==TRUE){//check if the task is associated with a lead
leadIds.add(t.whoId);
}//if 2
}//if 1
}//for
List<Lead> leadsToUpdate=[SELECT Id, Status FROM Lead WHERE Id IN :leadIds AND IsConverted=FALSE];
For (Lead l:leadsToUpdate){
l.Status=desiredNewLeadStatus;
}//for
try{
update leadsToUpdate;
}catch(DMLException e){
system.debug('Leads were not all properly updated. Error: '+e);
}
}//trigger
The process did not set the correct Type value on submitting for approval
Challenge not yet complete... here's what's wrong:
The process did not set the correct Type value on submitting for approval
I'm not sure why it isn't approving. Please help!
Here is mine version of that Approval process which worked perfectly.
You will notice that in Approval steps I don't have any rejection step which is marked as red bold in your Approval process. remove that criteria and it should work.
Thanks,
Himanshu
.
Create an Apex class that uses the @future annotation to update Account records.
Create an Apex class with a method using the @future annotation that accepts a List of Account IDs and updates a custom field on the Account object with the number of contacts associated to the Account. Write unit tests that achieve 100% code coverage for the class.
Create a field on the Account object called 'Number_of_Contacts__c' of type Number. This field will hold the total number of Contacts for the Account.
Create an Apex class called 'AccountProcessor' that contains a 'countContacts' method that accepts a List of Account IDs. This method must use the @future annotation.
For each Account ID passed to the method, count the number of Contact records associated to it and update the 'Number_of_Contacts__c' field with this value.
Create an Apex test class called 'AccountProcessorTest'.
The unit tests must cover all lines of code included in the AccountProcessor class, resulting in 100% code coverage.
Run your test class at least once (via 'Run All' tests the Developer Console) before attempting to verify this challenge.
public class AccountProcessor
{
@future
public static void countContacts(Set<id> setId)
{
List<Account> lstAccount = [select id,Number_of_Contacts__c , (select id from contacts ) from account where id in :setId ];
for( Account acc : lstAccount )
{
List<Contact> lstCont = acc.contacts ;
acc.Number_of_Contacts__c = lstCont.size();
}
update lstAccount;
}
}
and
@IsTest
public class AccountProcessorTest {
public static testmethod void TestAccountProcessorTest(){
Account a = new Account();
a.Name = 'Test Account';
Insert a;
Contact cont = New Contact();
cont.FirstName ='Bob';
cont.LastName ='Masters';
cont.AccountId = a.Id;
Insert cont;
set<Id> setAccId = new Set<ID>();
setAccId.add(a.id);
Test.startTest();
AccountProcessor.countContacts(setAccId);
Test.stopTest();
Account ACC = [select Number_of_Contacts__c from Account where id = :a.id LIMIT 1];
System.assertEquals ( Integer.valueOf(ACC.Number_of_Contacts__c) ,1);
}
}
Error "Executing against the trigger does not work as expected."
I have checked the name of the class, task name as mentioned in the challenge description.
I have copied the code below :
trigger ClosedOpportunityTrigger on Opportunity (before insert, before update) {
List<Task> taskList = new List<Task>();
//If an opportunity is inserted or updated with a stage of 'Closed Won'
// add a task created with the subject 'Follow Up Test Task'.
for (Opportunity opp : [SELECT Id,Name FROM Opportunity
WHERE Id IN :Trigger.new AND StageName = 'Closed Won']) {
//add a task with subject 'Follow Up Test Task'.
taskList.add(new Task(Subject='Follow Up Test Task', WhatId = opp.id ));
}
if (taskList.size() > 0) {
insert taskList;
}
Thank you
Pierre-Alain
Please select this as a best answer.
Is there a way to perform validation for apex:inputField?
For example, there are 2 field, users name and email. Users cannot leave the name field blank, if so, I want to show a message of "This field is required to fill in" beside the field. And for email field, I want to have another message to remind users to insert a valid email.
Is there anyway in visualforce to do so? Thanks for your help.
You can add something like this:
1. Check the Empty
<apex:page
<script>
function show()
{
var name=document.getElementById('page:f1:p1:ip1').value;
if(name== "" || name==null)
{
document.getElementById("page:f1:p1:op2").innerHTML = "Please enter your name";
}
}
</script>
<apex:form
<apex:pageblock
<apex:outputlabel
<apex:inputtext
<apex:commandbutton
<apex:outputlabel
</apex:pageblock>
</apex:form>
</apex:page>
---> Besides salesforce has the on field validation, so u can have that option.
2. create formula syntax
Regression : '([a-zA-Z0-9_\\-\\.]+)@(((\\[a-z]{1,3}\\.[a-z]{1,3}\\.[a-z]{1,3}\\.)|(([a-zA-Z0-9\\-]+\\.)+))([a-zA-Z]{2,4}|[0-9]{1,3}))'
Thanks
Aniket
Difference between Process Builder and Flows with example ?
Thanks in Advance.
AJ
Process Builder:
Process Builder is.
I suggest you, to complete the trailhead of process builder it has a better example to make you clear about it.
Flow:
Flow is a powerful business automation tool that can manipulate data in Salesforce in a variety of ways. Such an application can be created right from the org’s setup with just drag-drop/point-click. The ease of creating flows makes it the number one go-to tool when it comes to complex business requirements
The trailhead for process builder:
Useful link for Flow with an example:
I hope you find the above solution helpful. If it does, please mark as Best Answer to help others too.
Thanks and Regards,
Deepali Kulshrestha.
Generate an Apex class using WSDL2Apex and write a test class.
The Challenge is as follows:
Generate an Apex class using WSDL2Apex and write a test class.
Generate an Apex class using WSDL2Apex for a SOAP web service, write unit tests that achieve 100% code coverage for the class using a mock response, and run your Apex tests.
Use WSDL2Apex to generate a class called 'ParkService' in public scope using this WSDL file. After you click the 'Parse WSDL' button don't forget to change the name of the Apex Class Name from 'parksServices' to 'ParkService'.
Create a class called 'ParkLocator' that has a 'country' method that uses the 'ParkService' class and returns an array of available park names for a particular country passed to the web service. Possible country names that can be passed to the web service include Germany, India, Japan and United States.
Create a test class named ParkLocatorTest that uses a mock class called ParkServiceMock to mock the callout response.
The unit tests must cover all lines of code included in the ParkLocator class, resulting in 100% code coverage.
Run your test class at least once (via 'Run All' tests the Developer Console) before attempting to verify this challenge.
The error I receive when checking the challencge is:
Challenge Not yet complete... here's what's wrong:
Executing the 'country' method on 'ParkLocator' failed. Make sure the method exists with the name 'country', is public and static, accepts a String and returns an array of Strings from the web service.
Here is the code I am using:
public class ParkLocator { public static String[] country(String ctry) { ParkService.ParksImplPort prk = new ParkService.ParksImplPort(); return prk.byCountry(ctry); } }
and
@isTest global class ParkServiceMock implements WebServiceMock { global void doInvoke( Object stub, Object request, Map<String, Object> response, String endpoint, String soapAction, String requestName, String responseNS, String responseName, String responseType) { // start - specify the response you want to send ParkService.byCountryResponse response_x = new ParkService.byCountryResponse(); List<String> myStrings = new List<String> {'Park1','Park2','Park3'}; response_x.return_x = myStrings; // end response.put('response_x', response_x); } }
and
@isTest private class ParkLocatorTest { @isTest static void testCallout() { // This causes a fake response to be generated Test.setMock(WebServiceMock.class, new ParkServiceMock()); // Call the method that invokes a callout List<String> result = new List<String>(); List<String> expectedvalue = new List<String>{'Park1','Park2','Park3'}; result = ParkLocator.country('India'); // Verify that a fake result is returned System.assertEquals(expectedvalue, result); } }
Any help which can be provided is greatly appreciated. If you could advise me at [email protected] if you reply with a solution, I can log in to check it.
Thanks.
Ryan
Use below code for ParkLocator class.
public class ParkLocator { public static String[] country(String country){ ParkService.ParksImplPort parks = new ParkService.ParksImplPort(); String[] parksname = parks.byCountry(country); return parksname; } }If this not resolves the problem then use a new Developer Org for completing the Challenge.
Let me know if this helps :)
How to automatically authorize an application without user interaction
I dont know much about PHP, i am posting generic CURL statement that will be hlepful to understand.
- Use Username and passwords to login to saleforce and get the access token, which when appended with REST calls to salesforce will give data back.For this someone has to create an App inside Salesforce whic will provide client secret and ID
curl https://<instance>.salesforce.com/services/oauth2/token -d "grant_type=password" -d "client_id=myclientid" -d "client_secret=myclientsecret" -d "[email protected]" -d "password=mypassword123456"
</pre>
This will return JSON format data which will contain acceess_token to be used for further calls. and than use REST URL as follows
<pre>
curl -H "Authorization: Bearer "THIS_IS_THE_ACCESS_TOKEN_RECEIVED_EARLIER>"
</pre>.
1) u create project obj and API project__C.
2) choose date type of project...text (not for any number)
3).in this create field priority with API priority__C.
4) in security controlls owd set public read only..on project obj& also give sharing rule name as ur wish.(i am given that private)
5) if all ready exist Training Coordinator in ur role highrarchey then don.t add..its..if not there add Training Coordinator.on under ceo role.
then u got 500 points........ok bye..
You need to create the same trigger on the Event Object. Events and Tasks are both activities, but they are separate objects (They are special in that way). You already have the code, just copy it for Events and you should be good!
Hope this helps! | https://developer.salesforce.com/forums?communityId=09aF00000004HMGIA2 | CC-MAIN-2019-35 | en | refinedweb |
Top Answers to Salesforce Interview Questions
Here are some of the top benefits of Salesforce CRM:
- Ensuring faster and better sales opportunity
- Deploying an analytical approach to customer acquisition
- Reducing cost and improving customer satisfaction
- Automation of repetitive and less important tasks
- Improved efficiency and enhanced communication on all fronts
Know more about why you should go for the Salesforce Certification Training? through this blog.
Simply put, custom objects are database tables in Salesforce. All the data related to an enterprise can be stored in Salesforce.com. There is a need for a junction object which is a custom object, and it has a Master–Detail relationship. You can create a master–detail relationship between two objects, and then connect a child object as a related list. Custom objects, which can be listed in Custom Settings, has a set of static data which is reusable.
These custom objects have to be defined first and then the following steps need to be followed:
- Join records with custom objects
- Custom object data are displayed in custom lists
- Create a custom tab for a custom object
- Build page layouts
- Create a dashboard and a report for analyzing the custom object
- Custom tab, app, and object can be shared
In Salesforce, you can link the standard and custom object records in a related list. It is done by the object relationship overview. Various types of relationships can be created in order to connect specific business cases with specific customers. It is possible to create a custom relationship on an object and define various relationship types.
Object relations in Salesforce can be of the following types:
- One to many
- Many to many
- Master–Detail
Now that you are aware of the benefits of Salesforce, for more detail check the Salesforce course.
An app in Salesforce is a container which contains name, logo, and a group of tabs that work as a unit to provide the functionality. Users can switch between apps using the Force.com app’s drop-down menu at the top-right corner of every page.
Some of the main benefits of Salesforce SaaS are:
- A pay-as-you-go model perfectly suites all customers
- No hassle of infrastructure management
- All applications are accessed via the Internet
- Easy integration between various applications
- Latest features are provided without any delay
- Guaranteed uptime and security
- Scalable performance for various operations
- Ability to access via mobile devices from anywhere
Learn more about Salesforce in this insightful Salesforce blog!
Salesforce is very meticulous when it comes to recording intricate details like sales numbers, customer details, customers served, repeat customers, etc. in order to create detailed reports, charts, and dashboards for keeping track of sales.
Workflow in Salesforce is basically a container or business logic engine which automates certain actions based on particular criteria. If the criteria is true, the actions get executed. When it is false, the record will get saved but no action will get executed..
Go through the Salesforce Course in London to get clear understanding of Salesforce.
A Master–Detail relationship is basically a Parent–Child relationship, in which ‘Master’ represents the Parent and other details represent the Child. If Parent is deleted then, Child also gets deleted. Roll-up summary fields can only be created on Master records which will calculate the SUM, AVG, and MIN of the Child records.
In a Master–Detail relationship, when a Master record is deleted, the Detail record also gets deleted, automatically.
In a Lookup relationship, the Child record will not be deleted, even if the Parent record is deleted.
Yes, you can have a roll-up summary in the case of a Master-Detail relationship. But not in the case of a lookup relationship. This is because a roll-up summary field is used to display a value in the Master record based on the values of a set of fields in the Detail record.
An sObject is any object which can be stored in the Force.com platform database. Apex allows the use of generic sObject abstract type to represent any object.
For example, ‘vehicle’ is a generic type and ‘car’ and ‘motorbike’ all are concrete types of ‘vehicle’.
Go through this Salesforce Tutorial to learn more about Salesforce end to end.
Triggers in Salesforce are called Apex Triggers. These are distinct and are available specifically for common and expected actions like lead conversions. It is just a code that is executed before or after a record is inserted or updated. A Trigger is different from a Workflow as it is a piece of code; whereas, a Workflow is an automated process and uses no code.
Trigger.new returns a list of records which has been added recently to sObjects. The records which are yet to be saved in the database are returned. Only insert and update triggers have the sObject list, and records can only be modified in before.trigger.
- System Administrator: Customization and administration of an application
- Standard User: Can edit, view, update, or delete one’s own record
- Read Only: Able to just view the records
- Solution Manager: Comes with the standard user permission but also can manage categories and published solutions
- Marketing User: you simplify the design, development, and deployment of cloud-based applications and websites. Salesforce Developers can work with Cloud Integrated Development Environment and deploy the applications on the Force.com servers.
These are described in detail on Salesforce community.
- Tabular report: In this, the grand total is displayed in a table format.
- Matrix report: This is an in-depth report wherein there is both row-based and column-based grouping.
- Summary report: Summary report is a report in which the grouping is on a column basis.
- Joined report: Joining of two or more reports into one creates a joined report.
A Salesforce dashboard can be seen as a visual and pictorial representation of a dashboard with the facility to add up to 20 reports.
Learn more about Salesforce in this Salesforce training in New York to get ahead in your career!
Various dashboard components are explained below:
- Chart: It is used for showing data graphically.
- Gauge: It is used for showing a single value within a range of custom values.
- Metric: This is used for displaying a single key–value. It is possible to click the empty text field next to the grand total and enter the metric label directly on components. All metrics placed above and below one another in the dashboard column would be displayed as a single component.
Various dashboard components are explained below:
- Table: The report data can be shown in column form using Table.
- Visualforce Page: It is used for creating a custom component or showing information not available in other component types.
- Custom S-component: This contains the content that is run or displayed in a browser like Excel file, ActiveX Control, Java applet, or custom HTML web form. tight integration with the database and also deploy auto-generated controllers for database objects. Developers can use Apex codes to write their own controllers. It is also possible to access AJAX components or create their own components.
Interested in learning Salesforce? Click here to learn more in this Salesforce Training in Sydney!
A static resource lets you upload content that is in the form of .jar and .zip formats, style sheets, JavaScript, and so on. It is recommended to deploy a static resource rather than uploading files to the Documents tab since it is possible to pack a set of files into a directory hierarchy and upload it. These files can be easily referred to in a Visualforce page.
Salesforce Object Query Language (SOQL) lets you search only one object, whereas the Salesforce Object Search Language (SOSL) lets you search for multiple objects. You can query for all types of fields in SOQL, but you can query only for text, email, and phone numbers in SOSL. Data Manipulation Language operations can be performed on query results but not on search results.){}
Become Master of Salesforce by going through this online Salesforce course in Toronto.
Collections are a type of variables used to store multiple numbers of records (data). Types of collections in Salesforce are:
- Lists
- Maps
- Sets
Maps are used to store data in the form of key–value pairs, where each unique key maps to a single value.
Syntax: Map<String, String> country_city = new Map<String, String>();
An Apex transaction represents a set of operations that are executed as a single unit. These operations include DML operations which are responsible for querying records. All the DML operations in a transaction either get completed successfully or get rolled back completely, if an error occurs even in saving a single record.
Get certified from top Salesforce course in Singapore Now
Global class is accessible across the Salesforce instance irrespective of namespaces.
Whereas, public classes are accessible only in the corresponding namespaces.
The get (getter) method is used to pass values from the controller to the VF page.
Whereas, the set (setter) method is used to set the value back to the controller variable.
The following fields are automatically indexed in Salesforce:
- Custom fields marked as an external ID or a unique field
- Primary keys (ID, Name, and Owner fields)
- Audit dates (such as SystemModStamp)
- Foreign keys (Lookup or Master–Detail relationship fields)
Time-dependent workflow cannot be created for ‘created, and every time it’s edited’.
Learn Complete Salesforce at Hyderabad in 24 Hrs.
Sandbox is a similar copy of a Salesforce production for testing, development, and training. The content and size of a sandbox may vary depending on the type of sandbox and the edition of the production organization which is associated with the sandbox. There are four types of sandboxes available:
- Developer Sandbox
- Developer Pro Sandbox
- Partial Data Sandbox
- Full Sandbox
An apex class is a template from which Apex objects can be created. These classes consist of other classes, variables, user-defined methods, exception types, and the static initialization code.
It is a cloud-based CRM which doesn’t require IT experts to set up or manage the cloud. One can simply log in and connect to the customers directly. CRM Salesforce system is a well-organized platform which provides information to its customers from different sources. It is a customer-centric system which integrates customers’ information for an organization’s benefit.
Salesforce Lightning is a platform that provides tools to every organization to build next-generation UI and UX in Salesforce. Lightning creates a modern productivity-boosting user experience. It is used to create fast, beautiful, and unique user experience just like real lightning so that sales teams can sell their product faster. Lightning Experience uses an open-source Aura framework. It is a completely re-designed framework to create a modern user interface.
There are various reasons why Batch Apex is better than Normal Apex.
- A Normal Apex uses 100 records per cycle to execute SOQL queries. Whereas, a Batch Apex does the same in 200 records per cycle. So, it is very fast when the execution of SOQL queries is considered.
- A Normal Apex can retrieve 50,000 SOQL queries but, in Batch Apex, 50,000,000 SOQL queries can be retrieved.
- A Normal Apex has a heap size of 6 MB; whereas, Batch Apex has a heap size of 12 MB.
- When executing bulk records, Normal Apex classes are more vulnerable to encounter errors as compared to Batch Apex. So, it is normally error-less.
Are you interested in learning Salesforce course in Bangalore from Experts?
Ways to call an Apex class in Salesforce are as follows:
- From Visualforce page
- From developer console
- From JavaScript links
- By using trigger
- From another class
- From home page components
A many-to-many relationship can be created by using a junction object. A Junction object is a custom object which has two Master–Detail relationships.
Based on structural differences Salesforce has four different types of reports: | https://intellipaat.com/blog/interview-question/salesforce-interview-questions/ | CC-MAIN-2019-35 | en | refinedweb |
Using the power of .NET and the power of COM InterOperability through WithClass 2000, you can view the System.Drawing library in a rough UML diagram.
Tools Used Visual C# .NET, WithClass 2000 Enterprise(Trial Version) Using the power of .NET and the power of COM InterOperability through WithClass 2000, you can view the System.Drawing library in a rough UML diagram. The first cut in this article doesn't bring in relationships, but Part II of this article will show you how to bring relationships into the diagram. If you want to try out the code in this article, and don't have the full version of WithClass 2000 Enterprise, you can download the trial version of the product at Microgold's Website.Reflection is a powerful part of UML that allows you to view details about classes at runtime. When I refer to classes, I mean any class that is actively running in the system. Therefore we can view any library we bring through in the using statement. Currently this does not include code, but we can at least look at fields, properties, methods, class details and other interesting aspects of the library. I've referenced Hari Shankar's article, Reflection in .NET, to help me get through this example. It's worth taking a look at.WithClass is a COM component as well as an application. The user can access WithClass and draw classes, relationships, states, interactions, or almost any diagramming capability UML supports, through its COM interface. To construct a new UML WithClass document to work with, let's examine the code below:public class WithClassReflector{public Type[] TheTypes; // Launch WithClass and construct a COM Application Object for itpublic With_Class.Application wcApp = new With_Class.Application();public With_Class.IWithClass myDoc;public WithClassReflector(){//// TODO: Add Constructor Logic here//// Create a NewFile in our application// And assign a document reference to itwcApp.ActiveDocument.NewFile();myDoc = (With_Class.IWithClass)wcApp.ActiveDocument;}Before this code could compile, I needed to right click on the solution explorer and add a Reference to WithClass under the COM Tab. I also needed to add the necessary using statements shown below directly above the class to access reflection and Interoperability:using System.Reflection; // needed to use the power of reflectionusing System.Runtime.InteropServices; // needed to use WithClass along with a Reference in the projectNow we are ready to roll. I've Created a separate class WithClassReflector (as seen above), to construct the reflection aspects of the Drawing library. Below is the method that does most of the work for extracting the information:public void ExtractAssembly(){Assembly theAssembly; // assembly object // get a hold of the path on your system for the location of the Drawing Assembly// this string may vary depending upon where the System.Drawing.dll is on your systemstring assemblyPath = "c:\\winnt\\microsoft.net\\framework\\v1.0.2204\\System.Drawing.dll";// Load up the assembly into the assembly objecttheAssembly = System.Reflection.Assembly.LoadFrom(assemblyPath);// Get the collection of types in the Drawing Assembly (each type is generally a class or interface or structure)TheTypes=theAssembly.GetTypes();int count = 0;// cycle through each class and extract method, field, and property informationforeach(Type t in TheTypes){.....}Basically, the code just cycles through each class in our assembly and we use WithClass to create the classes and class details in a viewable drawing. 
Below is the part of the code that initially creates a class in WithClass:// Create a new class from the reflected name t.Name in WithClass 2000 at position 30,20With_Class.IClass c1 = myDoc.NewClass(30, 20, t.Name); // Tell the created class to show parameter information, type information, property information, and Visibility// information.c1.ShowParameters = true;c1.ShowProperties = true;c1.ShowType = true;c1.ShowVisibility = true;Now let's look at one of the detailed code pieces we can extract from reflection, methods. The code below extracts the methods of a class into WithClass's current class, c1:// Get the methods for this class using reflectionMethodInfo[] mInfo=t.GetMethods(); // Cycle through each method in the class, and add it to the Class in WithClass 2000foreach(MethodInfo m in mInfo){// Create a new operation in WithClass with the next reflected methodWith_Class.IOperation wcOp = c1.NewOperation(m.Name);// Determine the visibility for the methodif (m.IsPrivate){wcOp.Visibility = "private";}else if (m.IsPublic){wcOp.Visibility = "public";}// Assign the return type in WithClasswcOp.ReturnType = m.ReturnType.ToString();// Cycle through each parameter in the reflected method and add to WithClassforeach (ParameterInfo parm in m.GetParameters() ){if (parm.IsOut){wcOp.NewParameter(parm.ParameterType.ToString(), parm.Name, parm.DefaultValue.ToString(), "out");}else{wcOp.NewParameter(parm.ParameterType.ToString(), parm.Name, parm.DefaultValue.ToString(), "in");}}// Assign virtual, static, and final properties to WithClasswcOp.IsVirtual = m.IsVirtual;wcOp.IsStatic = m.IsStatic;wcOp.IsFinal = m.IsFinal;}After we've created all the information in WithClass, we've added a method that tells WithClass to arrange its classes and relationships:public void ArrangeAssembly(){myDoc.ArrangeClasses();myDoc.ArrangeRelationships();myDoc.RefreshWindow();}The ArrangeRelationships of the WithClass Document becomes more significant when we are creating relationships. We will tackle adding relationships in Part II of this 2 part discussion. The download with this article limits the # of types brought into WithClass to nineteen for demonstration purposes, but you are free to take this limitation out. Also you can use some of the WithClass 2000 trial version's add-ins to create some nice Word or HTML reports to view the reflection information in other formats.
View All | https://www.c-sharpcorner.com/article/using-reflection-and-with-class2000-to-view-the-net-system/ | CC-MAIN-2019-35 | en | refinedweb |
Class SharableParticipants
- java.lang.Object
- org.eclipse.ltk.core.refactoring.participants.SharableParticipants
public class SharableParticipants extends ObjectAn opaque list to manage sharable participants.
The list is managed by the refactoring itself. Clients typically only pass the list to the corresponding method defined in
ParticipantManager
Note: this class is not intended to be extended or instantiated by clients.
- Since:
- 3.0
- See Also:
ISharableParticipant,
ProcessorBasedRefactoring,
ParticipantManager
- Restriction:
- This class is not intended to be subclassed by clients.
- Restriction:
- This class is not intended to be instantiated by clients. | http://help.eclipse.org/2019-03/nftopic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/ltk/core/refactoring/participants/SharableParticipants.html | CC-MAIN-2019-35 | en | refinedweb |
wifi_get_scan_result_ssid()
Get the Service Set Identifier (SSID) for a Wi-Fi scan result entry.
Synopsis:
#include <wifi/wifi_service.h>
WIFI_API wifi_result_t wifi_get_scan_result_ssid(wifi_scan_results_t *scan_results, int entry_number, char *ssid)
Since:
BlackBerry 10.2.0
Arguments:
- scan_results
The scan result list.
- entry_number
Scan result entry to query. The index range is between 1 and num_scan_entries, which can be queried by calling wifi_get_scan_results().
- the scan result entry provided.
Returns:
A return code from wifi_result_t.
Last modified: 2014-05-14
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | https://developer.blackberry.com/native/reference/core/com.qnx.doc.wifi_service.lib_ref/topic/wifi_get_scan_result_ssid.html | CC-MAIN-2019-35 | en | refinedweb |
Let’s work off the previous Todo tutorial and add a way to filter those Todos using just Stimulus.
We will only need to work on the
todos/index.html file, and to add one Javascript controller. This example will highlight how Stimulus enhances your server rendered HTML, without the need to build complicated constellations of Javascript objects.
Let’s begin by adding a
div that will wrap our table, and a text
input that we will use as a the filter source. The wrapper
div will hold the filter controller. The
input will call the
filter#filter method on the controller, and it will supply the filter value from the target
filter.source that we’ll use to hide our rows. Each of the
tr elements is set as a
filter.filterable target, and then we’ll set the value that we should filter at
data-filter-key; in our case, the todo status and the title, all in lower case.
<div data- <input data- <table> <% @todos.each do |todo| %> <tr data- <td> <input type="checkbox" <% if todo.completed %> checked <% end %> > <%= todo.title %> </td> </tr> <% end %> </table> </div>
That’s all the html we need to update. Let’s move on to the javascript controller. Name your file
filter_controller.js and setup the Stimulus controller template:
import { Controller } from "stimulus" export default class extends Controller { } }
Let’s add the targets list, one for the input field source, and one for all the filterable rows:
static targets = [ "source", "filterable" ]
And finally, let’s add the filter method:
filter(event) { let lowerCaseFilterTerm = this.sourceTarget.value.toLowerCase() this.filterableTargets.forEach((el, i) => { let filterableKey = el.getAttribute("data-filter-key") el.classList.toggle("filter--notFound", !filterableKey.includes( lowerCaseFilterTerm ) ) })
The filter method will first pull the filter term from the source target, our
input field on the
index.html page. Then it will go through each of the filterableTargets, element by element, get the filter key that we set on each element, and toggle a class that will hide elements that don’t have the filter term in the filter key.
The CSS class that hides is element is here:
.filter--notFound { display: none; }
Now you have a simple way to filter data on your page.
Want To Learn More?
Try out some more of my Stimulus.js Tutorials. | https://johnbeatty.co/2018/03/15/stimulus-js-tutorial-how-do-i-filter-data-in-a-list-or-table/ | CC-MAIN-2019-35 | en | refinedweb |
In this article we will:
- Learn why stateful packages challenge stability
- See an example of a stateful package
- Identify CanJS’s stateful packages
- Provide strategies that minimize the problems with stateful packages
With the elimination of side effects, it becomes possible to use multiple versions of the same package within the same application. Ideally, you should be able to use components made with
[email protected] along side components made with
[email protected]. This means you wouldn’t have to rewrite working code to use a new major release!
Unfortunately, there are some packages where using multiple versions is impossible. These are stateful packages. For example, can-view-callbacks is a stateful package used to register custom elements and attributes in CanJS. Its code looks similar to the following:
// can-view-callbacks@3 var tags = {}; module.exports ={ tag: function(tag, callback){ if(tag){ tags[tag] = callback; } else{ return tags[tag]; } } });
A stateful module contains its own state (
tags in can-view-callbacks case) and allows outside code to mutate that state. Let’s see an example of how multiple versions of a stateful package could be so much trouble.
Imagine wanting to use two versions of
can-component in an application.
old-thing.js uses
[email protected]:
// old-thing.js var Component = require("can-component@3"); var view = require("./old-thing.stache"); Component.extend({ tag: "old-thing", ViewModel: {}, view: view });
new-thing.js uses
[email protected]:
// new-thing.js import {register} from "can-component@4"; import view from "./new-thing.curly"; import define from "can-define"; @define class NewThing { } Component.register("new-thing", NewThing, view);
But if
[email protected] MUST use
[email protected] and
[email protected] MUST use
[email protected], there will be two custom element registries and make it impossible to use both types of components in the same template. Stateful packages must be treated with care!
CanJS’s stateful packages
CanJS has the following stateful modules:
Stateful solutions
There are a few ways to mitigate the problems with stateful modules:
1. Shift the statefulness to the developer.
One option, is to avoid stateful modules altogether and make the user create the state and pass it around to other functionality that need it. For example, we could eliminate
can-view-callbacks as follows:
First, make all components export their constructor function:
// my-component.js module.exports = Component.extend({ ... });
Then, every template must import their components:
<!-- app.stache --> <can-import <MyComponent/>
This isn’t a viable solution for many other packages because it would create too large a burden on developers with little concrete stability gains. Fortunately, there's other things we can do to help.
2. Minimize the statefulness and harden APIs.
Stateful packages should expose the state with the most minimal and simple API possible. You can always create other packages that interface with the stateful API. For example, we could just directly export the
tags data in
can-view-callbacks like:
// can-view-callbacks module.exports = {};
Other modules could add more user friendly APIs around this shared state.
3. Let people know when they’ve loaded two versions of the same package.
We use can-namespace to prevent loading duplicate packages in a sneaky way. The can-namespace package simply exports an empty object like:
// [email protected] module.exports = {};
We should never have to release a new version of
can-namespace, but every stateful package imports it and makes sure there’s only one of itself as follows:
// [email protected] var namespace = require("can-namespace"); if (namespace.cid) { throw new Error("You can't have two versions of can-cid!”); } else { module.exports = namespace.cid = cid; }
If we make changes to the stateful modules, we can at least ensure the user knows if they are getting multiple.
Conclusions
Stateful code stinks, but minimizing the scope of its impact has helped CanJS progress much faster in the past year than anytime before, without having to make breaking changes. In the next section we will see how a tiny bit of well-defined statefulness with can-symbol allows CanJS to tightly integrate with other libraries. | https://www.bitovi.com/blog/coping-with-stateful-code | CC-MAIN-2019-35 | en | refinedweb |
import os import sys import ConfigParser # Part I. This works: 'first run:', base_dir, template_file # Part II. This fails: def parseConfig(): base_dir, template_file return base_dir, www_base, contact_address, template_file parseConfig() print 'second run:', base_dir, template_file
# Excerpt from ini file [Main] base_dir = /path to base dir/ www_base = /path to url/ template_file = /path to template.html contact_address = [email protected] [Graphs] omitted
See the attached code.
Cheers,
John.
H-G, I'm still struggling with how to differentiate between variables that are local to a routine and variables that can be returned by a function, but that will take time I think.
Ordinarily I should just say thank you and close this out, but thought I'd push my luck:
the ini file excerpt I posted earlier omits the other section(s) - I post another excerpt below that I'm using to read values into a dict. The reason for this (and the weird format of this section of the ini file) is that I haven't figured out how to assign a proper sequence to the options as they are printed. I can't change the keys and I don't know how to sort parser options by value. Python docs for the version I use at work (2.5) can be hard to come by!
The code I post below is admittedly a hack - I'm sure there's a better way to do this and I'm open to suggestions - if I need to open a new question let me know...
Open in new window
Well, if you have a variable that is local to a routine, then its value could be returned:
Executing the following script results in:
--------------------------
C:\>python bob.py
"x" is not a local variable
"y" is a local variable
C:\>
Open in new window
for y in range(0, 16): #number of options
for lhss in parser.options('Stations')
rhss = parser.get('Stations', lhss)
x = int(rhss[0:2])
if x == y:
station_list.append(rhss)
But for other reasons I still need a dict in most cases. Anyway, thanks very much!
Good luck & have a great day. | https://www.experts-exchange.com/questions/26558498/Accessing-python-ConfigParser-option-values.html | CC-MAIN-2019-35 | en | refinedweb |
Given a text txt[0..n-1] and a pattern pat[0..m-1], write a function that prints all occurrences of pat[] in txt[]. You may assume that n > m.
Examples:
Input : txt[] = "geeks for geeks" pat[] = "geeks" Output : Pattern found at index 0 Pattern found at index 10 Input : txt[] = "aaaa" pat[] = "aa" Output : Pattern found at index 0 Pattern found at index 1 attern found at index 2
The idea is to use find() in C++.
// CPP program to print all occurrences of a pattern // in a text #include <bits/stdc++.h> using namespace std; void printOccurrences(string txt, string pat) { int found = txt.find(pat); while (found != string::npos) { cout << "Pattern found at index " << found << endl; found = txt.find(pat, found + 1); } } int main() { string txt = "aaaa", pat = "aa"; printOccurrences(txt, pat); return 0; }
Output:
Pattern found at index 0 Pattern found at index 1 Pattern found at index. | http://linksoftvn.com/pattern-searching-using-c-library/ | CC-MAIN-2019-43 | en | refinedweb |
Download source code for Custom CacheManager Implementation for Windows/Console applications
Introduction
Mainly the term caching is used in web applications environments to store commonly used database values. By caching this information rather than relying on repeated database calls, decreases the demand on the Web server and database server's system resources and the Web application's scalability is increased. In ASP .Net Web applications, caching is used to retain pages or data across HTTP requests and reuse them.
.Net Framework exposes lot of methodologies and classes to maintain data in memory. But they can be used only in ASP .Net web applications. A windows application is NOT a stateless model as ASP .Net. There are very limited options to store data in Windows / console applications. Let us explore the options available in Windows / Console applications in detail below.
Caching Techniques
One option to retain the data is to declare the objects / members at the class level. It is always available until the object is destroyed or the object goes out of scope. It is good to have a storage where in application can drop any objects into and use it any time they want. Here I've tried building a generic Cache Manager component in C#.
Cache Manager
Here, objective is to build a storage rack where consumers of the component can add, locate and remove any objects by a user defined key. Cache Manager the holder of the storage is responsible for adding and maintaining objects till the application is running. Objects that stored in the rack can be shared by different consumers if the key is known.
Key design considerations are –
Fundamentals behind the Design
To store more elements, array like structure is used. This helps the CacheManager to add one or more objects. Identifying or searching the objects in the array is not viable thing. It would be good to have objects located by a name(key). This makes searching easier and lot quicker. .Net provides different Collection objects to achieve the same. CacheManager uses Dictionary class to maintain Key, Value pair collections to implement the storage rack.
Next big challenge here is keeping the CacheManager available till the application runs. This is achieved by creating the class with “static” members. Static objects are created in the stack and will be available till the application handle that created this object or appdomain is alive. Static objects also helps in sharing the same object instance as it is stored in stack. This gives us a great flexibility to share the storage rack and objects to different consumers.
A Dictionary (collecttion key value pair) is created to add the objects that are to be cached. This Dictionary object is created as static member to ensure only one instance of the Dictionary object is created for different calls. Since the Dictionary is made as static objects that are added to the cache will stay alive till the life time of the appdomain.
Also to avoid object creation from the client, declared a private constructor. This step may not be essential as the Dictionary itself is static.
Below is the Cache Manager Code snippet
public class CacheManager
{
#region Static Variable
// Private static variable which holds the key value pair
private static Dictionary<string, object> Cache = new Dictionary<string, object>();
#endregion
private CacheManager()
{
}
#region Public Methods
/// <summary>
/// <remarks>This function addeds the given object in the Dictionary object</remarks>
/// <param name="key"></param>
/// <param name="value"></param>
/// <returns>Boolean</returns>
/// </summary>
public static Boolean Add(string key, object value)
bool bResult = false;
lock (Cache)
{
try
{
Cache.Add(key.Trim(), value);
bResult = true;
}
catch (ArgumentNullException argumentNullException)
throw argumentNullException;
catch (ArgumentException argumentException)
throw argumentException;
}
return bResult;
/// <remarks>This function remove the given key from Dictionary</remarks>
public static Boolean Remove(string key)
Boolean Result = false;
try
if (Cache.ContainsKey(key.Trim()))
{
Cache.Remove(key.Trim());
Result = true;
}
else
{
Result = false;
return Result;
/// <remarks>This function gets the object for the given key</remarks>
public static object Get(string key)
object ReturnObject = null;
try
Cache.TryGetValue(key.Trim(), out ReturnObject);
catch (ArgumentNullException argumentNullException)
{
throw argumentNullException;
catch (KeyNotFoundException keyNotFoundException)
throw keyNotFoundException;
return ReturnObject;
/// <remarks>This function checks for the given key</remarks>
public static Boolean IsExists(string key)
return Cache.ContainsKey(key.Trim());
/// <summary>
/// Clears all avaialble keys and values in the cache
public static void Flush()
Cache.Clear();
catch (Exception excp)
throw excp;
}
Latest Articles
Latest Articles from Mouli
Login to post response | http://www.dotnetfunda.com/articles/show/602/custom-cachemanager-implementation-for-windowsconsole-applications | CC-MAIN-2019-43 | en | refinedweb |
Problem: by injection of ResourceInfo in resource class or in provider in OSGi environment CXF throws following exception:
Caused by: java.lang.IllegalArgumentException: interface org.apache.cxf.jaxrs.impl.tl.ThreadLocalProxy is not visible from class loader at java.lang.reflect.Proxy.getProxyClass0(Proxy.java:487)[:1.7.0_40] at java.lang.reflect.Proxy.newProxyInstance(Proxy.java:722)[:1.7.0_40] at org.apache.cxf.jaxrs.utils.InjectionUtils.createThreadLocalProxy(InjectionUtils.java:962)
Reason of the problem: there is no dedicated thread local context for ResourceInfo and CXF instantiates it using java reflection proxy in InjectionUtils:
return (ThreadLocalProxy<T>)Proxy.newProxyInstance(type.getClassLoader(), new Class[] {type, ThreadLocalProxy.class }, new ThreadLocalInvocationHandler<T>());
The problem is that classloader in first argument is taken from type class, but proxy have to implement two interfaces: type and ThreadLocalProxy (second argument). It works fine in standalone environment, but in OSGi classloader of ResourceInfo bundle doesn't know nothing about ThreadLocalProxy interface in CXF JAX-RS bundle. Of course, the ThreadLocalProxy cannot be found.
Solution: possible solution is use the classloader of current class (InjectionUtils) instead of type class. As far as InjectionUtils classloader knows type class as well it should work in standalone as well as in OSGi environments. | https://issues.apache.org/jira/browse/CXF-6005 | CC-MAIN-2019-43 | en | refinedweb |
Components and supplies
Apps and online services
About this project
This project is basically what it sounds like - a smart outlet. This smart outlet can apply to any device that has a plug FOR 120V only! (Targeted mainly at lamps using low amps.) This outlet is an outlet controlled by a 1-channel relay. This project also uses an RTC (real time clock) to determine what time it is and, based on the 24 hr clock, it will either turn on or off (depending on the time) because it actually has predetermined times to turn on and then to turn off. Also, another thing is that this will have a plug coming out of it that has to be plugged into a wall socket!
Link to the library:
IMPORTANT SAFETY INFORMATION!! PLEASE READ!
1) Use a Grounded Cord and Interrupt the Hot Wire
As can be seen in the pictures below, a 3-prong plug is used. The hot (black) wire from the line is connected to the common terminal of the relay module. The normally open (NO) output of the relay is then connected to the brass screw of the outlet. The white wire (neutral) connects to the silver screw and the green (ground) connects to the green screw of the outlet.
2) Use a Relay Module
A single-channel relay module from Elegoo was used to switch the hot wire. This module is identical to the Keyes SR1y module () and contains a flyback diode connected to the control input (for back EMF), a transistor to control the relay coil and a series resistor to limit the current into the transistor. As the relay is only rated up to 10A, either limit the load connected to the outlet, or use a fuse in line with the hot wire. If possible, a single channel relay with an optocoupler would provide additional isolation for the Arduino.
3) Physical Separation
Make sure to mount the relay module in the plastic housing away from the high voltage wires ensuring that the solder side of the relay module faces the plastic housing so that low voltage wiring does not inadvertently come into contact with high voltage wiring if the unit is subjected to shock or vibration.
Image of the wiring on the inside of the smart plug (yours should look like this).
Close-up image of the relay module.
Code
The codeArduino
#include <DS3231.h> int Relay = 4; DS3231 rtc(SDA, SCL); Time t; const int OnHour = 07; const int OnMin = 15; const int OffHour = 07; const int OffMin = 20; void setup() { Serial.begin(115200); rtc.begin(); pinMode(Relay, OUTPUT); digitalWrite(Relay, LOW); //rtc.setTime(21,10,00);//set your time and date by uncomenting these lines //rtc.setDate(26,6,2018); } void loop() { t = rtc.getTime(); Serial.print(t.hour); Serial.print(" hour(s), "); Serial.print(t.min); Serial.print(" minute(s)"); Serial.println(" "); delay (1000); if(t.hour == OnHour && t.min == OnMin){ digitalWrite(Relay,HIGH); Serial.println("LIGHT ON"); } else if(t.hour == OffHour && t.min == OffMin){ digitalWrite(Relay,LOW); Serial.println("LIGHT OFF"); } }
Custom parts and enclosures
Schematics
Author
xXarduino_11Xx
- 14 projects
- 12 followers
Published onJune 27, 2018
Members who respect this project
you might like | https://create.arduino.cc/projecthub/xXarduino_11Xx/smart-plug-7b3e6a | CC-MAIN-2019-43 | en | refinedweb |
Update Status bar from inside a script
On 23/07/2016 at 08:29, xxxxxxxx wrote:
How can I update the status bar with c4d.StatusSetBar() and make it show the progress from inside a script?
Even if I place c4d.EventAdd() commands inside my loop, the status bar doesn't update.
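Roughly, the loop looks like this (a simplified sketch; the list and the variable names are just placeholders):

for i, item in enumerate(items):
    # ... process the current item ...
    c4d.StatusSetBar(int(100.0 * i / len(items)))  # set the progress bar value
    c4d.EventAdd()  # hoped this would make the status bar redraw, but it doesn't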
On 23/07/2016 at 10:32, xxxxxxxx wrote:
You need to make your computer stop processing the script for a short period of time per iteration, using sleep(), in order to see the value change take place.
import c4d, time

def main():
    for i in xrange(0, 101):
        #Convert the value to a string. And add the percent symbol to it
        value = str(i)
        value += "%"
        c4d.StatusSetText(value)
        time.sleep(.10)

if __name__ == '__main__':
    main()
On 23/07/2016 at 15:43, xxxxxxxx wrote:
It is still not working
Could it be that it is because I also have a dialog?
However, the main loop runs after the Command method of the dialog closes the dialog (and yet the dialog only really closes after the main loop finishes; I don't really understand why).
This is, in a nutshell, my main loop:
lim = len(all_files)
count = 100.0/lim
for c, i in enumerate(all_files):
    # perform some stuff with the filename, inside the all_files list
    # copy and erase some files, based on the processed filename
    # only update the status every 100 items
    if c % 100 == 0:
        c4d.StatusSetBar(int(c*count))
        c4d.StatusSetText("%s / %s" % (c, lim))
        time.sleep(.1)
c4d.StatusClear()
On 24/07/2016 at 07:26, xxxxxxxx wrote:
I have already tried:
c4d.EventAdd(c4d.EVENT_FORCEREDRAW)
and
c4d.SendCoreMessage(c4d.COREMSG_CINEMA,c4d.BaseContainer(c4d.COREMSG_CINEMA_FORCE_AM_UPDATE))
All of it with a time.sleep(.25) in front... and nothing!!
I still get no refresh in the status bar.
I really would like to get some feedback to the user in the status bar, as the process can take a few seconds to finish.
On 24/07/2016 at 09:47, xxxxxxxx wrote:
I don't know about anyone else. But I'm not totally clear on what you're doing.
It sounds like you're having threading issues. But it's not clear to me.
Is there any way you can post a simple working example?
Something that runs, has a GeDialog in it, and your progress bar code in it.
FYI:
I've noticed that the support guys don't usually answer questions on the weekends anymore like they used to.
They seem to be sticking to regular business hours now. Which is understandable.
So if you ask a question on Friday. It can take up to 3 days to get a reply from them.
-ScottA
On 24/07/2016 at 10:41, xxxxxxxx wrote:
I totally understand that and I know that if I post a question on Friday, I may only get an answer on Monday. So, I appreciate very much your help during these weekend days.
I'm writing a script. It is not a full plugin. So, I just have a GeDialog to define the parameters but my code, the one that should update the progress bar, is outside of the code of the dialog.
So, in my main I have this:
dlg = my_dialog()
dlg.Open(c4d.DLG_TYPE_MODAL, defaultw=300, defaulth=50)
the_path = dlg.the_path
the_name = dlg.the_name
if not dlg.ok:
    return
I have some variables inside the dialog class (the_path, the_name and ok).
If ok is True, the user pressed the OK button.
If ok is False, the user pressed the Cancel button.
This is all set in the Command method.
So, the code that follows the last line of code above is the one, inside the main that should update the progress bar.
A strange thing that occurs is that my Command method includes this:
def Command(self, id, msg):
    if id == B_CANCEL:
        self.ok = False
        self.Close()
        return True
    if id == B_OK:
        self.ok = True
        self.Close()
        return True
...and the dialog closes if I press the Cancel button.
But if I press the OK button, the dialog only closes after my whole main loop finishes. And that is already outside the dialog code.
Very weird.
On 24/07/2016 at 11:45, xxxxxxxx wrote:
I'm afraid I'm still not clear on the problem.
Here is an example that implements the progressbar in two different places.
One is in the GeDialog's Command() method. The other one is executed in the main() method.
They are both working as expected for me.
import c4d, time
from c4d import gui

class MyDialog(gui.GeDialog):
    BUTTON_ID = 1001

    def CreateLayout(self):
        self.AddButton(self.BUTTON_ID, c4d.BFH_SCALE|c4d.BFV_SCALE, 100, 25, "Close Dialog")
        return True

    def InitValues(self):
        return True

    def Command(self, id, msg):
        if id == self.BUTTON_ID:
            #If you use this code
            #The progressbar will update. Then the dialog will close when it's finished
            """for i in xrange(0, 101):
                #Convert the value to a string. And add the percent symbol to it
                value = str(i)
                value += "%"
                c4d.StatusSetText( value )
                time.sleep(.10)"""
            c4d.StatusClear()
            #Closes the dialog after the above code runs
            self.Close()
        return True

if __name__=='__main__':
    dlg = MyDialog()
    dlg.Open(dlgtype=c4d.DLG_TYPE_MODAL, defaultw=200, defaulth=200)

    #If you use this code
    #The dialog will close first. Then the progressbar will update
    """for i in xrange(0, 101):
        #Convert the value to a string. And add the percent symbol to it
        value = str(i)
        value += "%"
        c4d.StatusSetText( value )
        time.sleep(.10)
    c4d.StatusClear()"""
The only thing I can think of is to maybe try not returning True after each button's code. Just return True once at the end of the Command() method. But I doubt that would make a difference.
Sorry. I'm not being much help.
-ScottA
On 24/07/2016 at 12:02, xxxxxxxx wrote:
Well, I tried it, and... it doesn't work
I created a simple movie showing you what is happening:
On 24/07/2016 at 12:27, xxxxxxxx wrote:
It works fine for me on a PC. You're using a Mac, right?
In the past I've had several mac users tell me that my dialog.Close() code did not work for them.
I don't own one so I can't test it. But based on my experiences, Macs definitely have a problem with the dialog.Close() function.
-ScottA
On 24/07/2016 at 14:30, xxxxxxxx wrote:
Thank you, Scott.
So, it is a Mac issue
Well, my code works, except for that problem.
I will try to find a workaround for it.
Damn!!! It should work fine on both platforms.
On 25/07/2016 at 02:21, xxxxxxxx wrote:
Hello,
the following script works for me on both win and mac (R17 SP2) :
class TestDialog(gui.GeDialog):
    def CreateLayout(self):
        self.SetTitle("test")
        self.AddButton(1000, c4d.BFH_SCALEFIT | c4d.BFV_SCALEFIT, 0, 0, "Action")
        self.AddButton(2000, c4d.BFH_SCALEFIT | c4d.BFV_SCALEFIT, 0, 0, "Close")
        return True

    def Command(self, id, msg):
        if id == 1000:
            for i in xrange(100):
                time.sleep(.1)
                print(i)
                c4d.StatusSetBar(float(i))
            c4d.StatusSetBar(float(-1))
        if id == 2000:
            self.Close()
        return c4d.gui.GeDialog.Command(self, id, msg)

def main():
    dialog = TestDialog()
    dialog.Open(c4d.DLG_TYPE_MODAL_RESIZEABLE, 132456, -1, -1, 400, 400)

if __name__=='__main__':
    main()
I think there are some differences between Mac and Windows in regards to GUI handling, so I cannot guarantee that this works in every situation.
Best wishes,
Sebastian
On 25/07/2016 at 02:54, xxxxxxxx wrote:
I will try it as soon as I get home.
I'm at work now and I only have access to Windows machines.
Thank you, Sebastian.
On 26/07/2016 at 00:05, xxxxxxxx wrote:
Hello again, Sebastian.
I tried it and no Status Bar appears :-(
And the Console output is only shown in the end of the cycle.
So, no updates during the cycle. | https://plugincafe.maxon.net/topic/9608/12905_update-status-bar-from-inside-a-script/10 | CC-MAIN-2019-43 | en | refinedweb |
Hi, I have an issue with Geomerative. I'd like to switch from text to text using this library according to a time schedule. It works, but when switching to a shorter text I still see the previous text! How can I switch properly from one text to another? Thanks a lot in advance for your help. Best, L
import geomerative.*;

// List of lists of points. The first dimension corresponds to the number of phrases (here 4),
// the second to the total number of points per phrase.
RPoint[][] myPoints = new RPoint[4][0];
RFont font;

String [] FR1 = {
  "On me dit de te haïr et je m'y efforce",
  "Je t'imagine cruel, violent, implacable",
  "Mais à te voir je n'ai bientôt plus de force",
  "Et de te blesser je suis bien incapable",
};
String [] FR2 = {
  "Tous mes camarades combattent avec rage",
  "Et pleurent la nuit au souvenir des supplices",
  "Infligés à leurs frères qui sont du même âge",
  "et rêvent comme eux de toucher une peau lisse"
};
String [] FR3 = {
  "Et de pouvoir enfin caresser des obus",
  "Autres que ceux produits par le pouvoir obtus",
  "Je rêve de quitter ces boyaux infernaux"
};
String [] FR4 = {
  "De laisser ces furieux des deux bords du Rhin",
  "Et de pouvoir embrasser ta chute de rein",
  "Et porter notre amour sur les fonts baptismaux"
};

final color textColor = color(245);

// TIME
int startTime;

//----------------SETUP---------------------------------
void setup() {
  size(1920, 1080, JAVA2D);
  smooth();
  RG.init(this);
  font = new RFont("FreeSans.ttf", 86, CENTER);
  stroke(textColor);
  strokeWeight(0.05);
  //INIT
  initAll();
  changePhrases(0);
  changePhrases(1);
  changePhrases(2);
  changePhrases(3);
  // TIME
  startTime = millis();
}

//----------------DRAW---------------------------------
void draw() {
  background(255);
  translate(width/2, height/1.5);
  for (int i=0; i < myPoints.length; i++) {
    for (int j=0; j < myPoints[i].length-1; j++) {
      pushMatrix();
      translate(myPoints[i][j].x, myPoints[i][j].y-420+i*180);
      noFill();
      stroke(0, 200);
      strokeWeight(0.25);
      float angle = TWO_PI*10;
      rotate(j/angle);
      bezier(-2*(noise(10)), 10, 25*(noise(10)), -5, 2*noise(5), -15, 10, -3);
      //bezier(-10*(noise(20))+mouseX/15, 30+mouseY/10, -10*(noise(10))+mouseX/15, 20+mouseY/15, -20*noise(20)+mouseX/15, -20+mouseY/5, 10+mouseX/15, -10+mouseY/15);
      popMatrix();
    }
  }
  if (millis()-startTime > 0 && millis()-startTime < 3000) {
    changePhrases(0);
  }
  if (millis()-startTime > 3000 && millis()-startTime < 6000) {
    changePhrases(1);
  }
  if (millis()-startTime > 6000 && millis()-startTime < 9000) {
    changePhrases(2);
  }
  if (millis()-startTime > 9000 && millis()-startTime < 12000) {
    changePhrases(3);
  }
}

//----------------INITIALIZE---------------------------------
void initAll() {
  for (int j=0; j<FR1.length; j++) {
    RGroup myGroup = font.toGroup(FR1[j]);
    myGroup = myGroup.toPolygonGroup();
    myPoints[j] = myGroup.getPoints();
  }
}

// FUNCTION TO SWITCH PHRASES
void changePhrases(int state) {
  switch(state) {
  case 0:
    for (int j=0; j<FR1.length; j++) {
      RGroup myGroup = font.toGroup(FR1[j]);
      myGroup = myGroup.toPolygonGroup();
      myPoints[j] = myGroup.getPoints();
    }
    break;
  case 1:
    for (int j=0; j<FR2.length; j++) {
      RGroup myGroup = font.toGroup(FR2[j]);
      myGroup = myGroup.toPolygonGroup();
      myPoints[j] = myGroup.getPoints();
    }
    break;
  case 2:
    for (int j=0; j<FR3.length; j++) {
      RGroup myGroup = font.toGroup(FR3[j]);
      myGroup = myGroup.toPolygonGroup();
      myPoints[j] = myGroup.getPoints();
    }
    break;
  case 3:
    for (int j=0; j<FR4.length; j++) {
      RGroup myGroup = font.toGroup(FR4[j]);
      myGroup = myGroup.toPolygonGroup();
      myPoints[j] = myGroup.getPoints();
    }
    break;
  }
}
Answers
Your array myPoints is [4][0]. In changePhrases, you loop over it and update each row of myPoints based on the length of FR1, or FR2, et cetera. But FR3 and FR4 are only three lines long, so the fourth row of myPoints is never overwritten.
One way to fix this is to add a fourth, blank line to FR3 and FR4. Another way is to replace myPoints completely with e.g. new RPoint[FR3.length][0] rather than updating it -- now it will always be the correct length, with no extra lines.
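For instance, a minimal sketch of that second approach, using the names from your sketch (this is only an illustration, not the full cleaned-up code mentioned below):

// Rebuild myPoints for whichever phrase set is active. Because the array is
// re-created at the matching size, rows left over from a longer previous
// phrase can never survive the switch.
void setPhrases(String[] phrases) {
  myPoints = new RPoint[phrases.length][0];
  for (int j = 0; j < phrases.length; j++) {
    RGroup myGroup = font.toGroup(phrases[j]);
    myGroup = myGroup.toPolygonGroup();
    myPoints[j] = myGroup.getPoints();
  }
}

changePhrases(0) then reduces to setPhrases(FR1), and so on; the loops in draw() already iterate over myPoints.length and myPoints[i].length, so they adapt to the new size automatically.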
There are other things that could be cleaned up which would make your code easier to edit -- keeping an array of string arrays, changing the way that your timers work, making changePhrases accept a String[] array or an index (rather than a switch statement that is manually wired to different parts of a String[][] array).
Look at this cleaned up code for comparison. It is your sketch with minor changes: it uses a set of sets of strings rather than referring to them manually -- this means you can just add more sets of phrases without needing to update the code, and it will just work. Phrases can be any number of lines, 2 lines or 10. There are two functions -- a timer, and a drawing function that renders the phrase strings to RPoints.
Dear Jeremy, thank you so much for your great help. I'm not at home but asap I'll implement the complete code with the part of the code you wrote and will let you know. What if the variable 'duration' is not the same for each loop?! Thanks a lot, L
Dear Jeremy, what if I'd like to have a variable changePhraseTimer function?
3 seconds for the 1st phrase
1 second pause
2 seconds for the 2nd phrase
1 second pause
I tried in many ways, but I can't solve the problem... Thanks a lot in advance. Best, L
then post one of your ways here
with pleasure, here it is :
It somehow works but not as I wish! Thanks for your answer.
L
Dear Chrisir, In fact my problem is linked to the attractor in my main sketch. I switch from one phrase to the other and the attractor for the lines works too, but not the attractor linked to the words!? I don't find why... I tried to debug, but didn't succeed. Here is the code: Thanks for your help... best, L
@jeremydouglass
over to you
Thanks Chrisir! Actually I partly solved the problem: the attractor deforms the text now, but the text returns immediately to its initial state!! I certainly initiate the attractor in the wrong place... I hope @jeremydouglas will have some time to help me later on today!
Dear @jeremydouglas if you find some time to help me I'd be very glad! Thanks a lot. Best, L
Dear Chrisir and Jeremy, I still have the same problem in my sketch: the attractor deforms the text, but the text returns immediately to its initial state!! I certainly initiate the attractor in the wrong place... I've implemented other parts of the sketch but now I'm stuck with this error. If you can help me I would really appreciate it! Thanks a lot. L
Please make sure you provide the latest code and some details on where the problem could be in the code. If you point out what section is not working properly, it could save some time for the people who are trying to help you.
Kf
ok I will do so thanks a lot ;))
Hello @jeremydouglas, Please find below a simple version of my latest code. I have two phraseTimer functions; one works well (phraseTimer()), the other doesn't work properly (phraseTimerALTER()), and I don't find why. Could you please help me with this issue? Thanks a lot in advance. Best wishes, L
Remember to hit ctrl-t in processing to get better indents automatically
Oops, thanks Chrisir, yes sorry, I've corrected the indents and fixed the code a bit:
Continued discussion here:
With pleasure, sorry but between time a friend and I we corrected most of the mistakes, but if I'm stuck today I'll tell you for sure ;) | https://forum.processing.org/two/discussion/28017/how-to-switch-from-one-text-to-another-according-to-time-with-the-geomerative-library | CC-MAIN-2019-43 | en | refinedweb |
Author: mikemccand
Date: Thu Jan 29 14:13:35 2009
New Revision: 738862
URL:
Log:
LUCENE-1487: improve javadoc for FieldCacheTermsFilter
Modified:
lucene/java/trunk/src/java/org/apache/lucene/search/FieldCacheTermsFilter.java
Modified: lucene/java/trunk/src/java/org/apache/lucene/search/FieldCacheTermsFilter.java
URL:
==============================================================================
--- lucene/java/trunk/src/java/org/apache/lucene/search/FieldCacheTermsFilter.java (original)
+++ lucene/java/trunk/src/java/org/apache/lucene/search/FieldCacheTermsFilter.java Thu Jan 29 14:13:35 2009
@@ -18,31 +18,82 @@
*/
import org.apache.lucene.index.IndexReader;
+import org.apache.lucene.index.TermDocs; // for javadoc
import org.apache.lucene.util.OpenBitSet;
import java.io.IOException;
import java.util.Iterator;
/**
- * A term filter built on top of a cached single field (in FieldCache). It can be used only
- * with single-valued fields.
+ * A {@link Filter} that only accepts documents whose single
+ * term value in the specified field is contained in the
+ * provided set of allowed terms.
+ *
* <p/>
- * FieldCacheTermsFilter builds a single cache for the field the first time it is used. Each
- * subsequent FieldCacheTermsFilter on the same field then re-uses this cache even if the terms
- * themselves are different.
- * <p/>
- * The FieldCacheTermsFilter is faster than building a TermsFilter each time.
- * FieldCacheTermsFilter are fast to build in cases where number of documents are far more than
- * unique terms. Internally, it creates a BitSet by term number and scans by document id.
- * <p/>
- * As with all FieldCache based functionality, FieldCacheTermsFilter is only valid for fields
- * which contain zero or one terms for each document. Thus it works on dates, prices and other
- * single value fields but will not work on regular text fields. It is preferable to use an
- * NOT_ANALYZED field to ensure that there is only a single term.
+ *
+ * This is the same functionality as TermsFilter (from
+ * contrib/queries), except this filter requires that the
+ * field contains only a single term for all documents.
+ * Because of drastically different implementations, they
+ * also have different performance characteristics, as
+ * described below.
+ *
* <p/>
- * Also, collation is performed at the time the FieldCache is built; to change collation you
- * need to override the getFieldCache() method to change the underlying cache.
+ *
+ *.
*/
+
public class FieldCacheTermsFilter extends Filter {
private String field;
private Iterable terms; | http://mail-archives.apache.org/mod_mbox/lucene-java-commits/200901.mbox/%[email protected]%3E | CC-MAIN-2019-43 | en | refinedweb |
On demand data in Python, Part 1
Python iterators and generators
Learn how to process data in Python efficiently, on demand rather than pre-emptively
We are lucky enough to live at a time when market forces have pushed the price of memory, disk, and even CPU capacity to formerly inconceivable lows. At the same time, however, booming applications such as big data, AI, and cognitive computing are pushing our requirements for these resources upward at a dizzying rate. There is some irony that at a time when computing resources are plentiful, it's becoming even more important for developers to understand how to scale down their consumption to remain competitive.
The main reason Python has remained such a popular programming language for almost two decades is that it is so easy to learn. Within an hour, you can learn how easy lists and dictionaries are to manipulate. The bad news is that the naive approach to solving many problems with lists and dictionaries can quickly get you into trouble trying to scale your app, because without care, Python tends to be a bit more resource hungry than other programming languages.
The good news is that Python has some useful features to facilitate more efficient processing. At the foundation of many of these features is Python's iterator protocol, which is the main topic of this tutorial. The full series of four tutorials will build on this to show you how to process large data sets efficiently with Python.
You should be familiar with the basics of Python, such as conditions, loops, functions, exceptions, lists, and dictionaries. This tutorial series focuses on Python 3; to run the code, you need Python 3.5 or a more recent version.
Iterators
Most likely, your earliest exposure to Python loops was code like the following:
for ix in range(10):
    print(ix)
Python's for statement operates on what are called iterators. An iterator is an object that can be invoked over and over to produce a series of values. If the value after the in keyword is not already an iterator, for tries to convert it to an iterator. The built-in range function is an example of one that can be converted to an iterator. It produces a series of numbers, and the for loop iterates over these items, assigning each in turn to the variable ix.
It's time to deepen your understanding of Python by taking a closer look at iterators like range. Enter the following in a Python interpreter:
r = range(10)
You have now initialized a range iterator, but that's all. Go ahead and ask it for its first value. You ask an iterator for a value in Python by using the built-in next function.
>>> r = range(10) >>> print(next(r)) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'range' object is not an iterator
This exception indicates that you have to convert the object to an iterator before you use it as an iterator. You can do this using the built-in iter function.
r = iter(range(10)) print(next(r))
This time it prints 0, as you might expect. Go ahead and enter print(next(r)) again and it will print 1, and so on. Keep on entering this same line. At this point, you should be grateful that on most systems, you can just press the Up arrow on the Python interpreter to retrieve the most recent command, then press Enter to execute it again, or even tweak it before pressing Enter, if you like.
In this case, you'll eventually get to something like the following:
>>> print(next(r)) 9 >>> print(next(r)) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration
We only asked for a range of 10 integers, so we fall off the end of that range after it has produced 9. The iterator doesn't immediately do anything to indicate it has come to an end, but any subsequent next() calls will raise the StopIteration exception. As with any exception, you can choose to write your own code to handle it. Try the following code after the iterator r has been used up.
try:
    print(next(r))
except StopIteration as e:
    print("That's all folks!")
It prints the message "That's all folks!" The for statement uses the StopIteration exception to determine when to exit the loop.
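To make that concrete, here is a sketch of roughly what a for loop does behind the scenes (an illustration of the protocol, not CPython's actual implementation):

values = [10, 20, 30]
it = iter(values)           # for first converts the iterable to an iterator
while True:
    try:
        item = next(it)     # then repeatedly asks for the next value
    except StopIteration:   # and stops looping when StopIteration is raised
        break
    print(item)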
Other iterables
A range is only one sort of object that can be converted to an iterator. The following interpreter session demonstrates how a variety of standard types are interpreted as iterators.
>>> it = iter([1,2,3])
>>> print(next(it))
1
>>> it = iter((1,2,3))
>>> print(next(it))
1
>>> it = iter({1: 'a', 2: 'b', 3: 'c'})
>>> print(next(it))
1
>>> it = iter({'a': 1, 'b': 2, 'c': 3})
>>> print(next(it))
a
>>> it = iter(set((1,2,3)))
>>> print(next(it))
1
>>> it = iter('xyz')
>>> print(next(it))
x
It's quite straightforward in the case of a list or tuple. A dictionary iterates over just its keys, and of course, no order is guaranteed. Order of iteration isn't guaranteed in the case of sets either, even though in this case, the first item from the iterator happened to be the first item in the tuple that was used to construct the set. A string iterates over its characters. All such objects are called iterables.
As you can imagine, not every Python object can be converted to an iterator.
>>> it = iter(1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'int' object is not iterable
>>> it = iter(None)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'NoneType' object is not iterable
The best part is, of course, that you can make your own iterator types. All you need to do is define a class with certain specially named methods. Doing so is out of the scope of this tutorial series, but that's OK because the most straightforward way to create your own custom iterator is not a special class, but a special function called a generator function. I discuss this next.
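For reference, here is a minimal sketch of such a class; the specially named methods are __iter__ and __next__, and the class itself (CountDown) is just an invented example:

class CountDown:
    """Iterator that produces n, n-1, ..., 1."""
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return self              # an iterator returns itself from iter()
    def __next__(self):
        if self.n <= 0:
            raise StopIteration  # signals that the series is exhausted
        value = self.n
        self.n -= 1
        return value

for x in CountDown(3):
    print(x)                     # prints 3, 2, 1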
Generators
You're used to the idea of a function, which takes some arguments and returns with a value, or with None. There can be more than one possible exit point (return statements, or just the last indented line of the function, which is the same thing as return None), but each time the function runs, only one of these exit points is selected, based on conditions in the function.
A generator function is a special type of function that interacts in a more complex, but useful way with the code that invokes it. Here is a simple example that you can paste into your interpreter session:
def gen123():
    yield 2
    yield 5
    yield 9
This is automatically a generator function because it contains at least one yield statement in its body. This one subtle distinction is the only thing that turns a regular function into a generator function, which is a bit tricky because there is a huge difference between regular and generator functions.
Call a generator function like any other function:
>>> it = gen123()
>>> print(it)
<generator object gen123 at 0x10ccccba0>
This function call returns right away, and not with a value specified in the function body. Calling a generator function always returns what is called a generator object. A generator object is an iterator that produces values from the yield statements in the generator function body. In standard terminology, a generator object yields a series of values. Let's dig into the generator object from the previous code snippet.
>>> print(next(it))
2
>>> print(next(it))
5
>>> print(next(it))
9
>>> print(next(it))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
Each time you call next() on the object, you get back the next yield value, until there are no more, in which case you get the StopIteration exception. Of course, because it is an iterator, you can use it in a for loop. Just remember to create a new generator object, because the first one has been exhausted.
>>> it = gen123()
>>> for ix in it:
...     print(ix)
...
2
5
9
>>> for ix in gen123():
...     print(ix)
...
2
5
9
Generator function arguments
Generator functions accept arguments, and these get passed into the body of the generator. Paste in the following generator function.
def gen123plus(x):
    yield x + 1
    yield x + 2
    yield x + 3
Now try it with different arguments, for example:
>>> for ix in gen123plus(10):
...     print(ix)
...
11
12
13
When you are iterating over a generator object, the state of its function is suspended and resumed as you go, which introduces a new concept with Python functions. You can now in effect run code from multiple functions in a way that overlaps. Take the following session.
>>> it1 = gen123plus(10)
>>> it2 = gen123plus(20)
>>> print(next(it1))
11
>>> print(next(it2))
21
>>> print(next(it1))
12
>>> print(next(it1))
13
>>> print(next(it2))
22
>>> print(next(it1))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
>>> print(next(it2))
23
>>> print(next(it2))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
I create two generator objects from the one generator function. I can then get the next item from one or the other object, and notice how each is suspended and resumed independently. They are independent in every way, including in how they fall into StopIteration.
Make sure that you study this session carefully until you really get what's going on. Once you get it, you will truly have a basic grasp on generators and what makes them so powerful.
Note that you can use all the usual positional and keyword argument features as well.
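For example, a quick sketch (the gen_range function is invented for illustration):

def gen_range(start, stop, step=1):
    while start < stop:
        yield start
        start += step

print(list(gen_range(0, 10, step=3)))   # prints [0, 3, 6, 9]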
Local state in generator functions
You can do all the normal things with conditions, loops, and local variables in generator functions and build up to very sophisticated and specialized iterators.
Let's have a bit of fun with the next example. We're all tired of being controlled by the weather. Let's create some weather of our own. Listing 1 is a weather simulator that prints a series of sunny or rainy days, with the occasional commentary.
If you think about the weather, a sunny day is often followed by another sunny day, and a rainy day is often followed by another rainy day. You can simulate this by randomly choosing the next day's weather, but with a higher probability that the weather will stay the same. One word for weather that is very likely to change is volatile, and in this generator function has an argument, volatility, which should be between 0 and 1. The lower this argument, the more chance that the weather will stay the same from day to day. In this listing, volatility is set to 0.2, which means that on average 4 out of 5 transitions should stay the same.
The listing has the added feature that if there are more than three sunny days in a row, or more than three rainy days in a row, it posts a bit of commentary.
Listing 1. Weather simulator
import random

def weathermaker(volatility, days):
    '''
    Yield a series of messages giving the day's weather and occasional commentary

    volatility - a float between 0 and 1; the greater this number the greater
                 the likelihood that the weather will change on each given day
    days -- ...
    '''
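Here is one way the rest of weathermaker can be written to match that description; it is an illustrative sketch rather than the article's exact listing, and the driver arguments 0.2 and 10 are just example values:

import random

def weathermaker(volatility, days):
    weather = random.choice(['sunny', 'rainy'])
    run_length = 0                      # consecutive days of the same weather
    for day in range(days):
        yield 'today it is {}!'.format(weather)
        run_length += 1
        if weather == 'sunny' and run_length > 3:
            yield "what a nice stretch of sunshine we're having"
        elif weather == 'rainy' and run_length > 3:
            yield "will this rain ever end?"
        if random.random() < volatility:
            weather = 'rainy' if weather == 'sunny' else 'sunny'
            run_length = 0

for message in weathermaker(0.2, 10):
    print(message)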
The weathermaker function uses many common programming features but also illustrates some interesting aspects of generators. The number of items yielded is not fixed. It can be as few as the number of days, or it could be more because of the commentary on runs of sunny or rainy days. These are yielded in different condition branches.
Run the listing and you should see something like:
$ python weathermaker.py
...
Of course it's based on randomness and there is a 4 out of 5 chance each time that the weather stays the same, so you could just as easily get:
$ python weathermaker.py
today it is sunny!
Take some time to play around with this yourself, first of all passing different values in for volatility and days, and then tweaking the generator function code itself. Experimentation is the best way to be sure you really understand how the generator works.
I hope this more interesting example fuels your imagination with some of the power of generators. You could write the previous code without generators certainly, but not only is this approach more expressive and usually more efficient, but you have the advantage of being able to reuse the weathermaker generator in other interesting ways besides the simple loop at the bottom of the listing.
Generator expressions
A common use of generators is to iterate over one iterator and manipulate it in some way, producing a modified iterator.
Let's write a generator that takes an iterator and substitutes values found in a sequence according to a provided set of replacements.
def substituter(seq, substitutions):
    for item in seq:
        if item in substitutions:
            yield substitutions[item]
        else:
            yield item
In the following session, you can see an example of how to use this generator:
>>> s = 'hello world and everyone in the world'
>>> subs = {'hello': 'goodbye', 'world': 'galaxy'}
>>> for word in substituter(s.split(), subs):
...     print(word, end=' ')
...
goodbye galaxy and everyone in the galaxy
Again, take some time to play around with this yourself, trying other loop manipulations until you understand clearly how the generator works.
This sort of manipulation is so common that Python provides a handy syntax for it, called a generator expression. Here is the previous session implemented using a generator expression.
>>> words = ( subs.get(item, item) for item in s.split() )
>>> for word in words:
...     print(word, end=' ')
...
goodbye galaxy and everyone in the galaxy
In short, any time you have parentheses around a for expression, it is a generator expression. The resulting object, assigned to words in this case, is a generator object. Sometimes you end up using some of the more interesting bits of Python to fit such expressions. In this case, I take advantage of the get method on dictionaries, which looks up a key but allows me to specify a default to be returned if the key is not found. I ask for either the substitution value of item, if found, otherwise just item as is.
List comprehensions recap
You might be familiar with list comprehensions. This is a similar syntax, but using square brackets. The result of a list comprehension is a list:
>>> mylist = [ ix for ix in range(10, 20) ]
>>> print(mylist)
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
A generator expression's syntax is similar, but it returns a generator object:
>>> mygen = ( ix for ix in range(10, 20) )
>>> print(mygen)
<generator object <genexpr> at 0x10ccccba0>
>>> print(next(mygen))
10
The main practical difference between these last two examples is that the list created in the first sits there from the moment it is created, taking up all the memory needed to store its values. The generator expression doesn't use so much storage and is rather suspended and resumed whenever it is iterated over, as the body of a generator function would be. In effect, it allows you to get the data on demand, rather than having it all prestocked for you.
An off-the-cuff analogy is that your household might drink 200 gallons of milk each year, but you don't want to have to build a storage facility in your basement for all that milk. Instead, you go to the store to buy a gallon at a time, as you need more milk. Using a generator instead of building lists all the time is a bit like using your grocery store rather than building yourself a warehouse.
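A quick way to see the difference is to compare object sizes with sys.getsizeof (exact numbers vary by Python version and platform; the figures in the comments are only indicative):

import sys

biglist = [ix for ix in range(1000000)]    # a million integers, stored up front
biggen = (ix for ix in range(1000000))     # produces the same values on demand

print(sys.getsizeof(biglist))   # several megabytes for the list object itself
print(sys.getsizeof(biggen))    # a small, fixed-size generator object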
There are also dictionary expressions, but these are outside the scope of this tutorial. Note that you can easily convert a generator expression into a list, and this can sometimes be a way of consuming a generator all at once, but it can also defeat the purpose of using a generator by creating memory-hungry lists if you're not careful.
>>> mygen = ( ix for ix in range(10, 20) )
>>> print(list(mygen))
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
I'll sometimes construct lists from generators in this tutorial series for quick demonstrations.
Filtering and chaining
You can use a simple condition in generator expressions to filter out items from the input iterator. The following example produces all numbers from 1 to 20 that are multiples of neither 2 nor 3. It uses the handy math.gcd function, which returns the greatest common divisor of two integers. If the GCD of a number and 2, for example, is 1, then that number is not a multiple of 2.
>>> import math
>>> notby2or3 = ( n for n in range(1, 20) if math.gcd(n, 2) == 1 and math.gcd(n, 3) == 1 )
>>> print(list(notby2or3))
[1, 5, 7, 11, 13, 17, 19]
You can see how the if expression is right inline within the generator expression. Note that you can also nest the for expressions in generator expressions. Then again, generator expressions are really just compact syntax for generator functions, so if you start needing really complex generator expressions, you might end up with more readable code just using generator functions.
You can chain together generator objects, including generator expressions.
>>> notby2or3 = ( n for n in range(1, 20) if math.gcd(n, 2) == 1 and math.gcd(n, 3) == 1 )
>>> squarednotby2or3 = ( n*n for n in notby2or3 )
>>> print(list(squarednotby2or3))
[1, 25, 49, 121, 169, 289, 361]
Such patterns of chained generators are powerful and efficient. In the previous example, the first line defines a generator object, but none of its work is done. The second line defines a second generator object, which refers to the first one, but none of the work is done for either of these objects. It's not until the full iteration is requested, in this case by the list constructor, that all the work is done. This idea of doing the work of iterating over things as needed is called lazy evaluation, and it is one of the hallmarks of well-designed code using generators. When you use such chains of generators and generator expressions, however, remember that something has to actually trigger the iteration. In this case, it's the list function. It could also be a for loop. It's an easy mistake to set up all sorts of generators and then forget to trigger the iteration, in which case you end up scratching your head as to why your code is not doing anything.
The value of laziness
In this tutorial, you've learned the basics of iterators and also the most interesting sources of iterators, generator functions, and expressions.
You could certainly write all the code in this tutorial without any generators, but learning to use generators opens up more flexible and efficient ways of thinking about operating on any concepts or data that develop in a series. To repeat my analogy from earlier, it makes more sense to get a gallon of milk from the store at a time, on demand, rather than building yourself a warehouse with a year's supply. Though developers call the equivalent approach lazy evaluation, the laziness is more about the timing of when you obtain what you need. It probably doesn't seem so lazy to make a trip to the grocery store every other day. Similarly, sometimes writing code to use generators can take a bit more work and can even be a bit mind-bending, but the benefits come with more scalable processing.
Learning iterators and generators is one important step in mastering Python, and another is learning the many amazing tools provided in the standard library to process iterators. That will be the topic of the next tutorial in this series. | https://www.ibm.com/developerworks/library/ba-on-demand-data-python-1/index.html | CC-MAIN-2019-43 | en | refinedweb |
import "mime/multipart"
Package multipart implements MIME multipart parsing, as defined in RFC 2046.
The implementation is sufficient for HTTP (RFC 2388) and the multipart bodies generated by popular browsers.
formdata.go multipart.go writer.go
ErrMessageTooLarge is returned by ReadForm if the message form data is too large to be processed. Size int64 //.
FileName returns the filename parameter of the Part's Content-Disposition header.
FormName returns the name parameter if p has a Content-Disposition of type "form-data". Otherwise it returns the empty string.
Read reads the body of a part, after its headers and before the next part (if any) begins.
Reader is an iterator over parts in a MIME multipart body. Reader's underlying parser consumes its input as needed. Seeking isn't supported.
NewReader creates a new multipart Reader reading from r using the given MIME boundary.
The boundary is usually obtained from the "boundary" parameter of the message's "Content-Type" header. Use mime.ParseMediaType to parse such headers. := ioutil.ReadAll(p) if err != nil { log.Fatal(err) } fmt.Printf("Part %q: %q\n", p.Header.Get("Foo"), slurp) } }
Output:
Part "one": "A section" Part "two": "And another"
NextPart returns the next part in the multipart or an error. When there are no more parts, the error io.EOF is returned.
ReadForm parses an entire multipart message whose parts have a Content-Disposition of "form-data". It stores up to maxMemory bytes + 10MB (reserved for non-file parts) in memory. File parts which can't be stored in memory will be stored on disk in temporary files. It returns ErrMessageTooLarge if all non-file parts can't be stored in memory.
A Writer generates multipart messages.
NewWriter returns a new multipart Writer with a random boundary, writing to w..
Package multipart imports 13 packages (graph) and is imported by 6094 packages. Updated 2019-09-27. Refresh now. Tools for package owners. | https://godoc.org/mime/multipart | CC-MAIN-2019-43 | en | refinedweb |
sbt-assembly
Deploy fat JARs. Restart processes.
sbt-assembly is a sbt plugin originally ported from codahale's assembly-sbt, which I'm guessing was inspired by Maven's assembly plugin. The goal is simple: Create a fat JAR of your project with all of its dependencies.
(You may need to check this project's tags to see what the most recent release is.):
lazy val commonSettings = Seq(
  version := "0.1-SNAPSHOT",
  organization := "com.example",
  scalaVersion := "2.10.1",
  test in assembly := {}
)

lazy val app = (project in file("app")).
  settings(commonSettings: _*).
  settings(
    mainClass in assembly := Some("com.example.Main"),
    // more settings here ...
  )

lazy val utils = (project in file("utils")).
  settings(commonSettings: _*).
  settings(
    assemblyJarName in assembly :=
If you specify a mainClass in assembly in build.sbt (or just let it autodetect one) then you'll end up with a fully executable JAR, ready to rock.
Here is the list of the keys you can rewire for the assembly task: assemblyJarName, test, mainClass, assemblyOutputPath, assemblyMergeStrategy, assemblyOption, assemblyExcludedJars, assembledMappings.
For example the name of the jar can be set as follows in build.sbt:
assemblyJarName in assembly := "something.jar"
To skip the test during assembly,
test in assembly := {}
To set an explicit main class,
mainClass in assembly := Some("com.example.Main")
Excluding an explicit main class from your assembly requires something a little bit different though:

packageOptions in assembly ~= { pos =>
  pos.filterNot { po =>
    po.isInstanceOf[Package.MainClass]
  }
}
assemblyMergeStrategy in assembly := {
  case x =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}
NOTE:
- assemblyMergeStrategy in assembly expects a function. You can't do assemblyMergeStrategy in assembly := MergeStrategy.first!
- Some files must be discarded or renamed otherwise to avoid breaking the zip (due to duplicate file name) or the legal license. Delegate default handling to (assemblyMergeStrategy in assembly).
assemblyShadeRules in assembly :=.
assemblyShadeRules in assembly :=:
logLevel in assembly := Level.Debug

Spark people want to include "provided" dependencies back to run, and @douglaz has come up with a one-liner solution on StackOverflow: sbt: how can I add "provided" dependencies back to run/test tasks' classpath?:
run in Compile := Defaults.runTask(fullClasspath in Compile, mainClass in (Compile, run), runner in (Compile, run))
assemblyMergeStrategy in assembly := {
  case PathList("about.html") => MergeStrategy.rename
  case x =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}
assemblyOption in assembly := (assemblyOption in assembly).value.copy(includeScala = false, includeDependency = false),
assemblyOption in assembly := (assemblyOption in assembly).value.copy(includeScala = false)
assemblyExcludedJars
If all efforts fail, here's a way to exclude JAR files:
assemblyExcludedJars in assembly := {
  val cp = (fullClasspath in assembly).value
  cp filter {_.data.getName == "compile-0.1.0.jar"}
}
Other Things
Content hash
You can also append SHA-1 fingerprint to the assembly file name, this may help you to determine whether it has changed and, for example, if it's necessary to deploy the dependencies,
assemblyOption in assembly := (assemblyOption in assembly).value.copy(appendContentHash = true)
Caching
By default for performance reasons, the result of unzipping any dependency JAR files to disk is cached from run-to-run. This feature can be disabled by setting:
assemblyOption in assembly := (assemblyOption in assembly).value.copy(cacheUnzip = false)
In addition, caching of the fat JAR output can be disabled by setting:
assemblyOption in assembly := (assemblyOption in assembly).value.copy(cacheOutput = false)
Prepending a launch script
You can prepend a launch script to the fat JAR:

assemblyOption in assembly := (assemblyOption in assembly).value.copy(prependShellScript = Some(defaultUniversalScript(shebang = false)))

assemblyJarName in assembly := s"${name.value}-${version.value}"
import sbtassembly.AssemblyPlugin.defaultShellScript

assemblyOption in assembly := (assemblyOption in assembly).value.copy(prependShellScript = Some(defaultShellScript))

assemblyJarName in assembly := s"${name.value}-${version.value}"
Publishing (Not Recommended)
Publishing fat JARs:
artifact in (Compile, assembly) := {
  val art = (artifact in (Compile, assembly)).value
  art.withClassifier(Some("assembly"))
}

addArtifact(artifact in (Compile, assembly), assembly)
Q: Despite the concerned friends, I still want to publish fat JARs. What advice do you have?
You would likely need to set up a front business to lie about what dependencies you have in pom.xml and ivy.xml. To do so, make a subproject for fat JAR purpose only where you depend on the dependencies, and make a second cosmetic subproject that you use only for publishing purpose:
lazy val fatJar = project
  .enablePlugins(AssemblyPlugin)
  .settings(
    // depend on the good stuff
    skip in publish := true
  )

lazy val cosmetic = project
  .settings(
    name := "shaded-something",
    // I am sober. no dependencies.
    packageBin in Compile := (assembly in (fatJar, Compile)).value
  )
License
Published under The MIT License, see LICENSE | https://index.scala-lang.org/sbt/sbt-assembly/sbt-assembly/0.15.0?target=_2.12_1.0 | CC-MAIN-2020-45 | en | refinedweb |
getpriority, setpriority - get/set program scheduling priority
#include <sys/time.h>
#include <sys/resource.h>
int
getpriority(int which, id_t who);
int
setpriority(int which, id. Priority values outside the range -20 to 20 are
truncated to the
appropriate limit. Only the superuseruser attempted to lower a process
priority.
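A small usage sketch (illustrative only; it uses the standard PRIO_PROCESS constant, and a who value of 0 refers to the calling process):

#include <sys/time.h>
#include <sys/resource.h>
#include <errno.h>
#include <stdio.h>

int
main(void)
{
        int prio;

        errno = 0;
        prio = getpriority(PRIO_PROCESS, 0);    /* 0 selects the calling process */
        if (prio == -1 && errno != 0) {
                perror("getpriority");
                return 1;
        }
        printf("current priority: %d\n", prio);

        /* increase the nice value by 5 (i.e., lower the scheduling priority) */
        if (setpriority(PRIO_PROCESS, 0, prio + 5) == -1) {
                perror("setpriority");
                return 1;
        }
        return 0;
}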
nice(1), fork(2), renice(8)
The getpriority() function call appeared in 4.2BSD.
OpenBSD 3.6 June 4, 1993 | https://nixdoc.net/man-pages/OpenBSD/man2/setpriority.2.html | CC-MAIN-2020-45 | en | refinedweb |
Rendering a ContentFolder as a Block to List Assets
Update: I'm an idiot. Fixed it up.
For content-heavy clients, we get the occasional request for listing file assets on the site for download. From what I can tell, there are a couple of options that require a bit of author effort, but I really wanted to provide something that is more suited to the best authoring experience offered by Episerver - drag and drop within Content Areas.
Knowing I can drag and drop most content types into a ContentArea and expect something to happen, I decided to give it a try using one of the asset panel ContentFolders. So I hover my mouse over a folder, casually hold down the mouse button, and drag the folder into an open area of the page. The field highlights! The folder drops in! It works!
Almost.
According to Episerver's documentation, "a content folder is used to structure content and has no visual appearance on the site." Meaning that the ContentFolder type has no Controller and no View.
Great. I can fix that. Code below.
Here's the part where I was an idiot. I was using the wrong parameter name in my Index method and therefore thought that the system was always returning null. Key lesson: make sure you're aligning with convention. So I deleted where I was explaining my workaround that I didn't need and am just including the final code below. This isn't production, but feel free to point out my other mistakes.
Controller:
using EPiServer.Core; using EPiServer.Web.Mvc; using System; using System.Linq; using System.Web.Mvc; using System.Web.Routing; using EPiServer; using EPiServer.ServiceLocation; using Alloy.Models.ViewModels; using Alloy.Models.Media; using Alloy.Business; namespace Alloy.Controllers { public class ContentFolderController : PartialContentController { private IContentRepository contentRepository => ServiceLocator.Current.GetInstance(); // GET: ContentFolder public override ActionResult Index(ContentFolder currentContent) { if(currentContent != null) { var model = new ContentFolderViewModel(); model.Name = currentContent.Name; var documents = contentRepository.GetChildren(currentContent.ContentLink); model.Documents = documents.Select(d => new DocumentViewModel() { Name = d.Name, Description = d.Description, Url = d.GetUrl(), FileSize = d.BinaryData.GetSize(), EveryoneHasAccess = d.IsAvailableToEveryone() }); return PartialView(model); } return PartialView(); } } }
View:
@model Alloy.Models.ViewModels.ContentFolderViewModel <h3>@Model.Name</h3> <ul> @foreach (var doc in Model.Documents) { <li> <a href="@doc.Url">@doc.Name</a> @if (!string.IsNullOrWhiteSpace(doc.Description)) { <div>@Html.Raw(doc.Description)</div> } <div style="font-style: italic; color: #777">File size: @doc.FileSize</div> </li> } </ul>
ViewModels:
using System.Collections.Generic; namespace Alloy.Models.ViewModels { public class ContentFolderViewModel { public string Name { get; set; } public IEnumerable<DocumentViewModel> Documents { get; set; } } }
namespace Alloy.Models.ViewModels { public class DocumentViewModel { public string Name { get; set; } public string FileSize { get; set; } public string Url { get; set; } public string Description { get; set; } public bool EveryoneHasAccess { get; set; } } }
Extensions:
using EPiServer.Core; using EPiServer.Security; using EPiServer.ServiceLocation; using EPiServer.Web.Routing; using System; using System.Security.Principal; namespace Alloy.Business { public static class MediaExtensions { private static UrlResolver urlResolver => ServiceLocator.Current.GetInstance<UrlResolver>(); private static PermissionService permissionService => ServiceLocator.Current.GetInstance<PermissionService>(); public static string GetSize(this EPiServer.Framework.Blobs.Blob blob) { using (var blobReader = blob.OpenRead()) { var l = blobReader.Length; if(l < 1000) { return $"{l} bytes"; } if(l < 1000000) { return $"{l / 1024} KB"; } return $"{l / 1056478} MB"; } } public static Boolean IsAvailableToEveryone<T>(this T content) where T : IContent { return content.RoleHasAccess(new[] { "Everyone" }, AccessLevel.Read); } public static Boolean RoleHasAccess<T>(this T content, string[] roles, AccessLevel accessLevel) where T : IContent { var securedContent = content as ISecurable; var descriptor = securedContent.GetSecurityDescriptor(); var identity = new GenericIdentity("doesn't matter"); var principal = new GenericPrincipal(identity, roles); return descriptor.HasAccess(principal, accessLevel); } public static string GetUrl<T>(this T content) where T : IContent { return urlResolver.GetUrl(content.ContentLink); } } }
Hi! The reason "folder" is always null is because of the parameter name.
For Episerver controllers, the model binder will check and see if the name of the parameter is currentPage/currentBlock/currentContent. If it is, it will get the current content from the routeData.
So, changing the name from "folder" to "currentContent" will get you get current ContentFolder.
Thanks - I'm an idiot. While it should be obvious to anyone with more Epi experience, I'm leaving the code for more novice searchers (like myself).
Maybe worth adding a Log.Info in Episerver if a controller is missing a matching parameter to aid in finding that error? I think most developers have dont that at least once.
At least I have.. :)
It can take a few hours to find if you have never seen it before...
Adding my 2 cents :)
I would move Length calculation at the moment when media is saved, not when it's rendered.
And rendering the description property in the template could be also wrapped into some generic `ifnotempty` helper method.
Aaaaand :) If extension is defined for IContent, should it be located in MediaExtensions class? ;)
Cheers!
Hey Valdis,
I completely agree with every point!
This is just a PoC I put together for a project, nowhere near production code, so I'm willing to live with some shortcuts in favor of that. I'm changing the extension a bit to add some validation (see comments in other post). When I'm satisfied with it, I'll update this as well. | https://world.episerver.com/blogs/egandalf/dates/2016/10/rendering-a-contentfolder-as-a-block-to-list-assets/ | CC-MAIN-2020-45 | en | refinedweb |
0,5
Can be extended to negative numbers by defining a(-n) = -a(n).
Based on the product rule for differentiation of functions: for functions f(x) and g(x), (fg)' = f'g + fg'. So with numbers, (ab)' = a'b + ab'. This implies 1' = 0. - Kerry Mitchell, Mar 18 2004
The derivative of a number x with respect to a prime number p can be defined as the number "dx/dp" = (x-x^p)/p, which is an integer due to Fermat's little theorem. - Alexandru Buium, Mar 18 2004
The relation (ab)' = a'b + ab' implies 1' = 0, but it does not imply p' = 1 for p a prime. In fact, any function f defined on the primes can be extended uniquely to a function on the integers satisfying this relation: f(Product_i p_i^e_i) = (Product_i p_i^e_i) * (Sum_i e_i*f(p_i)/p_i). - Franklin T. Adams-Watters, Nov 07 2006
a(m*p^p) = (m + a(m))*p^p, p prime: a(m*A051674(k))=A129283(m)*A051674(k). - Reinhard Zumkeller, Apr 07 2007
See A131116 and A131117 for record values and where they occur. - Reinhard Zumkeller, Jun 17 2007
Let n be the product of a multiset P of k primes. Consider the k-dimensional box whose edges are the elements of P. Then the (k-1)-dimensional surface of this box is 2a(n). For example, 2a(25) = 20, the perimeter of a 5 X 5 square. Similarly, 2a(18) = 42, the surface area of a 2 X 3 X 3 box. - David W. Wilson, Mar 11 2011
The arithmetic derivative n' was introduced, probably for the first time, by the Spanish mathematician José Mingot Shelly in June 1911 with "Una cuestión de la teoría de los números", work presented at the "Tercer Congreso Nacional para el Progreso de las Ciencias, Granada", cf. link to the abstract on Zentralblatt MATH, and L. E. Dickson, History of the Theory of Numbers. - Giorgio Balzarotti, Oct 19 2013
a(A235991(n)) odd; a(A235992(n)) even. - Reinhard Zumkeller, Mar 11 2014
Sequence A157037 lists numbers with prime arithmetic derivative, i.e., indices of primes in this sequence. - M. F. Hasler, Apr 07 2015
Maybe the simplest "natural extension" of the arithmetic derivative, in the spirit of the above remark by Franklin T. Adams-Watters (2006), is the "pi based" version where f(p) = primepi(p), see sequence A258851. When f is chosen to be the identity map (on primes), one gets A066959. - M. F. Hasler, Jul 13 2015
When n is composite, it appears that a(n) has lower bound 2*sqrt(n), with equality when n is the square of a prime, and a(n) has upper bound (n/2)*((log n)/(log 2)), with equality when n is a power of 2. - Daniel Forgues, Jun 22 2016
G. Balzarotti, P. P. Lava, La derivata aritmetica, Editore U. Hoepli, Milano, 2013
E. J. Barbeau, Problem, Canad. Math. Congress Notes, 5 (No. 8, April 1973), 6-7.
L. E. Dickson, History of the Theory of Numbers, Vol. 1, Chapter XIX, p. 451, Dover Edition, 2005. (Work originally published in 1919.)
A. M. Gleason et al., The William Lowell Putnam Mathematical Competition: Problems and Solutions 1938-1964, Math. Assoc. America, 1980, p. 295.
N. J. A. Sloane and Simon Plouffe, The Encyclopedia of Integer Sequences, Academic Press, 1995 (includes this sequence).
T. D. Noe, Table of n, a(n) for n = 0..10000
Krassimir T. Atanassov, A formula for the n-th prime number, Comptes rendus de l'Académie bulgare des Sciences, Tome 66, No 4, 2013.
E. J. Barbeau, Remark on an arithmetic derivative, Canad. Math. Bull. vol. 4, no. 2, May 1961.
A. Buium, Home Page
A. Buium, Differential characters of Abelian varieties over p-adic fields, Invent. Math. 122 (1995), no. 2, 309-340.
A. Buium, Geometry of p-jets, Duke Math. J. 82 (1996), no. 2, 349-367.
A. Buium, Arithmetic analogues of derivations, J. Algebra 198 (1997), no. 1, 290-299.
A. Buium, Differential modular forms, J. Reine Angew. Math. 520 (2000), 95-167.
José María Grau and Antonio M. Oller-Marcén, Giuga Numbers and the Arithmetic Derivative, Journal of Integer Sequences, Vol. 15 (2012), #12.4.1.
R. K. Guy, Letter to N. J. A. Sloane, Apr 1975
P. Haukkanen, M. Mattila, J. K. Merikoski and T. Tossavainen, Can the Arithmetic Derivative be Defined on a Non-Unique Factorization Domain?, Journal of Integer Sequences, 16 (2013), #13.1.2. - From N. J. A. Sloane, Feb 03 2013
J. Kovič, The Arithmetic Derivative and Antiderivative, Journal of Integer Sequences 15 (2012), Article 12.3.8.
Ivars Peterson, Deriving the Structure of Numbers, Science News, March 20, 2004.
D. J. M. Shelly, Una cuestión de la teoria de los numeros, Asociation Esp. Granada 1911, 1-12 S (1911). (Abstract of ref. JFM42.0209.02 on zbMATH.org)
Victor Ufnarovski and Bo Åhlander, How to Differentiate a Number, J. Integer Seqs., Vol. 6, 2003, #03.3.4.
Linda Westrick, Investigations of the Number Derivative, Siemens Foundation competition 2003 and Intel Science Talent Search 2004.
Wikipedia, Arithmetic derivative
If n = Product p_i^e_i, a(n) = n * Sum (e_i/p_i).
For n > 1: a(n) = a(A032742(n)) * A020639(n) + A032742(n). - Reinhard Zumkeller, May 09 2011
a(n) = n * Sum_{p|n} v_p(n)/p, where v_p(n) is the largest power of the prime p dividing n. - Wesley Ivan Hurt, Jul 12 2015
For n >= 2, Sum_{k=2..n} [1/a(k)] = pi(n) = A000720(n), where [x] stands for the integer part of x (see K. T. Atanassov article). - Ivan N. Ianakiev, Mar 22 2019
From A.H.M. Smeets, Jan 17 2020: (Start)
Lim_{n -> inf} (1/n^2)*Sum_{i=1..n} a(i) = A136141/2.
Lim_{n -> inf} (1/n)*Sum_{i=1..n} a(i)/i = A136141.
a(n) = n if and only if n = p^p, where p is a prime number. (End)
6' = (2*3)' = 2'*3 + 2*3' = 1*3 + 2*1 = 5.
Note that for example, 2' + 3' = 1 + 1 = 2, (2+3)' = 5' = 1. So ' is not linear.
G.f. = x^2 + x^3 + 4*x^4 + x^5 + 5*x^6 + x^7 + 12*x^8 + 6*x^9 + 7*x^10 + ...
A003415 := proc(n) local B, m, i, t1, t2, t3; B := 1000000000039; if n<=1 then RETURN(0); fi; if isprime(n) then RETURN(1); fi; t1 := ifactor(B*n); m := nops(t1); t2 := 0; for i from 1 to m do t3 := op(i, t1); if nops(t3) = 1 then t2 := t2+1/op(t3); else t2 := t2+op(2, t3)/op(op(1, t3)); fi od: t2 := t2-1/B; n*t2; end;
A003415 := proc(n)
local a, f;
a := 0 ;
for f in ifactors(n)[2] do
a := a+ op(2, f)/op(1, f);
end do;
n*a ;
end proc: # R. J. Mathar, Apr 05 2012
a[ n_] := If[ Abs @ n < 2, 0, n Total[ #2 / #1 & @@@ FactorInteger[ Abs @ n]]]; (* Michael Somos, Apr 12 2011 *)
dn[0] = 0; dn[1] = 0; dn[n_?Negative] := -dn[-n]; dn[n_] := Module[{f = Transpose[FactorInteger[n]]}, If[PrimeQ[n], 1, Total[n*f[[2]]/f[[1]]]]]; Table[dn[n], {n, 0, 100}] (* T. D. Noe, Sep 28 2012 *)
(PARI) A003415(n) = {local(fac); if(n<1, 0, fac=factor(n); sum(i=1, matsize(fac)[1], n*fac[i, 2]/fac[i, 1]))} /* Michael B. Porter, Nov 25 2009 */
(PARI) apply( A003415(n)=vecsum([n/f[1]*f[2]|f<-factor(n+!n)~]), [0..99]) \\ M. F. Hasler, Sep 25 2013, updated Nov 27 2019
(Haskell)
a003415 0 = 0
a003415 n = ad n a000040_list where
  ad 1 _ = 0
  ad n ps'@(p:ps)
    | n < p * p = 1
    | r > 0 = ad n ps
    | otherwise = n' + p * ad n' ps' where
      (n', r) = divMod n p
-- Reinhard Zumkeller, May 09 2011
(MAGMA) Ad:=func<h | h*(&+[Factorisation(h)[i][2]/Factorisation(h)[i][1]: i in [1..#Factorisation(h)]])>; [n le 1 select 0 else Ad(n): n in [0..80]]; // Bruno Berselli, Oct 22 2013
(Python)
from sympy import factorint
def A003415(n):
    return sum([int(n*e/p) for p, e in factorint(n).items()]) if n > 1 else 0
# Chai Wah Wu, Aug 21 2014
(Sage)
def A003415(n):
    F = [] if n == 0 else factor(n)
    return n * sum(g / f for f, g in F)
[A003415(n) for n in range(79)] # Peter Luschny, Aug 23 2014
(GAP)
A003415:= Concatenation([0, 0], List(List([2..10^3], Factors),
i->Product(i)*Sum(i, j->1/j))); # Muniru A Asiru, Aug 31 2017
Cf. A086134 (least prime factor of n').
Cf. A086131 (greatest prime factor of n').
Cf. A068719 (derivative of 2n).
Cf. A068720 (derivative of n^2).
Cf. A068721 (derivative of n^3).
Cf. A001787 (derivative of 2^n).
Cf. A027471 (derivative of 3^n).
Cf. A085708 (derivative of 10^n).
Cf. A068327 (derivative of n^n).
Cf. A024451 (derivative of p#).
Cf. A068237 (numerator of derivative of 1/n).
Cf. A068238 (denominator of derivative of 1/n).
Cf. A068328 (derivative of squarefree numbers).
Cf. A068311 (derivative of n!).
Cf. A168386 (derivative of n!!).
Cf. A260619 (derivative of hyperfactorial(n)).
Cf. A260620 (derivative of superfactorial(n)).
Cf. A068312 (derivative of triangular numbers).
Cf. A068329 (derivative of Fibonacci(n)).
Cf. A096371 (derivative of partition number).
Cf. A099301 (derivative of d(n)).
Cf. A099310 (derivative of phi(n)).
Cf. A327860 (derivative of prime product form of primorial base expansion of n).
Cf. A068346 (second derivative of n).
Cf. A099306 (third derivative of n).
Cf. A258644 (fourth derivative of n).
Cf. A258645 (fifth derivative of n).
Cf. A258646 (sixth derivative of n).
Cf. A258647 (seventh derivative of n).
Cf. A258648 (eighth derivative of n).
Cf. A258649 (ninth derivative of n).
Cf. A258650 (tenth derivative of n).
Cf. A185232 (n-th derivative of n).
Cf. A258651 (A(n,k) = k-th arithmetic derivative of n).
Cf. A085731 (gcd(n,n')).
Cf. A098699 (least x such that x' = n, antiderivative of n).
Cf. A098700 (n such that x' = n has no integer solution).
Cf. A099302 (number of solutions to x' = n).
Cf. A099303 (greatest x such that x' = n).
Cf. A051674 (n such that n' = n).
Cf. A083347 (n such that n' < n).
Cf. A083348 (n such that n' > n).
Cf. A099304 (least k such that (n+k)' = n' + k').
Cf. A099305 (number of solutions to (n+k)' = n' + k').
Cf. A328235 (least k > 0 such that (n+k)' = u * n' for some natural number u).
Cf. A328236 (least m > 1 such that (m*n)' = u * n' for some natural number u).
Cf. A099307 (least k such that the k-th arithmetic derivative of n is zero).
Cf. A099308 (k-th arithmetic derivative of n is zero for some k).
Cf. A099309 (k-th arithmetic derivative of n is nonzero for all k).
Cf. A129150 (n-th derivative of 2^3).
Cf. A129151 (n-th derivative of 3^4).
Cf. A129152 (n-th derivative of 5^6).
Cf. A189481 (x' = n has a unique solution).
Cf. A190121 (partial sums).
Cf. A258057 (first differences).
Cf. A229501 (n divides the n-th partial sum).
Cf. A165560 (parity).
Cf. A235991 (n' is odd), A235992 (n' is even).
Cf. A327863, A327864, A327865 (n' is a multiple of 3, 4, 5).
Cf. A157037 (n' is prime), A192192 (n'' is prime), A328239 (n''' is prime).
Cf. A328393 (n' is squarefree), A328234 (squarefree and > 1).
Cf. A328244 (n'' is squarefree), A328246 (n''' is squarefree).
Cf. A328303 (n' is not squarefree), A328252 (n' is squarefree, but n is not).
Cf. A328248 (least k such that the (k-1)-th derivative of n is squarefree).
Cf. A328251 (k-th arithmetic derivative is never squarefree for any k >= 0).
Cf. A256750 (least k such that the k-th derivative is either 0 or has a factor p^p).
Cf. A327928 (number of distinct primes p such that p^p divides n').
Cf. A327929 (n' has at least one divisor of the form p^p).
Cf. A327978 (n' is primorial number > 1).
Cf. A328243 (n' is a partial sum of primorial numbers and larger than one).
Cf. A328310 (maximal prime exponent of n' minus maximal prime exponent of n).
Cf. A328320 (max. prime exponent of n' is less than that of n).
Cf. A328321 (max. prime exponent of n' is >= that of n).
Cf. A328383 (least k such that the k-th derivative of n is either a multiple or a divisor of n, but not both).
Cf. A263111 (the ordinal transform of a).
Cf. A300251, A319684 (Möbius and inverse Möbius transform).
Cf. A305809 (Dirichlet convolution square).
Cf. A069359 (similar formula which agrees on squarefree numbers).
Cf. A258851 (the pi-based arithmetic derivative of n).
Cf. A328768, A328769 (primorial-based arithmetic derivatives of n).
Cf. A328845, A328846 (Fibonacci-based arithmetic derivatives of n).
Cf. A302055, A327963, A327965, A328099 (for other variants and modifications).
Cf. A038554 (another sequence using "derivative" in its name, but involving binary expansion of n).
Sequence in context: A024919 A328385 A328099 * A302055 A086300 A028271
Adjacent sequences: A003412 A003413 A003414 * A003416 A003417 A003418
nonn,easy,nice,hear,look
N. J. A. Sloane, R. K. Guy
More terms from Michel ten Voorde, Apr 11 2001
approved
The npm package angular-archwizard receives a total of 9,336 downloads a week. As such, we scored angular-archwizard popularity level to be Small.
Based on project statistics from the GitHub repository for the npm package angular-archwizard, we found that it has been starred 256 times.
angular-archwizard is missing a security policy.
# Install the Snyk CLI and test your project
npm i snyk && snyk test angular-archwizard
Further analysis of the maintenance status of angular-archwizard based on released npm versions cadence, the repository activity, and other data points determined that its maintenance is Inactive.
With multiple contributors for the angular-archwizard repository, this is possibly a sign for a growing and inviting community.
We found a way for you to contribute to the project! Looks like angular-archwizard is missing a Code of Conduct.
We detected a total of 1 direct & transitive dependencies for angular-archwizard. See the full dependency tree of angular-archwizard
angular-archwizard has more than a single and default latest tag published for the npm package. This means, there may be other tags available for this package, such as next to indicate future releases, or stable to indicate stable releases.
This project contains a functional module with a wizard component and some supportive components and directives for Angular version 9 or later.
Run npm run build to build the project. The build artifacts will be stored in the dist/ directory.
Run npm test to execute the unit tests via Karma.
The latest angular-archwizard is compatible with Angular 9+.
angular-archwizard can be developed with Gitpod, a free one-click online IDE for GitHub:
angular-archwizard is available as a NPM package. To install angular-archwizard in your project directory run:

$ npm install --save angular-archwizard

After installation you can import angular-archwizard into your Angular project by adding the ArchwizardModule to your module declaration as follows:
import { ArchwizardModule } from 'angular-archwizard';

@NgModule({
  imports: [
    ArchwizardModule
  ],
})
export class Module { }
To allow customization, angular-archwizard bundles CSS styles separately. If you are using Angular CLI, import them into your styles.css...

@import '../node_modules/angular-archwizard/archwizard.css';

...or include them into angular.json:
{
  // ...
  "styles": [
    "node_modules/angular-archwizard/archwizard.css",
    "src/styles.css"
  ]
  // ...
}
If you are using SCSS, you can include the styles in the form of a .scss file: node_modules/angular-archwizard/archwizard.scss.
This way you can easily customize the wizard's appearance by tweaking SCSS variables as described in Styles Customization.
To use this wizard component in an angular project simply add an aw-wizard component to the html template of your component:
<aw-wizard>
  <aw-wizard-step stepTitle="Title of step 1">
    Content of Step 1
    <button type="button" awNextStep>Next Step</button>
    <button type="button" [awGoToStep]="{stepIndex: 2}">Go directly to third Step</button>
  </aw-wizard-step>
  <aw-wizard-step stepTitle="Title of step 3">
    Content of Step 3
    <button type="button" awPreviousStep>Previous Step</button>
    <button type="button" (click)="finishFunction()">Finish</button>
  </aw-wizard-step>
</aw-wizard>
The <aw-wizard> environment is the environment in which you define the steps belonging to your wizard.
In addition to the contained wizard steps, angular-archwizard enables you to define the location and the layout of the navigation bar inside your wizard.
To set the location, the layout of the navigation bar and many other settings, you can pass the following parameters to the aw-wizard component:
The location of the navigation bar, contained inside the wizard, can be specified through the navBarLocation input value. This value can be either top, bottom, left or right, where the values specify the position at which the navigation bar will be shown. In addition, top and bottom will lead to a horizontal navigation bar, while left and right lead to a vertical navigation bar at the left or right side. If no navBarLocation is given, the navigation bar will be shown at the top of the wizard.
Another option that can be changed is the design or layout of the navigation bar. Currently five different navigation bar layouts exist: small, large-filled, large-empty, large-filled-symbols and large-empty-symbols. The first three layouts show circles, with or without a background, for each step of your wizard in the navigation bar. The last two layouts, large-filled-symbols and large-empty-symbols, optionally add a symbol in the center of the circle for each step of your wizard, if such a symbol has been defined for the step.
Normally the steps in the navigation bar are laid out from left to right or from top to bottom. In some cases, like with languages that are written from right to left, it may be required to change this direction and lay out the steps from right to left. To lay out the steps from right to left you can pass right-to-left to the navBarDirection input of the wizard component.
Per default the wizard always starts with the first wizard step after initialisation. The same applies for a reset, where the wizard normally resets to the first step. Sometimes this needs to be changed. If another default wizard step needs to be used, you can set it by using the [defaultStepIndex] input of the wizard component. For example, to start the wizard in the second step, [defaultStepIndex]="2" needs to be set. Please be aware that angular will interpret the given input value as a string if it's not enclosed by []!
Sometimes it may be necessary to disable navigation via the navigation bar. In such a case you can disable navigation via the navigation bar by setting the input [disableNavigationBar] of the wizard component to true. After disabling the navigation bar, the user can't use the navigation bar anymore to navigate between steps. Disabling the navigation bar doesn't restrict the use of elements (buttons or links) with an awNextStep, awPreviousStep or awGoToStep directive.
Possible <aw-wizard> parameters:
angular-archwizard contains two ways to define a wizard step. One of these two ways is by using the <aw-wizard-step> component.
A wizard step can have its own unique id. This id can then be used to navigate to the step. In addition the [stepId] of a wizard step is used as the id of the li element for the wizard step in the navigation bar.
A wizard step needs to contain a title, which is shown in the navigation bar of the wizard. To set the title of a step, add the stepTitle input attribute, with the chosen step title, to the definition of your wizard step.
Sometimes it's useful to add a symbol in the center of the circle in the navigation bar which belongs to the step. angular-archwizard supports this through the [navigationSymbol] input attribute of the wizard step. Be aware that not all layouts display the symbols. Only the layouts large-filled-symbols and large-empty-symbols display the symbols!
If you want to add a 2 to the circle in the navigation bar belonging to the second step, you can do it like this:

<aw-wizard-step [navigationSymbol]="{ symbol: '2' }">
  ...
</aw-wizard-step>
In addition to normal symbols it's also possible to use an icon from a font as a symbol. To use an icon from a font you need to first search for the unicode belonging to the icon you want to insert. Afterwards you can use the unicode in the numeric character reference format as the symbol for the step. In addition you need to specify the font family to which the icon belongs, otherwise the symbol can't be displayed correctly. The font family of the used symbol can be specified via the fontFamily field of the given [navigationSymbol] json input object.
For example, if you want to show the icon with the unicode \f2dd of FontAwesome inside a step circle in the navigation bar, then you can do this via the following [navigationSymbol] input attribute:

<aw-wizard-step [navigationSymbol]="{ symbol: '&#xf2dd;', fontFamily: 'FontAwesome' }">
  ...
</aw-wizard-step>
Sometimes it's required to only allow the user to enter a specific step if a certain validation method returns true. In such a case you can use the [canEnter] input of the targeted wizard step. This input can be either a boolean, which directly tells the wizard if the targeted step can be entered, or a lambda function taking a MovingDirection and returning a boolean or a Promise<boolean>. This function will then be called, with the direction in which the targeted step will be entered, whenever an operation has been performed that leads to a change of the current step. It then returns true when the step change should succeed and false otherwise.
If you have an additional check or validation you need to perform to decide if the step can be exited (both to the next step and to the previous step), you can either pass a boolean or a function, taking a MovingDirection enum and returning a boolean or a Promise<boolean>, to the [canExit] attribute of the wizard step. This boolean, or function, is taken into account when an operation has been performed which leads to a transition of the current step. If [canExit] has been bound to a boolean, it needs to be true to leave the step in either direction (forward AND backward). If only exiting in one direction should be covered, you can pass a function, taking a MovingDirection and returning a boolean, to [canExit]. This function will then be called whenever an operation has been performed that leads to a change of the current step.
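For illustration, a minimal sketch of such an exit check inside the component class could look like this (the property and method names below are purely illustrative, not part of the library):

// my-step.component.ts
import { Component } from '@angular/core';
import { MovingDirection } from 'angular-archwizard';

@Component({ /* ... */ })
export class MyStepComponent {
  formIsValid = false;

  // bound in the template as [canExit]="canExitStep"
  canExitStep = (direction: MovingDirection): boolean => {
    // always allow navigating back, only allow moving forward once the form is valid
    return direction === MovingDirection.Backwards || this.formIsValid;
  };
}

In the template the check is then attached to the step, e.g. <aw-wizard-step [canExit]="canExitStep">.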
If you need to call a function to do some initialisation work before entering a wizard step you can add a stepEnter attribute to the wizard step environment like this:

<aw-wizard-step (stepEnter)="enterSecondStep($event)">
  ...
</aw-wizard-step>

This leads to the calling of the enterSecondStep function when the wizard moves to this step. When the first step of the wizard contains a stepEnter function, it not only gets called when the user moves back from a later step to the first step, but also after the wizard is initialized. The event emitter will call the given function with a parameter that contains the MovingDirection of the user. If the user went backwards, for example from the third step to the second or first step, then MovingDirection.Backwards will be passed to the function. If the user went forwards, MovingDirection.Forwards will be passed to the function.
Similar to stepEnter you can add a stepExit attribute to the wizard step environment, if you want to call a function every time a wizard step is exited either by pressing on a component with an awNextStep or awPreviousStep directive, or by a click on the navigation bar. stepExit, like stepEnter, can call the given function with an argument of type MovingDirection that signalises in which direction the step was exited.
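As a rough sketch (component and method names are mine, not from the library), the handlers referenced above could look like this:

import { MovingDirection } from 'angular-archwizard';

export class MyWizardComponent {
  enterSecondStep(direction: MovingDirection): void {
    if (direction === MovingDirection.Forwards) {
      // e.g. load the data this step needs
    }
  }

  exitSecondStep(direction: MovingDirection): void {
    // e.g. persist the values entered in this step
  }
}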
Possible <aw-wizard-step> parameters:
In addition to the "normal" step component
<aw-wizard-step> it's also possible to define an optional
<aw-wizard-completion-step>.
The
aw-wizard-completion-step is meant as the final wizard step, which signalises the user, that he or she successfully completed the wizard.
When an
aw-wizard-completion-step has been entered by the user, all wizard steps, including the optional steps belonging to the wizard, are marked as completed.
In addition the user is prevented from leaving the
aw-wizard-completion-step to another step, once it has been entered.
The given parameters for the wizard completion step are identical to the normal wizard step.
The only difference is, that it isn't possible to pass a
(stepExit) and
[canExit] parameter to the
aw-wizard-completion-step, because it can't be exited.
Possible
<aw-wizard-completion-step> parameters:
By default
angular-archwizard operates in a "strict" navigation mode.
It requires users to navigate through the wizard steps in a linear fashion, where they can only enter the next step if all previous steps have been completed and the exit condition of the current step have been fulfilled.
The only exception to this rule are optional steps, which a user can skip.
Using the navigation bar, the user can navigate back to steps they already visited.
You can alter this behavior by applying to the
<aw-wizard> element an additional
[awNavigationMode] directive, which can be used in two ways.
The easiest option is to tweak the default navigation mode with
[navigateBackward] and/or
[navigateForward] inputs which control the navigation bar and have the following options:
Take notice that the
'allow' and
'visited' options still respect step exit conditions. Also, the completion step still only becomes enterable after all previous steps are completed. Example usage:
<aw-wizard [awNavigationMode] ... </aw-wizard>
If changes you need are more radical, you can define your own navigation mode. In order to do this, create a class implementing the
NavigationMode interface and pass an instance of this class into the
[awNavigationMode] directive.
This takes priority over
[navigateBackward] and
[navigateForward] inputs.
Example usage:
custom-navigation-mode.ts:
import { NavigationMode } from 'angular-archwizard' class CustomNavigationMode implements NavigationMode { // ... }
my.component.ts:
@Component({ // ... }) class MyComponent { navigationMode = new CustomNavigationMode(); }
my.component.html:
<aw-wizard [awNavigationMode]="navigationMode"> ... </aw-wizard>
Instead of implementing the
NavigationMode interface from scratch, you can extend one of the classes provided by
angular-archwizard:
BaseNavigationMode: This class contains an abstract method called
isNavigable, which you will have to override to define wizard's behavior towards navigation using the navigation bar.
ConfigurableNavigationMode: This class defines the default navigation mode used by
angular-archwizard.
In some cases, it might be more convenient to base your custom implementation on it.
This way of customizing the wizard is advanced, so be prepared to refer to documentation comments and source code for help.
Possible
awNavigationMode parameters:
In some cases it may be required that the user is allowed to leave an entered
aw-wizard-completion-step.
In such a case you can enable this by adding the directive
[awEnableBackLinks] to the
aw-wizard-completion-step.
<aw-wizard-completion-step awEnableBackLinks> Final wizard step </aw-wizard-completion-step>
Possible
awEnableBackLinks parameters:
Sometimes it's not enough to define a title with the
stepTitle attribute in
<aw-wizard-step> and
<aw-wizard-completion-step>.
One example for such a case is, if the title should be written in another font.
Another example would be if it's desired that the title should be chosen depending on the available width of your screen or window.
In such cases you may want to specify the html for the title of a wizard step yourself.
This can be achieved by using the
[awWizardStepTitle] directive inside a wizard step on a
ng-template component.
<aw-wizard-step (stepEnter)="enterStep($event)"> <ng-template awWizardStepTitle> <span class="hidden-sm-down">Delivery address</span> <span class="hidden-md-up">Address</span> </ng-template> </aw-wizard-step>
Additionally it is possible to inject the corresponding
WizardStep object into the
ng-template environment.
This for example allows customization of the step title depending on the state of the wizard step, like on the completion and selection state:
<aw-wizard-step (stepEnter)="enterStep($event)"> <ng-template awWizardStepTitle {{ wizardStep.completed ? "Delivery address (✔)" : "Delivery address" }} </ng-template> </aw-wizard-step>
In addition to the step title, the navigation symbol/step symbol can also be set via a directive.
This is required, if the navigation step symbol is not a simple character or a symbol, but something more complex, like a html component.
In such a case, the the navigation symbol can be specified using the
[awWizardStepSymbol] directive, inside a wizard step on a
ng-template component.
<aw-wizard-step (stepEnter)="enterStep($event)"> <ng-template awWizardStepSymbol> <!-- use <i class="fa fa-file"></i> for fontawesome version 4 --> <i class="far fa-file"></i> </ng-template> </aw-wizard-step>
Additionally it is possible to inject the corresponding
WizardStep object into the
ng-template environment.
This for example allows customization of the navigation symbol depending on the state of the wizard step, like on the completion and selection state:
<aw-wizard-step (stepEnter)="enterStep($event)"> <ng-template awWizardStepSymbol <!-- use <i *</i> for fontawesome version 4 --> <i *</i> <!-- use <i *</i> for fontawesome version 4 --> <i *</i> </ng-template> </aw-wizard-step>
If you need to define an optional step, that doesn't need to be done to continue to the next steps, you can define an optional step
by adding the
awOptionalStep directive to the step you want to declare as optional:
<aw-wizard-step awOptionalStep> ... </aw-wizard-step>
Sometimes a wizard step should only be marked as optional if some condition has been fulfilled.
In such a case you can pass the condition to the
awOptionalStep input parameter of the
awOptionalStep directive
to tell the wizard whether the step should be marked as optional:
<aw-wizard-step [awOptionalStep]="condition"> ... </aw-wizard-step>
It is important to note that the condition input value can not be changed after initialization.
In some cases it is required to specify a step as completed by default.
This means that the step should be shown as completed directly after initialization.
A step can be marked as completed by default by adding the
awCompletedStep directive to
the step you want to declare as completed:
<aw-wizard-step awCompletedStep> ... </aw-wizard-step>
Sometimes a wizard step should only be marked as completed if some condition has been fulfilled.
In such cases you can pass the condition to the
awCompletedStep input parameter of the
awCompletedStep directive
to tell the wizard, whether the step should be marked as complete:
<aw-wizard-step [awCompletedStep]="condition"> ... </aw-wizard-step>
It is important to note that the condition input value can not be changed after initialization.
In some cases it may be a better choice to set the default wizard step not via a static number.
Another way to set the default wizard step is by using the
awSelectedStep directive.
When attaching the
awSelectedStep directive to an arbitrary wizard step, it will be marked as the default wizard step,
which is shown directly after the wizard startup.
angular-archwizard has three directives, which allow moving between steps.
These directives are the
awPreviousStep,
asNextStep and
awGoToStep directives.
The
awGoToStep directive needs to receive an input, which tells the wizard, to which step it should navigate,
when the element with the
awGoToStep directive has been clicked.
This input accepts different arguments:
a destination step index: One possible argument for the input is a destination step index. A destination step index is always zero-based, i.e. the index of the first step inside the wizard is always zero.
To pass a destination step index to an
awGoToStep directive,
you need to pass the following json object to the directive:
<button [awGoToStep]="{ stepIndex: 2 }" (finalize)="finalizeStep()">Go directly to the third Step</button>
a destination step id:
Another possible argument for the input is a the unique step id of the destination step.
This step id can be set for all wizard steps through their input
[stepId].
To pass a unique destination step id to an
awGoToStep directive,
you need to pass the following json object to the directive:
<button [awGoToStep]="{ stepId: 'unique id of the third step' }" (finalize)="finalizeStep()">Go directly to the third Step</button>
a step offset between the current step and the destination step: Alternatively to an absolute step index or an unique step id, it's also possible to set the destination wizard step as an offset to the source step:
<button [awGoToStep]="{ stepOffset: 1 }" (finalize)="finalizeStep()">Go to the third Step</button>
In all above examples a click on the "Go to the third Step" button will move the user to the next step (the third step) compared to the step the button belongs to (the second step). If the button is part of the second step, a click on it will move the user to the third step.
In all above cases it's important to use [] around the awGoToStep directive to tell angular that the argument is to be interpreted as javascript.
In addition to a static value you can also pass a local variable from your component typescript class that contains the step to which a click on the element should move the wizard. This can be useful if your step transitions depend on some application dependent logic that changes depending on the user input. Here again it's important to use [] around the awGoToStep directive to tell angular that the argument is to be interpreted as javascript.
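For example (the property name below is purely illustrative):

// component class
export class MyComponent {
  destination = { stepIndex: 2 };
}

<!-- template -->
<button [awGoToStep]="destination">Go directly to the third Step</button>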
Sometimes it's required to bind an event emitter to a specific element, which can perform a step transition.
Such an event emitter can be bound to the
(preFinalize) output of the element, which contains the
awGoToStep directive.
This event emitter is then called, directly before the wizard transitions to the given step.
Alternatively you can also bind an event emitter to
(postFinalize),
which is executed directly after the wizard transitions to the given step.
In case you don't really care when the finalization event emitter is called, you can also bind it simply to
(finalize).
finalize is a synonym for
preFinalize.
Possible parameters:
By adding a
awNextStep directive to a button or a link inside a step, you automatically add a
onClick listener to the button or link, that leads to the next step.
This listener will automatically change the currently selected wizard step to the next wizard step after a click on the component.
<button (finalize)="finalizeStep()" awNextStep>Next Step</button>
Like the
awGoToStep directive the
awNextStep directive provides a
preFinalize,
postFinalize and
finalize output, which are called every time
the current step is successfully exited, by clicking on the element containing the
nextStep directive.
In the given code snipped above, a click on the button with the text
Next Step leads to a call of the
finalize function every time the button has been pressed.
Possible parameters:
By adding a
awPreviousStep directive to a button or a link, you automatically add a
onClick listener to the button or link, that changes your wizard to the previous step.
This listener will automatically change the currently selected wizard step to the previous wizard step after a click on the component.
<button (finalize)="finalizeStep()" awPreviousStep>Previous Step</button>
Like both the
awGoToStep and
awNextStep directives the
awPreviousStep directives provides a
preFinalize,
postFinalize and
finalize output, which are called every time
the current step is successfully exited, by clicking on the element containing the
awPreviousStep directive.
Possible parameters:
In some cases it may be a good idea to move a wizard step to a custom component.
This can be done by defining adding the
awWizardStep directive to the component that contains the wizard step.
<aw-wizard> <aw-wizard-step Step 1 </aw-wizard-step> <custom-step awWizardStep ... </custom-step> <aw-wizard-step Step 3 </aw-wizard-step> </aw-wizard>
Possible
awWizardStep parameters:
In addition to the possibility of defining a normal wizard step in a custom component,
it is also possible to define a wizard completion step in a custom component.
To define a wizard completion step in a custom component you need to add the
[awWizardCompletionStep] directive to the custom component
that contains the wizard completion step.
<aw-wizard> <aw-wizard-step Step 1 </aw-wizard-step> <custom-step awWizardCompletionStep ... </custom-step> </aw-wizard>
Possible
awWizardCompletionStep parameters:
Sometimes it's also required to reset the wizard to its initial state.
In such a case you can use the
awResetWizard directive.
This directive can be added to a button or a link for example.
When clicking on this element, the wizard will automatically reset to its
defaultStepIndex.
In addition it's possible to define an
EventEmitter, that is called when the wizard is being reset.
This
EventEmitter can be bound to the
(finalize) input of the
awResetWizard directive.
Possible
awResetWizard parameters:
Sometimes it's required to access the wizard component directly. In such a case you can get the instance of the used wizard component in your own component via:
@ViewChild(WizardComponent) public wizard: WizardComponent;
In addition to letting the user navigate the wizard with
awNextStep,
awPreviousStep and
awGoToStep directives,
you can trigger navigation programmatically. Use navigation methods exposed by the
WizardComponent class:
wizard.goToNextStep(); wizard.goToPreviousStep(); wizard.goToStep(desinationIndex);
Sometimes you like to use your own custom CSS for some parts of the wizard like its navigation bar. This is quite easy to do. Different ways are possible:
Either use a wrapper around the wizard:
<div class="my-custom-css-wrapper"> <aw-wizard> ... </aw-wizard> </div>
Or add your css wrapper class directly to the wizard element:
<aw-wizard ... </aw-wizard>
When overriding css properties already defined in the existing navigation bar layouts, it is required to use
!important.
In addition it is required to add
encapsulation: ViewEncapsulation.None to the component, that defines the wizard and overrides its layout.
For additional information about how to write your own navigation bar please take a look at the existing navigation bar layouts, which can be found in the
wizard-navigation-bar.scss file.
In some cases it may be required to remove or insert one or multiple steps after the wizard initialization. For example after a user does some interaction with the wizard, it may be required to add or remove a later step. In such situations the wizard supports the removal and insertion of steps in the DOM.
If an earlier step, compared to the current step, has been removed or inserted, the wizard will adjust the current step to ensure that the changed state is valid again.
When removing a step be sure to not remove the step the wizard is currently displaying, because otherwise the wizard will be inside an invalid state, which may lead to strange and unexpected behavior.
If you are using SCSS, you can customize the wizard's global styles and color theme using SCSS variables:
Import
node_modules/angular-archwizard/archwizard.scss into your
styles.scss file as described in the Installation section.
Re-define any of the variables you can find at the top of
node_modules/angular-archwizard/variables.scss.
In the following example, we configure a simple color theme which only defines styles for two step states: 'default' and 'current'.
// styles.scss $aw-colors: ( '_': ( 'default': ( 'border-color-default': #76b900, 'background-color-default': null, 'symbol-color-default': #68aa20, 'border-color-hover': #569700, 'background-color-hover': null, 'symbol-color-hover': #569700, ), 'current': ( 'border-color-default': #bbdc80, 'background-color-default': #bbdc80, 'symbol-color-default': #808080, 'border-color-hover': #76b900, 'background-color-hover': #76b900, 'symbol-color-hover': #808080, ) ) ); @import '../node_modules/angular-archwizard/archwizard.scss';
Please don't hesitate to look inside
node_modules/angular-archwizard/variables.scss for documentation
on the
$aw-colors variable and other variables you can tweak to tune the wizard to your needs.
You can find a basic example project using angular-archwizard here.
The sources for the example can be found in the angular-archwizard-demo repository.
It illustrates what the wizard looks like and how the different settings can change its layout and behavior.
PMHTTPNEWCLIENT(3) Library Functions Manual PMHTTPNEWCLIENT(3)
pmhttpNewClient, pmhttpFreeClient, pmhttpClientFetch - simple HTTP client interfaces
#include <pcp/pmapi.h>
#include <pcp/pmhttp.h>

struct http_client *pmhttpNewClient(void);
void pmhttpFreeClient(struct http_client *client);
int pmhttpClientFetch(struct http_client *client, const char *url, char *bodybuf, size_t bodylen, char *typebuf, size_t typelen);

cc ... -lpcp_web
pmhttpNewClient allocates and initializes an opaque HTTP client that is ready to make requests from a server. pmhttpNewClient will return NULL on failure, which can only occur when allocation of memory is not possible. pmhttpClientFetch will return the number of bytes placed into the bodybuf buffer, else a negated error code indicating the nature of the failure.
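A minimal usage sketch based on the synopsis above (the URL and buffer sizes here are illustrative only):

#include <pcp/pmapi.h>
#include <pcp/pmhttp.h>
#include <stdio.h>

int main(void)
{
    char body[4096], type[64];
    struct http_client *client = pmhttpNewClient();

    if (client == NULL)
        return 1;
    /* fetch a document; on success sts is the number of body bytes */
    int sts = pmhttpClientFetch(client, "http://localhost:44323/metrics",
                                body, sizeof(body), type, sizeof(type));
    if (sts >= 0)
        printf("%d bytes, content type %s\n", sts, type);
    pmhttpFreeClient(client);
    return 0;
}

Compile with: cc example.c -lpcp_web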
pmdaapache(1), pmjsonInit(3), PMAPI(3), PMWEBAPI(3)
SAP Labs came for on campus internship recruitment (2 months) at Delhi Technological University on 26th July, 2018 for B.Tech students.
There were a total of 4 rounds –
- Online round on Hackerrank – 1 hour duration
The round consisted of around 20 MCQ questions and 2 simple coding problems.
MCQs were a mix of output problems, aptitude problems, and questions on OOPS.
1.1. The first problem was simple: an integer array was given and an array had to be returned in which index i holds 1 if arr[i] is a power of 2 and 0 if it is not (a short sketch of one approach follows after these two problems).
1.2 The second problem was purely based on OOPS, in which inheritance had to be shown by overriding the base class function.
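A possible sketch for problem 1.1, using the standard bit trick x & (x - 1) (the function name is mine):

// returns 1 at index i if arr[i] is a power of 2, else 0
#include <bits/stdc++.h>
using namespace std;

vector<int> markPowersOfTwo(const vector<int>& arr) {
    vector<int> result(arr.size());
    for (size_t i = 0; i < arr.size(); i++) {
        int x = arr[i];
        // a positive number is a power of 2 if it has exactly one set bit
        result[i] = (x > 0 && (x & (x - 1)) == 0) ? 1 : 0;
    }
    return result;
}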
11 students were shortlisted from this round.
- Technical Round 1
The interviewer asked the most cliche question as I entered, “Tell me about yourself”. I gave him some general information about me and the technologies I have worked on.
My interview was based a lot on quizzes (Many of my friends had project discussions and some problems on Data structures in their interviews as well).
2.1. The first quiz was physics based: 2 cars (A and B) were moving around a circle of radius 5 km at 50 km/hr, in opposite directions and starting diametrically opposite to each other. A person starts from the centre at a speed of 2 m/s and moves towards car A, and as soon as they meet, the person turns and moves towards car B. I was asked to trace the path of the person. The answer was a spiral path because of the difference in their speeds. The other question was to find the distance traveled by the person in 2 hours. The answer was simply speed * time.
2.2. 3 points A, B, and C are present on a monitor. Find if C lies on the line segment made by points A and B when their coordinates are given. The simple solution was comparing the distances: AC + CB = AB (a short sketch of this check is shown after this round's questions).
2.3. A simple function has to be written in which returns 1 if 0 is passed to it and 0 if 1 is passed to it. The condition was only to use mathematical operations like +, -, %, *, / and no conditional statements were allowed. The answer was to return (x + 1) % 2.
2.4. Another physics based question was 2 iron bars are given and we have to find which of them is a magnet without using anything else.
Solution – Touch the end of one bar to the middle of the other bar. If there is attraction, the bar in your hand is the magnet; if there is no attraction, the other bar is the magnet.
2.5. Dice based puzzle
Firstly, I was asked to give a solution in which the single digits can be represented without using a 0 on a dice and as soon as I figured out the solution, he asked me to give a solution with single digit numbers to be represented with a 0. The clue that he gave me was to think out of the box, which helped me realise that 6 could be used as a 9.
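Coming back to problem 2.2 above, a short sketch of the distance check (names are mine):

// C lies on segment AB if dist(A,C) + dist(C,B) == dist(A,B), within a tolerance
#include <bits/stdc++.h>
using namespace std;

double dist(double x1, double y1, double x2, double y2) {
    return sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));
}

bool liesOnSegment(double ax, double ay, double bx, double by, double cx, double cy) {
    return fabs(dist(ax, ay, cx, cy) + dist(cx, cy, bx, by) - dist(ax, ay, bx, by)) < 1e-9;
}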
- Technical Round 2
3.1. Started of with a physics numerical which was a little tricky (based on relative velocity).
3.2. Asked me about Binary Search Trees and their properties.
3.3. All kinds of tree traversals explanation
3.4. Explain RDBMS
3.5. Normalization
3.6. Paging
3.7. Quiz on coins – There are 10 coins, out of which 9 are of equal weight and 1 is heavier; find the minimum number of weighings (using a weighing balance) in the worst case to identify the heavier coin. The answer to this is 2 (dividing the coins in groups of 3). He then asked me what the answer will be if it is not known whether the coin which is different is heavier or lighter; the answer is 3 in this case.
3.8. Deadlocks and prevention methods we can use
3.9. Semaphores (and their types)
3.10. Explain merge sort and quick sort and their worst case complexities.
- HR Round
4.1. Asked me to tell him about myself, in terms of what kind of person I am, rather than the technical projects.
4.2. Gave me a real life situation and asked me what my action would be in that case. It was based on having two job offers and which one I would go for.
4.3. Asked me things like what do I see in a half filled water bottle in front of me.
4.4. Discussed random stuff like how I liked my school life and then asked me simple questions about the solar system and planets.
4.5. Asked me some questions (mentioned they were asked in a Google interview) –
If I fill the water bottle with pebbles, is it full? I said no. He then asked if adding sand to it will make it full. I said that air is still present between the sand so it’s not. He asked me how we could fill that gap too, so I suggested water.
The results were announced the next day and I got selected for the internship. I would like to thank Geeksforgeeks for helping me out throughout the process and helping me prepare for the internship season.
#include repost
Since my original post DID NOT get solved I am taking the liberty to repost with additional info.
I am trying not to reinvent the wheel and am putting two WORKING applications under one roof.
My task is to add the existing files of the btscanner application into a tab dialog.
I am using plain "add existing files" and they are being added into the correct folders, under additional sub folders showing the path.
They are added into the tab dialog project
I have added this manually
After all this - the compiler cannot find #include files - starting with "main.cpp"
Missing device.h in main error
Questions
- What do the additional folders with the path accomplish?
- How do the x.pro HEADERS get used or not get used, and by whom - the makefile or the compiler?
- It looks as if a plain "#include device.h" in the source file is not sufficient to make / compile - WHERE DO I ADD the necessary path?
Addendum
After adding a relative path to #include it passed the device header but not the service header.
Cheers
Hi, shouldn't it be #include ..../service.h (not sevice.h)?
#include " file " ; searches local directory ONLY and stops
#include <file> ; searches "above " local directory
The "../../.." syntax seems to go "up the tree" , assuming the referenced project is in "current access". (I need to work on that theory)
Anything else , such as ellipsis or "....." is unknown syntax to me.
Can you provide reference ?
Sorry, I mean, the line no. 45 that currently has the red error is:
#include <../../bluetooth/btscanner/sevice.h>
try changing it to:
#include <../../bluetooth/btscanner/service.h>
@AnneRanch
I think @hskoglund means you've made a typo...
Here is what works partial path (?) and then full path - the entire project compiles and runs.
Anybody interested to find out WHY it works ?
Perhaps a detailed analysis of the compiler output - now available AFTER the error is gone (!) - would be interesting to somebody. (I could post it)
My "solution" is
the path (GUI) in the "project tree" and (../../..) in the x.pro file mean NOTHING to the compiler.
@AnneRanch said in #include repost:
I have added this manually
Just to add:
This is pure evil :)
Don't use = to change settings... Use += to add modules to your basic config or use -= to take modules out.
Minor, insignificant detail not helping to resolve THIS #include issue.
BTW - I just cut and pasted it from "an official" btscanner example.
It's a general thing.
As I've said in one of your other posts, examples are minimalistic standalone projects, that are (in most cases) not meant to get improved or extended even further.
So, IMHO it's not a good idea to import a whole example to your own projects and take over the example's pro file...
I don't want to question your whole idea, but I would say, that there are easier and faster ways to make your own BT Scanner "test" / "example" project.
Or is there anything that forces you, to import the full example?
- J.Hilk Moderators last edited by J.Hilk
@AnneRanch to answer your original question, the .pro file offers you the possibility to expand the include path, via
INCLUDEPATH += ....
in your case
INCLUDEPATH += $$PWD/../../bluetooth/btscanner
should do the trick.
But use it with caution, I find that using INCLUDEPATH convolutes the code more than it makes it easier to read.
But that may be just me
#include " file " ; searches local directory ONLY and stops
#include <file> ; searches "above " local directory
where did you get that from?
the actual definition:
- #include <filename>: Searches for the file in implementation-defined manner. The intent of this syntax is to search for the files under control of the implementation. Typical implementations search only standard include directories. The standard C++ library and the standard C library are implicitly included in these standard include directories. The standard include directories usually can be controlled by the user through compiler options.
- #include "filename": Searches for the file in implementation-defined manner. The intent of this syntax is to search for the files that are not controlled by the implementation. Typical implementations first search the directory where the current file resides and, only if the file is not found, search the standard include directories as with (1).
The only thing that "searches upward" that I know of is qmake, in search of a .qmake.conf file
But there may be more 🤷♂️
@Pl45m4 Agree with your approach, however, the initial question was about why "#include" does not work as expected AND
why the project tree and project file entries make no difference OR, more precisely, do not affect the compilation. My usage of btscanner is purely selfish – it works in Qt - as opposed to many other "sample codes", and that is OK with me.
What is NOT OK is tools like Qt Creator messing with C language syntax by adding layers of poorly explained "STUFF", such as inventing syntax "/../../xxx" where #include <FILE> should do.
As far as “samples” being second grade code – something about advertising Qt comes to mind, and I shall leave that as is.
@J-Hilk
I will repeat what I have said already and add - the syntax for #include has not changed since it was introduced. The Qt Creator adds stuff which is not only odd but is not used during the compile.
Yes, I did not cut and paste "the real Mc Coy" definition as you did.
Was MY definition incorrect ?
I am not sure if adding PATH (to pro file) is necessary - it is already in project tree and if it is important it should be added to pro file by Qt Creator.
Appreciate all the comments and suggestions, it is very helpful to get my project going.
Thanks
@AnneRanch said in #include repost:
What is NOT OK is tool likes Qt Creator messing with C language syntax by adding layers of
poorly explained “STUFF” , such as inventing syntax “/../../xxx” where #include <FILIE>; should do.
Qt Creator is an IDE, including a C/C++ editor and a debugger. It does not, and cannot, alter the syntax of any language. If it did, programs would not compile. The compiler/linker is not a Qt component.
Whatever #includes you have shown in your code will conform to that, or similar, as per @J-Hilk's post. Assuming you are using gcc, its documentation should provide any details on handling/how to pass directories on the compile line, etc. #include <> tends to look in some system directories which #include "" does not. Handling of a relative path with .. is probably compiler-implementation-specific.
@JonB This comment misinterprets this entire thread.
Nobody is challenging Qt as an IDE "interface" to make and the compiler / linker.
What I questioned is what appear to be superficial "includes" with no visible effect on the process. If a common item like #include has to be handled manually we have a very poorly functioning "IDE" - nothing to do with the compiler.
Since you mentioned the compiler - where can I read about Qt Creator compile options?
I have 4 processor system and like to add "-j" option to speed things up.
But I'll post this separately - different subject
@AnneRanch said in #include repost:
@JonB This comment misinterprets this entire thread.
No, it doesn't. You think that Qt Creator is doing something funny about #includes, and keep saying so. It is not.
@AnneRanch said in #include repost:
I have 4 processor system and like to add "-j" option to speed things up.
(4 processor cores, I assume)
@Pl45m4 Pardon my ignorance, but -j is a compiler option.
When I added it to "make" it did not show in compiler output.
Which brings another question - who is on first - "make" or "qmake" or both?
And since I am not allowed to do multiple posts - why is there "build" and "rebuild"? Back in the beginning of programming - when a file was "dirty" it would get rebuilt AUTOMATICALLY when "build" was requested anyway.
"Build" only builds (link +compile) files that have changed ("dirty" files).
"Rebuild" will build all files, regardless whether they have changed or not. And this could take several minutes or even more in huge projects.
The path in your error msg says "Qt_Repository Copy". Is that the right one? Did you move or rename any files? Try to rename any button (by double clicking on e.g. "Scan") in your current ui file (just the button text, not the actual widget name) and run your program. If the name is still the old one, your program is probably using a different ui file.
- JKSH Moderators last edited by
@AnneRanch said in #include repost:
who is on first - "make" or "qmake" or both ?
qmake...
- ...parses your *.pro file and generates your Makefile
- ...parses your *.ui file and generates *.cpp and .h files
- (and more)
make...
- ...parses your Makefile and runs your build tools
The ServiceLocationHelper has some extension methods (in namespace EPiServer I think) that make it easier to get e.g. a content repository. The class is very thin and basically exists only to make it possible to add these extension methods, so it should be cheap to instantiate.
There should be no difference in the container contents between ServiceLocator.Current and context.Locate.Advanced though.
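For illustration, resolving a service both ways inside an initialization module's Initialize method would look roughly like this (IContentRepository is just an example service; treat this as a sketch rather than canonical code):

public void Initialize(InitializationEngine context)
{
    // via the context - grab the helper once if you need several services
    var helper = context.Locate;
    var contentRepository = helper.Advanced.GetInstance<IContentRepository>();

    // via the static locator - same underlying container, so the result is equivalent
    var sameRepository = ServiceLocator.Current.GetInstance<IContentRepository>();
}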
Hi
Maybe mainly a question to EPiServer personnel, but which one should we actually use in initialization modules?
I can see with ILSpy the difference mainly that the context.Locate always does new ServiceLocationHelper(...)
- so if using the context and you need many services then you should get a local scope reference to the ServiceLocationHelper
- and then use that to get the services
- if using the ServiceLocator.Current then you can keep calling that property for all the service requests
But is there any difference which to use?
There is quite a lot of talk around gRPC lately. This article will introduce you to gRPC, what it actually is, how it compares to the REST protocol, when to use it, and some concepts of Protocol Buffers. Further, we will also go through a demonstration of working with gRPC in ASP.NET Core to get a complete picture of this trending technology coupled with ASP.NET Core. You can find the complete source code of the application built below here.
It’s quite an un-deniable fact that REST APIs has ruled over the data communication world for quite a long now. Although it’s been the go-to approach for providing data linkage between client and server, there are quite a lot of drawbacks too. It may be minor, but here are a few.
- The response payload size may grow drastically due to the usage of JSON formatted data. Add some bad code practices to this, and you end up with much slower request-response times.
- Do we all really follow the REST principles? It’s quite hard and not very practical to follow the REST Principles while building a realtime application. Getting your colleagues co-relate with your thought process can be quite challenging, yeah?
- The market already has a better variant that is much faster and easier to develop.
This is where gRPC comes into the game.
Table of Contents
Introducing gRPC in ASP.NET Core
gRPC or g Remote Procedure Calls is an Open Source RPC technology that was initially developed by Google back in 2015. Probably the g in gRPC stands for Google, but it has still not been coined officially. The key idea was to make a service that is much faster than existing services like WebAPI, WCF, GraphQL and so on. With gRPC, performance was the top priority when it was designed. It's roughly 7-10 times faster than a standard WebAPI! It supports bi-directional communication and really everything that you would need, from a CRUD application to an enterprise-level solution, or maybe building another Youtube (data streaming services).
To understand the context of gRPC, let's first understand that it is a protocol to send and receive data over a network in the same way Web APIs and WCF services operate, but in a more efficient manner. Let's get started.
The two important features of gRPC to keep in mind are that it runs on the HTTP/2 protocol and uses Protocol Buffers to transport data. We will talk about Protocol Buffers in a later section. gRPC is supported by a lot of frameworks, including Microsoft's, which introduced gRPC project templates from .NET Core 3.0 and above. So make sure that you are running on the latest .NET Core SDK.
To understand gRPC easier, let’s relate it to an ASP.NET Core WebAPI, as we are all quite comfortable with the data flow within the WebAPI, right? gRPC is a very similar technology that takes in a Request message and gives back a Response to the caller over the web/network. The difference mainly lies in how the data is being transported and how efficiently it does so.
REST vs gRPC
Here are a few key comparisons between REST and gRPC: REST typically runs over HTTP/1.1 and exchanges human-readable JSON, whereas gRPC runs over HTTP/2 and exchanges compact binary Protocol Buffer messages; gRPC also supports bi-directional streaming out of the box, which REST does not.
What are Protocol Buffers?
Protocol Buffers or protobuf are essentially a way to serialize data, somewhat like JSON serialization but much simpler, more compact and faster. To get a better picture, protos are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data. As mentioned earlier, gRPC relies on Protocol Buffers or protos for communication. Protos support a huge variety of languages including C#, Python, Java, Go, Dart, Ruby and so much more. The latest version of the proto language is proto3.
The protobuf format is designed in a way that it is much faster than the JSON or XML formats, thanks to smaller payload sizes due to binary serialization. The definition of the messages / data to be serialized is written into proto files. The protobuf files usually have a .proto file extension.
How gRPC Work?
The flow starts with creating a gRPC Server. Next, you create proto files that not only contains definitions of the response and requests but also services that has to implemented by the server. Once the proto files are ready, you would have to add a reference to these files as a service in both the client and server projects.
As soon as the client invokes the Services defined within the Proto file and requests to the port where the server is running, the Service implementation at the server side gets invokes and retuns the data in binary format which will be further de-serialized to the response object that is defined in the configuration file (proto). This is the basic idea. You will be more clear with the concept when we start implementing.
What we will build
We will be building a service that can return a simple recordset of data. We will be using the gRPC Project type to build the gRPC Server. Then, we will consume this service using a simple console application. Keep in mind that the client could be anything that supports the gRPC protocols. So basically, we will be going through an entire cycle of data communication between the client and the server. Along the way we will also go through important concepts like building protos and sharing with the client, the gRPC folder structure and more. Let’s get started.
Working with gRPC in ASP.NET Core
Let’s get started with gRPC in ASP.NET Core and build ourself a neat implementation to demonstrtate the gRPC in a greater detail. As mentioned earlier, gRPC is already available as a Project Template with Visual Studio 2019, given that you have the latest SDK of .NET Core 3.x installed on your machine.
Open up Visual Studio 2019 and create a new gRPC Project. You can do so by searching for ‘gRPC’. This brings up a project template selection for creating a gRPC in ASP.NET Core. Click Next.
We will be building both gRPC Server that basically runs on the ASP.NET Core container and also a gRPC Client that can be anything. For demonstration we will be building a simple console application that would act as a gRPC Client.
Note that we are building both the Client and Server within the same Solution named gRPC.Dotnet.Learner. This is only to keep the project and repository organized for this article. In Practical cases, the client and server should be built on separate Solutions. The only Resource that is to be shared is the Protobuf file which we will deal in the upcoming sections.
Here we are creating the Server first and naming it gRPCServer.
Select the gRPC Service in the Selection below and click next.
Visual Studio does it’s magic and creates the gRPC Project for you and installs all the required dependencies.
Getting used to the gRPC Project Structure
The first thing you would notice in a gRPC ASP.NET Core Service is the similarity with other ASP.NET Core projects. You get to see the same Program.cs , Startup.cs and other files. The new inclusions are Protos and the Services Folder.
Let’s explore the Startup class first. There are a few things to note here.
public void ConfigureServices(IServiceCollection services)
{
    services.AddGrpc();
}
Here we are adding gRPC to the ASP.NET Core Service Container.
app.UseEndpoints(endpoints =>
{
    endpoints.MapGrpcService<GreeterService>();
    endpoints.MapGet("/", async context =>
    {
        await context.Response.WriteAsync("Communication with gRPC endpoints must be made through a gRPC client. To learn how to create a client, visit:");
    });
});
At line 3, you can see that we are mapping the gRPC Service to the GreeterService, which is a class that is already available in our Server Project under the Services folder. Note that you will have to add a new mapping here every time you create a new Service. We will be creating a new service later in this article as well.
Protos folder holds all the protos files that we had talked about earlier. This will make much more sense when you go through a proto file. Visual Studio creates a default proto file named greet.proto in the Protos folder.
syntax = "proto3"; option csharp_namespace = "gRPCServer"; package greet; //; }
Line #1 – Here you define which syntax version to use. proto3 is the latest version of the proto language.
Line #3 specifies the namespace we are going to use for this proto.
Line #5 defines the name of the proto package.
You do not have to modify the above lines. Here comes the fun part.
Lines #8-11 define the services supported by this particular proto file. To talk more within our context: we have a proto file named greet that has a Greeter service with a SayHello definition. Get it? It can also hold multiple rpc definitions, like SayBye and so on.
Line #10 specifies an rpc method with the name SayHello that takes in a ‘Message’ object named HelloRequest and responds with a HelloReply.
What are Messages?
Message objects in proto files are similar to the Entity / DTO / Model Classes in C#. They basically hold the properties that relate to a particular message object. In our case, we have a message object with property name of type string. You can also nest further message objects within another message object that can act like a nested C# class.
Wonder what string message = 1; means?
1 is the field number (tag) under which the property will be serialized and sent to the client. The next property will have the field number 2, and so on. Get the point, yeah?
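For instance, a hypothetical message with a couple of properties would simply number them sequentially (the message and field names here are made up for illustration):

message PersonReply {
  string name = 1;
  int32 age = 2;
}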
Summing up, the proto file has some proto-specific definitions, plus service definitions which include methods that can take in message objects and send back a processed message object as the response. Makes sense, yeah?
With that out of the way, let’s talk about the Service Implementation. Remember we defined a Greeter service with a rpc method named SayHello? So what happens in the background as soon as you save this proto file is that Visual Studio auto-generates the actual service implementation for you. You can find the auto generated files usually at the path – \gRPC.Dotnet.Learner\gRPCServer\obj\Debug\netcoreapp3.1. The important file to check here is the GreetGrpc.cs file. We will be seeing this file later in this section.
DO NOT Edit any of these auto generated files. It may break your gRPC application.
Let’s open up the GreeterService class in the Services folder.
public class GreeterService : Greeter.GreeterBase
Line #1 is a standard C# definition of a class. The point to note is that it inherits from a Greeter class that we never created. Visual Studio is responsible for creating a C# class that has the same name as the service we defined earlier in the proto file. Remember there was a service definition by the name Greeter?
Let’s navigate to the Greeter.GreeterBase. Move your cursor to the line and press F12.
There is quite a lot of generated code that barely makes any sense to the end user at first. I want you to concentrate on the following lines of code that are available in the Greeter class.
public abstract partial class GreeterBase
{
    public virtual global::System.Threading.Tasks.Task<global::gRPCServer.HelloReply> SayHello(global::gRPCServer.HelloRequest request, grpc::ServerCallContext context)
    {
        throw new grpc::RpcException(new grpc::Status(grpc::StatusCode.Unimplemented, ""));
    }
}
At line 3 you can see our method definition, SayHello. Note that it is an unimplemented method. Understand the context? We are inheriting this particular class as the base of the actual service class, and we will override the SayHello method and finish off with our implementation there.
Coming back to the basics of gRPC, it depends on the proto files for the contracts that will be implemented as a Service within the Server. All this might seem complicated, but it is actually cool how everything is wired up.
Let’s go back to our GreeterService.cs now.
public class GreeterService : Greeter.GreeterBase
{
    private readonly ILogger<GreeterService> _logger;
    public GreeterService(ILogger<GreeterService> logger)
    {
        _logger = logger;
    }

    public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
    {
        return Task.FromResult(new HelloReply
        {
            Message = "Hello " + request.Name
        });
    }
}
Here is a very standard C# class with all the expected pieces of code, like the constructor injection of ILogger. The interesting part here is the implementation of the SayHello method.
As said earlier, you can see that it overrides the SayHello method of the base class and implements it. It accepts parameters like the HelloRequest and a special ServerCallContext object that belongs to the gRPC core library. Finally, it returns a message of type HelloReply with the particular business logic. Simple, yeah?
That’s quite everything with regards to the project structure and default code of the gRPC Server. Let’s build it and run the application.
You can notice that it simply pops up a console that says that the server is up and running at a particular port.
Let’s try to navigate to this location.
As mentioned earlier, gRPC services cannot be invoked by just visiting the port / URL. It needs a gRPC Client to communicate with the gRPC Server. This is one of the major differences you will find between gRPC and REST APIs.
So that’s where the article goes next.
Building a gRPC Client
a gRPC Client could be anything from a simple console application to another ASP.NET Core Web Application. To keep this demonstration simple, let’s go with a Console Application Project.
Now that we have our Console application ready, the first thing you would have to do is to install the required packages that enables your application to become a gRPC Client.
Open up the Package Manager Console and install the following packages.
Note that these packages are applicable for any type of gRPC Client.
Install-Package Grpc.Tools
Install-Package Grpc.Net.Client
Install-Package Google.Protobuf
With all the required packages installed, let’s copy over the protos folder from the Server Project to the Client Project. With that done, open the properties of the greet.proto in the Client Console Project. Make sure that you set the Stub class to Client Only.
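If you prefer editing the project file directly, the “Client Only” setting corresponds to a Protobuf item in the client’s .csproj; assuming the file was copied into a Protos folder, the entry would look roughly like this (a sketch, not necessarily the exact markup Visual Studio generates):

<ItemGroup>
  <Protobuf Include="Protos\greet.proto" GrpcServices="Client" />
</ItemGroup>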
Now you have essentially changed the property of a proto file. Be aware that the proto file is responsible for generating the base class codes. Since you have modified a proto file, it is important to rebuild the application so that Visual Studio can regenerate the required new files to the obj folder. Remember that this is very important in gRPC and can often break the application if not rebuilt.
Let’s make the required modifications to the Client Application to establish communication with the gRPC Server now. Open up Program.cs of the Console application and make the following changes.
class Program
{
    static async Task Main(string[] args)
    {
        var data = new HelloRequest { Name = "Mukesh" };
        var grpcChannel = GrpcChannel.ForAddress(""); // address of the running gRPC server (URL elided in the original)
        var client = new Greeter.GreeterClient(grpcChannel);
        var response = await client.SayHelloAsync(data);
        Console.WriteLine(response.Message);
        Console.ReadLine();
    }
}
Line 5 creates a request object.
Line 6 creates a new gRPC channel that enables server communication. Here we provide the URL on which the gRPC server is up and running.
Line 7 uses the created channel and creates a new client object that is connected to the gRPC Server. Imagine this like creating an HttpClient object. Makes sense, yeah?
Finally, we retrieve the response in line 8 via the client object. The client object has access to methods like SayHello and SayHelloAsync. We will be using the async variant just for the sake of it.
At line 9, we are writing the response message back to the console. Let’s run the application and check if we are getting the expected result.
Before running the project, Make sure you have selected the Multiple Startup project options and made the changes so as to run both the Server and Client Project at the same time.
From the above image it is quite clear that we have received the expected response with the least amount of configuration. Trust me, gRPC is going to dominate the Web Services industry for the next few decades.
Let’s build our own gRPC Service
Now that we got a basic idea on how to work with gRPC in ASP.NET Core and setting up a client and server connection, let’s try to re-do the entire process by creating a new proto and Service class. Let’s make a proto that returns a list of Products that are hardcoded. You can also connect the server to the Database via Entity Framework Core, but you get the idea, yeah?
We will have methods to return a single product by ID and another method that can return a list of Products back to the client.
The first step in implementing our own Product Service in a gRPC Application is to add a new proto. Open up the Server Project and add a new proto file under the Protos folder. Name it products.proto.
This will give you a new proto file with the syntax and namespace already ready. We will just have to add in the Service Definition and Message Objects here.
Before continuing, make sure that you change the properties of the proto file similar to the below screenshot. Remember to do this step every time you create a new proto file.
Let’s add in our required service definition and the associated Message objects to the products.proto file.
syntax = "proto3"; option csharp_namespace = "gRPCServer.Protos"; service Product{ rpc GetProductById (GetProductByIdModel) returns (ProductModel); rpc GetAllProducts (GetAllProductsRequest) returns (stream ProductModel); } message GetProductByIdModel{ int32 productId = 1; } message GetAllProductsRequest{ } message ProductModel{ int32 productId = 1; string name = 2; string description = 3; float price =4; }
Lines 3 to 6 are where you define the Service and its methods. Here we have 2 methods under the Product Service. The first one takes in a Message object (productId) and returns a single product. The other method takes a blank Message object and returns a stream of ProductModel objects.
What is a stream? gRPC is commonly used for streaming data in and out. In our context we use it to stream the data one product at a time from the server to the client. This is quite similar to IEnumerable<ProductModel> but with a lot more flexibility.
At line 13, we define the actual model of the product. Note that you cannot use int, decimal like you would back in C#. Remember that proto is a completely different language and has nothing to do with C# or even Microsoft. You can find the allowed object types here.
We need a list of Product data to work with, right? In order to mimic the existence of a Database, I am creating a new class with a list of hardcoded values for the Product model. You can find it in the Data Folder of the Server Project.
Note that I am using such an approach only to save time. You can install the Entity Framework Core package, establish a connection and use the dbContext to pull in product data from your database as well.
public static class ProductData
{
    public static List<ProductModel> ProductModels = new List<ProductModel>
    {
        new ProductModel { ProductId = 1, Name = "Pepsi", Description = "Soft Drink", Price = 10 },
        new ProductModel { ProductId = 2, Name = "Fanta", Description = "Soft Drink", Price = 13 },
        new ProductModel { ProductId = 3, Name = "Pizza", Description = "Fast Food", Price = 25 },
        new ProductModel { ProductId = 4, Name = "French Fries", Description = "Fast Food", Price = 20 }
    };
}
With the Product data ready, let’s create a service class that could serve data from the ProductData Class to the client. In the server project add a new class named ProductService under the Services folder.
public class ProductService : Product.ProductBase
{
    private readonly ILogger<ProductService> _logger;
    public ProductService(ILogger<ProductService> logger)
    {
        _logger = logger;
    }

    public override Task<ProductModel> GetProductById(GetProductByIdModel request, ServerCallContext context)
    {
        var product = ProductData.ProductModels.Where(p => p.ProductId == request.ProductId).FirstOrDefault();
        if (product != null)
        {
            return Task.FromResult(product);
        }
        else
        {
            return null;
        }
    }
    public override async Task GetAllProducts(GetAllProductsRequest request, IServerStreamWriter<ProductModel> responseStream, ServerCallContext context)
    {
        var allProducts = ProductData.ProductModels.ToList();
        foreach (var product in allProducts)
        {
            await responseStream.WriteAsync(product);
        }
    }
}
QUICK TIP – You can type in public override, which will then give you a list of override-able methods. Simply navigate to the required method and tap Tab. It creates the template of the entire function for you. That’s quite a lot of development time saved.
Lines 9 to 20 are the method that returns a single product based on the Id that is sent by the client. I guess this is a very straightforward method that uses LINQ to fetch the data from the product List.
Remember we used something called a stream in the proto file? Line 21 to 28 is the function that can return a stream of Product data back to the client. Note that we are sending data one by one in a stream to the response. As long as the control remains in the foreach loop, the client continues to listen for the data stream.
Finally, open up Startup.cs and add the mapping for the new gRPC Service under app.UseEndpoints(…)
endpoints.MapGrpcService<ProductService>();
What remains is the client implementation. But before that, let’s rebuild our Server project.
With gRPC, make sure that you rebuild the entire project every time there is a change made. This is because there is a lot of auto generation of code that happens in the background.
At the client side, the first thing is to copy over the products.proto file from the Server to the client. Make sure you change the properties of the client proto as follows. Make this a practice as soon as you copy over the proto files.
Now, open up the Program.cs in our Console application and add in the following.
static async Task Main(string[] args)
{
    var data = new GetProductByIdModel { ProductId = 2 };
    var grpcChannel = GrpcChannel.ForAddress(""); // address of the running gRPC server (URL elided in the original)
    var client = new Product.ProductClient(grpcChannel);
    var response = await client.GetProductByIdAsync(data);
    Console.WriteLine(response);
    Console.ReadLine();
    using (var clientData = client.GetAllProducts(new GetAllProductsRequest()))
    {
        while (await clientData.ResponseStream.MoveNext(new System.Threading.CancellationToken()))
        {
            var thisProduct = clientData.ResponseStream.Current;
            Console.WriteLine(thisProduct);
        }
    }
    Console.ReadLine();
}
For demonstration purposes, we will be using both the methods here in the Main Function.
Line 3 creates a new GetProductByIdModel with a productId of 2. This will be something the client will be sending dynamically.
Line 4 creates a new gRPC channel pointing to the address where the gRPC server runs.
Line 5 creates a new client object using the gRPC channel. Note that we will be using this client object to access both the GetByID and GetAll methods.
At line 6, we pass in the data to the client, which returns the required response.
Similarly, from lines 9 to 16, we read a stream of product data, one item at a time.
Let’s run both the Server and Client Project and check out the responses.
As you see, we have received exactly what we wanted. Feel free to play along with different variations of this awesome technology. Let’s wrap up this article for now. You can find the complete source code here.
Summary
In this article, we covered several awesome concepts that are good to have in your tech-stack. We learnt all about the basics of gRPC, Protocol Buffers, the difference between gRPC and REST, building proto files, and working with gRPC in ASP.NET Core. We also built ourselves a sample application that consists of both Server and Client implementations of gRPC.
In the upcoming articles, we will learn more about gRPC with regard to authentication, maybe building a CRUD API, and much more. Feel free to suggest.
Leave behind your valuable queries, suggestions in the comment section below. Also, if you think that you learned something new from this article, do not forget to share this within your developer community. Happy Coding!
Frequently Asked Questions
Is gRPC faster than REST API?
Thanks to the added advantage of using Protocol Buffers and HTTP/2, gRPC services are roughly 7 to 10 times faster than an average Web API.
sgqlc - Simple GraphQL Client
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This package offers an easy to use GraphQL_ client. It's composed of the following modules:

- :mod:`sgqlc.types`: declare GraphQL in Python, base to generate and interpret queries. Submodule :mod:`sgqlc.types.datetime` will provide bindings for :mod:`datetime` and ISO 8601, while :mod:`sgqlc.types.relay` will expose ``Node``, ``PageInfo`` and ``Connection``.

- :mod:`sgqlc.operation`: use declared types to generate and interpret queries.

- :mod:`sgqlc.endpoint`: provide access to GraphQL endpoints, notably :mod:`sgqlc.endpoint.http` provides :class:`HTTPEndpoint` using :mod:`urllib.request.urlopen()`.
Straight from the GraphQL_ site: a query language for your API. It was created by Facebook based on their problems and solutions using REST_ to develop applications to consume their APIs. It was publicly announced at `React.js Conf 2015`_ and started to gain traction since then. Right now there are big names transitioning from REST to GraphQL: Yelp_, Shopify_ and GitHub_, which did an excellent post_ to explain why they changed.
A short list of advantages over REST:

- Built-in schema, with documentation, strong typing and introspection. There is no need to use Swagger_ or any other external tools to play with it. Actually GraphQL provides a standard in-browser IDE (GraphiQL) for exploring GraphQL endpoints;

- Only the fields that you want. The queries must explicitly select which fields are required, and that's all you're getting. If more fields are added to the type, they won't break the API, since the new fields won't be returned to old clients, as they didn't ask for such fields. This makes it much easier to keep APIs stable and avoids versioning. Standard REST usually delivers all available fields in the results, and when new fields are to be included, a new API version is added (reflected in the URL path, or in an HTTP header);

- All data in one request. Instead of navigating hypermedia-driven RESTful services, like discovering new ``"_links": {"href"...`` and executing a new HTTP request, with GraphQL you specify nested queries and let the whole navigation be done by the server. This reduces latency a lot;

- The resulting JSON object matches the given query exactly; if you requested ``{ parent { child { info } } }``, you're going to receive the JSON object ``{"parent": {"child": {"info": value }}}``.
From GitHub's `Migrating from REST to GraphQL`_ one can see these in real life::

   $ curl -v
   [
     {
       "login": "...",
       "id": 1234,
       "avatar_url": "...",
       "gravatar_id": "",
       "url": "...",
       "html_url": "...",
       "followers_url": "",
       "following_url": "{/other_user}",
       "gists_url": "{/gist_id}",
       "starred_url": "{/owner}{/repo}",
       "subscriptions_url": "",
       "organizations_url": "",
       "repos_url": "",
       "events_url": "{/privacy}",
       "received_events_url": "",
       "type": "User",
       "site_admin": true
     },
     ...
   ]

brings the whole set of member information, however you just want name and avatar URL::

   query {
     organization(login: "github") {      # select the organization
       members(first: 100) {              # then select the organization's members
         edges {                          # edges + node: convention for paginated queries
           node {
             name
             avatarUrl
           }
         }
       }
     }
   }
Likewise, instead of 4 HTTP requests::

   curl -v
   curl -v
   curl -v
   curl -v

A single GraphQL query brings all the needed information, and just the needed information::

   query {
     repository(owner: "profusion", name: "sgqlc") {
       pullRequest(number: 9) {
         commits(first: 10) {      # commits of profusion/sgqlc PR #9
           edges {
             node {
               commit {
                 oid, message
               }
             }
           }
         }
         comments(first: 10) {     # comments of profusion/sgqlc PR #9
           edges {
             node {
               body
               author {
                 login
               }
             }
           }
         }
         reviews(first: 10) {      # reviews of profusion/sgqlc PR #9
           edges {
             node {
               state
             }
           }
         }
       }
     }
   }
sgqlc
As seen above, writing GraphQL queries is very easy, and it is equally easy to interpret the results. So what was the rationale to create sgqlc?
- GraphQL has its domain-specific language (DSL), and mixing two languages is always painful, as seen with SQL + Python, HTML + Python... Being able to write just Python in Python is much better. Not to say that GraphQL naming convention is closer to Java/JavaScript, using ``aNameFormat`` instead of Python's ``a_name_format``.

- Navigating dict-of-stuff is a bit painful: ``d["repository"]["pullRequest"]["commits"]["edges"]["node"]``. Since these are valid Python identifiers, we better write: ``repository.pull_request.commits.edges.node``.

- Handling new ``scalar`` types. GraphQL allows one to define new scalar types, such as ``Date``, ``Time`` and ``DateTime``. Often these are serialized as ISO 8601 strings and the user must parse them in their application. We offer :mod:`sgqlc.types.datetime` to automatically generate :class:`datetime.date`, :class:`datetime.time` and :class:`datetime.datetime`.

- Make it easy to write dynamic queries, including nested. As seen, GraphQL can be used to fetch lots of information in one go; however if what you need (arguments and fields) changes based on some variable, such as user input or cached data, then you need to concatenate strings to compose the final query. This can be error prone and servers may block you due to invalid queries. Some tools "solve" this by parsing the query locally before sending it to the server. However usually the indentation is screwed and reviewing it is painful. We change that approach: use :class:`sgqlc.operation.Operation` and it will always generate valid queries, which can be printed out and properly indented. Bonus point is that it can be used to later interpret the JSON results into native Python objects.

- Usability improvements whenever needed. For instance Relay_ published their `Cursor Connections Specification`_ and it's widely used. To load more data, you need to extend the previous data with newly fetched information, updating not only the nodes and edges, but also page information. This is done automatically by :class:`sgqlc.types.relay.Connection`.
Future plans include generating the Python classes from the GraphQL schema, which can be automatically fetched from an endpoint using the introspection query.
Automatic::
pip install sgqlc
From source using ``pip``::
pip install .
To reach a GraphQL endpoint using synchronous ``HTTPEndpoint`` with a hand-written query (see more at ``examples/basic/01_http_endpoint.py``):
.. code-block:: python

   from sgqlc.endpoint.http import HTTPEndpoint

   url = ''  # the GraphQL endpoint URL (elided in the original)
   headers = {'Authorization': 'bearer TOKEN'}

   query = 'query { ... }'
   variables = {'varName': 'value'}

   endpoint = HTTPEndpoint(url, headers)
   data = endpoint(query, variables)
However, writing GraphQL queries and later interpreting the results may be cumbersome. That's solved by our :mod:`sgqlc.types`, which is usually paired with :mod:`sgqlc.operation` to generate queries and then interpret results (see more at ``examples/basic/02_schema_types.py``). The example below matches a subset of `GitHub API v4`_. In GraphQL syntax it would be::
query { repository(owner: "profusion", name: "sgqlc") { issues(first: 100) { nodes { number title } pageInfo { hasNextPage endCursor } } } }
The output JSON object is:

.. code-block:: json

   {
     "data": {
       "repository": {
         "issues": {
           "nodes": [
             {"number": 1, "title": "..."},
             {"number": 2, "title": "..."}
           ],
           "pageInfo": {
             "hasNextPage": false,
             "endCursor": "..."
           }
         }
       }
     }
   }
.. code-block:: python

   from sgqlc.endpoint.http import HTTPEndpoint
   from sgqlc.types import Type, Field, list_of
   from sgqlc.types.relay import Connection, connection_args
   from sgqlc.operation import Operation

   # Declare types matching GitHub GraphQL schema:
   class Issue(Type):
       number = int
       title = str

   class IssueConnection(Connection):  # Connection provides page_info!
       nodes = list_of(Issue)

   class Repository(Type):
       issues = Field(IssueConnection, args=connection_args())

   class Query(Type):  # GraphQL's root
       repository = Field(Repository, args={'owner': str, 'name': str})

   # Generate an operation on Query, selecting fields:
   op = Operation(Query)
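A sketch of how the operation can then select fields, call the endpoint and interpret the results into native objects; the selections below follow the schema declared above, so treat this as an illustrative sketch rather than the verbatim original example:

.. code-block:: python

   # select the fields to query:
   repository = op.repository(owner='profusion', name='sgqlc')
   issues = repository.issues(first=100)
   issues.nodes.number()
   issues.nodes.title()
   issues.page_info.__fields__('has_next_page', 'end_cursor')

   # call the endpoint and interpret results into native Python objects:
   data = endpoint(op)
   repo = (op + data).repository
   for issue in repo.issues.nodes:
       print(issue.number, issue.title)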
Since we don't want to clobber GraphQL fields, we cannot provide nicely named methods. Therefore we use overloaded methods such as ``__iadd__``, ``__add__``, ``__bytes__`` (compressed GraphQL representation) and ``__str__`` (indented GraphQL representation).

To select fields by name, use ``__fields__(*names, **names_and_args)``. This helps with repetitive situations and can be used to "include all fields", or "include all except...":
.. code-block:: python

   # just 'a' and 'b'
   type_selection.__fields__('a', 'b')
   type_selection.__fields__(a=True, b=True)  # equivalent

   # a(arg1: value1), b(arg2: value2):
   type_selection.__fields__(
       a={'arg1': value1},
       b={'arg2': value2})

   # selects all possible fields
   type_selection.__fields__()

   # all but 'a' and 'b'
   type_selection.__fields__(__exclude__=('a', 'b'))
   type_selection.__fields__(a=False, b=False)
Manually converting an existing GraphQL schema to :mod:`sgqlc.types` subclasses is boring and error prone. To aid such task we offer a code generator that outputs a Python module straight from the JSON of an introspection call:

.. code-block:: console

   $ python3 -m sgqlc.introspection \
        --exclude-deprecated \
        --exclude-description \
        -H "Authorization: bearer ${GH_TOKEN}" \
        https://api.github.com/graphql \
        github_schema.json
   $ sgqlc-codegen github_schema.json github_schema.py
This generates ``github_schema.py``, which provides the :class:`sgqlc.types.Schema` instance of the same name ``github_schema``. Then it's a matter of using that in your Python code, as in the example below from ``examples/github/github-agile-dashboard.py``:
.. code-block:: python

   from sgqlc.operation import Operation
   from github_schema import github_schema as schema

   op = Operation(schema.Query)  # note 'schema.'

   # -- code below follows as the original usage example:
   # ...
Gustavo Sverzut Barbieri_
``sgqlc`` is licensed under the ISC_.
You need to use pipenv_.
::
   pipenv install --dev
   pipenv shell
Install the git hooks:
::
./utils/git/install-git-hooks.sh
Run the tests (one of the below):
::
   ./utils/git/pre-commit       # flake8 and nose
   ./setup.py nosetests         # only nose (unit/doc tests)
   flake8 --config setup.cfg .  # style checks
Keep 100% coverage. You can look at the coverage report at ``cover/index.html``. To do that, prefer doctest_ so it serves as both documentation and test. However we use nose_ to write explicit tests that would be hard to express using ``doctest``.
Build and review the generated Sphinx documentation, and validate if your changes look right:
::
   ./setup.py build_sphinx
   open doc/build/html/index.html
To integrate changes from another branch, please rebase instead of creating merge commits (`read more`_).
The following repositories provide public schemas generated using ``sgqlc-codegen``: `Mogost/sgqlc-schemas`_ (GitHub, Monday.com).
#include <genesis/utils/io/gzip_stream.hpp>
Inherits StrictFStreamHolder< StrictIFStream >, and istream.
Input file stream that offers on-the-fly gzip-decompression if needed.
The class accesses an internal std::ifstream. This can be used to open a file and read decompressed data from it.
If auto_detect is true (default), the class seamlessly auto-detects whether the source stream is compressed or not. The following compressed streams are detected:

GZip header, i.e., the magic bytes 1F 8B. See GZip format.

ZLib header, i.e., the magic bytes 78 01, 78 9C, or 78 DA. See answer here.

If none of these formats are detected, the class assumes the input is not compressed, and it produces a plain copy of the source stream. In order to fully work for compressed files however, we always use std::ios_base::binary for opening. This means that on Windows, end-of-line chars are not properly converted for uncompressed files. See for a workaround for this, for example by using our is_gzip_compressed_file() function before opening the file.
The class is based on the zstr::ifstream class of the excellent zstr library by Matei David; see also our Acknowledgements.
If genesis is compiled without zlib support, constructing an instance of this class will throw an exception.
Definition at line 276 of file gzip_stream.hpp.
Definition at line 537 of file gzip_stream.cpp.
Definition at line 552 of file gzip_stream.cpp. | http://doc.genesis-lib.org/classgenesis_1_1utils_1_1_gzip_i_f_stream.html | CC-MAIN-2020-45 | en | refinedweb |
Created on 2008-01-28 14:39 by agoucher, last changed 2008-01-28 15:57 by gvanrossum. This issue is now closed.
There are a couple places in unittest where 'issubclass(something,
TestCase)' is used. This prevents you from organizing your test code via
class hierarchies. To solve this problem, issubclass should be looking
whether the object is a subclass of unittest.TestCase to walk the
inheritance tree all the way up and not just a single level.
Currently, this will not work.
module A...

    class A(unittest.TestCase):
        pass

module B...

    import A

    class B(A.A):
        def testFoo(self):
            print "blah blah blah"
I have attached a patch which will address all locations where this
could happen.
I don't really understand what problem you are trying to solve. Can you
attach a sample script to show it more clearly?
Also, the only thing your patch does is rename Test(Case|Suite)
references to unittest.Test(Case|Suite)... I doubt it would have any
effect unless you were monkeypatching the unittest module to replace
those classes with other ones (which should certainly be considered very
dirty ;-)).
This patch seems to be based upon a misunderstanding of how Python
namespaces work. | https://bugs.python.org/issue1955 | CC-MAIN-2020-45 | en | refinedweb |
libpfm_intel_ivbep_unc_ubo — support for Intel Ivy Bridge-EP U-Box uncore PMU
Synopsis
#include <perfmon/pfmlib.h>

PMU name: ivbep_unc_ubo
PMU desc: Intel Ivy Bridge-EP U-Box uncore PMU
Description
The library supports the Intel Ivy Bridge system configuration unit (U-Box) uncore PMU. This PMU model only exists on Ivy Bridge model 62.
Modifiers
The following modifiers are supported on the Intel Ivy Bridge-EP U-Box uncore PMU.
INPUTS
Values
values = "10,3,4,7" # You can change these.
Weights
weights = "2,5,1,3" # You can change these.
Knapsack total capacity
capacity = 10
ALGORITHM
def fractional_knapsack(values, weights, capacity):
    # index = [0, 1, 2, ..., n - 1] for n items
    index = list(range(len(values)))

    # value/weight ratio
    ratio = [v / w for v, w in zip(values, weights)]

    # index sorted by descending value/weight ratio
    index.sort(key=lambda i: ratio[i], reverse=True)
    print(index)

    max_value = 0
    fractions = [0] * len(values)
    for i in index:
        if weights[i] <= capacity:
            fractions[i] = 1
            max_value += values[i]
            capacity -= weights[i]
        else:
            fractions[i] = capacity / weights[i]
            max_value += values[i] * capacity / weights[i]
            break

    return max_value, fractions
Process inputs
values = [int(v) for v in values.split(',')]
weights = [int(w) for w in weights.split(',')]
FUNCTION CALL
max_value, fractions = fractional_knapsack(values, weights, capacity)
Output
print('Total value that can be carried:', max_value) print('Fractions by which to multiply the items:', fractions)
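With the default inputs above (values 10,3,4,7; weights 2,5,1,3; capacity 10), the greedy order by value/weight ratio is [0, 2, 3, 1]: items 0, 2 and 3 fit entirely, and item 1 is taken at a 0.8 fraction, so the run should print something like:

[0, 2, 3, 1]
Total value that can be carried: 23.4
Fractions by which to multiply the items: [1, 0.8, 1, 1]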
Fl_Widget
   |
   +----Fl_Clock_Output
           |
           +----Fl_Clock
#include <FL/Fl_Clock.H>
This widget can be used to display a program-supplied time. The time shown on the clock is not updated. To display the current time, use Fl_Clock instead.
The third form of value returns the displayed time in seconds since the UNIX epoch (January 1, 1970). | http://www.fltk.org/doc-1.1/Fl_Clock_Output.html | crawl-001 | en | refinedweb |
a simple wiki application using the classic Model-View-Controller pattern
Sample code
Level: Intermediate
Brandon J.W. Smith ([email protected]), Software Engineer, IBM Hanumanth R. Kanthi ([email protected]), IT Architect,
IBM
08 Jan 2008
Project Zero is a simplified development platform focused on the agile development of Web 2.0 applications following a Service-Oriented Architecture (SOA). Among Project Zero's arsenal of libraries is a simplified API for executing SQL queries. Learn how to leverage these APIs to build a simple wiki.

This article assumes you have a basic understanding of Project Zero. If you need to get up to speed, you can get started by taking a look at both the introductory article titled "Building RESTful services for your Web application" and the introductory tutorial titled "Get started with Project Zero and PHP." When you're comfortable using Project Zero, you can download it (see Resources) and write a simple application yourself.
Introduction
Larry Wall said it best: make easy things easy and hard things possible.
Data access has use cases that range from the simple to the very complex. As such, many
developers pine for a data access API that is flexible enough to handle complex situations
without making the simple cases too cumbersome. Project Zero's data access API (called
zero.data) addresses this exact pain point. It is not intended as an
abstraction layer like Hibernate or the Java™ Persistence Architecture. Rather, it is a
library that strives to make the easy things easy and the hard things possible by providing a
thin wrapper around SQL as this article illustrates later. For instance,
Listing 1 shows how to execute a simple query and obtain the results
as a list of bean instances:
Manager data = Manager.create("myDb");
List<Person> results = data.queryList("SELECT * FROM person", Person.class);
List<Map<String,Object>> resultsMap = data.queryList("SELECT * FROM person");
Over the last year, the Project Zero team has collaborated closely with IBM®'s Information
Management developers on pureQuery, a high-performance data access platform that includes development
tools, APIs, and an advanced runtime for Java applications. zero.data
leverages pureQuery's APIs and runtime. zero.data does not preclude the use
of pureQuery's toolset because pureQuery tools generate Java artifacts. However, at the time of this writing, there is no Project Zero-specific integration with this robust toolset. (You can get more details
about pureQuery in Resources.)
Figure 1 shows the relationship between
Project Zero's APIs and pureQuery:
zero.data provides a thin wrapping API around functionality provided by pureQuery's
APIs and runtime. This is preferred rather than using pureQuery APIs in order to enable
some simplifying assumptions in a Project Zero application (for instance, configuration and
connection pooling) and to provide a common API in which both Groovy and PHP scripts can
leverage the robust APIs in ways that make sense for each respective language. For instance,
several methods exposed in the Groovy interface make use of closures, whereas its PHP
equivalent uses PHP resource identifiers.
This article focuses on the Groovy APIs. You can learn more about the Java and PHP APIs
from Project Zero's documentation. In the rest of the article, you'll learn the basics
of using zero.data in your Zero applications. First, you will learn to
manage your data using the
zero.data.groovy.Manager interface and the motivations behind its
design, including looking at the APIs from both the Java and the Groovy
perspective. Next, you will
quickly get practical and create
an application to use the zero.data APIs. Once the application is created, you
will build a simple wiki, initialize tables, and finally code the application. Following is the high-level outline:
zero.data.groovy.Manager
Manage your data
zero.data.groovy.Manager (referred to hereafter as the Manager) defines a convenience API for a user to query and manipulate relational databases. The most basic use of the API is to pass an SQL string. In Groovy, you pass in the parameters embedded in the string as follows: data.queryFirst("SELECT * FROM t WHERE id=$id"). The Manager then prepares the string using a PreparedStatement, executes it, and returns the results. It also intelligently manages closing resources created: for instance, the result set, statement, and connection.
The Manager provides a series of methods to query data, update and insert rows, and run transactions; these are summarized in Table 1 below.
Simple API
As a data point, it is worthwhile to compare how the Manager
reduces boilerplate
code and allows you to focus on functionality rather than potentially buggy
'supporting' code. For instance, Listing 2 shows a JDBC-only implementation
of the code found in
Listing 1 and shows how dramatically Manager makes
the simple things
easy:
// assumes Connection has already been initialized
// see JDBC documentation for more details
List<Map<String,Object>> results = new ArrayList<Map<String,Object>>();
PreparedStatement stmt = null;
try {
    stmt = connection.prepareStatement("SELECT * FROM person");
    ResultSet rs = stmt.executeQuery();
try {
while(rs.next()) {
ResultSetMetaData meta = rs.getMetaData();
int numColumns = meta.getColumnCount();
Map<String,Object> row = new HashMap<String,Object>(numColumns);
for (int i = 1; i <= numColumns; i++) {
row.put(meta.getColumnName(i).toLowerCase(), rs.getObject(i));
}
results.add(row);
}
} finally {
rs.close();
}
} finally {
    if (stmt != null) stmt.close();
}
Rarely does any application code utilize JDBC directly, as illustrated in
Listing 2. Most frameworks provide a layer of indirection and
simplifications to some extent. We believe the Manager
goes to tremendous lengths to
make the simple things easy and the hard things possible. For instance, in addition to
returning a single instance or List, Iterator, or Array of a Java bean class
or Map as shown in Listing 1, zero.data provides a template
method approach to allow you as a developer, or
possibly third parties, to implement complex ResultSet processing.
These more advanced uses of the template method approach will be discussed in a later
developerWorks article. As mentioned previously, zero.data is built on top of the
pureQuery runtime engine. The pureQuery runtime not only provides
a framework for constraining
database access in a predictable, yet flexible, manner through the use of an open API,
but it
encourages a high degree of code reuse. For instance, much of the pureQuery runtime itself
is composed of implementations of the very interfaces that you as a developer can implement
to extend the engine. In this way, the pureQuery runtime is
flexible enough to handle everything from 'one-off' extensions to its engine for a given
application to an organization endorsing a library of reusable ResultSet
processing components.
As depicted in Figure 1, Manager
actually wraps the zero.data.Manager, providing more Groovy friendly
shortcuts.
Following is a comparison of Manager's data access
APIs and zero.data.Manager's. You'll notice that they carry a common theme.
Namely, there are a variety of methods types
that accomplish nearly the same thing but for special use cases. For instance,
queryList returns a java.util.List,
whereas queryIterator returns a
java.util.Iterator. Fairly self-explanatory. Each of these methods
types has three overloaded methods and are symmetrical in signature and function across all
the method types. Table 1 depicts all the method types and overloaded
methods complete with descriptions. You can find more information in Project Zero's
Javadoc API documentation.
Table 1. zero.data.Manager method types (summary): queryList returns a java.util.List, queryIterator returns a java.util.Iterator, and update returns an int.
Java programming and Groovy
As you can gather from Table 1, zero.data has a comprehensive
set of data access methods to suit a variety of situations. We will only use the
zero.data.groovy.Manager version of
queryFirst(GString),
queryList(GString),
update(GString),
and insert(GString, List).
Groovy is a dynamic, object-oriented programming language that runs on the Java platform. A frequent misconception is that Groovy attempts to replace the Java programming language. Because Groovy compiles to Java bytecode, it can leverage the standard Java APIs and many libraries that are also written in the Java language. Thus, Groovy doesn't attempt to replace the Java syntax but rather to complement it by offering an alternative syntax containing many language constructs allowing for more concise, terse code.
For example, using the Java version of Project Zero's zero.data APIs, you can
already reduce the lines of code from standard JDBC method calls, as illustrated previously.
However, compare Listing 3, which shows functionally equivalent
Java and Groovy code:
// Java
Manager data = zero.data.Manager.create("wiki");
data.inTransaction(new LocalTransaction() {
public void execute() {
data.update("UPDATE mytable SET name=? WHERE id=?", 1, "new name");
data.insert("INSERT INTO anothertable (name) VALUES (?)", "another name");
List<Map<String,Object>> results = data.queryList("SELECT * FROM sometable");
for (Map<String,Object> row : results) {
for (String key : row.keySet()) {
System.out.println("key: " + key + ", value: " + row.get(key));
}
}
}
});
// Groovy
def data = zero.data.groovy.Manager.create("wiki")
data.inTransaction {
data.update("UPDATE mytable SET name=$name WHERE id=$id")
data.insert("INSERT INTO anothertable (name) VALUES ($name)")
data.eachRow("SELECT * FROM sometable") { row ->
row.each { key, value ->
println("key: $key, value: $value")
}
}
}
From Listing 3, you can see that not only are the number of lines of code
reduced, but the Groovy language tends to stay out of the way. For instance, in the Java
version of the inTransaction method, we must explicitly create
an anonymous instance of LocalTransaction. However, in Groovy,
we hide this behind a closure. Closures are a way in Groovy to create, in a way, "portable"
code. Similar to how you can create an anonymous implementation of the abstract
LocalTransaction class and pass it to the inTransaction method within the Java language,
in Groovy you can zip up the code between the closure brackets ("{" and "}") and send it
to the inTransaction method in a much more concise manner. The Groovy documentation (see Resources) has a more comprehensive explanation about closures in the Groovy
language, including examples and how to write APIs that take a closure as an argument.
The Groovy version of the code in Listing 3 has a second and a third use of closures. The eachRow method can take a closure and will execute the code in between the brackets for each row in the results. As you can see in the listing, that code calls another closure, which simply loops over the entries in the row Map (a java.util.Map) and prints out the keys and values.
Creating an application
For the remainder of this article, you will see the power zero.data provides by creating a simple wiki application using the classic Model View Controller pattern. Figure 2 illustrates the high level architecture of how this works in a Zero application; later in the article, you will learn where to place your code artifacts to support the Zero conventions.
The Model-View-Controller Pattern
Take a look at Figure 2, which illustrates the Model-View-Controller implementation in a Project Zero application:
To jump-start your application, you first need to create a Project Zero application.
The figures that follow show the use of the Project Zero plug-in for Eclipse, which provides
user interface shortcuts for developing applications (but everything can also be done using
the command-line utility equivalent functionality).
First, you must create a Zero application. "Introducing Project
Zero, Part 1" has detailed steps illustrating how to use the Project
Zero Eclipse user interface to create an application. Further, it details the various
folders and artifacts that make up a Project Zero application. But you'll see the basic
steps here.
First, create an application by clicking the File > Project.. menu item. A
dialog box provides the option to select a project wizard. Select Project Zero > Project Zero Application
to start the wizard as shown in Figure 3:
The first screen in the wizard prompts for an application name. It can be named anything you wish,
but we'll keep it obvious by naming it mywiki, as shown in Figure 4:
This creates a project in your Eclipse workspace in the Project Zero application folder structure
and some default application artifacts. Find the mywiki project. It should
look as shown in Figure 5:
Thanks to Project Zero's use of the Ivy framework to maintain dependencies, obtaining zero.data
is very simple. Dependencies are declared in the application in
${app_home}/config/ivy.xml. For convenience and simplicity, we will use Apache Derby
in embedded mode for this demonstration. Edit this file to include the two lines of
XML found in Listing 4:
<dependency org="zero" name="zero.data" rev="1.0+"/>
<dependency org="org.apache.derby" name="derby" rev="10.3.1.4"/>
Resolve your new dependencies as described in
" Introducing Project Zero, Part 1", and then return here.
Configure zero.data
zero.data uses a javax.sql.DataSource to connect to
databases. You can configure and connect to one or more databases by configuring the connection
properties in your Project Zero application's zero.config file. Listing 5
illustrates how to configure a named database with a key of 'wiki':
/config/db/wiki = {
"class" : "org.apache.derby.jdbc.EmbeddedDataSource",
"databaseName" : "db/wiki",
"connectionAttributes" : "create=true"
}
Obtain a Manager instance
With your database connection properties properly configured, you're now ready to obtain a
Manager instance. With this instance, you can execute queries and
manipulate the data using the convenience method described above. You obtain a configured
Manager by using the key you selected in the zero.config file. In
this case, we chose 'wiki'. Pass this as an argument to the create method as shown in
Listing 6:
def data = zero.data.groovy.Manager.create('wiki')
As you can see from
Listing 4,
Listing 5,
and Listing 6, very little is required to get
your Zero application up and running. Next, we will start to build our simple wiki by
defining some basic requirements and translating that into code.
Build a simple wiki
To illustrate the uses of zero.data, we will build a simple wiki application. In its
simplest form, a wiki creates, edits, links to, and renders wiki pages. Thus, in this
example, we will illustrate the Create, Retrieve, and Update operations from the canonical CRUD
operation (but we will not illustrate the Delete operation).
Why a wiki?
Why implement a wiki to illustrate the zero.data APIs? First, a simple wiki is,
well, simple. Thus, our focus on the zero.data APIs can remain in the forefront
without dwelling too much on other implementation details that arise in a more complex
wiki implementation.
Second, a wiki provides a good base in which additional functionality can be
built up later to illustrate more advanced uses of zero.data and other features of
Project Zero.
Wiki requirements
As mentioned previously, a basic wiki implementation needs to create, retrieve, and update
wiki pages. Common to most wiki implementations is that pages are created by linking to them
from another page first. The wiki user clicks such a link, and a form prompting the user to
create the page is displayed. If the page already exists, it is simply rendered with
a link that allows the user to edit the page in a form similar to the create form.
We will use combination of Project Zero's script and
templating features and leverage some know-how around familiar MVC patterns to prototype this implementation. Namely, we will
place our controller scripts in the application's public folder. These scripts will be accessible
through URIs — much like you'd expect from a PHP application. Folder hierarchies and files on
the file system match the URI structure. Our views will live in the application's /app/views
folder. Because we will implement this in Groovy, our controllers will have the
.groovy suffix and views will be suffixed with .gt.
To get a better understanding of where we are going, Figure 6 shows what the application directory
will look like after creating our application artifacts:
Enough background. Let's dive into the code.
Initialize tables
Before we get too carried away with our controller and view implementations, we need a model
that they can use to retrieve, display, and manipulate data. We will persist wiki pages in a
database table. For simplicity, our model will just be a Map mapped to a row in a database
table. (This wouldn't be much of an article about zero.data if we didn't.) Our storage
needs are very modest. We simply need a name for the page and the actual page content. This
is reflected in the SQL code snippet in Listing 7 placed in the setup.sql file:
CREATE TABLE pages (
id int NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
name varchar(20) NOT NULL,
content varchar(3000),
updated timestamp,
PRIMARY KEY (id)
);
Start it up!
Unlike other application artifacts discussed earlier, there is no convention around
setup.sql. The data definition contained in this file is executed using a startup
handler — code you provide to a Project Zero application that is executed at a certain stage when the application is, you guessed it, started.
First, we create the Groovy script that should be executed when your Project Zero application
is started. We want it to execute the statements found in setup.sql if the
database does not yet exist. We are about to do a development time trick, which
should not be repeated in production. Currently, no support exists for modifying an
existing database schema over time. As such, if the application's
schema changes, you must delete the database from the file system using a startup handler. For purposes of illustration, we'll leave best
practices at the front door. But this simple — if not naive — implementation will suffice to
illustrate the use of zero.data APIs (see Listing 8):
import zero.data.groovy.Manager
//if derby database already exists, don't run the script
def db_dir = new File('db/wiki')
if (!db_dir.isAbsolute())
db_dir = new File(config.root[], 'db/wiki')
if (!db_dir.exists()) {
def buffer = new StringBuffer()
new File('setup.sql').text.split('\n').each() { line ->
line = line.trim()
if (line && !line.startsWith("--") && !line.toLowerCase().startsWith("connect"))
buffer << line << '\n'
}
def statements = buffer.toString().split(';')
def data = Manager.create('wiki')
try {
data.inTransaction() {
statements.each() { statement ->
statement = statement.trim()
if (statement)
data.update(statement)
}
}
} catch (Throwable error) {
throw new RuntimeException("Setup database error: ${error.message}", error);
}
}
Before this script can execute at startup, we need to configure it to do so. This is done
by registering the script as a startup handler in the config/zero.config file, as shown
in Listing 9:
/config/handlers += {
"events" : "start",
"handler" : "startup.groovy"
}
This registration tells the Project Zero runtime to execute the startup.groovy script under
all conditions when the 'start' event is fired. Start the Project Zero application and the
CREATE TABLE statement will now be executed. Because the database does not yet exist,
embedded Derby will create it on disk for you (because you previously configured the
connection properties to create it if it didn't exist).
Code the application
Leave your application running. Project Zero is dynamic and can pick up most changes. Now that
you have your table created, you can start coding your controller and views. Let's start with
rendering a page.
Rendering dynamic HTML
To render a page, we need a hook that the Project Zero runtime uses to hand off control
to our wiki application. We will implement this in the form of a controller in the app/public
directory. We'll call the render functionality 'view.groovy'. First, render needs a
Manager to execute the query for a page. Next, we will
execute a query to retrieve a row in the database. If the row exists, we render it. If it
does not exist, we display a create page. The implementation is shown in Listing
10:
def name = request.params.name[]
def data = zero.data.groovy.Manager.create('wiki')
def page = data.queryFirst("SELECT * FROM pages WHERE name=$name")
if (page) {
request.page.name = page.name
request.page.content = page.content
request.view = 'view.gt'
} else {
request.page.name = name
request.view = 'create.gt'
}
render()
We now need to create the 'view.gt' and 'create.gt' views to support the view.groovy
controller. Both views simply render data in the request scope using Groovy templates
(see Listings 11 and 12):
<html>
<head>
<title><%= request.page.name[] %></title>
</head>
<body>
<h1><%= request.page.name[] %></h1>
<%= request.page.content[] %>
<hr/>
<a href="<%= "${getRelativeUri('/edit.groovy')}?page=${request.page.name[]}" %>">
Edit this page?
</a>
</body>
</html>
<html>
<head>
<title><%= request.page.name[] %> - Create</title>
</head>
<body>
<h1><%= request.page.name[] %></h1>
This page does not exist.
<a href="<%= "${getRelativeUri("/edit.groovy")}?name=${request.page.name[]}" %>">
Create?
</a>
</body>
</html>
Updating the database
In the previous section, you created two views: a wiki page rendered and a form to create a
wiki page. To support these views, you need to create additional controllers and views.
Namely, the view.gt view links to the edit.groovy controller, and the create.gt HTTP POSTs its
form to save.groovy by rendering edit.gt. edit.groovy retrieves data and displays a form
similar to create, only now it's populated with the existing wiki page content. save.groovy either
inserts a row or updates an existing row in the database table. Their implementations
are shown in Listings 13 through 15:
def name = request.params.name[]
def data = zero.data.groovy.Manager.create('wiki')
def page = data.queryFirst("SELECT * FROM pages WHERE name=$name")
if (page) {
request.page.content = page.content
} else {
request.page.content = ""
}
request.page.name = name
request.view = 'edit.gt'
render()
<html>
<head>
<title><%= request.page.name[] %> - Editing</title>
</head>
<body>
<h1>Editing <%= request.page.name[] %></h1>
<form method="POST"
action="<%= "${getRelativeUri('/save.groovy')?name=${request.page.name[]}" %>">
<textarea name="content" rows="20" cols="60"><%= request.page.content[] %></textarea>
<input type="submit" value="Save Page"/>
</form>
</body>
</html>
def name = request.params.name[]
def data = zero.data.groovy.Manager.create('wiki')
def page = data.queryFirst("SELECT * FROM pages WHERE name=${name}")
def content = request.params.content[]
if (page) {
    data.update("UPDATE pages SET content=${content} WHERE name=${name}")
} else {
    data.update("INSERT INTO pages (name,content) VALUES (${name},${content})")
}
uri = "${getAbsoluteUri('/view.groovy')}?name=${name}"
request.headers.out.Location = uri
request.status = HttpURLConnection.HTTP_MOVED_TEMP
Testing the application
You now have a very basic wiki application. So basic, in fact, that you must provide a wiki page that doesn't exist in order to bootstrap the application. You can do this by entering the view.groovy URI with a name parameter (for example, view.groovy?name=HomePage) to get the page shown in Figure 7:
We are instructed that the page does not exist. This is good because it means that it is working. We could have
put any value into the URI's name parameter, but "HomePage" just seemed like a good start. Next, we need
to create content for the HomePage. Let's enter in any arbitrary HTML. Perhaps add a link in the
form that we know will link to a page (even if we know it doesn't exist) as shown in
Figure 8:
Now, we click the Save Page button to put the page in the database. We are then redirected to
view the page we just edited, as shown in Figure 9:
If you linked to a non-existing page (it's pretty easy because only one page exists in the database now), then
you can click on that link and it will take you to the missing page screen again. You can create a page
for your linked page as shown in Figure 10:
And that's it for our simple wiki. Naturally, our wiki could do many more things. We could use a markup text like Markdown or Textile to simplify content creation and linking, among other things. (You can try these as exercises later!)
Conclusion
Project Zero is a simplified development platform focused on agile development of Web
2.0 applications following an SOA. Among Project Zero's arsenal of libraries is a simplified API for executing
SQL queries. You learned how to leverage these APIs to build a simple wiki.
We discussed the motivations behind the zero.data API, including its wrapping of the pureQuery APIs.
We also looked in-depth at zero.data.Manager and illustrated some of the
high-level differences between the Java and Groovy versions of the APIs.
We then immediately got practical by creating an application
using the zero.data APIs by building a simple wiki, initializing database tables, and finally
coding the implementation. We hope this article has been useful to you and that you'll dive in to creating Zero applications! The active Project Zero community will be there to help you if you need it.
Download
Resources
About the authors
Brandon Smith works on IBM's Project Zero, heading up all things data. He has a master's degree
from Carnegie Mellon University. Catch up with him at 16cards.com
and [email protected].
Hanumanth Kanthi works as an IT Architect with IBM Software Services for WebSphere.
He has a master’s degree in computer science from Victoria University of Technology, Australia.
He can be reached at [email protected].. | http://www.ibm.com/developerworks/web/library/wa-pz-wiki/ | crawl-001 | en | refinedweb |
#include <linbox/blackbox/diagonal.h>
Random diagonal matrices are used heavily as preconditioners.
This is a class of n by n diagonal two template parameters. The first is the field in which the arithmetic is to be done. The second is the vector trait indicating dense or sparse vector interface, dense by default. This class is then specialized for dense and sparse vectors.
The default class is not implemented. It's functions should never be called because partial template specialization should always be done on the vector traits. | http://www.linalg.org/linbox-html/classLinBox_1_1Diagonal.html | crawl-001 | en | refinedweb |
Duncan Mackenzie writes about the issue of Categories vs Tags..
Way back when I announced the first Roadmap for Subtext, I stated that Subtext would remove the multiple blogs feature and only support a single blog. Fortunately I was persuaded by many commenters to abandon that change and continue to support multiple blogs. Instead, I set out to simplify the process of configuring multiple blogs.
Now I am really glad that I did so. I currently have three blogs running off of a single installation of Subtext..) do CALL COPY /Y Output.sql + %%A Output.sql
Yeah, that would work, but it is so sloooooow.
Szeryf points out that I can simply pass *.sql to the COPY command and get the same result.
*.sql
COPY..
Jon Galloway is my batch file hero. He’s the one who introduced me to the FOR %%A in ... syntax.
FOR %%A in ...
Today I needed to rename a bunch of files. On one project, we haven’t kept our file extensions consistent when creating a stored procedure file in a Database project. Some of them had .prc extensions and others have .sql extensions.
.prc
.sql
I wanted to rename every file to use the .sql extension. I couldn’t simply use a batch rename program because I wanted these files renamed within Subversion, which requires running the svn rename command.
svn rename
So using a batch file Jon sent me, I wrote the following.
FOR %%A in (*.prc) do CALL :Subroutine %%A
GOTO:EOF
:Subroutine
svn rename %~n1.prc %~n1.sql
GOTO:EOF
Pretty nifty. For each file in the current directory that ends in the .prc extension, I call a subroutine. That subroutine makes use of the %~n1 argument which provides the filename without the extension.
%~n1
For help in writing your batch files, type help call in the command prompt.
help call
I can see using this technique all over the place. I will leave it to my buddy Tyler to provide the Powershell version..
Pics like this make me bummed that I couldn’t make it to the Burning Man festival this year. How cool is that?
Original pic here.
UPDATE:!
tags: ASP.NET, Atlas, Comment Spam!
Well he must be smart! His forehead is even bigger than yours!
See for yourself with these comparison photos.
Ready for a picnic)?
tags: Open Source, Microsoft, Subtext.
Source
SourceUrl
<source>.
If you’ve read my blog you know I have a bit of a thing for Microformats. I once wrote a little special effect script to highlight links to your friends when marked up using the XFN (XHTML Friends Network) Microformat used to denote relationships to people you link to.
Ever since I wrote and started using this script, I ran into a bit of interpersonal angst everytime I would link to someone. Every link spurred the following internal dialog.
Do I mark so-and-so as a friend or acquaintance? Well we’ve never met but I think he’d consider me a friend. But would it be presumptuous if I classified him as a friend. What if I mark him as a friend and he links to me as an acquaintance? I would be crushed! But what if I link to him as an acquaintance and he considers me a friend. Some feelings could be hurt!
By now you probably think I have some serious issues (very true) and am being overly paranoid. But check out Scott Hanselman’s response when I metadata’d him as an acquaintance. He called me a dick! *sniff* *sniff* Ouch! Well technically he used well formed markup (no namespace declared) to make that point, which softened the impact, but only slightly.
I have since realized that the standard XFN relationships are not granular enough to capture the nuances of real world relationships. To save others from such social insecurity and XFN relationship angst, I humbly propose some new relationships I think should be added to the format. For your reference, the current list is located here. I will group by proposed additions in the appropriate existing categories.
rev="dumped"
I hope to submit this Tantek, Matthew, and Eric for their consideration. Unfortunately I have a few strikes against this proposal becoming accepted.
For example, there’s this point on the background page of the XFN site..
There’s also this point on the same page.
XFN values are by implication present tense. We have chosen to omit any temporal component for the sake of simplicity.
XFN values are by implication present tense.
We have chosen to omit any temporal component for the sake of simplicity.
So yes, it appears I have my work cut out for me as many of my proposed additions completely violate the spirit and guidelines of XFN. But that is a minor quibble I’m sure we can resolve with your help. Thank you and good night.
I found out recently that many of my family members and friends who used to read my blog stopped doing so because most of my blog posts were pure gibberish to them. Apparently not everyone is fascinated by topics such as how many CPU cycles it takes to make a method call in a dynamic language? Neither are they enthralled by matching HTML with Regular Expressions. Go figure...
UPDATE: Doesn’t seem to align quite right for me in Firefox, but looks great in IE.
Don’t you love jumping in on the latest fad? What will be next? BlackVelvet.js?
Of
Some computer scientist by the name of Donald Knuth once said,
Premature optimization is the root of all evil (or at least most of it) in programming..
tags: Performance, Optimization, Software, Programming
Jo
Just a little shout out to my wife to wish us a happy anniversary. We’ve been married for four years and each one has been better than the last. I love you honey!
She’s got a rock solid sense of humor (have you seen her gravatar?) and a smile with a gleam so bright it makes you shout Eureka!
I would post a picture, but my wife’s sense of online privacy would make Bruce Schneier look like a MySpace exhibitionist. In fact, I’ve already said too much.
Instead, I’ll post a picture of a ninja because ninjas have a lot in common with my wife. They both kick ass, they are both Japanese (except for this one), they are both concerned about privacy, and like my wife, ninjas are so totally cool!
I mean who doesn’t love ninjas!?
Picture from AskANinja.com PM...
Tonight at Soccer practice, we scrimmaged for a while then ran through some drills. We have an English guy and a Scottish guy (who hardly anyone can understand) on the team who are a laugh a minute. You can imagine their surprise when we started a shooting drill and our team manager tells them that we all have to shag our own balls.
Seems!
This past weekend my wife and I drove up to San Francisco to attend a friend’s wedding, which ended up being a lot of fun. We always like visiting The City because of the many friends we have in the area, though being there reinforces the fact that it is not a place where we’d want to live (no offense to anybody who lives there, it’s just not our style).. | http://haacked.com/archive/2006/09.aspx | crawl-001 | en | refinedweb |
Via Dare's blog, I found this interesting post on Random Number Generation on Michael Brundage's website. My undergrad thesis was on the topic of pseudorandom number generation so I thought I'd take the two classes he provided for a quick spin.
Unfortunately, the C# samples did not compile as is. In his post he discusses how the C++ samples are optimized. I figured I might be able to use them to guide changes to the C# port and could post the results here. Please note that I have not tested them yet and need to verify that my changes were correct. Enjoy and let me know if I got anything wrong.
[CLSCompliant(false)]
public class MersenneTwister
{
private ulong _index;
private ulong[] _buffer = new ulong[624];
/// <summary>
/// Creates a new <see cref="MersenneTwister"/> instance.
/// </summary>
public MersenneTwister()
{
Random r = new Random();
for (int i = 0; i < 624; i++)
_buffer[i] = (ulong)r.Next();
_index = 0;
}
/// Returns a random long integer.
/// <returns></returns>
public ulong Random()
if (_index == 624)
{
_index = 0;
long i = 0;
ulong s;
for (; i < 624 - 397; i++)
{
s = (_buffer[i] & 0x80000000) | (_buffer[i+1] & 0x7FFFFFFF);
_buffer[i] = _buffer[i + 397] ^ (s >> 1) ^ ((s & 1) * 0x9908B0DF);
}
for (; i < 623; i++)
_buffer[i] = _buffer[i - (624 - 397)] ^ (s >> 1) ^ ((s & 1) * 0x9908B0DF);
s = (_buffer[623] & 0x80000000) | (_buffer[0] & 0x7FFFFFFF);
_buffer[623] = _buffer[396] ^ (s >> 1) ^ ((s & 1) * 0x9908B0DF);
}
return _buffer[_index++];
}
public sealed class R250Combined521
private ulong r250_index;
private ulong r521_index;
private ulong[] r250_buffer = new ulong[250];
private ulong[] r521_buffer = new ulong[521];
/// Creates a new <see cref="R250Combined521"/> instance.
public R250Combined521()
ulong i = 521;
ulong mask1 = 1;
ulong mask2 = 0xFFFFFFFF;
while (i-- > 250)
r521_buffer[i] = (ulong)r.Next();
while (i-- > 31)
r250_buffer[i] = (ulong)r.Next();
/*
Establish linear independence of the bit columns
by setting the diagonal bits and clearing all bits above
*/
while (i-- > 0)
r250_buffer[i] = (((uint)r.Next()) | mask1) & mask2;
r521_buffer[i] = (((uint)r.Next()) | mask1) & mask2;
mask2 = mask2 ^ mask1;
//mask2 ^= mask1;
mask1 >>= 1;
r250_buffer[0] = mask1;
r521_buffer[0] = mask2;
r250_index = 0;
r521_index = 0;
public ulong random()
ulong i1 = r250_index;
ulong i2 = r521_index;
ulong j1 = i1 - (250-103);
if (j1 < 0)
j1 = i1 + 103;
ulong j2 = i2 - (521-168);
if (j2 < 0)
j2 = i2 + 168;
ulong r = (r250_buffer[j1] ^ r250_buffer[i1]);
r250_buffer[i1] = r;
ulong s = (r521_buffer[j2] ^ r521_buffer[i2]);
r521_buffer[i2] = s;
i1 = (i1 != 249) ? (i1 + 1) : 0;
r250_index = i1;
i2 = (i2 != 521) ? (i2 + 1) : 0;
r521_index = i2;
return r ^ s;.
My.
My friend Thomas Wagner has a small epiphany of sorts. At first, I thought he was writing the biography of my first year at my current job. But apparently this is quite common among businesses...
This is something a client is waiting to see and technically it’s already late. Yadda yadda please get it done ASAP.
My reply was [edited slightly to protect the guilty] (note the weird pronoun use as time travel is involved).... | http://haacked.com/archive/2005/02.aspx | crawl-001 | en | refinedweb |
inet_addr()
Convert a string into a numeric Internet address
Synopsis:
#include <sys/socket.h> #include <netinet/in.h> #include <arpa/inet.h> in_addr_t inet_addr( const char * cp );
Since:
BlackBerry 10.0.0
Arguments:
- cp
- A pointer to a string that represents an Internet address.
Description:
The inet_addr() routine converts a string representing an IPv4 Internet address (for example, "127.0.0.1") into a numeric Internet address. To convert a hostname such as, call gethostbyname().
All Internet addresses are returned in network byte order (bytes are ordered from left to right). All network numbers and local address parts are returned as machine-format integer values. For more information on Internet addresses, see inet_net_ntop().
Returns:
An Internet address, or INADDR_NONE if an error occurs.
Classification:
Caveats:
Although the value INADDR_NONE (0xFFFFFFFF) is a valid broadcast address, inet_addr() always indicates failure when returning that value. The inet_aton() function doesn't share this problem.
Last modified: 2014-11-17
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/i/inet_addr.html | CC-MAIN-2016-50 | en | refinedweb |
I am working on a text-based RPG, and I am trying to create magic spells that affect stats such as base attack, turn order, etc. I am using various classes for the spells in the format:
class BuffSpell(object):
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
bardSpells = {
1: BuffSpell(name= "Flare", level= 0, stat= baseAttack, value -1)
}
def useBuffSpell(target, spell):
target.spell.stat = target.spell.stat + spell.value
goblin
BuffSpell[1]
Use
getattr and
setattr, and make the attribute reference in the spell a
str:
bardSpells = { 1: BuffSpell(name= "Flare", level= 0, stat="baseAttack", value -1) } def useBuffSpell(target, spell): setattr(target, spell.stat, getattr(target, spell.stat, 0) + spell.value)
setattr is a builtin that, given an object, the name of an attribute, and a value, sets that attribute of the object to the value.
getattr is a builtin that, given an object, the name of an attribute, and optionally a default (
0 in the above example), returns that attribute of the object, or the default if the object does not have an attribute by that name. If no default is given, nonexistent attribute access raises an
AttributeError.
The above example, if you did
useBuffSpell(goblin, bardSpells[1]), would reduce the
goblins
baseAttack by 1, or set it to
-1 if it did not have a
baseAttack. | https://codedump.io/share/9mWocLK085wk/1/python-how-do-i-create-effects-that-apply-to-variables-in-classes | CC-MAIN-2016-50 | en | refinedweb |
Set-NetIsatapConfiguration
Set-NetIsatapConfiguration
Syntax
Parameter Set: ByName Set-NetIsatapConfiguration [[-State] <State> {Default | Automatic | Enabled | Disabled} ] [[-Router] <String> ] [[-ResolutionState] <State> {Default | Automatic | Enabled | Disabled} ] [[-ResolutionIntervalSeconds] <UInt32> ] [-CimSession <CimSession[]> ] [-GPOSession <String> ] [-IPInterface <CimInstance> ] [-PassThru] [-PolicyStore <String> ] [-ThrottleLimit <Int32> ] [-Confirm] [-WhatIf] [ <CommonParameters>] [ <WorkflowParameters>] Parameter Set: InputObject (cdxml) Set-NetIsatapConfiguration [[-State] <State> {Default | Automatic | Enabled | Disabled} ] [[-Router] <String> ] [[-ResolutionState] <State> {Default | Automatic | Enabled | Disabled} ] [[-ResolutionIntervalSeconds] <UInt32> ] [-CimSession <CimSession[]> ] [-PassThru] [-ThrottleLimit <Int32> ] [-Confirm] [-WhatIf] [ <CommonParameters>] [ <WorkflowParameters>]
Detailed Description
The Set-NetIsatapConfiguration cmdlet sets an ISATAP configuration on a computer or on a Group Policy Object (G to which to store the configuration information.
You can use this with the NetGPO cmdlets to aggregate multiple operations performed on a GPO.
You cannot use this parameter with the PolicyStore parameter.
-IPInterface<CimInstance>
Specifies the IP interface on which to set the ISATAP configuration.
-PassThru
Returns an object representing the item with which you are working. By default, this cmdlet does not generate any output.
-PolicyStore<String>
Specifies the policy store that contains the configuration to set. The acceptable values for this parameter are:
-- PersistentStore
-- ActiveStore
-- GPO
To set the configuration of a GPO, specify the GPO name using the following format: Domain\GPOName
You cannot use this parameter with the GPOSession parameter.
-ResolutionIntervalSeconds<UInt32>
Specifies how often in seconds that Windows Server® 2012 attempts to contact the specified ISATAP server.
-ResolutionState<State>
Specifies the state of router name resolution. The state of the router name resolution specifies how often Windows Server 2012 resolves the ISATAP router name.
-Router<String>
Specifies the policy setting that allows you to specify a router name or IPv4 address for an ISATAP router.
If you enable this policy setting, then you can specify a router name or IPv4 address for an ISATAP router. If you enter an IPv4 address of the ISATAP router, then DNS services are not required.
-State<State>
Specifies the policy setting that allows you to configure ISATAP, an address-to-router and host-to-host, host-to-router and router-to-host automatic tunneling technology that provides unicast IPv6 connectivity between IPv6 hosts across an IPv4 intranet. You can specify one of the following three policy setting states:
-- Default. If the ISATAP router name is resolved successfully, then ISATAP is configured with a link-local address and an address for each prefix received from the ISATAP router through stateless address auto-configuration.
---- If the ISATAP router name is not resolved successfully, then ISATAP connectivity is not available on the host using the corresponding IPv4 address.
-- Enabled.
---- If the ISATAP name is resolved successfully, then ISATAP is configured with a link-local address and an address for each prefix received from the ISATAP router through stateless address auto-configuration.
---- If the ISATAP name is not resolved successfully, then the ISATAP interface is configured with a link-local address.
-- Disabled. No ISATAP interfaces are present on theISATAPConfiguration
The
Microsoft.Management.Infrastructure.CimInstanceobject is a wrapper class that displays Windows Management Instrumentation (WMI) objects. The path after the pound sign (
#) provides the namespace and class name for the underlying WMI object.
When the Passthru parameter is specified, this cmdlet outputs a modified ISATAP configuration object.
Examples
Example 1: Set ISATAP configuration
This command modifies the router name.
Example 2: Set a router using an input object
This set of commands uses the Get-NetIPInterface and Get-NetIsatapConfiguration cmdlets to get the ISATAP configuration of the interface at index 14 and stores it in a variable named $config, and then sets the router name to SuperIsatap using this cmdlet.
Related topics | https://technet.microsoft.com/en-us/library/jj613704.aspx | CC-MAIN-2016-50 | en | refinedweb |
Timeline
02/12/07:
- 20:49 Changeset [19595] by
- 7 edits in trunk/WebCore
- 2 edits in trunk/WebKit
- 118 edits, 8 deletes in trunk
Rolling out r19588 as it caused a build failure and a hang in layout tests after the obvious build fix was applied.
- 17:37 Changeset [19592] by
- 1 edit in tags/Safari-521.34.1/WebKit/Info.plist
Versioning.
- 17:35 Changeset [19591] by
- 2 edits in tags/Safari-521.34.1/WebKit
- 17:35 Changeset [19590] by
- 21 edits, 6 deletes in trunk
- 1 copy in tags/Safari-521.34.1
New tag.
- 15:57 Changeset [22936] by
- 12 edits in branches/WindowsMer
- 120 edits, 8 adds in trunk
- 16 edits, 2 adds in trunk
- 5 edits in S60/trunk/WebCore
- 2 edits in S60/branches/3.1m/WebCore
- 7 edits, 2 adds in trunk
- 2 edits in trunk/WebCore
- 8 edits, 2 adds in trunk
Reviewed by Mitz.
Do not create child renderers for table column groups
if the child does not have a table column display type.
- 12:28 Changeset [19581] by
- 2 edits in trunk/WebCore
Reviewed by Mitz
Fix assertion failure in layout test.
- html/HTMLMapElement.cpp: (WebCore::HTMLMapElement::parseMappedAttribute):
- 11:58 Changeset [19580] by
- 3 edits in trunk/WebCore
- 29 edits in trunk
- 4 edits, 4 adds in trunk
- 2 edits in trunk/LayoutTests
Updated results for this failing test. It looks like Maciej generated
the original results before he made the test "dump as text."
- fast/text/text-shadow-extreme-value-expected.txt:
- 07:28 Changeset [19576] by
- 5 edits in S60/trunk/WebKit
bujtas, Reviewed by Yongjun, merged by Brad.
DESC: Merge of r19540 to s60/trunk: Can not open the Browser application ALES-6Y9GG7
fix: delay formmanager construct
- 06:59 Changeset [19575] by
- 3 edits in S60/trunk/WebKit
yaharon, reviewed by yongjun
DESC: [S60] BrowserNG: Passwords stored without notifying the user MLIO-6XXE6N
- 06:56 Changeset [19574] by
- 2 edits in S60/trunk/WebKit
sareen, reviewed by <[email protected]>.
DESC: Squares are displayed instead of characters for Greek web pages.
- 03:59 Changeset [19573] by
- 3 edits, 2 adds in trunk
02/11/07:
- 23:37 Changeset [19572] by
- 3 edits in trunk/WebCore
Reviewed by Maciej
First in what will be a series of HistoryItem enhancements to help debugging
- history/HistoryItem.cpp: (WebCore::HistoryItem::showTree): (WebCore::HistoryItem::showTreeWithIndent): (showTree): Outside of WebCore namespace, and extern "C" - to make even the DWARF debugger able to find it... *sigh*
- history/HistoryItem.h:
- 23:21 Changeset [19571] by
- 7 edits, 2 adds in trunk
LayoutTests:
Test case not reviewed; based on manual test by David Kilzer.
- test case for <rdar://problem/4975133> ASSERT failure and crash right-clicking on image in SVG use test
- svg/custom/use-events-crash.svg: Added.
WebCore:
Reviewed by Anders.
- fixed <rdar://problem/4975133> ASSERT failure and crash right-clicking on image in SVG use test
Test: svg/custom/use-events-crash.svg
- bindings/js/kjs_dom.cpp: (KJS::toJS):
- ksvg2/svg/SVGElementInstance.cpp: (WebCore::SVGElementInstance::toNode):
- ksvg2/svg/SVGElementInstance.h:
WebKitTools:
Reviewed by Mitz.
- add contextClick() operation to eventSender to be able to test this
- DumpRenderTree/EventSendingController.m: (+[EventSendingController isSelectorExcludedFromWebScript:]): (-[EventSendingController contextClick]):
- 23:06 Changeset [19570] by
- 13 edits, 3 adds in trunk/LayoutTests
Binary portion of patch landed in r19490 that I forgot to land.
- 20:47 Changeset [22935] by
- 3 edits in branches/WindowsMerge/WebKitWin
Fixing line endings.
- WebResource.cpp:
- WebResource.h:
- 20:06 Changeset [19569] by
- 3 edits, 4 adds in trunk
LayoutTests:
Reviewed by Maciej.
- test for REGRESSION: Reproducible assertion failure in DeleteSelectionCommand::fixupWhitespace()
- fast/text/delete-hard-break-character-expected.checksum: Added.
- fast/text/delete-hard-break-character-expected.png: Added.
- fast/text/delete-hard-break-character-expected.txt: Added.
- fast/text/delete-hard-break-character.html: Added.
WebCore:
Reviewed by Maciej.
- fix REGRESSION: Reproducible assertion failure in DeleteSelectionCommand::fixupWhitespace()
Test: fast/text/delete-hard-break-character.html
The bug was caused by not updating a line whose line break object and offset
has been deleted. When deleting text, all lines containing the deleted text
are marked dirty. However, if the first character being deleted is a newline
which serves as a hard line break for the previous line, then that line will
not be marked, and since it will be a clean line ending with a line break,
relayout will begin at the next line. The fix is to check for this when
determining where to relayout from.
- rendering/bidi.cpp: (WebCore::RenderBlock::determineStartPosition): Changed the condition for including the last clean line in relayout to include the case where the last clean line ends with a line break, but that line break is a newline that has been deleted.
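To illustrate the idea behind that condition, here is a minimal, self-contained C++ sketch. It uses a stand-in line record rather than WebCore's real bidi/line-box types (all field names are invented for the example): relayout has to start no later than the first dirty line, and a clean line whose trailing hard break came from a deleted object cannot be reused either.

#include <cstdio>
#include <vector>

struct Line {
    bool dirty;              // content on this line changed
    bool endsWithLineBreak;  // line ends with a hard break (newline or <br>)
    bool breakObjectDeleted; // hypothetical flag: the node supplying that break was removed
};

// Returns the index of the first line that must be laid out again.
static size_t determineStartLine(const std::vector<Line>& lines)
{
    size_t start = 0;
    for (size_t i = 0; i < lines.size(); ++i) {
        const Line& line = lines[i];
        if (line.dirty)
            break;
        // A clean line ending in a hard break whose break object was deleted
        // must be included in the relayout as well.
        if (line.endsWithLineBreak && line.breakObjectDeleted)
            break;
        start = i + 1;
    }
    return start;
}

int main()
{
    std::vector<Line> lines = {
        { false, true, true },   // previous line: clean, but its trailing newline was deleted
        { true,  false, false }, // line that contained the deleted text
    };
    std::printf("relayout starts at line %zu\n", determineStartLine(lines));
    return 0;
}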
- 17:19 Changeset [19568] by
- 2 edits in trunk/WebKit
Reviewed by Mark.
Switch the initial value of box-sizing property from "border-box" to "content-box".
- WebInspector/webInspector/inspector.js:
- 16:18 Changeset [19567] by
- 3 edits, 4 adds in trunk
LayoutTests:
Reviewed by Maciej.
Test for REGRESSION: Google Calendar cell highlight misplaced
- fast/block/positioning/offsetLeft-offsetTop-borders-expected.checksum: Added.
- fast/block/positioning/offsetLeft-offsetTop-borders-expected.png: Added.
- fast/block/positioning/offsetLeft-offsetTop-borders-expected.txt: Added.
- fast/block/positioning/offsetLeft-offsetTop-borders.html: Added.
WebCore:
Reviewed by Maciej.
REGRESSION: Google Calendar cell highlight misplaced
Make offsetLeft/offsetTop/offsetParent behavior match Firefox.
- rendering/RenderObject.cpp: (WebCore::RenderObject::offsetLeft): (WebCore::RenderObject::offsetTop): (WebCore::RenderObject::offsetParent):
- 15:50 Changeset [19566] by
- 3 edits, 4 adds in trunk
LayoutTests:
Reviewed by Maciej.
Test for REGRESSION: No day/week scrollbar in Google Calendar
- fast/layers/overflow-scroll-auto-switch-expected.checksum: Added.
- fast/layers/overflow-scroll-auto-switch-expected.png: Added.
- fast/layers/overflow-scroll-auto-switch-expected.txt: Added.
- fast/layers/overflow-scroll-auto-switch.html: Added.
WebCore:
Reviewed by Maciej.
- fixed REGRESSION: No day/week scrollbar in Google Calendar
Make sure overflow:auto scrollbars are always enabled. If they were overflow:scroll
and dynamically changed to auto they might still be disabled.
- rendering/RenderLayer.cpp: (WebCore::RenderLayer::updateScrollInfoAfterLayout):
- 10:28 Changeset [19565] by
- 2 edits in trunk/WebKitTools
- Scripts/check-for-global-initializers: Fix case where executable doesn't exist at all so it doesn't give a perl exception (happens in clean builds, for example).
- 01:05 Changeset [19564] by
- 3 edits, 2 adds in trunk
Reviewed by Hyatt.
XPath title shouldn't match <title> in XHTML
Test: fast/xpath/ensure-null-namespace.xhtml
- xml/XPathStep.cpp: (WebCore::XPath::Step::nodeTestMatches): Only let a null namespace match any for HTML.
- 00:28 Changeset [19563] by
- 2 edits, 1 add in trunk/WebCore
Reviewed by Adam.
REGRESSION: After javascript, onChange not triggered when selecting same option
- html/HTMLSelectElement.cpp: (WebCore::HTMLSelectElement::setSelectedIndex): Remember m_lastOnChangeIndex value, as it can change from setSelected() now. (WebCore::HTMLSelectElement::notifyOptionSelected): Update m_lastOnChangeIndex.
- manual-tests/select-onchange-after-js.html: Added. Also tests for bug 11402.
02/10/07:
- 23:31 Changeset [22934] by
- 2 edits in branches/WindowsMerge/WebCore
Rubberstamped by Oliver.
- 22:01 Changeset [22933] by
- 3 edits in branches/WindowsMerge/WebKitWin
Reviewed by Steve.
Fix <rdar://problem/4989705> Loading eBay puts many items into back list
The bug was that WebFrame::createFrame was calling loadRequest on the
new child frame, which would initiate a load of FrameLoadTypeStandard,
when in fact we wanted to do a FrameLoadTypeInternal load. I ported
-[WebFrame _loadURL:referrer:intoChild:] to WebFrame, which contains
the logic we need.
- WebFrame.cpp: (WebFrame::createFrame): Use a COMPtr to manage the new WebFrame, and call loadURLIntoChild instead of just calling loadRequest. (WebFrame::loadURLIntoChild): Ported from Mac WebFrame.
- WebFrame.h: Added declaration.
- 18:59 Changeset [19562] by
- 2 edits in trunk/WebKitTools
WebKitTools:
Reviewed by Adam.
- Scripts/svn-create-patch: (findSourceFileAndRevision($)): Use File::Spec->abs2rel() instead of substr() to generate a relative path to the copied file.
- 18:32 Changeset [19561] by
- 2 edits in trunk/WebKitTools
Reviewed by Sam Weinig.
- Drosera/Drosera.icns: updated the icon with 512px and 256px variants
- 18:29 Changeset [19560] by
- 2 edits in trunk/WebKitTools
WebKitTools:
Reviewed by Timothy.
- Scripts/svn-apply: Binary patches don't need a trailing newline after the base64 encoded text.
- 18:17 Changeset [19559] by
- 5 edits, 3 adds in trunk
WebCore:
Reviewed by Maciej.
Manual tests for
Call different Java methods that take a variety of Array parameters
from Javascript, passing a Javascript array.
- manual-tests/liveconnect-applet-array-parameters.html: Added.
- manual-tests/resources/ArrayParameterTestApplet.class: Added.
- manual-tests/resources/ArrayParameterTestApplet.java: Added.
- 18:06 Changeset [19558] by
- 3 edits, 2 adds in trunk
LayoutTests:
Reviewed by Maciej.
Crash when enumerating XPath namespace axis
Test adapted from Python-based 4XPath test suite,
<>
- fast/xpath/namespace-nodes-expected.txt: Added.
- fast/xpath/namespace-nodes.html: Added.
WebCore:
Reviewed by Maciej.
Crash when enumerating XPath namespace axis
- xml/XPathStep.cpp: (WebCore::XPath::Step::nodesInAxis): Namespace axis enumeration was broken in that it crashed, and also in that it returned attribute nodes instead of XPath namespace ones. Removed it altogether.
- 17:05 Changeset [19557] by
- 2 edits, 1 add in trunk/WebCore
WebCore:
Reviewed by Adam.
- fix ASSERTION failure on some declarative animations <rdar://problem/4975132>
- ksvg2/svg/SVGAnimationElement.cpp: (WebCore::parseValues): Changed the string length math to avoid truncating the last character of each value.
- manual-tests/svg-animation-parseValues.svg: Added.
- 17:01 Changeset [19556] by
- 2 edits, 1 add in trunk/WebCore
WebCore:
Reviewed by Adam.
- fix REGRESSION (Native slider): slider thumb not updated when the mouse is dragged/released out of range
No automated test case because dumping the render tree updates layer positions anyway.
- manual-tests/slider-thumb-tracking.html: Added.
- rendering/RenderSlider.cpp: (WebCore::RenderSlider::setCurrentPosition): Added call to updateLayerPosition() for the thumb's layer.
- 16:52 Changeset [19555] by
- 3 edits in trunk/LayoutTests
LayoutTests:
Reviewed by Adam.
- fix Layout test failure: fast/events/frame-click-focus.html
- fast/events/frame-click-focus-expected.txt: Update results to include main frame blur.
- fast/events/frame-click-focus.html: Update to click in main frame first.
- 06:00 Changeset [19554] by
- 2 edits in trunk/WebKit
Reviewed by Maciej.
- fix REGRESSION (SearchField): Dragging to select in the Web Inspector's search fields drags the inspector window
- WebInspector/webInspector/inspector.css: Added the search field to the undraggable dashboard-region.
- 02:38 Changeset [19553] by
- 6 edits, 6 adds in trunk
LayoutTests:
Reviewed by Maciej
<rdar://problem/4965133> WebKit sends file:// url referrers
Added a new category of http tests - "local" where the test is run as a local file but
the test involves remote resources from the httpd.
This test had to be done with cached subresources to tickle the code path that was failing before,
hence the bizarre different-sized images instead of simple success/failure text
- http/tests/local/file-url-sent-as-referer-expected.txt: Added.
- http/tests/local/file-url-sent-as-referer.html: Added - document.writes an img source that ends up testing the http-referer
- http/tests/security/resources/green250x50.png: Added.
- http/tests/security/resources/red200x100.png: Added.
- http/tests/security/resources/showRefererImage.php: Added - By scanning the referrer, sends back either the success or failure image
WebCore:
Reviewed by Maciej
<rdar://problem/4965133> WebKit sends file:// url referrers
- loader/SubresourceLoader.cpp: (WebCore::SubresourceLoader::create): In SubresourceLoader::create(), we make a copy of the original request to use for the load. We then call FrameLoader::canLoad() which tells us if we should hide the referer. Before this fix if it said to hide the referrer, we would simply not apply a new referrer to our copy of the request. But if the original request already had a referrer, so did our copy. We simply have to clear the referrer from the copied request.
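As a rough illustration of the pattern described in this entry (a stand-in Request type, not WebCore's actual ResourceRequest API): the copied request inherits whatever referrer the original carried, so hiding the referrer has to actively clear it rather than merely skip setting a new one.

#include <cassert>
#include <string>

struct Request {
    std::string url;
    std::string referrer;
};

// Simplified policy stand-in: never leak local file paths to the network.
static bool shouldHideReferrer(const std::string& referrer)
{
    return referrer.rfind("file://", 0) == 0;
}

static Request prepareSubresourceRequest(const Request& original, const std::string& frameReferrer)
{
    Request copy = original; // the copy inherits whatever referrer the original carried
    if (shouldHideReferrer(frameReferrer))
        copy.referrer.clear(); // must actively clear it, not merely skip assigning a new one
    else
        copy.referrer = frameReferrer;
    return copy;
}

int main()
{
    Request original { "http://example.com/image.png", "file:///Users/me/page.html" };
    Request prepared = prepareSubresourceRequest(original, "file:///Users/me/page.html");
    assert(prepared.referrer.empty());
    return 0;
}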
WebKitTools:
Reviewed by Maciej
<rdar://problem/4965133> WebKit sends file:// url referrers
- Scripts/run-webkit-tests: Enhanced the http tests so that we can run layout tests on local files, but have an httpd for remote resources
- 02:35 Changeset [19552] by
- 3 edits, 1 add in trunk/LayoutTests
- add missing result and update tests to work right from DumpRenderTree
- fast/dom/Window/resources/window-appendages-cleared-results.html:
- fast/dom/Window/window-appendages-cleared-expected.txt: Added.
- fast/dom/Window/window-appendages-cleared.html:
- 01:12 Changeset [19551] by
- 4 edits, 2 adds in trunk
LayoutTests:
Regression not reviewed, based loosely on test case from Ian Eng.
- test case for <rdar://problem/4988091> objects attached to Window not cleared (memory leak)
- fast/dom/Window/resources/window-appendages-cleared-results.html: Added.
- fast/dom/Window/window-appendages-cleared.html: Added.
WebCore:
Reviewed by me, patch from Ian Eng (cleaned up by me some).
- fixed <rdar://problem/4988091> objects attached to Window not cleared (memory leak)
Test case: fast/dom/Window/window-appendages-cleared.html
- bindings/js/kjs_window.cpp: (KJS::Window::clearHelperObjectProperties): (KJS::Window::clear):
- bindings/js/kjs_window.h:
02/09/07:
- 21:15 Changeset [22932] by
- 7 edits in branches/WindowsMerge/WebKitWin
Land Maciej's patch so I can check in.
(Looks like he forgot, since he did move the bug to integrate)
- 17:57 Changeset [22931] by
- 4 edits in branches/WindowsMerge
WebCoreWin:
Reviewed by Brady and Adam.
Fixed <rdar://4986194> Typing in content editable body does not automatically scroll to reveal cursor
- platform/win/ScrollViewWin.cpp: (WebCore::ScrollView::updateScrollbars): The scrollview's scrolloffset wasn't getting set in cases where there were no scrollbars. You can have a scrolloffset even if you don't have scrollbars.
WebKitWin:
Reviewed by Brady and Adam.
Fixed <rdar://4986194> Typing in content editable body does not automatically scroll to reveal cursor
fix depends on corresponding fix in WebCoreWin
- WebView.cpp: (WebViewWndProc): call the editor to handle inserting text and scrolling/focus changes
- 17:39 Changeset [19550] by
- 2 edits in trunk/WebCore
Reviewed by Tim Hatcher
<rdar://problem/4985497> - Plugs a potential null DocumentLoader deref when
transitioning out of the Bookmarks View
- loader/FrameLoader.cpp: (WebCore::FrameLoader::recursiveGoToItem): add a null check
- 16:58 Changeset [19549] by
- 2 edits in trunk/WebKit
Reviewed by Darin & Maciej.
Fixed: <rdar://problem/4930688> REGRESSION: missing images when reloading webarchives (11962)
- WebCoreSupport/WebFrameLoaderClient.mm: (WebFrameLoaderClient::canUseArchivedResource): The bug here is that because a reload sets a cache policy of NSURLRequestReloadIgnoringCacheData (rightfully so), this method was refusing to load subresources in WebArchives. It's OK to use archive subresources for the NSURLRequestReloadIgnoringCacheData cache policy because we're not worried about the actual contents of a WebArchive changing on disk.
- 16:15 Changeset [19548] by
- 1 edit in trunk/WebCore/WebCore.xcodeproj/project.pbxproj
Attempt to fix the build when using buildit.
- 15:53 Changeset [19547] by
- 2 edits in trunk/WebCore
Rubber-stamped by John . . . and Adam.
- page/ContextMenuController.cpp: (WebCore::ContextMenuController::contextMenuItemSelected): Missing break.
- 15:27 Changeset [19546] by
- 7 edits, 4 adds in trunk
LayoutTests:
Reviewed by darin
- editing/selection/4975120-expected.checksum: Added.
- editing/selection/4975120-expected.png: Added.
- editing/selection/4975120-expected.txt: Added.
- editing/selection/4975120.html: Added.
WebCore:
Reviewed by darin
<rdar://problem/4975120>
REGRESSION: double-cursor after switching window away/back (11770)
<>
Gmail Editor: Caret can simultaneously appear in both the TO: and message body fields
- page/mac/WebCoreFrameBridge.h: Removed two unused methods left over from the old form control implementation.
WebKit:
Reviewed by darin
<rdar://problem/4975120>
REGRESSION: double-cursor after switching window away/back (11770)
<>
Gmail Editor: Caret can simultaneously appear in both the TO: and message body fields
- WebCoreSupport/WebFrameBridge.mm: Removed unused methods.
- WebView/WebHTMLView.mm: Ditto. (-[WebHTMLView _web_firstResponderCausesFocusDisplay]): Don't appear focused if a descendant view is firstResponder. (-[WebHTMLView _updateActiveState]): Removed the check for a BOOL that was always false.
- WebView/WebHTMLViewInternal.h: Removed a BOOL that's always false.
- 14:51 Changeset [19545] by
- 2 edits in trunk/WebCore
- rendering/bidi.cpp: (WebCore::bidiNext): At Darin's suggestion, moved the "next = 0" line from my previous patch to the start of the loop body
- 14:37 Changeset [19544] by
- 3 edits in S60/branches/3.1m/WebKit
yaharon, Reviewed by yongjun
DESC: [S60] BrowserNG: Passwords stored without notifying the user MLIO-6XXE6N
- 14:35 Changeset [19543] by
- 3 edits, 4 adds in trunk
LayoutTests:
Reviewed by john
<rdar://problem/4960116>
REGRESSION: Nightly Safari crashes in WebCore::SelectionController::xPosForVerticalArrowNavigation (12416)
- editing/selection/4960116-expected.checksum: Added.
- editing/selection/4960116-expected.png: Added.
- editing/selection/4960116-expected.txt: Added.
- editing/selection/4960116.html: Added.
WebCore:
Reviewed by john
<rdar://problem/4960116>
REGRESSION: Nightly Safari crashes in WebCore::SelectionController::xPosForVerticalArrowNavigation (12416)
- editing/SelectionController.cpp: (WebCore::SelectionController::xPosForVerticalArrowNavigation): Null check. VisiblePosition creation can fail if a node that contains the selection was made invisible after the selection was made and before this function is called during a selection modification operation.
- 14:25 Changeset [19542] by
- 3 edits, 4 adds in trunk
LayoutTests:
Reviewed by john
<rdar://problem/4983858>
REGRESSION: In a new mail message, attempting to select a single word causes the selection to extend to the previous line
- editing/selection/4983858-expected.checksum: Added.
- editing/selection/4983858-expected.png: Added.
- editing/selection/4983858-expected.txt: Added.
- editing/selection/4983858.html: Added.
WebCore:
Reviewed by john
<rdar://problem/4983858>
REGRESSION: In a new mail message, attempting to select a single word causes the selection to extend to the previous line
- editing/TextIterator.cpp: (WebCore::SimplifiedBackwardsTextIterator::exitNode): We recently split shouldEmitNewlineForNode into shouldEmitNewline{Before, After}Node, so this function now needs an implementation that is different from SimplifiedBackwardsTextIterator::handleNonTextNode. The difference is that we must call shouldEmit*BeforeNode instead of shouldEmit*AfterNode since we are a) exiting nodes and b) moving backward.
- 14:11 Changeset [19541] by
- 13 edits, 4 adds in trunk
LayoutTests:
Reviewed by john
<rdar://problem/4916541>
Some of the selection isn't preserved during an Indent operation
Added:
- editing/execCommand/4916541-expected.checksum: Added.
- editing/execCommand/4916541-expected.png: Added.
- editing/execCommand/4916541-expected.txt: Added.
- editing/execCommand/4916541.html: Added. Fixed:
- editing/execCommand/4641880-2-expected.checksum:
- editing/execCommand/4641880-2-expected.png:
- editing/execCommand/4641880-2-expected.txt:
- editing/execCommand/indent-selection-expected.checksum:
- editing/execCommand/indent-selection-expected.png:
- editing/execCommand/indent-selection-expected.txt: Added a FIXME:
- editing/execCommand/indent-list-item-expected.checksum:
- editing/execCommand/indent-list-item-expected.png:
- editing/execCommand/indent-list-item-expected.txt:
- editing/execCommand/indent-list-item.html:
WebCore:
Reviewed by john
<rdar://problem/4916541>
Some of the selection isn't preserved during an Indent operation
- editing/IndentOutdentCommand.cpp: (WebCore::indexForVisiblePosition): Added. (WebCore::IndentOutdentCommand::indentRegion): Use rangeLength and rangeFromLocationAndLength to restore the selection after the repeated moveParagraph calls necessary to perform indent are finished.
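The offset-based save/restore idea can be sketched with a plain-text stand-in for the document (the real code works on DOM positions via rangeLength/rangeFromLocationAndLength; the types below are invented for the example): record the selection as character offsets before the mutation, then remap the offsets afterwards instead of holding on to positions that the edit invalidates.

#include <cassert>
#include <string>

struct Selection { size_t location; size_t length; };

// Indent every line by two spaces and report how many characters were
// inserted at or before the given offset so the caller can remap it.
static std::string indentAll(const std::string& text, size_t offset, size_t& insertedBefore)
{
    std::string result;
    insertedBefore = 0;
    bool atLineStart = true;
    for (size_t i = 0; i < text.size(); ++i) {
        if (atLineStart) {
            result += "  ";
            if (i <= offset)
                insertedBefore += 2;
            atLineStart = false;
        }
        result += text[i];
        if (text[i] == '\n')
            atLineStart = true;
    }
    return result;
}

int main()
{
    std::string doc = "alpha\nbeta\n";
    Selection sel { 6, 4 }; // selects "beta"
    size_t shift = 0;
    std::string indented = indentAll(doc, sel.location, shift);
    Selection restored { sel.location + shift, sel.length };
    assert(indented.substr(restored.location, restored.length) == "beta");
    return 0;
}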
- 14:05 Changeset [19540] by
- 5 edits in S60/branches/3.1m/WebKit
bujtas, Reviewed by Yongjun.
DESC: Can not open the Browser application. fix: delay formmanager construct ALES-6Y9GG7
- 14:05 Changeset [19539] by
- 2 edits in trunk/WebCore
Reviewed by Kevin Decker
- fixed <rdar://problem/4960095> REPRODUCIBLE HANG: WebKit freezes when printing as PDF a certain kind of code (12449)
No test case because I don't know how to make the bug occur without printing.
- rendering/bidi.cpp: (WebCore::bidiNext): reset loop's "next" variable after using it; we were setting "current" to the same value of "next" each subsequent time through the loop, which is not helpful.
- 13:50 Changeset [22930] by
- 2 edits in branches/WindowsMerge/WebCore
Reviewed by Adele.
- Add TextEvent to the DOM (based on the proposed DOM level 3) to be used to fix some international input bugs soon. At this point, we don't send any text events.
- Remove some obsolete files.
- WebCore.vcproj/WebCore.vcproj: Add new files, remove obsolete files.
- 13:49 Changeset [19538] by
- 14 edits, 3 adds, 4 deletes in trunk/WebCore
Reviewed by Adele.
- Add TextEvent to the DOM (based on the proposed DOM level 3) to be used to fix some international input bugs soon. At this point, we don't send any text events.
- Remove some obsolete files.
- CMakeLists.txt:
- WebCore.pro:
- WebCore.xcodeproj/project.pbxproj:
- WebCoreSources.bkl: Add new files, remove obsolete files.
- DerivedSources.make: Add TextEvent to the Objective-C and JavaScript bindings lists.
- WebCore.exp: Export DOMTextEvent Objective-C wrapper.
- bindings/js/kjs_events.cpp: (KJS::toJS): Added TextEvent to the list of Event subclasses so we make the right kind of JS wrapper.
- bindings/objc/DOMEvents.mm: (+[DOMEvent _eventWith:]): Same thing, for Objective-C.
- bindings/objc/DOMInternal.h: Added DOMTextEventInternal.h.
- dom/DOMImplementation.cpp: (WebCore::DOMImplementation::hasFeature): Added "TextEvents", "3.0" to the list of things we'll answer true for (as specified in the DOM Level 3 documentation). This isn't so great until we actually send textInput events, but that's coming soon.
- dom/Document.cpp: (WebCore::Document::createEvent): Add "TextEvent" as a way to make a TextEvent (as specified in the DOM Level 3 documentation).
- dom/Event.h:
- dom/Event.cpp: (WebCore::Event::isTextEvent): Added virtual function to be used for runtime type checking of Event objects (as for other event types).
- dom/TextEvent.cpp: Added.
- dom/TextEvent.h: Added.
- dom/TextEvent.idl: Added.
- platform/mac/WebCoreWidgetHolder.h: Removed.
- rendering/CounterListItem.h: Removed.
- rendering/CounterResetNode.cpp: Removed.
- rendering/CounterResetNode.h: Removed.
- 13:16 Changeset [22929] by
- 2 edits in branches/WindowsMerge/WebCore
Reviewed by Adam.
- plugins/win/PluginViewWin.cpp: (WebCore::PluginViewWin::performRequest): Add the stream to the m_streams hash set.
- 12:33 Changeset [19537] by
- 2 edits in trunk/WebCore
Reviewed by Geoff.
<rdar://problem/4816376>
REGRESSION: NetNewsWire 3.0 - Crashes in WebDocumentLoaderMac::attachToFrame() (12674)
The bug was that the NNW policy delegate never calls back on the policy listener so we'll try to do a load
while there's a policy decision underway. The extra call to setPolicyDocumentLoader would cause a detached (and deallocated)
WebDataSource to be reattached and thus causing a crash.
- loader/FrameLoader.cpp: (WebCore::FrameLoader::load): Remove extra call to setPolicyDocumentLoader.
- 12:06 Changeset [19536] by
- 8 edits in trunk
WebCore:
Fix for <rdar://problem/4674537> REGRESSION: Adobe Acrobat 8 - Text
blinks when mouse is moved, and is invisible otherwise
Acrobat 8 was relying on a WebKit bug that was fixed about a year
ago with r12753. The bug was that we would not reload a page if the
source of an iframe was set to the same value it already was. Now
that we have fixed the bug, Acrobat constantly reloads their EULA,
making it blinky and impossible to read.
No layout test since the fix is to add an Acrobat-specific quirk.
- WebCore.exp:
- html/HTMLFrameElementBase.cpp: (WebCore::HTMLFrameElementBase::setLocation): If the new url is the same as the old one and we are honoring the Acrobat quirk, don't do anything.
- page/Settings.cpp: (WebCore::Settings::Settings): (WebCore::Settings::setNeedsAcrobatFrameReloadingQuirk):
- page/Settings.h: (WebCore::Settings::needsAcrobatFrameReloadingQuirk):
WebKit:
Reviewed by Darin.
Fix for <rdar://problem/4674537> REGRESSION: Adobe Acrobat 8 - Text
blinks when mouse is moved, and is invisible otherwise
Allow quirk if the Application was linked before 3.0 and if the
application is Adobe Acrobat.
- Misc/WebKitVersionChecks.h:
- WebView/WebView.mm: (-[WebView _updateWebCoreSettingsFromPreferences:]):
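A minimal sketch of the quirk described above, with stand-in Settings and frame types rather than WebCore's: when the quirk is enabled, assigning a frame the URL it already has becomes a no-op instead of triggering another load.

#include <iostream>
#include <string>

struct Settings { bool needsAcrobatFrameReloadingQuirk; };

struct FrameElement {
    std::string url;
    const Settings* settings;

    void setLocation(const std::string& newURL)
    {
        if (settings && settings->needsAcrobatFrameReloadingQuirk && newURL == url)
            return; // quirk: assigning the URL the frame already has is a no-op
        url = newURL;
        std::cout << "loading " << url << "\n"; // stands in for scheduling a real frame load
    }
};

int main()
{
    Settings settings { true };
    FrameElement frame { "about:blank", &settings };
    frame.setLocation("https://example.com/eula.html"); // first assignment: loads
    frame.setLocation("https://example.com/eula.html"); // same URL again: ignored under the quirk
    return 0;
}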
- 12:00 Changeset [19535] by
- 2 edits in trunk/WebCore
Rubberstamped by Dave Harrison
Disable the thread-check assertion in WebCore, as well as Webkit
- WebCore.xcodeproj/project.pbxproj:
- 11:28 Changeset [19534] by
- 8 edits in trunk):
WebCore:
Reviewed by Geoff.
No need to pause timeout checks anymore.
- bindings/js/kjs_window.cpp: (KJS::WindowFunc::callAsFunction):
- 11:07 Changeset [19533] by
- 2 edits in S60/trunk/WebKit
raalexan, Reviewed by Yongjun.
DESC: Input element deactivation methods not working (TSW TMCN-6XYRVX)
- 11:04 Changeset [19532] by
- 3 edits in S60/trunk/WebCore
yaharon, reviewed by zalan
DESC: [S60] Browser crashes when selecting the left Soft key Options when the cursor is in textarea field (SCHY-6Y7SHD)
- 10:48 Changeset [19531] by
- 2 edits in trunk/WebKit
Reviewed by Brady.
- WebKit.exp: Add WebBaseNetscapePluginView to the export list.
- 10:44 Changeset [19530] by
- 1 edit in trunk/WebKitTools/DumpRenderTree/EventSendingController.m
Build fix. Use 0 or 0.0 instead of nil to prevent a compile warning.
- 08:50 Changeset [22928] by
- 2 edits in branches/WindowsMerge/WebKitWin
Reviewed by Beth
- WebKitWin part of fix for radar 4939636, problems with context menu items and binaries linked against WebKit 2.0.
- Interfaces/IWebUIDelegate.idl: bumped enum value for new SPI tags to match change in WebCore/WebKit
- 08:48 Changeset [19529] by
- 7 edits in trunk
WebCore:
Reviewed by Beth
- WebCore part of fix for radar 4939636, problems with context menu items and binaries linked against WebKit 2.0.
- platform/ContextMenuItem.h: (WebCore::): Tweaked comment; bumped enum value for new SPI tags to avoid conflict with pre-3.0 SPI tag values.
WebKit:
Reviewed by Beth
- WebKit part of fix for radar 4939636, problems with context menu items and binaries linked against WebKit 2.0.
- WebKit.xcodeproj/project.pbxproj: Changed DYLIB_CURRENT_VERSION to 2 (was 1)
- Misc/WebKitVersionChecks.h: Added constant WEBKIT_FIRST_VERSION_WITH_3_0_CONTEXT_MENU_TAGS, which is 2 but in the weird format that these version checks use.
- WebView/WebUIDelegatePrivate.h: Tweaked comments; included the old values for three tags for context menu items that changed from SPI to API in 3.0; renamed WEBMENUITEMTAG_SPI_START to WEBMENUITEMTAG_WEBKIT_3_0_SPI_START for clarity, and bumped its value to avoid conflict with the three old values
- WebCoreSupport/WebContextMenuClient.mm: (isAppleMail): new helper function that checks the bundle identifier (fixMenusToSendToOldClients): Removed return value for clarity; now checks linked-on version and also makes special case for Mail; now replaces three API tags with their old SPI values for clients that linked against old WebKit version, in addition to replacing new API with WebMenuItemTagOther for items that had no specific tag before. (fixMenusReceivedFromOldClients): Removed return value for clarity; removed defaultMenuItems parameter because it's no longer necessary; removed code that tried to recognize menus that got confused by the SPI -> API change (we now pass the old SPI values to these clients to avoid confusing them); now restores the tags for the items whose tags were replaced in fixMenusToSendToOldClients (this used to restore the tags of the default items rather than the new items, which was incorrect but happened to work since the clients we tested were using the objects from the default items array in their new items array) (WebContextMenuClient::getCustomMenuFromDefaultItems): Updated to account for the removed return values for the two fix-up methods; moved the autorelease of newItems here, which is clearer and was the source of a leak before.
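The tag-translation idea can be sketched as two small mapping passes. The numeric tag values below are invented for illustration; only the function names echo the entry above, and the real code inspects the linked-on WebKit version before deciding to translate.

#include <cstdio>
#include <map>
#include <vector>

using Tag = int;

// Hypothetical mapping from tags promoted to API in 3.0 back to their old SPI values.
static const std::map<Tag, Tag>& newToOldTags()
{
    static const std::map<Tag, Tag> mapping = { { 2000, 100 }, { 2001, 101 }, { 2002, 102 } };
    return mapping;
}

static void fixMenusToSendToOldClients(std::vector<Tag>& tags)
{
    for (Tag& tag : tags) {
        auto it = newToOldTags().find(tag);
        if (it != newToOldTags().end())
            tag = it->second; // old client sees the value it was compiled against
    }
}

static void fixMenusReceivedFromOldClients(std::vector<Tag>& tags)
{
    for (Tag& tag : tags) {
        for (const auto& [newTag, oldTag] : newToOldTags()) {
            if (tag == oldTag) {
                tag = newTag; // translate back before handing the menu to the engine
                break;
            }
        }
    }
}

int main()
{
    std::vector<Tag> menu = { 2000, 7, 2002 };
    fixMenusToSendToOldClients(menu);      // becomes 100, 7, 102
    fixMenusReceivedFromOldClients(menu);  // restored to 2000, 7, 2002
    std::printf("%d %d %d\n", menu[0], menu[1], menu[2]);
    return 0;
}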
- 05:55 Changeset [19528] by
- 6 edits, 18 adds in trunk
2007-02-09 Nicholas Shanks <[email protected]>
Reviewed by Dave Hyatt.
Removed broken recognition of :last-* and :only-* selectors
2007-02-09 Nicholas Shanks <[email protected]>
Reviewed by Dave Hyatt.
Removed broken recognition of :last-* and :only-* selectors
Test results show red indicating property unsupported
Previous behaviour was to erroneously make everything green
- css3/expected_failures/css3-modsel-33-expected.checksum: Added.
- css3/expected_failures/css3-modsel-33-expected.png: Added.
- css3/expected_failures/css3-modsel-33-expected.txt: Added.
- css3/expected_failures/css3-modsel-33.html: Added.
- css3/expected_failures/css3-modsel-35-expected.checksum: Added.
- css3/expected_failures/css3-modsel-35-expected.png: Added.
- css3/expected_failures/css3-modsel-35-expected.txt: Added.
- css3/expected_failures/css3-modsel-35.html: Added.
- css3/expected_failures/css3-modsel-36-expected.checksum: Added.
- css3/expected_failures/css3-modsel-36-expected.png: Added.
- css3/expected_failures/css3-modsel-36-expected.txt: Added.
- css3/expected_failures/css3-modsel-36.html: Added.
- css3/expected_failures/css3-modsel-37-expected.checksum: Added.
- css3/expected_failures/css3-modsel-37-expected.png: Added.
- css3/expected_failures/css3-modsel-37-expected.txt: Added.
- css3/expected_failures/css3-modsel-37.html: Added.
- 05:24 Changeset [19527] by
- 2 edits in trunk/WebCore
Reviewed by Mark.
<rdar://problem/4980176>
- page/Frame.cpp: (WebCore::Frame::pageDestroyed): Since this frame is getting disconnected from its page, ensure it is not the focus node.
- 02:57 Changeset [19526] by
- 2 edits in trunk/WebCore
Reviewed by Maciej.
gdklauncher crashes when compiled with NDEBUG defined.
- Projects/gdk/webcore-gdk.bkl:
- 02:44 Changeset [19525] by
- 2 edits in trunk/WebCore
Reviewed by Mitz.
<rdar://problem/4971224> REGRESSION: ASSERT in WebCore with Mail (12491)
No test case. Not testable since there is no way to do substitute
data loads from layout tests.
- loader/MainResourceLoader.cpp: (WebCore::MainResourceLoader::continueAfterContentPolicy): Don't dispatch data load callback when loading empty data.
- 00:50 Changeset [19524] by
- 4 edits in trunk/WebCore
2007-02-09 Mark Rowe <[email protected]>
Reviewed by Maciej.
REGRESSION: Crash with user stylesheet set
Allow the Frame::canLoad check to be skipped so that user stylesheets can be loaded in remote documents.
- ChangeLog:
- loader/DocLoader.cpp: (WebCore::DocLoader::requestCSSStyleSheet): Skip canLoad check if this is a user stylesheet. (WebCore::DocLoader::requestUserCSSStyleSheet): (WebCore::DocLoader::requestResource): Allow canLoad check to be skipped.
- loader/DocLoader.h:
- page/Frame.cpp: (WebCore::UserStyleSheetLoader::UserStyleSheetLoader):
02/08/07:
- 22:49 Changeset [19523] by
- 3 edits in trunk/WebCore
Reviewed by Darin.
Linux/gdk build fixes.
- Projects/gdk/webcore-gdk.bkl: Account for file renaming.
- platform/gdk/KeyEventGdk.cpp: Make gdk's tab key recognized as tab so that keyboard link walking works on gdk. (WebCore::keyIdentifierForGdkKeyCode):
- 22:29 Changeset [19522] by
- 5 edits, 2 adds in trunk
LayoutTests:
Reviewed by Brady.
Test for
<rdar://problem/4973507> REGRESSION: When replying in Gmail, the caret disappears when you start to type (12599)
- fast/frames/iframe-window-focus-expected.txt: Added.
- fast/frames/iframe-window-focus.html: Added.
WebCore:
Reviewed by Brady.
Fix for
<rdar://problem/4973507> REGRESSION: When replying in Gmail, the caret disappears when you start to type (12599)
When a frame's window was focused, the page didn't get updated about the new frame getting focus.
This was causing handleKeyPress to fail because it kept getting a selection for the wrong frame (which wasn't editable).
Test: fast/frames/iframe-window-focus.html
- page/Frame.cpp: (WebCore::Frame::focusWindow): (WebCore::Frame::unfocusWindow):
- page/Frame.h:
- page/mac/FrameMac.mm: (WebCore::FrameMac::focusWindow): (WebCore::FrameMac::unfocusWindow):
- 22:29 Changeset [22927] by
- 3 edits in branches/WindowsMerge/WebCore
Reviewed by Brady.
Fix for
<rdar://problem/4973507> REGRESSION: When replying in Gmail, the caret disappears when you start to type (12599)
This also fixes a bug where when you called window.focus() on a background window, it did not come to the front.
- bridge/win/FrameWin.h: Removed focusWindow() and unfocusWindow() stubs since there are now implementations in the base class.
- platform/win/TemporaryLinkStubs.cpp:
- 21:44 Changeset [22926] by
- 2 edits in branches/WindowsMerge/WebKitWin
WebKitWin:
Reviewed by Adam.
- WebView.cpp: (WebView::searchFor): Ever since 11396, the widget no longer handles frame focus changes. This is now the page's focus controller responsibility
- 21:43 Changeset [19521] by
- 2 edits in trunk/WebKitSite
2007-02-08 Mark Rowe <[email protected]>
Reviewed by Tim Hatcher.
- nav.inc: Add link to very work-in-progress DOM documentation.
- 21:28 Changeset [22925] by
- 6 edits in branches/WindowsMerge/WebKitWin
Initial checkin for resume support (compiles everywhere, needs ToT CFnetwork to actually work)
- 20:20 Changeset [19520] by
- 5 edits, 4 adds in trunk
LayoutTests:
Reviewed by Brady.
Test for
<rdar://problem/4971222> REGRESSION (NativeListBox): Deselecting option causes list to jump to top
- fast/forms/listbox-deselect-scroll-expected.checksum: Added.
- fast/forms/listbox-deselect-scroll-expected.png: Added.
- fast/forms/listbox-deselect-scroll-expected.txt: Added.
- fast/forms/listbox-deselect-scroll.html: Added.
WebCore:
Reviewed by Brady.
Fix for
<rdar://problem/4971222> REGRESSION (NativeListBox): Deselecting option causes list to jump to top
Test: fast/forms/listbox-deselect-scroll.html
- html/HTMLSelectElement.cpp: (WebCore::HTMLSelectElement::activeSelectionStartListIndex): Added. Returns the index for the active selection. If there is no active selection, it returns the first selected index. (WebCore::HTMLSelectElement::activeSelectionEndListIndex): Added. If there is no active selection, it returns the last selected index.
- html/HTMLSelectElement.h:
- rendering/RenderListBox.cpp: (WebCore::RenderListBox::scrollToRevealSelection): Instead of using the first and last selected indices, use the active selection indices to determine which item to reveal. This way, when you're selecting with the keyboard, or the mouse, no unnecessary scrolling will occur if the end of your active selection is already visible.
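A minimal sketch of a scroll-to-reveal that leaves the offset alone when the target row is already visible (a stand-in list box, not RenderListBox):

#include <algorithm>
#include <cassert>

struct ListBox {
    int itemCount = 0;
    int visibleItems = 0;
    int scrollIndex = 0; // index of the first visible row

    void scrollToReveal(int index)
    {
        if (index < scrollIndex)
            scrollIndex = index;                    // scroll up just enough
        else if (index >= scrollIndex + visibleItems)
            scrollIndex = index - visibleItems + 1; // scroll down just enough
        // otherwise the row is already visible and the offset stays put
        scrollIndex = std::clamp(scrollIndex, 0, std::max(0, itemCount - visibleItems));
    }
};

int main()
{
    ListBox box { 20, 5, 10 };  // rows 10..14 visible
    box.scrollToReveal(12);     // already visible: no movement
    assert(box.scrollIndex == 10);
    box.scrollToReveal(3);      // above the viewport: scroll up
    assert(box.scrollIndex == 3);
    return 0;
}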
- 19:45 Changeset [19519] by
- 3 edits in S60/branches/3.1m/WebCore
yaharon, Reviewed by zalan
DESC: crash when selecting the left Soft key Options when the cursor is in textarea field SCHY-6Y7SHD
- 18:24 Changeset [19518] by
- 3 edits, 4 adds in trunk
LayoutTests:
Reviewed by Hyatt.
Test for: <rdar://problem/4963411> Items of SELECT element are incorrectly highlighted when display:block is set
- fast/forms/select-block-background-expected.checksum: Added.
- fast/forms/select-block-background-expected.png: Added.
- fast/forms/select-block-background-expected.txt: Added.
- fast/forms/select-block-background.html: Added.
WebCore:
Reviewed by Hyatt.
Fix for <rdar://problem/4963411> Items of SELECT element are incorrectly highlighted when display:block is set
Test: fast/forms/select-block-background.html
- rendering/RenderListBox.cpp: (WebCore::RenderListBox::paintObject): Paint the item backgrounds during the PaintPhaseChildBlockBackground or PaintPhaseChildBlockBackgrounds phase.
- 18:21 Changeset [19517] by
- 7 edits, 4 adds in trunk
2007-02-08 Mitz Pettel <[email protected]>
Reviewed by Adele.
- fix REGRESSION: Empty options cause the entire select to collapse
Test: fast/forms/select-empty-option-height.html
- fix REGRESSION (r16044): Clicking a popup changes layout around it
- rendering/RenderMenuList.cpp: (WebCore::RenderMenuList::setText): If the option text is empty, use a RenderBR as inner text, to ensure that the inner div has line height.
2007-02-08 Mitz Pettel <[email protected]>
Reviewed by Adele.
- test for REGRESSION: Empty options cause the entire select to collapse
- updated results for REGRESSION (r16044): Clicking a popup changes layout around it
- fast/forms/HTMLOptionElement_label07-expected.txt:
- fast/forms/form-element-geometry-expected.txt:
- fast/forms/select-baseline-expected.txt:
- fast/forms/select-empty-option-height-expected.checksum: Added.
- fast/forms/select-empty-option-height-expected.png: Added.
- fast/forms/select-empty-option-height-expected.txt: Added.
- fast/forms/select-empty-option-height.html: Added.
- fast/replaced/three-selects-break-expected.txt:
- 17:48 Changeset [19516] by
- 5 edits in trunk
WebCore:
Reviewed by Beth Dakin.
Added a hard counter for SubresourceLoaders because the leaks tool now
ignores them.
- loader/SubresourceLoader.cpp: (WebCore::): (WebCore::SubresourceLoaderCounter::~SubresourceLoaderCounter): (WebCore::SubresourceLoader::SubresourceLoader): (WebCore::SubresourceLoader::~SubresourceLoader):
- page/Frame.cpp: Removed unnecessary #define
WebKitTools:
Reviewed by Beth Dakin.
Ignore another false leak report.
- Scripts/run-webkit-tests:
- 17:45 Changeset [22924] by
- 5 edits in branches/WindowsMerge/WebCore
Reviewed by Geoff.
<rdar://problem/4955068>
PluginViewWin leaks memory.
Make streams ref-counted. Remove streams from the hash set once they're done loading.
Don't try to paint if painting is disabled.
- plugins/win/PluginStreamWin.cpp: (WebCore::PluginStreamWin::PluginStreamWin): (WebCore::PluginStreamWin::cancelAndDestroyStream): (WebCore::PluginStreamWin::destroyStream):
- plugins/win/PluginStreamWin.h:
- plugins/win/PluginViewWin.cpp: (WebCore::PluginViewWin::paint): (WebCore::PluginViewWin::stop): (WebCore::PluginViewWin::~PluginViewWin): (WebCore::PluginViewWin::disconnectStream):
- plugins/win/PluginViewWin.h:
- 17:44 Changeset [19515] by
- 4 edits in trunk
LayoutTests:
Reviewed by Beth Dakin.
Updated results now that we actually return the correct ones.
- fast/css/computed-style-expected.txt:
WebCore:
Reviewed by Beth Dakin.
Fixed <rdar://problem/4982374> CSSComputedStyleDeclaration::getPropertyCSSValue
leak reported by buildbot
The leak was a typo: "new" instead of "return new". I also generously
deployed RefPtr in places that were holding ref-counted objects in
bare pointers.
- css/CSSComputedStyleDeclaration.cpp: (WebCore::valueForShadow): (WebCore::CSSComputedStyleDeclaration::getPropertyCSSValue):
- 16:49 Changeset [22923] by
- 2 edits in branches/WindowsMerge/WebCore
WebCoreWin:
Reviewed by Adele.
Fix scrollbar painting.
- 15:07 Changeset [19514] by
- 2 edits in trunk/WebKitTools
Reviewed by Adam Roben.
Linux/gdk build fixes.
- GdkLauncher/main.cpp: Add -exit-after-loading and -dump-render-tree as debugging aid. (strEq): (main):
- 15:01 Changeset [19513] by
- 1 edit in trunk/WebKitTools/Scripts/run-webkit-tests
build fix, oops!
- 14:59 Changeset [19512] by
- 4 edits, 2 copies in trunk/WebCore
Reviewed by Adam Roben.
Linux/gdk build fixes.
- platform/gdk/EditorClientGdk.cpp: Added. Based on qt version. (WebCore::EditorClientGdk::shouldDeleteRange): (WebCore::EditorClientGdk::shouldShowDeleteInterface): (WebCore::EditorClientGdk::isContinuousSpellCheckingEnabled): (WebCore::EditorClientGdk::isGrammarCheckingEnabled): (WebCore::EditorClientGdk::spellCheckerDocumentTag): (WebCore::EditorClientGdk::shouldBeginEditing): (WebCore::EditorClientGdk::shouldEndEditing): (WebCore::EditorClientGdk::shouldInsertText): (WebCore::EditorClientGdk::shouldApplyStyle): (WebCore::EditorClientGdk::didBeginEditing): (WebCore::EditorClientGdk::respondToChangedContents): (WebCore::EditorClientGdk::didEndEditing): (WebCore::EditorClientGdk::didWriteSelectionToPasteboard): (WebCore::EditorClientGdk::didSetSelectionTypesForPasteboard): (WebCore::EditorClientGdk::selectWordBeforeMenuEvent): (WebCore::EditorClientGdk::isEditable): (WebCore::EditorClientGdk::registerCommandForUndo): (WebCore::EditorClientGdk::registerCommandForRedo): (WebCore::EditorClientGdk::clearUndoRedoOperations): (WebCore::EditorClientGdk::canUndo): (WebCore::EditorClientGdk::canRedo): (WebCore::EditorClientGdk::undo): (WebCore::EditorClientGdk::redo): (WebCore::EditorClientGdk::shouldInsertNode): (WebCore::EditorClientGdk::pageDestroyed): (WebCore::EditorClientGdk::smartInsertDeleteEnabled): (WebCore::EditorClientGdk::toggleContinuousSpellChecking): (WebCore::EditorClientGdk::toggleGrammarChecking): (WebCore::EditorClientGdk::handleKeyPress): (WebCore::EditorClientGdk::EditorClientGdk): (WebCore::EditorClientGdk::setPage):
- platform/gdk/EditorClientGdk.h: Added. Ditto.
- platform/gdk/FrameGdk.cpp: Add exitAfterLoading and dumpRenderTreeAfterLoading as small debugging features. Remove FrameGdkClient as no other platform has Frame*Client anymore. Adjust for new APIs. (WebCore::FrameGdk::FrameGdk): (WebCore::FrameGdk::onDidFinishLoad): (WebCore::FrameGdk::dumpRenderTree): (WebCore::FrameGdk::keyPress): (WebCore::FrameGdk::handleGdkEvent): (WebCore::FrameGdk::focusWindow): (WebCore::FrameGdk::unfocusWindow): (WebCore::FrameGdk::getObjectInstanceForWidget): (WebCore::FrameGdk::getEmbedInstanceForWidget): (WebCore::FrameGdk::bindingRootObject): (WebCore::FrameGdk::print): (WebCore::FrameGdk::getAppletInstanceForWidget): (WebCore::FrameGdk::issueCutCommand): (WebCore::FrameGdk::issueCopyCommand): (WebCore::FrameGdk::issuePasteCommand): (WebCore::FrameGdk::issueTransposeCommand): (WebCore::FrameGdk::issuePasteAndMatchStyleCommand): (WebCore::FrameGdk::markedTextRange): (WebCore::FrameGdk::shouldChangeSelection): (WebCore::FrameGdk::respondToChangedSelection): (WebCore::FrameGdk::mimeTypeForFileName):
- platform/gdk/FrameGdk.h: Ditto. (WebCore::FrameGdk::setExitAfterLoading): (WebCore::FrameGdk::exitAfterLoading): (WebCore::FrameGdk::setDumpRenderTreeAfterLoading): (WebCore::FrameGdk::dumpRenderTreeAfterLoading): (WebCore::GdkFrame):
- platform/gdk/TemporaryLinkStubs.cpp: Adjust to new APIs. Small cleanups. (FrameView::updateBorder): (Widget::setEnabled): (Widget::isEnabled): (Widget::enableFlushDrawing): (Widget::removeFromParent): (Widget::paint): (Widget::setIsSelected): (Widget::invalidate): (Widget::invalidateRect): (PlatformMouseEvent::PlatformMouseEvent): (WebCore::findWordBoundary): (ChromeClientGdk::chromeDestroyed): (ChromeClientGdk::closeWindowSoon): (ChromeClientGdk::canTakeFocus): (ChromeClientGdk::takeFocus): (ChromeClientGdk::canRunBeforeUnloadConfirmPanel): (ChromeClientGdk::addMessageToConsole): (ChromeClientGdk::runBeforeUnloadConfirmPanel): (ChromeClientGdk::runJavaScriptAlert): (ChromeClientGdk::runJavaScriptConfirm): (ChromeClientGdk::runJavaScriptPrompt): (ChromeClientGdk::setStatusbarText): (ChromeClientGdk::shouldInterruptJavaScript): (WebCore::inputElementAltText): (WebCore::resetButtonDefaultLabel): (WebCore::searchableIndexIntroduction): (WebCore::fileButtonChooseFileLabel): (WebCore::fileButtonNoFileSelectedLabel): (WebCore::contextMenuItemTagOpenLinkInNewWindow): (WebCore::contextMenuItemTagDownloadLinkToDisk): (WebCore::contextMenuItemTagCopyLinkToClipboard): (WebCore::contextMenuItemTagOpenImageInNewWindow): (WebCore::contextMenuItemTagDownloadImageToDisk): (WebCore::contextMenuItemTagCopyImageToClipboard): (WebCore::contextMenuItemTagOpenFrameInNewWindow): (WebCore::contextMenuItemTagCopy): (WebCore::contextMenuItemTagGoBack): (WebCore::contextMenuItemTagGoForward): (WebCore::contextMenuItemTagStop): (WebCore::contextMenuItemTagReload): (WebCore::contextMenuItemTagCut): (WebCore::contextMenuItemTagPaste): (WebCore::contextMenuItemTagNoGuessesFound): (WebCore::contextMenuItemTagIgnoreSpelling): (WebCore::contextMenuItemTagLearnSpelling): (WebCore::contextMenuItemTagSearchWeb): (WebCore::contextMenuItemTagLookUpInDictionary): (WebCore::contextMenuItemTagOpenLink): (WebCore::contextMenuItemTagIgnoreGrammar): (WebCore::contextMenuItemTagSpellingMenu): (WebCore::contextMenuItemTagShowSpellingPanel): (WebCore::contextMenuItemTagCheckSpelling): (WebCore::contextMenuItemTagCheckSpellingWhileTyping): (WebCore::contextMenuItemTagCheckGrammarWithSpelling): (WebCore::contextMenuItemTagFontMenu): (WebCore::contextMenuItemTagBold): (WebCore::contextMenuItemTagItalic): (WebCore::contextMenuItemTagUnderline): (WebCore::contextMenuItemTagOutline): (WebCore::contextMenuItemTagWritingDirectionMenu): (WebCore::contextMenuItemTagDefaultDirection): (WebCore::contextMenuItemTagLeftToRight): (WebCore::contextMenuItemTagRightToLeft): (PlugInInfoStore::createPluginInfoForPluginAtIndex): (PlugInInfoStore::pluginCount): (WebCore::PlugInInfoStore::supportsMIMEType): (WebCore::refreshPlugins): (SearchPopupMenu::saveRecentSearches): (SearchPopupMenu::loadRecentSearches): (SearchPopupMenu::SearchPopupMenu): (Path::apply): ): (ResourceHandle::willLoadFromCache): (ResourceHandle::loadsBlocked): (ResourceHandle::loadResourceSynchronously): (PageCache::close): (Editor::ignoreSpelling): (Editor::learnSpelling): (Editor::isSelectionUngrammatical): (Editor::isSelectionMisspelled): (Editor::guessesForMisspelledSelection): (Editor::guessesForUngrammaticalSelection): (Editor::markMisspellingsAfterTypingToPosition): (Editor::newGeneralClipboard): (Pasteboard::generalPasteboard): (Pasteboard::writeSelection): (Pasteboard::writeURL): (Pasteboard::clear): (Pasteboard::canSmartReplace): (Pasteboard::documentFragment): (Pasteboard::plainText): (Pasteboard::Pasteboard): (Pasteboard::~Pasteboard): (ContextMenu::ContextMenu): 
(ContextMenu::~ContextMenu): (ContextMenu::appendItem): (ContextMenu::setPlatformDescription): (ContextMenu::platformDescription): (ContextMenuItem::ContextMenuItem): (ContextMenuItem::~ContextMenuItem): (ContextMenuItem::releasePlatformDescription): (ContextMenuItem::type): (ContextMenuItem::setType): (ContextMenuItem::action): (ContextMenuItem::setAction): (ContextMenuItem::title): (ContextMenuItem::setTitle): (ContextMenuItem::platformSubMenu): (ContextMenuItem::setSubMenu): (ContextMenuItem::setChecked): (ContextMenuItem::setEnabled): (WebCore::systemBeep): (WebCore::userIdleTime):
- 14:37 Changeset [22922] by
- 6 edits, 2 adds in branches/WindowsMerge/WebKitWin
Reviewed by Adam.
<rdar://problem/4972772>
Implement IWebResource::Data.
<rdar://problem/4972777>
Implement IWebDataSource::subresourceForURL.
- MemoryStream.cpp: (MemoryStream::MemoryStream): (MemoryStream::createInstance): (MemoryStream::Clone):
- MemoryStream.h: Remove notion of buffer owner, it's not needed now that the buffer itself is reference counted.
- WebDataSource.cpp: (WebDataSource::subresourceForURL): Implement this.
- WebKit.vcproj/WebKit.vcproj: Add WebResource.cpp and WebResource.h
- WebResource.cpp: Added. (WebResource::WebResource): (WebResource::~WebResource): (WebResource::createInstance): (WebResource::QueryInterface): (WebResource::AddRef): (WebResource::Release): (WebResource::initWithData): (WebResource::data): (WebResource::URL): (WebResource::MIMEType): (WebResource::textEncodingName): (WebResource::frameName):
- WebResource.h: Added.
- WebView.cpp: (WebView::formDelegate): Return E_FAIL if there's no form delegate.
- 14:37 Changeset [19511] by
- 8 edits, 1 delete in trunk/WebCore
Reviewed by Adam Roben.
Linux/gdk build fixes.
- platform/GlyphPageTreeNode.h: Fix header guard name.
- platform/gdk/ChromeClientGdk.h:
- platform/gdk/CursorGdk.cpp: (WebCore::verticalTextCursor): (WebCore::cellCursor): (WebCore::contextMenuCursor): (WebCore::noDropCursor): (WebCore::copyCursor): (WebCore::progressCursor): (WebCore::aliasCursor):
- platform/gdk/MouseEventGdk.cpp: (WebCore::PlatformMouseEvent::PlatformMouseEvent):
- platform/gdk/PageGdk.cpp: Removed. No longer used.
- platform/gdk/RenderThemeGdk.cpp: (WebCore::RenderThemeGdk::getThemeData): (WebCore::RenderThemeGdk::setCheckboxSize): (WebCore::RenderThemeGdk::paintCheckbox): (WebCore::RenderThemeGdk::setRadioSize): (WebCore::RenderThemeGdk::paintRadio): (WebCore::RenderThemeGdk::paintButton): (WebCore::RenderThemeGdk::adjustTextFieldStyle): (WebCore::RenderThemeGdk::paintTextField): (WebCore::RenderThemeGdk::paintTextArea): (WebCore::RenderThemeGdk::systemFont):
- platform/gdk/RenderThemeGdk.h:
- platform/gdk/ScreenGdk.cpp: (WebCore::screenDepth): (WebCore::screenDepthPerComponent): (WebCore::screenIsMonochrome): (WebCore::screenRect): (WebCore::screenAvailableRect):
- 14:33 Changeset [19510] by
- 2 edits in trunk/WebKitTools
Minor fixup based on Maciej's review last night.
- Scripts/run-webkit-tests: Use normal "increment at end of loop" behavior, and do a little math to make it work.
- 14:31 Changeset [22921] by
- 1 edit in branches/WindowsMerge/WebKitWin/WebKit.vcproj/VERSION
Bump version for submit
- 14:30 Changeset [19509] by
- 1 copy in tags/Safari-521.34.2b
New tag.
- 14:15 Changeset [19508] by
- 3 edits in trunk/WebCore
Reviewed by Adam Roben.
Linux/gdk build fixes for cairo.
- platform/graphics/GraphicsContext.cpp:
- platform/graphics/cairo/GraphicsContextCairo.cpp: (WebCore::GraphicsContext::GraphicsContext): (WebCore::GraphicsContext::strokeArc): (WebCore::GraphicsContext::drawFocusRing): (WebCore::GraphicsContext::setFocusRingClip): (WebCore::GraphicsContext::clearFocusRingClip): (WebCore::GraphicsContext::drawLineForMisspellingOrBadGrammar): (WebCore::GraphicsContext::origin): (WebCore::GraphicsContext::setPlatformFillColor): (WebCore::GraphicsContext::setPlatformStrokeColor): (WebCore::GraphicsContext::setPlatformStrokeThickness): (WebCore::GraphicsContext::setPlatformStrokeStyle): (WebCore::GraphicsContext::setPlatformFont): (WebCore::GraphicsContext::setURLForRect): (WebCore::GraphicsContext::addRoundedRectClip): (WebCore::GraphicsContext::addInnerRoundedRectClip): (WebCore::GraphicsContext::setShadow): (WebCore::GraphicsContext::clearShadow): ::toCairoOperator): (WebCore::GraphicsContext::setCompositeOperation): (WebCore::GraphicsContext::clip): (WebCore::GraphicsContext::rotate): (WebCore::GraphicsContext::scale): (WebCore::GraphicsContext::clipOut): (WebCore::GraphicsContext::clipOutEllipseInRect): (WebCore::GraphicsContext::fillRoundedRect):
- 14:07 Changeset [19507] by
- 6 edits in trunk
WebCore:
Reviewed by Adam Roben.
Linux/gdk build fixes.
- Projects/gdk/webcore-gdk.bkl:
- WebCoreSources.bkl:
- webcore-base.bkl:
WebKitTools:
Reviewed by Adam Roben..
Linux/gdk build fixes.
- GdkLauncher/gdklauncher.bkl:
- 13:03 Changeset [22920] by
- 2 edits in branches/WindowsMerge/WebCore
Remove unused stub.
- platform/win/TemporaryLinkStubs.cpp:
- 13:01 Changeset [19506] by
- 3 edits in trunk/WebCore
Reviewed by Tim Hatcher
Tweaked the thread violation behavior to be disabled by default, and to provide
an easy breakpoint to set.
The possibilities for the "WebCoreThreadCheck" user defaults key are -
- The value "None" disables thread checking
- The value "Log" causes an NSLog on a violation
- The value "Exception" causes exceptions to be raised on a violation
- platform/Logging.h:
- platform/mac/LoggingMac.mm: (WebCore::_WebCoreThreadViolationCheck): (WebCoreReportThreadViolation): In the global namespace, making breakpoints cake!
- 12:56 Changeset [19505] by
- 3 edits in trunk/LayoutTests
Reviewed by Adam Roben, Darin Adler.
Updated results for tests that started failing after my run-webkit-tests
check-in.
These failures were not regressions. My check-in just caused the regular
bot to behave more like the leaks bot, so it started reporting the results
that the leaks bot had been reporting all along.
There does seem to be an underlying bug in the way XHTML documents report
line numbers for JavaScript exceptions. I've file that bug along with a
reduction:
JavaScript errors in XML documents have incorrect line numbers
However, that bug is not a regression, so I think we should treat it separately.
- dom/xhtml/level2/html/frame-expected.txt:
- dom/xhtml/level2/html/iframe-expected.txt:
- 12:45 S60Webkit edited by
- (diff)
- 12:42 S60Reindeer edited by
- (diff)
- 12:38 S60Webkit edited by
- minor update to urls on s60webkit page (diff)
- 12:23 Changeset [19504] by
- 2 edits in trunk/WebKitTools
Reviewed by
- Fix layout test failures.
- Scripts/run-webkit-tests:
- 11:59 Changeset [22919] by
- 3 edits in branches/WindowsMerge/WebKitWin
WebKitWin:
Reviewed by Adam.
Add shouldInterruptJavaScript to the API.
- Interfaces/IWebUIDelegatePrivate.idl:
- WebChromeClient.cpp: (WebChromeClient::addMessageToConsole): (WebChromeClient::shouldInterruptJavaScript):
- 11:58 Changeset [19503] by
- 5 edits in trunk/WebCore
Reviewed by Tim Hatcher
<rdar://problem/4983515> Need mechanism to protect against WebKit calls from secondary threads
This initial landing is a conservative move until we can be certain of performance impact.
By writing to the user defaults key @"WebCoreThreadCheck" for the WebKit app you're running -
- The value "None" disables thread checking
- The value "Exception" causes exceptions to be raised on a violation
- The default is to do the check, and NSLog each violation
- bindings/objc/ExceptionHandlers.h: Add a "Is Main Thread" assert macro
- bindings/scripts/CodeGeneratorObjC.pm: Use new mechanism in allocs and deallocs for now
- platform/Logging.h: Added WebCoreThreadViolationCheck macro
- platform/mac/LoggingMac.mm: (WebCore::_WebCoreThreadViolationCheck): Check for main-threadedness, and do some stuff
- 11:48 Changeset [22918] by
- 2 edits in branches/WindowsMerge/WebCore
Reviewed by Brady.
<rdar://problem/4888871>
Need to support synchronous XMLHttpRequest.
- platform/network/cf/ResourceHandleCFNet.cpp: (WebCore::ResourceHandle::loadResourceSynchronously):
- 11:19 Changeset [19502] by
- 2 edits in trunk/WebKit
Reviewed by
- fixing a build breakage.
- Misc/WebNSAttributedStringExtras.mm: (fileWrapperForElement):
- 10:52 Changeset [19501] by
- 8 edits in S60/trunk
brmorris <[email protected]>, rs'd by zalan
DESC: merge from s60/branches/3.1m to s60/trunk of r19464, r19466, r19472 & r19499
- 10:39 Changeset [19500] by
- 18 edits, 17 adds in trunk
LayoutTests:
Reviewed by Maciej, Darin and Mark.
rdar://problem/4922454
- No longer allow remote sites to access local resources.
- fast/loader/local-JavaScript-from-local-expected.txt: Added.
- fast/loader/local-JavaScript-from-local.html: Added.
- fast/loader/local-iFrame-source-from-local-expected.txt: Added.
- fast/loader/local-iFrame-source-from-local.html: Added.
- fast/loader/local-image-from-local-expected.txt: Added.
- fast/loader/local-image-from-local.html: Added.
- http/tests/security/local-JavaScript-from-remote-expected.txt: Added.
- http/tests/security/local-JavaScript-from-remote.html: Added.
- http/tests/security/local-iFrame-from-remote-expected.txt: Added.
- http/tests/security/local-iFrame-from-remote.html: Added.
- http/tests/security/local-image-from-remote-expected.txt: Added.
- http/tests/security/local-image-from-remote.html: Added.
- http/tests/security/resources/compass.jpg: Added.
- http/tests/security/resources/localPage.html: Added.
- http/tests/security/resources/localPage.html.orig: Added.
- http/tests/security/resources/localScript.js: Added.
WebCore:
Reviewed by Maciej, Darin, and Mark.
rdar://problem/4922454
- Prevents remote sites from executing local scripts.
- bindings/objc/DOM.mm: - renamed a function that is now in the base class (-[DOMElement image]): (-[DOMElement _imageTIFFRepresentation]):
- dom/XMLTokenizer.cpp: - removed needless asserts (WebCore::XMLTokenizer::notifyFinished):
- html/HTMLImageLoader.cpp: - renamed a function that is now in the base class (WebCore::HTMLImageLoader::dispatchLoadEvent):
- html/HTMLTokenizer.cpp: - removed needless asserts (WebCore::HTMLTokenizer::reset): (WebCore::HTMLTokenizer::notifyFinished):
- ksvg2/misc/SVGImageLoader.cpp: - renamed a function that is now in the base class (WebCore::SVGImageLoader::dispatchLoadEvent):
- loader/Cache.cpp: - return early if an error occured (WebCore::Cache::requestResource): (WebCore::Cache::remove):
- loader/CachedImage.h: - renamed a function that is now in the base class (WebCore::CachedImage::canRender):
- loader/CachedResource.h: - renamed a function that is now in the base class (WebCore::CachedResource::errorOccurred):
- loader/CachedScript.h: - renamed a function that is now in the base class (WebCore::CachedScript::schedule):
- loader/DocLoader.cpp: - The heart of the fix, prevents resources from being created or retrieved from the cache if a remote site is requesting the local resource. (WebCore::DocLoader::requestResource): (WebCore::DocLoader::setLoadInProgress):
- page/EventHandler.cpp: - renamed a function that is now in the base class (WebCore::selectCursor):
- rendering/HitTestResult.cpp: - renamed a function that is now in the base class (WebCore::HitTestResult::image):
- rendering/RenderImage.cpp: - renamed a function that is now in the base class (WebCore::RenderImage::setCachedImage): (WebCore::RenderImage::imageChanged): (WebCore::RenderImage::paint): (WebCore::RenderImage::layout): (WebCore::RenderImage::calcAspectRatioWidth): (WebCore::RenderImage::calcAspectRatioHeight):
- rendering/RenderImage.h: - renamed a function that is now in the base class (WebCore::RenderImage::errorOccurred):
- rendering/RenderListItem.cpp: - renamed a function that is now in the base class (WebCore::RenderListItem::setStyle):
- rendering/RenderListMarker.cpp: - renamed a function that is now in the base class (WebCore::RenderListMarker::isImage):
- 10:28 Changeset [19499] by
- 4 edits in S60/branches/3.1m/WebKit
yaharon, reviewed by yongjun.
DESC: [S60] Daily BAT 3.2: S60NG_Login- Password field is not filled automatically KDEA-6XZEDY
- 10:19 Changeset [19498] by
- 2 edits in trunk/WebKitTools
Reviewed by Anders.
- Scripts/check-for-global-initializers: For speed, only check files that have been modified since the last time we linked. For tidiness, capture stderr from nm, and prevent "nm: no name list" messages from going out.
- 00:43 Changeset [22917] by
- 1 edit in branches/WindowsMerge/WebCore/platform/win/FontCacheWin.cpp
Remove my garbled fixme that I didn't intend to leave in. :)
- 00:42 Changeset [19497] by
- 2 edits in trunk/WebKitTools
Reviewed by Maciej Stachowiak, Adam Roben.
Added 'nthly' support to run-webkit-tests. It's like 'singly', for an
arbitrary number n.
Plus some renames:
- DumpRenderTree => "dumpTool" (to match abstraction elsewhere)
- checkLeaks => "shouldCheckLeaks" (to match style guidelines)
- tool => dumpTool (to match abstraction elsewhere)
- httpdOpen => isHttpdOpen (to match style guidelines)
Plus a few logic fixups:
- Don't check isDumpToolOpen when we know we've called openDumpTool().
- Use a single code path to decide when to shut down dumpTool and when to check for leaks, since the operations are coincidental.
- Use a single code path for running the leaks tool, since the only thing that varies between configurations is the output file name.
- Increment $count after each test finishes, instead of at the end of the loop, to help with comparing to the length of the array and %-ing by n.
- Use a more robust test inside the loop to determine if we need to close dumpTool, instead of copying the closing code outside the loop.
Layout tests pass.
- Scripts/run-webkit-tests:
- 00:40 Changeset [22916] by
- 2 edits in branches/WindowsMerge/WebCore/platform/win
Make Lucida Grande bold work in the engine. Required sick special case hackery for now. Also fix the validity check for initial font construction. r=aroben | http://trac.webkit.org/timeline?from=2007-02-12T15%3A36%3A34-0800&precision=second | CC-MAIN-2016-50 | en | refinedweb |
Hi,
I'm developing a DirectX 9.0c 2D game engine, currently code-named "X-Caliber Engine"...
What is the best method to limit Frames Per Second?
I am currently doing this but I don't like it:
#include <mmsystem.h>
//...
static DWORD LastFrameTime = 0;
DWORD FPSLimit = 60;
//...
//-MAIN-LOOP--------------------------------------------------------------------
{
    render();
    DWORD currentTime = timeGetTime();
    if ( (currentTime - LastFrameTime) < (1000 / FPSLimit) )
    {
        Sleep(currentTime - LastFrameTime);
    }
    LastFrameTime = currentTime;
}
//--------------------------------------------------------------------MAIN-LOOP-
The above code adds an additional LIB dependency ("winmm.lib"), which I don't like.
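One possible direction (an editorial sketch, not from the original poster): if the compiler supports C++11, the standard <chrono> and <thread> headers can replace timeGetTime()/Sleep() and remove the winmm.lib dependency entirely. render() is assumed to be the engine's existing draw call.

#include <chrono>
#include <thread>

void render();   // assumed: provided elsewhere by the engine

void mainLoop()
{
    using clock = std::chrono::steady_clock;
    const int FPSLimit = 60;
    const auto frameDuration = std::chrono::microseconds(1000000 / FPSLimit);
    auto nextFrameTime = clock::now() + frameDuration;

    for (;;)
    {
        render();
        std::this_thread::sleep_until(nextFrameTime);  // sleep only for the time remaining in this frame
        nextFrameTime += frameDuration;
    }
}

Note that the snippet in the post sleeps for the time already elapsed rather than the time remaining, so it does not actually cap the frame rate at FPSLimit.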
Any suggestions would be appreciated, thanks!
JeZ+Lee | http://www.gamedev.net/topic/649599-directx-9-best-method-to-limit-fps/ | CC-MAIN-2016-50 | en | refinedweb |
I began teaching myself programming 3 weeks ago. It's hard (for me). I'm trying to include some date validation within a program I am writing.
This is what I have so far, but invalid dates are accepted as valid, unfortunately.
e.g. November 31 2010
or February 30 2012
Have a look and help me out, please.
#include <iostream.h>
#include <conio.h>

int main()
{
    int day, dd, mm, yy, num;
    char ans;
    clrscr();
    do {
        cout << "Enter Month: \n";
        cin >> mm;
    } while (mm < 1 || mm > 12);
    do {
        cout << "Enter Day: \n";
        cin >> dd;
    } while (dd < 1 || dd > 31);
    do {
        cout << "Enter Year: \n";
        cin >> yy;
    } while (yy < 1 || yy == num);
    // validation
    switch (mm) {
    case 1:
        if (1,3,5,7,8,10,12);
        day = 31;
        break;
    case 2:
        if (4,6,9,11);
        day = 30;
        break;
    case 3 :
        if (((yy % 4 == 0) && (yy % 100 != 0)) || (yy % 400 == 0))
            day = 29;
        else
            day = 28;
    }
    if (dd, mm, yy)
        cout << "The Date " << mm << '/' << dd << '/' << yy << " is VALID!" << endl;
    else
        cout << "The Date " << mm << '/' << dd << '/' << yy << " is INVALID!" << endl;
    do {
        cout << "do you want to try again?? (Y/N) " << endl;
        cin >> ans;
    } while (ans != 'n' && ans != 'N');
    getch();
}
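A possible corrected approach (an editorial sketch, not part of the original thread) uses the standard <iostream> header, a days-in-month table, and a leap-year check, so that dates like November 31 or February 30 are rejected:

#include <iostream>
using namespace std;

bool isLeapYear(int yy)
{
    return (yy % 4 == 0 && yy % 100 != 0) || (yy % 400 == 0);
}

int daysInMonth(int mm, int yy)
{
    static const int days[12] = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};
    if (mm == 2 && isLeapYear(yy))
        return 29;     // February in a leap year
    return days[mm - 1];
}

int main()
{
    int dd, mm, yy;
    cout << "Enter Month Day Year: ";
    cin >> mm >> dd >> yy;

    // month is checked first, so daysInMonth() is only called with 1..12
    bool valid = yy >= 1 && mm >= 1 && mm <= 12
              && dd >= 1 && dd <= daysInMonth(mm, yy);

    cout << "The Date " << mm << '/' << dd << '/' << yy
         << (valid ? " is VALID!" : " is INVALID!") << endl;
    return 0;
}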
Edited 6 Years Ago by WaltP: Fixed CODE Tags | https://www.daniweb.com/programming/software-development/threads/277586/date-validating | CC-MAIN-2016-50 | en | refinedweb |
How to Convert AVI to iMovie on Mac
Problems while importing AVI to iMovie:
How to Convert AVI to iMovie?
How to use AVI to iMovie Converter for Mac?
This is a guide for Mac users. If you are a Windows user, you can go to the Video Converter guide and follow the steps of converting on Windows.
Step 1. Launch the Video Converter for Mac
Download the app and copy it to the Applications folder or anywhere else on your Mac. Run it directly, and you will see the main interface as below.
[Screenshot: main interface of the AVI to iMovie Converter for Mac]
Step 2. Import the AVI files to prepare for the AVI to iMovie conversion
[Screenshot: importing AVI files]
Step 3. Choose iMovie as the output format
Thanks to the AVI to iMovie Converter, you can directly set the output format to iMovie. The app will automatically provide the most compatible format. First, click "output setting" and choose iMovie under the list of Apple software.
[Screenshot: choosing iMovie as the output format]
Step 4. Start the AVI to iMovie conversion
After these three preparation steps, you can get the output files by clicking the Start button.
Now you can freely import the files into iMovie for editing or other purposes.
Tips for Importing AVI to iMovie on Mac:
1. Your Mac OS version must be at least 10.5.
2. You can see the estimated size on the main screen of the program.
Ready to download and convert AVI to iMovie now? Or explore more functions and features of the Video Converter for Mac:
CS::Utility::ImportKit Class Reference
Crystal Space Import Kit.
#include <cstool/importkit.h>
Detailed Description
Crystal Space Import Kit.
A class that wraps access to loader plugins and mesh factories to allow simple access to mesh data (albeit the returned data is limited compared to what the engine supports).
Definition at line 53 of file importkit.h.
Constructor & Destructor Documentation
Initialize this kit.
Member Function Documentation
Open a Crystal Space container from filename.
filename can optionally be given relative to the current directory via the path argument. (Note that the path can contain up to one zip file.) It should point to a mesh library, meshfact file or world file; the method detects whether a file contains sensible data, so you can, for example, safely call it for all files of a directory.
The documentation for this class was generated from the following file:
- cstool/importkit.h
Generated for Crystal Space 1.2.1 by doxygen 1.5.3 | http://www.crystalspace3d.org/docs/online/api-1.2/classCS_1_1Utility_1_1ImportKit.html | CC-MAIN-2014-10 | en | refinedweb |
Copyright © 2009 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply. This document is a Public Working Draft of W3C XML Schema Definition Language (XSD) 1.1. It is here made available for review by W3C members and the public. XSD 1.1 retains all the essential features of XSD 1.0, but adds several new features to support functionality requested by users, fixes many errors in XSD 1.0, and clarifies wording.
xml:lang attribute and thus to the language of the element's contents. This change was introduced to resolve issue 5003 Applicability of <alternative> element to xml:lang, raised by the W3C Internationalization Core Working Group.
schemaLocation attribute, but also in other schema documents referred to using <include> or <override> in that schema document.
schemaLocation attributes in the instance document being validated; this resolves issue 5476 xsi:schemaLocation should be a hint, should be MAY not SHOULD.
vc:typeAvailable and vc:typeUnavailable have been revised to make them complementary and to ensure that if two constructs in the input to conditional inclusion processing are marked vc:typeAvailable="A B C" and vc:typeUnavailable="A B C", respectively, then exactly one of the two will be chosen. This change resolves issue 5905 vc:typeAvailable and vc:typeUnavailable.
vc namespace. Attributes may be added to that namespace in the future, so unrecognized attributes are not errors, but until that time, the presence of unrecognized attributes is likely to indicate a problem in the input. This change resolves issue 5904 Unknown attributes in vc namespace.
blockDefault="#all" has been removed from the schema for schema documents; this change resolves issue 6120 Reconsider blockDefault=#all.
ref attributes are now retained in the ·post-schema-validation infoset· form of the containing element declaration. This change resolves issue 6144 annotation on IDC with a 'ref' attribute is lost.
For those primarily interested in the changes since version 1.0, the appendix Changes since version 1.0 (non-normative) (§H) is the recommended starting point. It summarizes both changes made since XSD 1.0 and some changes which were expected (and predicted in earlier drafts of this specification) but have not been made after all. Accompanying versions of this document display in color all changes to normative text since version 1.0 and since the previous Working Draft.
The Last Call review period for this document extends until 20 February 2009. Comments on this document should be made in W3C's public installation of Bugzilla, specifying "XML Schema" as the product. Instructions can be found at. If access to Bugzilla is not feasible, please send your comments to the W3C XML Schema comments mailing list, [email protected] (archive) Each Bugzilla entry and email message should contain only one comment.
Although feedback based on any aspect of this specification is welcome, there are certain aspects of the design presented herein for which the Working Group is particularly interested in feedback. These are designated "priority feedback" aspects of the design, and identified as such in editorial notes at appropriate points in this draft. Any feature mentioned in a priority feedback note should be considered a "feature at risk": the feature may be retained as is, modified, or dropped, depending on the feedback received from readers, schema authors, schema users, and implementors. The requirements for XSD 1.1 are discussed in the document Requirements for XML Schema 1.1. The authors of this document are the members of the XML Schema Working Group. Different parts of this specification have different editors. Information about translations of this document is available at.
This document sets out the structural part of the XML Schema Definition Language.
Chapter 2 presents a Conceptual Framework (§2) for XSD, including an introduction to the nature of XSD schemas and an introduction to the XSD abstract data model, along with other terminology used throughout this document.
Chapter 3, Schema Component Details (§3), specifies the precise semantics of each component of the abstract model and the representation of each component in XML, with reference to a DTD and an XSD schema for the XSD schema document vocabulary. The normative appendices include Schema for Schema Documents (Structures) (normative) (§A) for the XML representation of schemas and References (normative) (§B).
The non-normative appendices include the DTD for Schemas (non-normative) (§L) and a Glossary (non-normative). The Working Group has three main goals for this version of W3C XML Schema:
These goals are in tension with one another. The Working Group's strategic guidelines for changes between versions 1.0 and 1.1 can be summarized as follows:
The aim with regard to compatibility is that
The purpose of XML Schema Definition Language: Structures is to define the nature of XSD schemas and their component parts, provide an inventory of XML markup constructs with which to represent schemas, and define the application of schemas to XML documents.
The purpose of an XSD schema is to define and describe a class of XML documents by using schema components to constrain and document the meaning, usage and relationships of their constituent parts: datatypes, elements and their content and attributes and their values. Schemas can also provide for the specification of additional document information, such as normalization and defaulting of attribute and element values. Schemas have facilities for self-documentation. Thus, XML Schema Definition Language: Structures can be used to define, describe and catalogue XML vocabularies for classes of XML documents.
Any application that consumes well-formed XML can use the formalism defined here to express syntactic, structural and value constraints applicable to its document instances. The XSD formalism allows a useful level of constraint checking to be described and implemented for a wide spectrum of XML applications. However, the language defined by this specification does not attempt to provide all the facilities that might be needed by applications. Some applications will require constraint capabilities not expressible in this language, and so will need to perform their own additional validations.
The Schema Namespace (xs)
The XML representation of schema components uses a vocabulary identified by the namespace name http://www.w3.org/2001/XMLSchema. For brevity, the text and examples in this specification use the prefix xs: to stand for this namespace; in practice, any prefix can be used.
This namespace is also used, by [XDM] and related specifications, for some datatypes (untyped, untypedAtomic) which are not defined in this specification; see the [XDM] specification for details of those types.
Users of the namespaces defined here should be aware, as a matter of namespace policy, that more names in this namespace may be given definitions in future versions of this or other specifications.
The Schema Instance Namespace (xsi)
This specification defines several attributes for direct use in any XML documents, as described in Schema-Related Markup in Documents Being Validated (§2.6). These attributes are in the namespace whose name is http://www.w3.org/2001/XMLSchema-instance. For brevity, the text and examples in this specification use the prefix xsi: to stand for this namespace; in practice, any prefix can be used.
Users of the namespaces defined here should be aware, as a matter of namespace policy, that more names in this namespace may be given definitions in future versions of this or other specifications.
The Schema Versioning Namespace (vc)
The pre-processing of schema documents described in Conditional inclusion (§4.2.1) uses attributes in the namespace http://www.w3.org/2007/XMLSchema-versioning. For brevity, the text and examples in this specification use the prefix vc: to stand for this namespace; in practice, any prefix can be used.
Users of the namespaces defined here should be aware, as a matter of namespace policy, that more names in this namespace may be given definitions in future versions of this or other specifications.
Components and source declarations must not specify http://www.w3.org/2007/XMLSchema-versioning as their target namespace. If they do, then the schema and/or schema document is in ·error·.
fn bound to http://www.w3.org/2005/xpath-functions (defined in [Functions and Operators])
html bound to http://www.w3.org/1999/xhtml
my (in examples) bound to the target namespace of the example schema document
rddl bound to http://www.rddl.org/
vc bound to http://www.w3.org/2007/XMLSchema-versioning (defined in this and related specifications)
xhtml bound to http://www.w3.org/1999/xhtml
xlink bound to http://www.w3.org/1999/xlink
xml bound to http://www.w3.org/XML/1998/namespace (defined in [XML 1.1] and [XML-Namespaces 1.1])
xs bound to http://www.w3.org/2001/XMLSchema (defined in this and related specifications)
xsi bound to http://www.w3.org/2001/XMLSchema-instance (defined in this and related specifications)
xsl bound to http://www.w3.org/1999/XSL/Transform
In practice, any prefix bound to the appropriate namespace name may be used (unless otherwise specified by the definition of the namespace in question, as for xml and xmlns).
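For illustration (an editorial example, not part of the original specification text), a schema document might bind the conventional prefixes and opt in to XSD 1.1 processing as follows:

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:
           vc:
  <!-- declarations go here -->
</xs:schema>

Any other prefixes could be used instead; only the namespace names are significant.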
Sometimes other specifications or Application Programming Interfaces (APIs) need to refer to the XML Schema Definition Language in general, sometimes they need to refer to a specific version of the language. To make such references easy and enable consistent identifiers to be used, we provide the following URIs to identify these concepts. XSD version 1.0 and XSD version 1.1.
X.Y of the XSD specification. For example, the second edition of XSD version 1.0.
X.Y of the XSD specification published on the particular date yyyy-mm-dd. For example, the language defined in the XSD version 1.0 Candidate Recommendation (CR) published on 24 October 2000, and the language defined in the XSD version 1.0 Second Edition Proposed Edited Recommendation (PER) published on 18 March 2004.
Please see XSD Language Identifiers (non-normative) (§O) for a complete list of XML Schema Definition Language identifiers which exist to date.
The definition of XML Schema Definition Language: Structures depends on the following specifications: [XML-Infoset], [XML-Namespaces 1.1], [XPath 2.0], and [XML Schema: Datatypes].
See Required Information Set Items and Properties (normative) (§E) for a tabulation of the information items and properties specified in [XML-Infoset] which this specification requires as a precondition to schema-aware processing.
[XML Schema: Datatypes] defines some datatypes which depend on definitions in [XML 1.1] and [XML-Namespaces 1.1]; those definitions, and therefore the datatypes based on them, vary between version 1.0 ([XML 1.0], [XML-Namespaces 1.0]) and version 1.1 ([XML 1.1], [XML-Namespaces 1.1]) of those specifications. In any given schema-validity-·assessment· episode, the choice of the 1.0 or the 1.1 definition of those datatypes is ·implementation-defined·.
Conforming implementations of this specification may provide either the 1.1-based datatypes or the 1.0-based datatypes, or both. If both are supported, the choice of which datatypes to use in a particular assessment episode should be under user control.}.
For a given component C, an expression of the form "C.{example property}" denotes the (value of the) property {example property} for component C. The leading "C." (or more) is sometimes omitted, if the identity of the component and any other omitted properties is understood from the context. This "dot operator" is left-associative, so "C.{p1}.{p2}" means the same as "(C.{p1}) . {p2}" and denotes the value of property {p2} within the component or ·property record· which itself is the value of C's {p1} property. White space on either side of the dot operator has no significance and is used (rarely) solely for legibility.
For components C1 and C2, an expression of the form "C1 . {example property 1} = C2 . {example property 2}" means that C1 and C2 have the same value for the property (or properties) in question. Similarly, "C1 = C2" means that C1 and C2 are identical, and "C1.{example property} = C2" that C2 is the value of C1.{example property}. Where the correspondence is context-dependent, a separate tabulation determines which of several different components corresponds to the source declaration. Where a property's type is defined in [XML Schema: Datatypes], a hyperlink to its definition therein is given.
The allowed content of the information item is shown as a grammar fragment, using the Kleene operators ?, * and +. Each element name therein is a hyperlink to its own illustration.
example Element Information Item
<example
count = integer
size = (large | medium | small) : medium>
Content: (all | any*)
</example>
size [attribute]
References to elements in the text are links to the relevant illustration as exemplified above, set off with angle brackets, for instance <example>.
Unless otherwise specified, references to attribute values are references to the ·actual value· of the attribute information item in question, not to its ·normalized value· or to other forms or varieties of "value" associated with it.
For a given element information item E, expressions of the form "E has att1 = V" are short-hand for "there is an attribute information item named att1 among the [attributes] of E and its ·actual value· is V."
If the identity of E is clear from context, expressions of the form "att1 = V" are sometimes used.
The form "att1 ≠ V" is also used to specify that the ·actual value· of att1 is not V. The "dot operator" described above for components and their properties is also used for information items and their properties. For a given information item I, an expression of the form "I . [new property]" denotes the (value of the) property [new property] for item I.
Lists of normative constraints are typically introduced with phrase like "all of the following are true" (or "... apply"), "one of the following is true", "at least one of the following is true", "one or more of the following is true", "the appropriate case among the following is true", etc. The phrase "one of the following is true" is used in cases where the authors believe the items listed to be mutually exclusive (so that the distinction between "exactly one" and "one or more" does not arise). If the items in such a list are not in fact mutually exclusive, the phrase "one of the following" should be interpreted as meaning "one or more of the following". The phrase "the appropriate case among the following" is used only when the cases are thought by the authors to be mutually exclusive; if the cases in such a list are not in fact mutually exclusive, the first applicable case should be taken. Once a case has been encountered with a true condition, subsequent cases must not be tested. 1.1].
Where these terms appear without special highlighting, they are used in their ordinary senses and do not express conformance requirements. Where these terms appear highlighted within non-normative material (e.g. notes), they are recapitulating rules normatively stated elsewhere.
This specification provides a further description of error and of conformant processors' responsibilities with respect to errors in Schemas and Schema-validity Assessment (§5).
This chapter gives an overview of XML Schema Definition Language: Structures at the level of its abstract data model. Schema Component Details (§3) provides details on this model, including a normative representation in XML for the components of the model. Readers interested primarily in learning to write schema documents will find it most useful first to read [XML Schema: Primer] for a tutorial introduction, and only then to consult the sub-sections of Schema Component Details (§3) named XML Representation of ... for the details.
An XSD schema some or all of the PSVI, as described in Subset of the Post-schema-validation Infoset (§D.1). The mechanisms by which processors provide such access to the PSVI are neither defined nor constrained by this specification..1] and [XML-Namespaces 1.1]. The concepts and definitions used herein regarding XML are framed at the abstract level of information items as defined in [XML-Infoset]. By definition, this use of the infoset provides a priori guarantees of well-formedness (as defined in [XML 1.1]) and namespace conformance (as defined in [XML-Namespaces 1.1]) for all candidates for ·assessment· and for all ·schema documents·.
Just as [XML 1.1] and [XML-Namespaces 1.1] can be described in terms of information items, XSD schemas can be described in terms of an abstract data model. In defining schemas in terms of an abstract data model, this specification rigorously specifies the information which must be available to a conforming XSD processor. The following kinds of component make up the abstract data model of the schema. [Definition:] An XSD schema is a set of ·schema components·. There are several kinds of schema component, falling into three groups. The primary schema components, which may (type definitions) or must (element and attribute declarations) have names, are as follows:
The secondary schema components are as follows:
Finally, the "helper" schema components provide small parts of other schema components; they are dependent on their context:
The name [Definition:] Component covers all the different kinds of schema component defined in this specification. 1.1].
[Definition:] Several kinds of component have a target namespace, which is either ·absent· or a namespace name, also as defined by [XML-Namespaces 1.1]. The ·target namespace· serves to identify the namespace within which the association between the component and its name exists.
An expanded name, as defined in [XML-Namespaces 1.1], is a pair consisting of a namespace name, which may be ·absent·, and a local name. The expanded name of any component with both a ·target namespace· property and a ·component name· property is the pair consisting of the values of those two properties. The expanded name of a declaration is used to help determine which information items will be ·governed· by the declaration.
·Validation·, defined in detail in Schema Component Details (§3), is a relation between information items and schema components. For example, an attribute information item is ·validated· with respect to an attribute declaration, a list of element information items with respect to a content model, and so on.
Except for ·xs:anyType·, every ·type definition· is, by construction, either a ·restriction· or an ·extension· of some other type definition. The graph of these relationships forms a tree known as the Type Definition Hierarchy.
[Definition:] The type definition used as the basis for an ·extension· or ·restriction· is known as the base type definition of that definition.
[Definition:] A type defined with the same constraints as its ·base type definition·, or with more, is said to be a restriction. The added constraints might include narrowed ranges or reduced alternatives. Given two types A and B, if the definition of A is a ·restriction· of the definition of B, then members of type A are always locally valid against type B as well.
[Definition:] A complex type definition which allows element or attribute content in addition to that allowed by another specified type definition is said to be an extension.
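As a concrete illustration (an editorial example, not part of the original specification text; the names Address and USAddress are invented), a schema document might derive one complex type from another by extension like this:

<xs:complexType name="Address">
  <xs:sequence>
    <xs:element name="street" type="xs:string"/>
    <xs:element name="city" type="xs:string"/>
  </xs:sequence>
</xs:complexType>

<xs:complexType name="USAddress">
  <xs:complexContent>
    <xs:extension base="Address">
      <xs:sequence>
        <xs:element name="zip" type="xs:string"/>
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>

USAddress accepts everything Address allows, followed by the additional zip element.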
[Definition:] A special complex type definition, (referred to in earlier versions of this specification as 'the ur-type definition') whose name is anyType in the XSD namespace, is present in each ·XSD schema·. The definition of anyType serves as default type definition for element declarations whose XML representation does not specify one.
[Definition:] A special simple type
definition, whose name is error in the XSD
namespace, is also present in each ·XSD schema·. The
XSD
error type
has no valid instances. It can be used in any place where
other types are normally used; in particular, it can be used
in conditional type assignment to cause elements which satisfy
certain conditions to be invalid.
For brevity, the text and examples in this specification often
use the qualified names
xs:anyType and
xs:error for these type definitions. (In
practice, any appropriately declared prefix can be used, as
described in Schema-Related Markup in Documents Being Validated (§2.6).) Schema: Datatypes]) or user-defined, is a ·restriction· of its ·base type definition·.
[Definition:] A
special ·restriction· of
·
xs:anyType·, whose name is
anySimpleType in the
XSD namespace, is the root of the ·Type Definition Hierarchy· for all simple type
definitions. ·
xs:anySimpleType· has a lexical space containing
all sequences of characters in the Universal Character
Set (UCS) and a value space containing all
atomic values
and all finite-length lists of
atomic values.
As with ·
xs:anyType·, this
specification sometimes uses the qualified name
xs:anySimpleType to designate this type
definition. The
built-in list datatypes all have ·
xs:anySimpleType· as their
·base type
definition·.
[Definition:] There is a further special datatype
called anyAtomicType, a
·restriction· of
·
xs:anySimpleType·, which is the ·base type definition·
of all the primitive
datatypes. This type definition is often referred
to simply as "
xs:anyAtomicType".
It too is
considered to have an unconstrained lexical space. Its value
space consists of the union of the value spaces of all the
primitive datatypes.
The mapping from lexical space to value space is unspecified
for items whose type definition is ·
xs:anySimpleType· or ·
xs:anyAtomicType·. Accordingly
this specification does not constrain processors'
behavior in areas
where this mapping is implicated, for example checking such
items against enumerations, constructing default attributes or
elements whose declared type definition is ·
xs:anySimpleType·
or ·
xs:anyAtomicType·,
checking identity constraints involving such items.
[XML Schema: Datatypes]
provides mechanisms for defining new simple type definitions
by ·restricting·
some primitive
or ordinary datatype. It also
provides mechanisms for constructing new simple type
definitions whose members are lists of items
themselves constrained by some other simple type definition, or
whose membership is the union of the memberships of some other
simple type definitions. Such list and union simple type
definitions are also ·restrictions· of
·
xs:anySimpleType·.
For detailed information on simple type definitions, see Simple Type Definitions (§3.16) and [XML Schema:.
xs:anyType· is either
all-groups in ways that do not guarantee that the new material occurs only at the end of the content. Future versions may allow more kinds of extension, requiring more complex transformations to effect casting., XSD provides a more powerful model supporting substitution of one named element for another. Any top-level element declaration can serve as the defining member, or head, for an element ·substitution group·. Other top-level element declarations, regardless of target namespace, can be designated as members of the ·substitution group· headed by this element. In a suitably enabled content model, a reference to the head ·validates· not just the head itself, but elements corresponding to any other member of the ·substitution group· as well.
All such members must have type definitions which are either the same as the head's type definition or derived from it. Therefore, although the names of elements can vary widely as new namespaces and members of the ·substitution group· are defined, the content of member elements is constrained by the type definition of the ·substitution group· or element information item to
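For illustration (an editorial example, not in the original text; the element names are invented), a head element and a member of its ·substitution group· might be declared as:

<xs:element name="comment" type="xs:string"/>
<xs:element name="reviewerNote" substitutionGroup="comment" type="xs:string"/>

Wherever a content model refers to comment, a reviewerNote element can then appear instead, provided such substitution is not blocked.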
be ·valid· with respect to a
NOTATION simple type definition, its value must
have been declared with a notation declaration.
For detailed information on notation declarations, see Notation Declarations (§3.14).:
Each model group denotes a set of sequences of element information items. Regarding that set of sequences as a language, the set of sequences recognized by a group G may be written L(G). [Definition:] A model group G is said to accept or recognize the members of L(G)..
The name [Definition:] Term is used to refer to any of the three kinds of components which can appear in particles. All ·Terms· are themselves ·Annotated Components·. [Definition:] A basic term is an Element Declaration or a Wildcard. [Definition:] A basic particle is a Particle whose {term} is a ·basic term·.
Each content model, indeed each particle and each term, denotes a set of sequences of element information items. Regarding that set of sequences as a language, the set of sequences recognized by a particle P may be written L(P). [Definition:] A particle P is said to accept or recognize the members of L(P). Similarly, a term T accepts or recognizes the members of L(T).
If a sequence S is a member of L(P), then it is necessarily possible to trace a path through the ·basic particles· within P, with each item within S corresponding to a matching particle within P. The sequence of particles within P corresponding to S is called the ·path· of S in P. names and optionally on their local names.
For detailed information on wildcards, see Wildcards (§3.10).
An identity-constraint definition is an association between a name and one of several varieties of identity-constraint related to uniqueness and reference. All the varieties use [XPath 2.0]).
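As a sketch (editorial, not part of the original specification text; productList, product, ProductListType, and @id are invented names), a key constraint inside an element declaration might look like:

<xs:element name="productList" type="ProductListType">
  <xs:key name="productKey">
    <xs:selector xpath="product"/>
    <xs:field xpath="@id"/>
  </xs:key>
</xs:element>

Here each product child must carry an id attribute, and those ids must be unique within the productList.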
A type-alternative component (type alternative for short) associates a type definition with a predicate. Type alternatives are used in conditional type assignment, in which the choice of ·governing type definition· for elements governed by a particular element declaration depends on properties of the document instance. An element declaration may have a {type table} which contains a sequence of type alternatives; the predicates on the alternatives are tested, and when a predicate is satisfied, the type definition paired with it is chosen as the element instance's ·governing type definition·.
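For example (an editorial sketch, not from the original text; the element and type names are invented), conditional type assignment can be expressed with <alternative> elements whose test attributes are XPath predicates:

<xs:element name="message" type="MessageType">
  <xs:alternative test="@kind = 'signed'" type="SignedMessageType"/>
  <xs:alternative test="@kind = 'plain'" type="PlainMessageType"/>
  <xs:alternative type="xs:error"/>
</xs:element>

The first alternative whose test is satisfied supplies the governing type; the final alternative, with no test, acts as the default (here using xs:error to make any other kind of message invalid).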
For detailed information on Type Alternatives, see Type Alternatives (§3.12).
An assertion is a predicate associated with a type, which is checked for each instance of the type. If an element or attribute information item fails to satisfy an assertion associated with a given type, then that information item is not locally ·valid· with respect to that type.
For detailed information on Assertions, see Assertions (§3.15).
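As an illustration (editorial, not part of the original specification text; the type and attribute names are invented), an assertion on a complex type might constrain the relationship between two of its attributes:

<xs:complexType name="RangeType">
  <xs:attribute name="min" type="xs:integer"/>
  <xs:attribute name="max" type="xs:integer"/>
  <xs:assert test="@min le @max"/>
</xs:complexType>

Any element governed by RangeType whose min exceeds its max would then fail to be locally valid.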
The [XML 1.1].
Within the context of this specification, conformance can be claimed for schema documents, for schemas, and for processors. be schema-document aware. Such processors must, when processing schema documents, completely and correctly implement (or enforce):] Web-aware processors are network-enabled processors which are not only both ·minimally conforming· and ·schema-document aware·, but which additionally must be capable of accessing schema documents from the World Wide Web XSD. There is a single distinct symbol space within a given ·target namespace· for each kind of definition and declaration component identified in XSD Abstract Data Model (§2.2), except that within a target namespace, simple type definitions and complex type definitions share a symbol space. Within a given symbol space, names must.
XML Schema Definition Language: Structures defines several attributes for direct use in any XML documents. These attributes are in the schema instance namespace (http://www.w3.org/2001/XMLSchema-instance) described in The Schema Instance Namespace (xsi) (§1.3.1.2) above. All schema processors must have appropriate attribute declarations for these attributes built in, see Attribute Declaration for the 'type' attribute (§3.2.7.1), Attribute Declaration for the 'nil' attribute (§3.2.7.2), Attribute Declaration for the 'schemaLocation' attribute (§3.2.7.3) and Attribute Declaration for the 'noNamespaceSchemaLocation' attribute (§3.2.7.4).
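As a brief illustration (an editorial example, not part of the original specification text; order, quantity, and shipDate are invented names), an instance document might use two of these attributes as follows, assuming quantity is declared with a type from which xs:decimal is validly derived and shipDate is declared nillable:

<order xmlns:
       xmlns:
  <quantity xsi:</quantity>
  <shipDate xsi:
</order>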
xsi:type", "
xsi:nil", etc. This is shorthand for "an attribute information item whose [namespace name] is whose [local name] is
type" (or
nil, etc.). resolution (Instance) (§3.17.6.3) for the means by which the ·QName· is associated with a type
definition.
XML Schema Definition Language: Structures introduces a mechanism for signaling that an element
must be accepted as ·valid·
when it has no content despite a content type which does not
require or even necessarily allow empty content. An element
can can, decimal).
Component properties are simply named values. Most properties have either other components or literals (that is, strings or booleans or enumerated keywords) for values, but in a few cases, where more complex values are involved, [Definition:] a property value may itself be a collection of named values, which we call a property record.
[Definition:] Throughout this specification, the term absent is used as a distinguished property value denoting absence. Again this should not be interpreted as constraining implementations, as for instance between using a null value for such properties or not representing them at all. [Definition:] A property value which is not ·absent· is present.
Any property not defined as optional is always present; optional properties which are not present are taken to have ·absent· as their value. Any property identified as a having a set, subset or list value might have an empty value unless this is explicitly ruled out: this is not the same as ·absent·. Any property value identified as a superset or subset of some set might be equal to that set, unless a proper superset or subset is explicitly called for. By 'string' in Part 1 of this specification is meant a sequence of ISO 10646 characters identified as legal XML characters in [XML 1.1].
The principal purpose of XML Schema Definition Language: Schema Documents (Structures) (normative) (§A) and DTD for Schemas (non-normative) (§L)) and schema components. The key element information items in
the XML representation of a schema are in the XSD namespace, that
is their [namespace
name].
A recurrent pattern in the XML
representation of schemas may also be mentioned here. In many
cases, the same element name (e.g.
element or
attribute or
attributeGroup), serves
both to define a particular schema component and to incorporate
it by reference. In the first case the
name
attribute is required, in the second the
ref
attribute is required. These
two usages are mutually exclusive, and sometimes also depend on
context.
The descriptions of the XML representation of components, and the ·Schema Representation Constraints·, apply to schema documents after, not before, the conditional-inclusion pre-processing described in Conditional inclusion (§4.2 expressed in the Schema for Schema Documents (Structures) Schema Documents (Structures) (normative) (§A), there is always a simple type definition associated with any such attribute information item. [Definition:] With reference to any string, interpreted as denoting an instance of a given datatype, the term actual value denotes the value to which the lexical mapping of that datatype maps the string. In the case of attributes in schema documents, the string used as the lexical representation is normally the ·normalized value· of the attribute. The associated datatype is, unless otherwise specified, the one identified in the declaration of the attribute, in the schema for schema documents; in some cases (e.g. the enumeration facet, or fixed and default values for elements and attributes) the associated datatype will be a more specific one, as specified in the appropriate XML mapping rules. The ·actual value· will often be a string, but can will in some cases be violated if one or more references cannot be ·resolved·., it is possible that an appropriately-named component will.
[Definition:] The normalized value of an element or attribute information item is an ·initial value· whose white space, if any, has been normalized according to the value of the whiteSpace facet of the simple type definition used in its ·validation·:
preserve: no normalization is done; the ·normalized value· is the ·initial value·.
replace: all occurrences of #x9 (tab), #xA (line feed) and #xD (carriage return) are replaced with #x20 (space).
collapse: subsequent to the replacements specified under replace, contiguous sequences of #x20s are collapsed to a single #x20, and initial and/or final #x20s are deleted.
When more than one pre-lexical facet applies, the whiteSpace facet is applied first; the order in which ·implementation-defined· facets are applied is ·implementation-defined·.
If the simple type definition used in an item's
·validation· is ·
xs:anySimpleType·,
then the
·normalized value· must be determined
as in the preserve case above.
There are three alternative validation rules which help supply the necessary background for the above: Attribute Locally Valid (§3.2.4.1) (clause 3), Element Locally Valid (Type) (§3.3.4.4) (clause 3.1.3) or Element Locally Valid (Complex Type) (§3.4.4.2) (clause 1.2).
These three levels of normalization correspond to the processing mandated in XML for element content, CDATA attribute content and tokenized attribute content, respectively. See Attribute Value Normalization in [XML 1.1] for the precedent for replace and collapse for attributes. Extending this processing to element content is necessary to ensure consistent ·validation· semantics for simple types, whether they appear in attributes or in element content.
For an attribute declaration A, if A.{scope}.{variety} = global, then A is available for use throughout the schema. If A.{scope}.{variety} = local, then A is available for use only within (the Complex Type Definition or Attribute Group Definition) A.{scope}.{parent}.
The
{value constraint} property reproduces the functions of
XML default and
#FIXED attribute values. A {variety} of
default specifies that the attribute is to
appear unconditionally in the ·post-schema-validation infoset·, with {value} and {lexical form} used whenever the attribute is not
actually present; fixed indicates that the attribute
value if present must be equal to {value}, and if absent receives {value} and {lexical form} as for default. Note that
it is values that are checked, not
strings,
and that the test is for equality, not identity.
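For example (the declarations and values here are illustrative):
<xs:attribute name="currency" type="xs:string" default="USD"/>
<xs:attribute name="version" type="xs:decimal" fixed="1.0"/>
If currency is omitted from an instance element, it appears in the ·post-schema-validation infoset· with the value USD; if version is present, its value must equal 1.0, so the lexical forms 1.0 and 01.00 are both acceptable, since values rather than strings are compared.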
See Annotations (§3.15) for information on the role of the {annotations} property. The correspondences between the properties of the information item and the properties of the component it maps to are given in this section. When no simple type definition is referenced or provided, the default is ·xs:anySimpleType·, which imposes no constraints at all.
attributeElement Information Item
<attribute
default = string
fixed = string
form = (qualified | unqualified)
id = ID
name = NCName
ref = QName
targetNamespace = anyURI
type = QName
use = (optional | prohibited | required) : optional
inheritable = boolean
{any attributes with non-schema namespace . . .}>
Content: (annotation?, simpleType?)
</attribute>
Editorial Note: Priority Feedback Request
Earlier versions of this specification did not
allow a
targetNamespace attribute on attribute.
Attribute information items ·validated· by a top-level
declaration must be qualified with the
{target namespace} of that
declaration. If the
{target namespace} is ·absent·, then the item must be unqualified.
If the <attribute> element information item has <complexType> or <attributeGroup> as an ancestor, then the appropriate case among the following applies:
If the ref [attribute] is absent, and the use [attribute] is not "prohibited", then it maps both to an Attribute Declaration and to an Attribute Use component, as described in Mapping Rules for Local Attribute Declarations (§3.2.2.2).
If the ref [attribute] is ·present·, and the use [attribute] is not "prohibited", then it maps to an Attribute Use component, as described in Mapping Rules for References to Top-level Attribute Declarations (§3.2.2.3).
If the <attribute> element has use='prohibited', then it does not map to, or correspond to, any schema component at all.
The use attribute is not allowed on top-level <attribute> elements, so this can only happen with <attribute> elements appearing within a <complexType> or <attributeGroup> element.
If the <attribute> element information item has <schema> as its parent, the corresponding schema component is as follows:
targetNamespace[attribute] of the parent <schema> element information item, or ·absent· if there is none.
type[attribute], if present, otherwise ·
xs:anySimpleType·.
defaultor a
fixed[attribute], then a Value Constraint as follows, otherwise ·absent·.
If
the <attribute> element information item has
<complexType> or <attributeGroup> as
an ancestor and the
ref [attribute] is absent,
it maps both to an attribute
declaration (see below) and
to an attribute use with properties as follows
(unless
use='prohibited', in which case the item
corresponds to nothing at all):
use=
required, otherwise false.
defaultor a
fixed[attribute], then a Value Constraint as follows, otherwise ·absent·.
The <attribute> element also maps to the {attribute declaration} of the attribute use just described, as follows:
targetNamespaceis present , then its ·actual value·.
targetNamespaceis not present and one of the following is true
form=
qualified
targetNamespace[attribute] of the ancestor <schema> element information item, or ·absent· if there is none.
type[attribute], if present, otherwise ·
xs:anySimpleType·.
If
the
<attribute> element information item has
<complexType> or <attributeGroup> as an
ancestor and the
ref [attribute] is
present, it
maps to an attribute use with properties as follows
(unless
use='prohibited', in which case the item
corresponds to nothing at all):
use=
required, otherwise false.
ref[attribute]
defaultor a
fixed[attribute], then a Value Constraint as follows, otherwise ·absent·.
inheritable[attribute], if present, otherwise {attribute declaration}.{inheritable}.
defaultand
fixedmust not both be present.
defaultand
useare both present,
usemust have the ·actual value·
optional.
refor
nameis present, but not both.
refis present, then all of <simpleType>,
formand
typeare absent.
typeattribute and a <simpleType> child element must not both be present.
fixedand
useare both present,
usemust not have the ·actual value·
prohibited.
targetNamespaceattribute is present then all of the following must be true:
nameattribute is present.
formattribute is absent.
targetNamespace[attribute] or its ·actual value· is different from the ·actual value· of
targetNamespaceof <attribute>, then all of the following are true:
base[attribute] of <restriction> does not ·match· the name of ·
xs:anyType·.
Informally, an attribute in an XML
instance is locally ·valid·
against an attribute declaration if and only if (a)
the name of the attribute matches
the name of the declaration, (b) after
whitespace normalization its ·normalized value· is locally valid
against the type declared for the attribute, and
(c) the
attribute obeys any relevant value constraint. Additionally,
for
xsi:type, it is required that the type named
by the attribute be present in the schema.
A logical prerequisite for checking the local validity of an
attribute against an attribute declaration is that the attribute
declaration itself and the type definition it identifies
both be present in the schema.
Local validity of attributes is tested as part of schema-validity ·assessment· of attributes (and of the elements on which they occur), and the result of the test is exposed in the [validity] property of the ·post-schema-validation infoset·.
A more formal statement is given in the following constraint.
xsi:type(Attribute Declaration for the 'type' attribute (§3.2.7.1)), then A's ·actual value· ·resolves· to a type definition.
[Definition:].
Schema-validity assessment of an attribute information item involves identifying its ·governing attribute declaration· and checking its local validity against the declaration. If the ·governing type definition· is not present in the schema, then assessment is necessarily incomplete.
[Definition:] For attribute information items, there is no difference between assessment and strict assessment, so the attribute information item has been strictly assessed if and only if its schema-validity has been assessed.
See also Attribute Default Value (§3.4.5.1), Match Information (§3.4.5.2) and Schema Information (§3.17.5.1), which describe other information set contributions related to attribute information items.
All attribute declarations (see Attribute Declarations (§3.2)) must satisfy the following constraints.
xmlnsNot Allowed
xsi:Not Allowed
xsi:Not Allowed(unless it is one of the four built-in declarations given in the next section).
xsi:attributes to specify default or fixed value constraints (e.g. in a component corresponding to a schema document construct of the form
<xs:attribute), but the practice is not recommended; including such attribute uses will tend to mislead readers of the schema document, because the attribute uses would have no effect; see Element Locally Valid (Complex Type) (§3.4.4.2) and Attribute Default Value (§3.4.5.1) for details.
There are four attribute declarations present in every schema by definition:
xsi:type
The
xsi:type attribute
is used to signal use of a type other than the declared type of
an element. See xsi:type (§2.6.1).
xsi:nil
The
xsi:nil attribute
is used to signal that an element's content is "nil"
(or "null"). See xsi:nil (§2.6.2).
xsi:schemaLocation
The
xsi:schemaLocation attribute
is used to signal possible locations of relevant schema documents.
See xsi:schemaLocation, xsi:noNamespaceSchemaLocation (§2.6.3).
xs:anySimpleType·
xsi:noNamespaceSchemaLocation
The
xsi:noNamespaceSchemaLocation attribute
is used to signal possible locations of relevant schema documents.
See xsi:schemaLocation, xsi:noNamespaceSchemaLocation (§2.6.3)..
For an element declaration E, if E.{scope}.{variety} = global, then E is available for use throughout the schema. If E.{scope}.{variety} = local, then E is available for use only within (the Complex Type Definition or Model Group Definition) E.{scope}.{parent}.
A ·non-absent· value of the {target namespace} property provides for ·validation· of namespace-qualified element information items. ·Absent· values of {target namespace} ·validate· unqualified items.
An element information item is normally
required to satisfy the {type definition}. For such an
item, schema information set
contributions appropriate to the {type definition} are added to the
corresponding element information
item in the ·post-schema-validation infoset·. The type
definition against which an element information item is
validated (its
·governing type definition·) can be different from the
declared {type definition}. The {type table} property of an Element Declaration, which governs conditional type assignment, and
the
xsi:type attribute of an element information item
(see xsi:type (§2.6.1)) can cause the ·governing type definition· and the
declared {type definition} to be different.
If {nillable} is true, then
an element with no text or element
content can be ·valid·
despite a
{type definition}
which would otherwise require
content, if it carries the
attribute
xsi:nil with the value
true (see xsi:nil (§2.6.2)).
Formal details of element ·validation· are described in
Element Locally Valid (Element) (§3.3.4.3).
xsi:nil= true.
{value constraint} establishes a default or fixed value for an element. If a {value constraint} with {variety} = default is present, and if the element being ·validated· is empty, then the element is treated as if {value constraint}.{lexical form} was used as the content of the element. If fixed is specified, then the element's content must either be empty, in which case fixed behaves as default, or its value must be equal to {value constraint}.{value}.
{identity-constraint definitions} express constraints establishing uniquenesses and reference relationships among the values of related elements and attributes. See Identity-constraint Definitions (§3.11).
Element declarations are potential members of the ·substitution groups·, if any, identified by {substitution group affiliations}. Potential membership is transitive but not symmetric; an element declaration is a potential member of any group of which any entry in its {substitution group affiliations} is a potential member. Actual membership may be blocked by the effects of {substitution group exclusions} or {disallowed substitutions}, see below.
An empty {substitution group exclusions} allows a declaration to be named in the {substitution group affiliations} of other element declarations having the same declared {type definition} or some type derived therefrom. The explicit values of {substitution group exclusions}, extension or restriction, rule out element declarations having types whose derivation from {type definition} involves any extension steps, or restriction steps, respectively. The supplied values for {disallowed substitutions} determine whether an element declaration appearing in a content model is prevented from additionally validating elements (a) with an xsi:type (§2.6.1) attribute that identifies an extension or restriction of the type, or (b) from elements in the ·substitution group· headed by the declared element. If {disallowed substitutions} is empty, then all derived types and ·substitution group· members are allowed.
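For example (all names here are illustrative), given the declarations
<xs:element name="product" type="ProductType"/>
<xs:element name="customsProduct" type="CustomsProductType"
            substitutionGroup="product"/>
and provided CustomsProductType is validly derived from ProductType, a customsProduct element may appear wherever a content model refers to product, unless this is blocked by {disallowed substitutions} on the referring declaration or by {substitution group exclusions} (the final attribute) on the head element.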
Element declarations for which {abstract} is true can appear in content models only when substitution is allowed; such declarations must not themselves ever be used to ·validate· element content.
See Annotations (§3.15) for information on the role of the {annotations} property. The correspondences between the properties of the <element> information item and the properties of the component it maps to are given in this section.
<element
abstract = boolean : false
block = (#all | List of (extension | restriction | substitution))
default = string
final = (#all | List of (extension | restriction))
fixed = string
form = (qualified | unqualified)
id = ID
maxOccurs = (nonNegativeInteger | unbounded) : 1
minOccurs = nonNegativeInteger : 1
name = NCName
nillable = boolean : false
ref = QName
substitutionGroup = List of QName
targetNamespace = anyURI
type = QName
{any attributes with non-schema namespace . . .}>
Content: (annotation?, ((simpleType | complexType)?, alternative*, (unique | key | keyref)*))
</element>
Editorial Note: Priority Feedback Request
Earlier versions of this specification did not
allow a
targetNamespace attribute on element the
{target namespace} first
substitution-group head named in the
substitutionGroup [attribute], if present,
otherwise ·
xs XML Representation of Identity-constraint Definition Schema Components (§3.11.2) for <key>, <unique> and <keyref>.
ref[attribute] is absent, and it does not have
minOccurs=maxOccurs=0, then it maps both to a Particle, as described in Mapping Rules for Local Element Declarations (§3.3.2.3), and also to an Element Declaration, using the mappings described in Common Mapping Rules for Element Declarations (§3.3.2.1) and Mapping Rules for Local Element Declarations (§3.3.2.3).
ref[attribute] is present, and it does not have
minOccurs=maxOccurs=0, then it maps to a Particle as described in References to Top-Level Element Declarations (§3.3.2.4).
minOccurs=maxOccurs=0, then it maps to no component at all.
minOccursand
maxOccursattributes are not allowed on top-level <element> elements, so in valid schema documents this will happen only when the <element> element information item has <complexType> or <group> as an ancestor.
The following mapping rules apply in all cases where an <element> element maps to an Element Declaration component.
name[attribute].
type[attribute], if it is present.
substitutionGroup[attribute], if present.
xs:anyType·.
test[attribute].
test[attribute], then a Type Alternative corresponding to the <alternative>.
test) a Type Alternative with the following properties:
nillable[attribute], if present, otherwise false.
defaultor a
fixed[attribute], then a Value Constraint as follows, otherwise ·absent·. [Definition:] Use the name effective simple type definition for the declared {type definition}, if it is a simple type definition, or, if {type definition}.{content type}.{variety} = simple, for {type definition}.{content type}.{simple type definition}, or else for the built-in string simple type definition).
substitutionGroup[attribute], if present, otherwise the empty set.
block[attribute], if present, otherwise, substitution
};
blockDefault[attribute] of <schema> may include values other than extension, restriction or substitution, those values are ignored in the determination of {disallowed substitutions} for element declarations (they are used elsewhere).
finaland
finalDefault[attributes] in place of the
blockand
blockDefault[attributes] and with the relevant set being
{extension, restriction
}.
abstract[attribute], if present, otherwise false.
ref[attribute], as defined in XML Representation of Annotation Schema Components (§3.15.2).
If the <element> element information item has <schema> as its parent, it maps to an Element Declaration, using the mapping given in Common Mapping Rules for Element Declarations (§3.3.2.1), supplemented by the following.
targetNamespace[attribute] of the parent <schema> element information item, or ·absent· if there is none.
If
the <element> element information
item has
<complexType> or <group> as
an ancestor,
and the
ref [attribute] is absent,
and it does not have
minOccurs=maxOccurs=0,
then it maps both to a
Particle and to a local
Element Declaration which is the {term}
of that Particle. The Particle
is as follows:
maxOccurs[attribute] equals unbounded, otherwise the ·actual value· of the
maxOccurs[attribute], if present, otherwise
1.
The <element> element also maps to an element declaration using the mapping rules given in Common Mapping Rules for Element Declarations (§3.3.2.1), supplemented by those below:
targetNamespaceis present , then its ·actual value·.
targetNamespaceis not present and one of the following is true
form=
qualified
targetNamespace[attribute] of the ancestor <schema> element information item, or ·absent· if there is none.
If the
<element> element information
item has
<complexType> or <group> as an
ancestor,
and the
ref [attribute] is
present,
and it does not have
minOccurs=maxOccurs=0,
then it maps to
a Particle as follows.
maxOccurs[attribute] equals unbounded, otherwise the ·actual value· of the
maxOccurs[attribute], if present, otherwise
1.
ref[attribute].
One example declares an element whose type is ·xs:anyType·; a second uses an embedded anonymous complex type definition. A further example declares an element named facet as the head of a ·substitution group·. Two further elements are declared, each a member of the facet ·substitution group·. Finally a type is defined which refers to facet, thereby allowing either period or encoding (or any other member of the group).
In the next example, conditional type assignment is used: each message element will be assigned either to type messageType or to a more specific type derived from it. messageType accepts any well-formed XML or character sequence as content, and carries a kind attribute which can be used to describe the kind or format of the message. The value of kind is either one of a few well known keywords or, failing that, any string.
<xs:complexType <xs:sequence> <xs:any </xs:sequence> <xs:attribute <xs:simpleType> <xs:union> <xs:simpleType> <xs:restriction <xs:enumeration <xs:enumeration <xs:enumeration <xs:enumeration <xs:enumeration </xs:restriction> </xs:simpleType> <xs:simpleType> <xs:restriction </xs:simpleType> </xs:union> </xs:simpleType> </xs:attribute> <xs:anyAttribute </xs:complexType>
Three more specific types derived from messageType are defined, each corresponding to one of the well-known formats: messageTypeString for kind="string", messageTypeBase64 for kind="base64" and kind="binary", and messageTypeXML for kind="xml" or kind="XML".
<xs:complexType <xs:simpleContent> <xs:restriction <xs:simpleType> <xs:restriction </xs:simpleType> </xs:restriction> </xs:simpleContent> </xs:complexType> <xs:complexType <xs:simpleContent> <xs:restriction <xs:simpleType> <xs:restriction </xs:simpleType> </xs:restriction> </xs:simpleContent> </xs:complexType> <xs:complexType <xs:complexContent> <xs:restriction <xs:sequence> <xs:any </xs:sequence> </xs:restriction> </xs:complexContent> </xs:complexType>
The message element itself uses messageType both as its declared type and as its default type, and uses test attributes on its <alternative> [children] to assign the appropriate specialized message type to messages with the well known values for the kind attribute:
<xs:element name="message" type="messageType">
  <xs:alternative test="@kind eq 'string'" type="messageTypeString"/>
  <xs:alternative test="@kind eq 'base64'" type="messageTypeBase64"/>
  <xs:alternative test="@kind eq 'binary'" type="messageTypeBase64"/>
  <xs:alternative test="@kind eq 'xml'" type="messageTypeXML"/>
  <xs:alternative test="@kind eq 'XML'" type="messageTypeXML"/>
</xs:element>
typeattribute.
targetNamespaceis present then all of the following are true:
nameis present.
formis not present.
targetNamespace[attribute] or its ·actual value· is different from the ·actual value· of
targetNamespaceof <element>, then all of the following are true:
base[attribute] of <restriction> does not ·match· the name of ·
xs:anyType·.
test[attribute]; the last <alternative> element may have such an [attribute].
When an element is ·assessed·, it is first checked against its ·governing element declaration·, if any; this in turn entails checking it against its ·governing type definition·. The second step is recursive: the element's [attributes] and [children] are ·assessed· in turn with respect to the declarations assigned to them by their parent's ·governing type definition·.
The ·governing type definition· of an element is normally the declared {type definition} associated with the ·governing element declaration·, but this may be ·overridden· using conditional type assignment in the Element Declaration or using an ·instance-specified type definition·, or both. When the element is declared with conditional type assignment, the ·selected type definition· is used as the ·governing type definition· unless ·overridden· by an ·instance-specified type definition·.
xs:error·.
[Definition:] If the set of keywords controlling whether a type S is ·validly substitutable· for another type T is the empty set, then S is said to be validly substitutable for T without limitation or absolutely. The phrase validly substitutable, without mention of any set of blocking keywords, means "validly substitutable without limitation".
Sometimes one type S is ·validly substitutable· for another type T only if S is derived from T by a chain of restrictions, or if T is a union type and S a member type of the union. The concept of ·valid substitutability· is appealed to often enough in such contexts that it is convenient to define a term to cover this specific case. [Definition:] A type definition S is validly substitutable as a restriction for another type T if and only if S is ·validly substitutable· for T, subject to the blocking keywords {extension, list, union}.
The concept of local validity of an element information item against an element declaration is an important part of the schema-validity ·assessment· of elements. (The other important part is the recursive ·assessment· of attributes and descendant elements.) Local validity partially determines the element information item's [validity] property, and fully determines the [local element validity] property, in the ·post-schema-validation infoset·.
xsi:nilattribute on the element obeys the rules. The element is allowed to have an
xsi:nilattribute only if the element is declared nillable, and
xsi:nil = 'true'is allowed only if the element itself is empty. If the element declaration specifies a fixed value for the element,
xsi:nil='true'will make the element invalid.
xsi:typeattribute present names a type which is ·validly substitutable· for the element's declared {type definition}.
The following validation rule gives the normative formal definition of local validity of an element against an element declaration.
xsi:nilattribute.
xsi:nilattribute information item.
xsi:nil=
false.
xsi:nil=
true(that is, E is ·nilled·), and all of the following are true:
xsi:typeattribute, then all of the following are true:
xsi:typeattribute whose value does not ·resolve· to a type definition, or if the type definition fails to ·override· the ·selected type definition·, then the ·selected type definition· of its ·governing element declaration· becomes the ·governing type definition·. The local validity of the element with respect to the ·governing type definition· is recorded in the [local type validity] property.
The following validation rule specifies
formally what it means for an element to be locally valid
against a type definition. This concept is appealed to in the
course of checking an element's local validity against its
element declaration, but it is also part of schema-validity
·assessment· of an element when there is no ·governing element declaration·,
and when there would otherwise be neither a ·governing element declaration· nor a
·governing type definition·, ·lax assessment·
is performed, checking the local validity of an element
information item against
xs:anyType.
Informally, local validity against a type requires first
that the type definition be present in the schema and not declared abstract.
For a simple type definition, the element must lack attributes
(except for namespace declarations and the special attributes
in the
xsi namespace) and child elements, and must
be type-valid against that simple type definition.
For a complex type definition, the element must
be locally valid against that complex type definition.
Also, if the element has an
xsi:type attribute,
then it is not locally valid against any type other than the
one named by that attribute.
xsi:type,
xsi:nil,
xsi:schemaLocation, or
xsi:noNamespaceSchemaLocation.
xsi:type[attribute] and does not have a ·governing element declaration·, then the ·actual value· of
xsi:type·resolves· to T.
The following validation rule specifies document-level ID/IDREF constraints checked on the ·validation root· if it is an element; this rule is not checked on other elements. Informally, the requirement is that each ID identifies a single element within the ·validation root·, and that each IDREF value matches one ID..
xsi:typeattribute), otherwise the element will be ·laxly assessed·.
xs:anyType·.
xs:anyType· as per Element Locally Valid (Type) (§3.3.4.4) and assessing schema-validity of its [attributes] and [children] as per clause 2 and clause 3 above. If the element information item is ·skipped·, it must not be laxly assessed.:[attributes] be assessed with respect to the corresponding attribute declarations from Built-in Attribute Declarations (§3.2.7). The result of such assessment is present in the ·post-schema-validation infoset·, as defined in Attribute Declaration Information Set Contributions (§3.2.5).
xsi:typeattribute which fails to ·resolve· to a type definition that ·overrides· the declared {type definition}
xsi:typeattribute which fails to ·resolve· to a type definition that ·overrides· the ·selected type definition·
xs:anyType·
xs:anyType·.
See also Match Information (§3.4.5.2), Identity-constraint Table (§3.11.5), Validated with Notation (§3.14.5), and Schema Information (§3.17.5.1), which describe other information set contributions related to element information items.
All element declarations (see Element Declarations (§3.3)) must satisfy the following constraint.
xs:error·.
This and the following sections define relations appealed to elsewhere in this specification.
[Definition:] One element declaration is substitutable for another if together they satisfy constraint Substitution Group OK (Transitive) (§3.3.6.3).
[Definition:] Every element declaration (call this HEAD) in the {element declarations} of a schema defines a substitution group, a subset of those {element declarations}. An element declaration is in the substitution group of HEAD if and only if it is ·substitutable· for HEAD.:
Either an Element Declaration or a Complex Type Definition.
Complex type definitions are identified by their {name} and {target namespace}. Except for anonymous complex type definitions (those with no {name}), since type definitions (i.e. both simple and complex type definitions taken together) must be uniquely identified within an ·XSD schema·, no complex type definition can have the same name as another simple or complex type definition.
The {context} property is only relevant for anonymous type definitions, for which its value is the component in which this type definition appears as the value of a property, e.g. {type definition}.
Complex types for which {abstract} is true have no valid instances and thus cannot be used in the normal way as the {type definition} for the ·validation· of element information items (if for some reason an abstract type is identified as the ·governing type definition· of an element information item, the item will invariably be invalid). It follows that such abstract types must not be referenced from an xsi:type (§2.6.1) attribute in an instance document. Abstract complex types can be used as {base type definition}s, or even as the declared {type definition}s of element declarations, provided in every case a concrete derived type definition is used for ·validation·, either via xsi:type (§2.6.1) or the operation of a ·substitution group·.
{attribute uses} are a set of attribute uses. See Element Locally Valid (Complex Type) (§3.4.4.2) and Attribute Locally Valid (§3.2.4.1) for details of attribute ·validation·.
{attribute wildcard}s provide a more flexible specification for ·validation· of attributes not explicitly included in {attribute uses}. See Element Locally Valid (Complex Type) (§3.4.4.2), The Wildcard Schema Component (§3.10.1) and Wildcard allows Expanded Name (§3.10.4.2) for formal details of attribute wildcard ·validation·.
xsi:typeattribute; see xsi:type (§2.6.1);
Editorial Note: Priority Feedback Request
In version 1.0 of this specification, {prohibited substitutions}
of a Complex Type Definition is only used when type substitution
(
xsi:type) or element substitution (substitution groups) appear in
the instance document. It has been changed to take effect whenever complex type
derivation is checked, including cases beyond type and element substitutions in
instance documents. In particular, it affects
clause 4 of Element Declaration Properties Correct (§3.3.6.1),
clause 2.1 and
clause 2.2 of Conditional Type Substitutable in Restriction (§3.4.4.5),
clause 1.6 of Derivation Valid (Extension) (§3.4.6.2),
clause 4 of Derivation Valid (Restriction, Complex) (§3.4.6.3),
and clause 4.5 of Content type restricts (Complex Content) (§3.4.6.4).
Because of the consideration of {prohibited substitutions},
existing schemas may be rendered invalid by the above rules. The XML Schema Working Group
solicits input from implementors and users of this specification as to whether
this change is desirable and acceptable.
{assertions} constrain elements and attributes to exist, not to exist, or to have specified values. Though specified as a sequence, the order among the assertions is not significant during assessment. See Assertions (§3.13).
See Annotations (§3.15) for information on the role of the {annotations} property.
The XML representation for a complex type definition schema component is a <complexType> element information item.
The XML representation for complex type definitions with a {content type} with {variety} simple is significantly different from that of those with other {content type}s, and this is reflected in the presentation below, which describes the mappings for the two cases in separate subsections. Common mapping rules are factored out and given in separate sections.
complexTypeElement Information Item
<complexType
abstract = boolean : false
block = (#all | List of (extension | restriction))
final = (#all | List of (extension | restriction))
id = ID
mixed = boolean
name = NCName
defaultAttributesApply = boolean : true
{any attributes with non-schema namespace . . .}>
Content: (annotation?, (simpleContent | complexContent | (openContent?, (group | all | choice | sequence)?, ((attribute | attributeGroup)*, anyAttribute?), assert*)))
</complexType>
(the parent element information item will be <element>), the Element Declaration corresponding to that parent information item.
When the <complexType> source declaration has a <simpleContent> child, the following elements are relevant (as are <attribute>, <attributeGroup>, and <anyAttribute>), and the <simpleContent>.
simpleContentElement Information Item et al.
| assertion | {any with namespace: ##other})*)?, ((attribute | attributeGroup)*, anyAttribute?), assert*)
</restriction>
<extension
base = QName
id = ID
{any attributes with non-schema namespace . . .}>
Content: (annotation?, ((attribute | attributeGroup)*, anyAttribute?), assert*)
</extension>
When the <complexType> element has a <simpleContent> child, then the <complexType> element maps to a complex type with simple content, as follows.
base[attribute] on the <restriction> or <extension> element appearing as a child of <simpleContent>, not repeated here), and the additional <complexContent>, but their content models are different in this case from the case above when they occur as children of <simpleContent>.
complexContentElement Information Item et al.
<complexContent
id = ID
mixed = boolean
{any attributes with non-schema namespace . . .}>
Content: (annotation?, (restriction | extension))
</complexContent>
<restriction
base = QName
id = ID
{any attributes with non-schema namespace . . .}>
Content: (annotation?, openContent?, (group | all | choice | sequence)?, ((attribute | attributeGroup)*, anyAttribute?), assert*)
</restriction>
<extension
base = QName
id = ID
{any attributes with non-schema namespace . . .}>
Content: (annotation?, openContent?, ((group | all | choice | sequence)?, ((attribute | attributeGroup)*, anyAttribute?), assert*))
</extension>
<openContent
id = ID
mode = (none | interleave | suffix) : interleave
{any attributes with non-schema namespace . . .}>
Content: (annotation?, any?)
</openContent> Explicit Complex Content (§3.4.2.3 a Content Type as follows:
<attributeGroup
id = ID
ref = QName
{any attributes with non-schema namespace . . .}>
Content: (annotation?)
</attributeGroup>.
mode≠
'none', then there must be an <any> among the [children] of <openContent>.
mixed[attribute] is present on both <complexType> and <complexContent>, then ·actual values· of those [attributes] must be the same.
s:anyType·, then E has no ·context-determined type table· in T.
xsi:type,
xsi:nil,
xsi:schemaLocation, or
xsi:noNamespaceSchemaLocation(see Built-in Attribute Declarations (§3.2.7)), the appropriate case among the following is true:
attribute uses, attribute wildcards, particles and open contents.
xs:anyType· and ST is ·validly substitutable as a restriction· for SB.
Editorial Note: Priority Feedback Request
The constraint Conditional Type Substitutable in Restriction (§3.4.4.5) above is intended to ensure that the use of Type Tables for conditional type assignment does not violate the usual principles of complex type restriction. More specifically, if T is a complex type definition derived from its base type B by restriction, then the rule seeks to ensure that a type definition conditionally assigned by T to some child element is always derived by restriction from that assigned by B to the same child. The current design enforces this using a "run-time" rule: instead of marking T as invalid if it could possibly assign types incompatible with those assigned by B, the run-time rule accepts the schema as valid if the usual constraints on the declared {type definition}s are satisified, without checking the details of the {type table}s. Element instances are then checked as part of validation, and any instances that would cause T (or any type in T's {base type definition} chain) to assign the incompatible types are made invalid with respect to T. This rule may prove hard to understand or implement. The Working Group is uncertain whether the current design has made the right trade-off and whether we should use a simpler but more restrictive rule. We solicit input from implementors and users of this specification as to whether the current run-time rule should be retained..
Editorial Note: Priority Feedback Request
The above constraint allows a complex type with an <all> model groups to restrict another complex type with either <all>, <sequence>, or <choice> model groups. Even when the base type has an <all> model group, the list of member elements and wildcard may be very different between the two types. The working group solicits feedback on how useful this is in practice, and on the difficulty in implementing this feature..
Editorial Note: Priority Feedback Request
The above rule allows an implementation to use a potentially non-conforming schema to perform schema assessment and produce PSVI. This results in an exception of rules specified in Errors in Schema Construction and Structure (§5.1). The Working Group solicits input from implementors and users of this specification as to whether this is an acceptable implementation behavior.
The following constraint defines a relation appealed to elsewhere in this specification.
xs:anyType·.
xsi:typeor ·substitution groups·), that the type used is actually derived from the expected type, and that that derivation does not involve a form of derivation.5) for the use of component
identifiers when importing one schema into another.
{attribute uses} is a set of given in this section.
attributeGroupElement Information Item
<attributeGroup
id = ID
name = NCName
ref = QName
{any attributes with non-schema namespace . . .}>
Content: (annotation?, ((attribute | attributeGroup)*, anyAttribute?))
</attributeGroup>
When an <attributeGroup> appears as a child of <schema> or <redefine>, it corresponds to an attribute group definition as below. When it appears as a child of <complexType> or <attributeGroup>, it does not correspond to any component as such. Attribute group definitions are identified by their {name} and {target namespace}; attribute group definition identities must be unique within an ·XSD schema·. See References to schema components across namespaces (<import>) (§4.2.5) for the use of component identifiers when importing one schema into another. The correspondences between the properties of the <group> element information item and the properties of the component it maps to are given in this section.
groupElement Information Item
<group
id = ID
maxOccurs = (nonNegativeInteger | unbounded) : 1
minOccurs = nonNegativeInteger : 1
name = NCName
ref = QName
{any attributes with non-schema namespace . . .}>
Content: (annotation?, (all | choice | sequence)?)
</group>
If there is a
name [attribute] (in which case the item will
have <schema> or <redefine> as parent), then the item maps
to a model group definition component with properties as
follows. See Annotations (§3.15) for information on the role of the {annotations} property.
The XML representation for a model group schema component is either an <all>, a <choice> or a <sequence> element information item. The correspondences between the properties of those information items and properties of the component they correspond to are given in this section.
allElement Information Item et al.
<all
id = ID
maxOccurs = 1 : 1
minOccurs = (0 | 1) : 1
{any attributes with non-schema namespace . . .}>
Content: (annotation?, (element | any)*)
<> [One-Unambiguous Regular Languages]. W allows the [namespace name] of E, as defined in the validation rule Wildcard allows Namespace Name (§3.10.4.3).
constraints.
The following constraints define relations appealed to elsewhere in this specification. Wildcard schema components provide for ·validation· of attribute and element information items dependent on their namespace names and optionally on their local names.
<xs:any <xs:any <xs:any <xs:any <xs:anyAttribute
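The following forms (the attribute values here are illustrative) show typical uses of the namespace, notNamespace, notQName and processContents attributes summarized in the syntax boxes below:
<xs:any namespace="##other" processContents="lax"/>
<xs:any notNamespace="##targetNamespace ##local" processContents="strict"/>
<xs:any notQName="##defined" processContents="skip"/>
<xs:anyAttribute namespace="##any" processContents="lax"/>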
The wildcard schema component has the following properties:
See Annotations (§3.15) for information on the role of the {annotations} property.
Editorial Note: Priority Feedback Request
The keywords defined and sibling allow a kind of wildcard which matches only elements not declared in the current schema or contained within the current complex type, respectively. They are new in this version of this specification. The Working Group is uncertain whether their value outweighs their liabilities; we solicit input from implementors and users of this specification as to whether they should be retained or not.
The XML representation for a wildcard schema component is an <any> or <anyAttribute> element information item.
anyElement Information Item
<any
id = ID
maxOccurs = (nonNegativeInteger | unbounded) : 1
minOccurs = nonNegativeInteger : 1
namespace = ((##any | ##other) | List of (anyURI | (##targetNamespace | ##local)) )
notNamespace = List of (anyURI | (##targetNamespace | ##local))
notQName = List of (QName | (##defined | ##definedSibling))
processContents = (lax | skip | strict) : strict
{any attributes with non-schema namespace . . .}>
Content: (annotation?)
</any>
anyAttributeElement Information Item
<anyAttribute
id = ID
namespace = ((##any | ##other) | List of (anyURI | (##targetNamespace | ##local)) )
notNamespace = List of (anyURI | (##targetNamespace | ##local))
notQName = List of (QName | ##defined)
processContents = (lax | skip | strict) : strict
{any attributes with non-schema namespace . . .}>
Content: (annotation?)
</anyAttribute>
An <any> information item
corresponds both to a wildcard component and to
a particle containing that wildcard
(unless
minOccurs=maxOccurs=0, in which case the
item corresponds to no component at
all).
The mapping rules are given in the following two subsections..:
If a value is present, its {identity-constraint category} must be key or unique.
Identity-constraint definitions are identified by their {name} and {target namespace};
identity-constraint
definition identities must be unique within an ·XSD schema·. See References to schema components across namespaces (
<import>) (§4.2.5) for the use of component
identifiers when importing one schema into another.
These constraints are specified along side the specification of types for the
attributes and elements involved, i.e. something declared as of type integer
can also serve as a key. Each constraint declaration has a name, which exists in a
single symbol space for constraints. The
equality and inequality
conditions
appealed to in checking these constraints apply to the
values of
the fields selected, not their
lexical representation, so that for example
3.0 and
3
would be conflicting keys if they were both
decimal, but non-conflicting if
they were both strings, or one was a string and one a decimal.
When equality and
identity differ for the simple types involved, all three
forms of identity-constraint test for equality, not identity,
of values.
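For example (all names are illustrative), the following key requires each product child of catalog to carry a distinct code:
<xs:element name="catalog" type="CatalogType">
  <xs:key name="productKey">
    <xs:selector xpath="product"/>
    <xs:field xpath="@code"/>
  </xs:key>
</xs:element>
If code is declared as xs:decimal, the values 3 and 3.0 conflict, because key values are compared as values; if code is a string, they do not.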
Overall the augmentations to XML's
ID/IDREF mechanism are:
{selector} specifies a restricted XPath ([XPath 2.0]) expression relative to instances of the element being declared. This must identify a sequence of element nodes that are contained within the declared element to which the constraint applies.
{fields} specifies XPath expressions relative to each element selected by a {selector}. Each XPath expression in the {fields} property.15) for information on the role of the {annotations} et al.
<unique
id = ID
name = NCName
ref = QName
{any attributes with non-schema namespace . . .}>
Content: (annotation?, (selector, field+)?)
</unique>
<key
id = ID
name = NCName
ref = QName
{any attributes with non-schema namespace . . .}>
Content: (annotation?, (selector, field+)?)
</key>
<keyref
id = ID
name = NCName
ref = QName
refer = QName
{any attributes with non-schema namespace . . .}>
Content: (annotation?, (selector, field+)?)
</keyref>
<selector
id = ID
xpath = a subset of XPath expression, see below
xpathDefaultNamespace = (anyURI | (##defaultNamespace | ##targetNamespace | ##local))
{any attributes with non-schema namespace . . .}>
Content: (annotation?)
</selector>
<field
id = ID
xpath = a subset of XPath expression, see below
xpathDefaultNamespace = (anyURI | (##defaultNamespace | ##targetNamespace | ##local))
{any attributes with non-schema namespace . . .}>
Content: (annotation?)
</field>
If the
ref [attribute] is absent,
the corresponding schema
component is as follows:
targetNamespace[attribute] of the <schema> ancestor element information item if present, otherwise ·absent·.
xpathas the designated expression [attribute].
xpathas the designated expression [attribute].
refer[attribute], otherwise ·absent·. | http://www.w3.org/TR/2009/WD-xmlschema11-1-20090130/ | CC-MAIN-2014-10 | en | refinedweb |
Since a few crypt(3) extensions allow different values, with different sizes in the salt, it is recommended to use the full crypted password as salt when checking for a password.
A simple example illustrating typical use:
import crypt, getpass, pwd def login(): username = | http://docs.python.org/3.2/library/crypt.html | CC-MAIN-2014-10 | en | refinedweb |
NAME
ng_h4 -- Netgraph node type that is also an H4 line discipline
SYNOPSIS
#include <sys/types.h>
#include <netgraph/bluetooth/include/ng_h4.h>
DESCRIPTION
The h4 node type is both a persistent Netgraph node type and a H4 line discipline. It implements a Bluetooth HCI UART transport layer as per chapter H4 of the Bluetooth Specification Book v1.1.
A new node is created when the corresponding line discipline, H4DISC, is registered on a tty device (see tty(4)). The node has a single hook called hook. Incoming bytes received on the tty device are re-assembled into HCI frames (according to the length). Full HCI frames are sent out on the hook. HCI frames received on hook are transmitted out on the tty device. No modification to the data is performed in either direction.
While the line discipline is installed on a tty, the normal read and write operations are unavailable, returning EIO. Information about the node is available via the netgraph ioctl(2) command NGIOCGINFO. This command returns a struct nodeinfo similar to the NGM_NODEINFO netgraph(4) control message.
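A minimal sketch of how the line discipline is typically attached from user space follows; the device path, the exact headers needed for H4DISC and TIOCSETD, and the error handling are assumptions, while H4DISC itself and the single hook named hook come from this page.
#include <sys/types.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>
#include <netgraph/bluetooth/include/ng_h4.h>

int
attach_h4(const char *dev)              /* e.g. "/dev/cuau0" (assumed path) */
{
        int fd, disc = H4DISC;

        fd = open(dev, O_RDWR | O_NOCTTY);
        if (fd < 0)
                return (-1);

        /* Install the H4 line discipline; this creates the h4 node. */
        if (ioctl(fd, TIOCSETD, &disc) < 0) {
                close(fd);
                return (-1);
        }

        /* The new node exposes a single hook named "hook". */
        return (fd);
}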
HOOKS
This node type supports the following hooks:
hook
        Single HCI frame contained in a single mbuf structure.
CONTROL MESSAGES
This node type supports the generic control messages, plus the following:
NGM_H4_NODE_RESET
        Reset the node.
NGM_H4_NODE_GET_STATE
        Returns current receiving state for the node.
NGM_H4_NODE_GET_DEBUG
        Returns an integer containing the current debug level for the node.
NGM_H4_NODE_SET_DEBUG
        This command takes an integer argument and sets the current debug level for the node.
NGM_H4_NODE_GET_QLEN
        Returns current length of the outgoing queue for the node.
NGM_H4_NODE_SET_QLEN
        This command takes an integer argument and sets the maximum length of the outgoing queue for the node.
NGM_H4_NODE_GET_STAT
        Returns various statistic information for the node, such as: number of bytes (frames) sent, number of bytes (frames) received and number of input (output) errors.
NGM_H4_NODE_RESET_STAT
        Reset all statistic counters to zero.
SHUTDOWN
This node shuts down when the corresponding device is closed (or the line discipline is uninstalled on the device).
SEE ALSO
ioctl(2), netgraph(4), tty(4), ngctl(8)
HISTORY
The h4 node type was implemented in FreeBSD 5.0.
AUTHORS
Maksim Yevmenkin <[email protected]>
BUGS
This node still uses spltty(9) to lock tty layer. This is wrong. | http://manpages.ubuntu.com/manpages/oneiric/man4/ng_h4.4freebsd.html | CC-MAIN-2014-10 | en | refinedweb |
The following is an updated version of the article "C++ Type traits" by John Maddock and Steve Cleary that appeared in the October 2000 issue of Dr Dobb's Journal.
Generic programming (writing code which works with any data type meeting a set of requirements) has become the method of choice for providing reusable code. However, there are times in generic programming when "generic" just isn't good enough - sometimes the differences between types are too large for an efficient generic implementation. This is when the traits technique becomes important - by encapsulating those properties that need to be considered on a type by type basis inside a traits class, we can minimize the amount of code that has to differ from one type to another, and maximize the amount of generic code.
Consider an example: when working with character strings, one common operation
is to determine the length of a null terminated string. Clearly it's possible
to write generic code that can do this, but it turns out that there are much
more efficient methods available: for example, the C library functions
strlen and
wcslen.
Class
char_traits is a classic
example of a collection of type specific properties wrapped up in a single
class - what Nathan Myers termed a baggage class[1]. In the Boost type-traits library,
we. As we will show,, namespace-qualification is omitted in most of the code samples
given.
There are far too many separate classes contained in the type-traits library
to give a full implementation here - see the source code in the Boost library
for the full details - however, most of the implementation is fairly repetitive
anyway, so here we will just give you a flavor for how some of the classes
are implemented. Beginning with possibly the simplest class in the library,
is_void<T> inherits
from
true_type
only if
T is
void.
template <typename T>
struct is_void : public false_type{};

template <>
struct is_void<void> : public true_type{};
Here we define a primary version of the template class
is_void,
and provide a full-specialization when
T
is
void. While full specialization
of a template class is an important technique, sometimes we need a solution
that is halfway between a fully generic solution, and a full specialization.
This is exactly the situation for which the standards committee defined partial
template-class specialization. As an example, consider the class
boost::is_pointer<T>:
here we needed a primary version that handles all the cases where T is not
a pointer, and a partial specialization to handle all the cases where T is
a pointer:
template <typename T>
struct is_pointer : public false_type{};

template <typename T>
struct is_pointer<T*> : public true_type{};
The syntax for partial specialization is somewhat arcane and could easily occupy an article in its own right; like full specialization, in order to write a partial specialization for a class, you must first declare the primary template. The partial specialization contains an extra <...> after the class name that contains the partial specialization parameters; these define the types that will bind to that partial specialization rather than the default template. Let's look at another example, the traits class remove_extent<T>. This
class defines a single typedef-member
type
that is the same type as T but with any top-level array bounds removed; this
is an example of a traits class that performs a transformation on a type:
template <typename T>
struct remove_extent
{ typedef T type; };

template <typename T, std::size_t N>
struct remove_extent<T[N]>
{ typedef T type; };
The aim of
remove_extent
is this: imagine a generic algorithm that is passed an array type as a template
parameter,
remove_extent
provides a means of determining the underlying type of the array. For example
remove_extent<int[4][5]>::type would evaluate to the type
int[5]. This example also shows that the number of
template parameters in a partial specialization does not have to match the
number in the default template. However, the number of parameters that appear
after the class name do have to match the number and type of the parameters
in the default template. As an example of how these traits can be put to work, consider an optimized version of std::copy: if the copy operation can be performed with memcpy, it will usually be faster than an element-by-element copy. In order to implement
copy in terms of
memcpy all
of the following conditions need to be met:
Iter1and
Iter2must be pointers.
Iter1and
Iter2must point to the same type - excluding const and volatile-qualifiers.
Iter1must have a trivial assignment operator.
By trivial assignment operator we mean that the type is either a scalar type[3] or a class type whose assignment operator is the one the compiler would generate: no user-defined assignment operator, no data members of reference type, and all base classes and non-static data members themselves having trivial assignment operators. The type-traits library provides a class has_trivial_assign, such that has_trivial_assign<T>::value is true only if T has a trivial assignment operator; for user-defined types it errs on the safe side unless explicitly specialised. The code for an optimized version of copy that uses memcpy where appropriate is given in the examples. The code begins by defining a template function
do_copy that performs a "slow but safe"
copy. The last parameter passed to this function may be either a
true_type
or a
false_type.
Following that there is an overload of do_copy that uses
memcpy:
this time the iterators are required to actually be pointers to the same type,
and the final parameter must be a
true_type.
Finally, the version of
copy
calls
do_copy, passing
has_trivial_assign<value_type>() as the final parameter: this will dispatch
to the optimized version where appropriate, otherwise it will call the "slow
but safe version".
It has often been repeated in these columns that "premature optimization is the root of all evil" [4]. So the question must be asked: was our optimization premature? To put this in perspective the timings for our version of copy compared a conventional generic copy[5] are shown in table 1.
Clearly the optimization makes a difference in this case; but, to be fair, the timings are loaded to exclude cache miss effects - without this accurate comparison between algorithms becomes difficult. However, perhaps we can add a couple of caveats to the premature optimization rule:
The optimized copy example shows how type traits may be used to perform optimization decisions at compile-time. Another important usage of type traits is to allow code to compile that otherwise would not do so unless excessive partial specialization is used. This is possible by delegating partial specialization to the type traits classes. Our example for this form of usage is a pair that can hold references [6].
First, let us examine the definition of
std::pair, omitting
the comparison operators, default constructor, and template copy constructor
for simplicity:
template <typename T1, typename T2>
struct pair
{
  typedef T1 first_type;
  typedef T2 second_type;

  T1 first;
  T2 second;

  pair(const T1 & nfirst, const T2 & nsecond)
    : first(nfirst), second(nsecond) { }
};
Now, this "pair" cannot hold references as it currently stands, because the constructor would require taking a reference to a reference, which is currently illegal [7]. Let us consider what the constructor's parameters would have to be in order to allow "pair" to hold non-reference types, references, and constant references:
A little familiarity with the type traits classes allows us to construct a single mapping that allows us to determine the type of parameter from the type of the contained class. The type traits classes provide a transformation add_reference, which adds a reference to its type, unless it is already a reference.
This allows us to build a primary template definition for pair that can contain non-reference types, reference types, and constant reference types, as sketched after this paragraph. Relying on add_reference in this way allows
us to define a single primary template that adjusts itself auto-magically to
any of these partial specializations, instead of a brute-force partial specialization
approach. Using type traits in this fashion allows programmers to delegate
partial specialization to the type traits classes, resulting in code that is
easier to maintain and easier to understand.
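A sketch of that primary template follows; the constructor parameter types are built with boost::add_reference as described above, and whether the published article used exactly this spelling (rather than, say, call_traits) is not shown in this extract:
#include <boost/type_traits.hpp>

template <typename T1, typename T2>
struct pair
{
  typedef T1 first_type;
  typedef T2 second_type;

  T1 first;
  T2 second;

  // add_reference leaves reference types alone and turns a non-reference
  // type T into const T&, giving the parameter types listed above.
  pair(typename boost::add_reference<const T1>::type nfirst,
       typename boost::add_reference<const T2>::type nsecond)
    : first(nfirst), second(nsecond) { }
};

// pair<int, int>, pair<int&, int&> and pair<const int&, const int&>
// can now all be constructed from this single primary template.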
We hope that in this article we have been able to give you some idea of what type-traits are all about. A more complete listing of the available classes are in the boost documentation, along with further examples using type traits. Templates have enabled C++ uses to take the advantage of the code reuse that generic programming brings; hopefully this article has shown that generic programming does not have to sink to the lowest common denominator, and that templates can be optimal as well as generic.
The authors would like to thank Beman Dawes and Howard Hinnant for their helpful comments when preparing this article. | http://www.boost.org/doc/libs/1_53_0_beta1/libs/type_traits/doc/html/boost_typetraits/background.html | CC-MAIN-2014-10 | en | refinedweb |
In my previous articles on CodeProject.com, I have explained the fundamentals of Windows Communication Foundation (WCF), LINQ, LINQ to SQL, Entity Framework, and LINQ to Entities, including:
All of the above articles are based on .NET 4.0/Visual Studio 2010, so if .NET 4.0/Visual Studio 2010 is good for you, you can open and read any of them. For those who are learning WCF/LINQ/EF 4.5 with Visual Studio 2012, the above articles are outdated. So from now on, I will update these articles with .NET 4.5/Visual Studio 2012.
The following articles are the updated ones for .NET 4.5/Visual Studio 2012. I will update the links when I have finished them.
In this article, we will implement a basic WCF 4.5 service from scratch. We will build a HelloWorld WCF service by carrying out the following steps:
Before we can build the WCF service, we need to create a solution for our service project. We also need a directory in which we will save all the files. Throughout this article, we will save our project source codes in the C:\SOAWithWCFandLINQ\Projects directory. We will have a subfolder for each solution we create, and under this solution folder, we will have one subfolder for each project.
For this HelloWorld solution, the final directory structure is shown in the following image:
You don't need to manually create these directories with Windows Explorer; Visual Studio will create them automatically when you create the solutions and projects.
Now follow these steps to create our first solution and the HelloWorld project:
You may have noticed that there is already a template for WCF Service Application in Visual Studio 2012 (actually there are a few WCF templates in Visual Studio). For this very first example, we will not use this template. Instead, we will create everything by ourselves to understand the purpose of each step. This is an excellent way for you to understand and master this new technology. In the next chapter of my new book (see bottom of this article for more details), we will use this template to create the project, so we don't need to manually type a lot of code.
Once you click OK, Visual Studio creates the solution and project:
We have now created a new solution and project. Next, we will develop and build this service. But before we go any further, we need to do two things to this project:
Click on the Show All Files button on the Solution Explorer toolbar, as shown in the previous image. Clicking on this button will show all files and directories in your hard disk under the project folder, even those items that are not included in the project. Make sure that you don't have the solution item selected. Otherwise you can't see the Show All Files button.
Lastly, in order to develop a WCF service, we need to add a reference to the System.ServiceModel assembly.
In the previous section, we created the solution and the project for the HelloWorld WCF service. From this section on, we will start building the HelloWorld WCF service. First, we need to create the service contract interface.
Now an empty service interface file has been added to the project, which we are going to use as service interface.
Follow the steps below to customize it.
using
using System.ServiceModel;
[ServiceContract]
[OperationContract]
string GetMessage(string name);
The final content of the file, IHelloWorldService.cs, should look like the following:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.ServiceModel;
namespace HelloWorldService
{
[ServiceContract]
public interface IHelloWorldService
{
[OperationContract]
string GetMessage(string name);
}
}
Now that we have defined a service contract interface, we need to implement it. For this purpose we will reuse HelloWorldService.cs.
Visual Studio is smart enough to change all the related files which are references to use this new name. You can also select the file and change its name from the Properties window.
Next, follow the steps below to customize this class file.
Class1
HelloWorldService
IHelloWorldService
public class HelloWorldService: IHelloWorldService
GetMessage
public string GetMessage(string name)
{
return "Hello world from " + name + "!";
}
The final content of the file, HelloWorldService.cs, should look like the following:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace HelloWorldService
{
    public class HelloWorldService : IHelloWorldService
    {
        public string GetMessage(string name)
        {
            return "Hello world from " + name + "!";
        }
    }
}
If the solution does not build and the compiler complains about the ServiceModel namespace, it is because you didn't add the System.ServiceModel namespace reference correctly. Revisit the previous section to add this reference, and you are all set.
Next, we will host this WCF service in an environment and create a client application to consume it.
HelloWorldService is a class library. It has to be hosted in an environment so that client applications may access it. In this section, we will learn how to host it using IIS Express. Later, in the next chapter, we will discuss more hosting options for a WCF service.
There are several built-in host applications for WCF services within Visual Studio 2012. However, in this section, we will manually create the host application so that you can have a better understanding of what a hosting application is really like under the hood. In subsequent chapters, we will learn and use the built-in hosting application.
To host the library using IIS Express, we need to add a new website to the solution. Follow these steps to create this website:
If you cannot find the template ASP.NET Empty Web Site, make sure you have chosen New Web Site, not New Project, in the previous step.
Now we can run the website inside IIS Express. If you start the website, HostDevServer, by pressing Ctrl + F5 or by selecting Debug | Start Without Debugging… in the menu, you will see an empty website in your browser with an error.
If you pressed F5 (or selected Debug | Start Debugging from the menu), you may see a dialog saying, Debugging Not Enabled (as shown below). Choose the option, Run without debugging (equivalent to Ctrl + F5) and click on the OK button to continue. We will explore the debugging options of a WCF service in next chapter. Until then we will continue to use Ctrl + F5 to start the website without debugging.
At this point, you should have the HostDevServer site up and running. This site is actually running inside IIS Express. IIS Express is a lightweight, self-contained version of IIS optimized for developers. This web server is intended to be used by developers only and has functionality similar to that of the Internet Information Services (IIS) server. It also has some limitations, for example, it only supports the HTTP and HTTPS protocols. When a new website is created within Visual Studio, IIS Express will automatically assign a port for it. You can find your website’s port in the Properties of your website, like in the diagram shown below.
IIS Express is normally started from within Visual Studio when you need to debug or unit test a web project. If you really need to start it from outside Visual Studio, you can use a command line statement in the following format:
C:\Program Files\IIS Express\iisexpress /path:c:\myapp\ /port:1054 /clr:v4.0
For our website, the statement should be like this:
"C:\Program Files\IIS Express\iisexpress"
/path:C:\SOAwithWCFandLINQ\Projects\HelloWorld\HostDevServer /port:1054 /clr:v4.0
iisexpress.exe is located under your Program Files directory. On an x64 system, it should be under your Program Files (x86) directory.
Although we can start the web site now, it is only an empty site. Currently, it does not host our HelloWorldService. This is because we haven't specified which service this web site should host, or an entry point for this web site.
To specify which service our web site will host, we can add a svc file to the web site. Since .NET 4.0, we can also use file-less (svc-less) activation service to accomplish this. In this section, we will take the file-less approach to specify the service (please refer to my previous article to see how to do it with a real svc file).
Now let’s modify the web.config file of the website to host our HelloWorldService WCF service. Open the web.config file of the website and change it to be like this:
<?xml version="1.0"?>
<!--
For more information on how to configure your ASP.NET application, please visit
<a href="">
-->
<configuration>
<system.web>
<compilation debug="false" targetFramework="4.5"/>
<httpRuntime targetFramework="4.5"/>
</system.web>
<system.serviceModel>
<serviceHostingEnvironment >
<serviceActivations>
<add factory="System.ServiceModel.Activation.ServiceHostFactory"
relativeAddress="~/HostDevServer/HelloWorldService.svc"
service="HelloWorldService.HelloWorldService"/>
</serviceActivations>
</serviceHostingEnvironment>
<behaviors>
<serviceBehaviors>
<behavior>
<serviceMetadata httpGetEnabled="true"/>
</behavior>
</serviceBehaviors>
</behaviors>
</system.serviceModel>
</configuration>
Note that the system.serviceModel node is the only code we have added to the config file.
Now, if you start the website by pressing Ctrl + F5 (don't use F5 or the menu option Debug | Start Debugging until we discuss these later), you will still see the same empty website with the same error. However, at this time we have a service hosted within this website, so just append HostDevServer/HelloWorldService.svc to the site address and you will get the description of this service, that is, how to get the wsdl file of this service and how to create a client to consume this service. You should see a page similar to the following one.
Now that we have successfully created and hosted a WCF service, we need a client to consume the service. We will create a C# client application to consume HelloWorldService.
In this section, we will create a Windows console application to call the WCF service. Later in this book, we will create other types of applications to test our other WCF services, such as a WinForms application and a WPF application.
First, we need to create a console application project and add it to the solution. Follow these steps to create the console application:
In order to consume a non-RESTful WCF service, we first need to generate a proxy class and a client configuration file. Run SvcUtil.exe with the following command to generate those two files:
"C:\Program Files\Microsoft SDKs\Windows\v8.0A\bin\NETFX 4.0 Tools\SvcUtil.exe" /out:HelloWorldServiceRef.cs /config:app.config
You will see an output similar to that shown in the following screenshot:
HelloWorldServiceClient
Inside the configuration file, you will see the definitions of HelloWorldService such as the endpoint address, binding, timeout settings, and security behaviors of the service.
You can also create a proxy dynamically at runtime, or call the service through a Channel Factory instead of a proxy. Beware if you go with the Channel Factory approach, you may have to share your interface DLL with the clients.
In a later chapter, we will learn how to generate the proxy and configuration files through Visual Studio when we add a reference to a WCF service (Visual Studio really just calls the same command line tool SvcUtil.exe to do the work).
Before we can run the client application, we still have some more work to do. Follow these steps to finish the customization:
HelloWorldServiceClient client = new HelloWorldServiceClient();
Using the default constructor on the HelloWorldServiceClient means that the client runtime will look for the default client endpoint in the file app.config, which was generated earlier by SvcUtil.
Then we can call its method just as we would do for any other object:
Console.WriteLine(client.GetMessage("Mike Liu"));
Pass your name as the parameter to the GetMessage method so that it prints out a message for you.
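Putting those pieces together, a minimal Program.cs for the console client might look like the sketch below. This assumes the generated HelloWorldServiceRef.cs and app.config have been added to the HelloWorldClient project and that no namespace option was passed to SvcUtil; if you specified one, add a using directive for that namespace.

using System;

class Program
{
    static void Main(string[] args)
    {
        // The default constructor picks up the default endpoint from app.config,
        // which SvcUtil generated alongside the proxy class.
        HelloWorldServiceClient client = new HelloWorldServiceClient();

        // Call the service operation and print the returned message.
        Console.WriteLine(client.GetMessage("Mike Liu"));

        // Close the proxy to release the underlying channel.
        client.Close();
    }
}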
We are now ready to run this client program.
First, make sure the service host application HostDevServer has been started. If you previously stopped it, start it now (you need to set HostDevServer as the startup project and press Ctrl + F5 to start it in non-debugging mode), or you can just right-click on the project HostDevServer and select “View in Browser (Internet Explorer)” from the context menu.
Then, from Solution Explorer, right-click on the project, HelloWorldClient, select Set as StartUp Project, and then press Ctrl + F5 to run it.
You will see output as shown in the following image:
Because we know we have to start the service host application before we run the client program, we can make some changes to the solution to automate this task, that is, to automatically start the service immediately before we run the client program.
To do this, in Solution Explorer, right-click on Solution, select Properties from the context menu, and you will see the Solution 'HelloWorld' Property Pages dialog box.
On this page, first select the option button, Multiple startup projects. Then change the action of HostDevServer to Start without debugging. Change HelloWorldClient to the same action. Note HostDevServer must be above HelloWorldClient. If it is not, use the arrows to move it to the top.
Now to test it, first stop the service, and then press Ctrl + F5. You will notice that HostDevServer is started first, and then the client program runs without errors.
Note that this will only work inside Visual Studio IDE. If you start the client program from Windows Explorer (C:\SOAWithWCFandLINQ\Projects\HelloWorld\HelloWorldClient\bin\Debug\HelloWorldClient.exe) without first starting the service, the service won't get started automatically and you will get an error message saying 'There was no endpoint listening at'.
In this article, we have implemented a basic WCF service, hosted it within IIS Express, and created a command line program to reference and consume this basic WCF service. At this point, you should have a thorough understanding as to what a WCF service is under the hood. You will benefit from this when you develop WCF services using Visual Studio WCF templates or automation guidance packages. In the next chapter, we will explore more hosting options and discuss how to debug a WCF service. The book mentioned at the beginning of this article is ideal for beginners who want to learn how to build scalable, powerful, easy-to-maintain WCF services. It is rich with example code, clear explanations, interesting examples, and practical advice. It is a truly hands-on book for C++ and C# developers.
You don't need to have any experience.
| http://www.codeproject.com/Articles/531332/Implementing-a-Basic-Hello-World-WCF-Service-v4-5?msg=4501416 | CC-MAIN-2014-10 | en | refinedweb
Python FFI with ctypes and cffi
March 9th, 2013 at 5:41 am
In a previous post, I demonstrated how to use libffi to perform fully dynamic calls to C code, where "fully dynamic" means that even the types of the arguments and return values are determined at runtime.
Here I want to discuss how the same task is done from Python, both with the existing stdlib ctypes package and the new cffi library, developed by the PyPy team and a candidate for inclusion into the Python stdlib in the future.
With ctypes
I’ll start with the shared object discussed before; the following code loads and runs it in Python using ctypes. I tested it on Python 3.2, but other versions should work too (including 2.7):
from ctypes import cdll, Structure, c_int, c_double, c_uint

lib = cdll.LoadLibrary('./libsomelib.so')
print('Loaded lib {0}'.format(lib))

# Describe the DataPoint structure to ctypes.
class DataPoint(Structure):
    _fields_ = [('num', c_int), ('dnum', c_double)]

# Initialize the DataPoint[4] argument. Since we just use the DataPoint[4]
# type once, an anonymous instance will do.
dps = (DataPoint * 4)((2, 2.2), (3, 3.3), (4, 4.4), (5, 5.5))

# Grab add_data from the library and specify its return type.
# Note: .argtypes can also be specified
add_data_fn = lib.add_data
add_data_fn.restype = DataPoint

print('Calling add_data via ctypes')
dout = add_data_fn(dps, 4)
print('dout = {0}, {1}'.format(dout.num, dout.dnum))
This is pretty straightforward. As far as dynamic language FFIs go, ctypes is pretty good. But we can do better. The main problem with ctypes is that we have to fully repeat the C declarations to ctypes, using its specific API. For example, see the description of the DataPoint structure. The return type should also be explicitly specified. Not only is this a lot of work for wrapping non-trivial C libraries, it’s also error prone. If you make a mistake translating a C header to a ctypes description, you will likely get a segfault at runtime which isn’t easy to debug without having a debug build of Python available. ctypes allows us to explicitly specify argtypes on a function for some measure of type checking, but this is only within the Python code – given that you got the declaration right, it will help with passing the correct types of objects. But if you didn’t get the declaration right, nothing will save you.
How does it work?
ctypes is a Python wrapper around libffi. The CPython project carries a version of libffi with it, and ctypes consists of a C extension module linking to libffi and Python code for the required glue. If you understand how to use libffi, it should be easy to see how ctypes works.
While libffi is quite powerful, it also has some limitations, which by extension apply to ctypes. For example, passing unions by value to dynamically-loaded functions is not supported. But overall, the benefits outweigh the limitations, which are not hard to work around when needed.
With cffi
cffi tries to improve on ctypes by using an interesting approach. It allows you to avoid re-writing your C declarations in ctypes notation, by being able to parse actual C declarations and inferring the required data types and function signatures automatically. Here’s the same example implemented with cffi (tested with cffi 0.5 on Python 3.2):
from cffi import FFI

ffi = FFI()
lib = ffi.dlopen('./libsomelib.so')
print('Loaded lib {0}'.format(lib))

# Describe the data type and function prototype to cffi.
ffi.cdef('''
typedef struct {
    int num;
    double dnum;
} DataPoint;

DataPoint add_data(const DataPoint* dps, unsigned n);
''')

# Create an array of DataPoint structs and initialize it.
dps = ffi.new('DataPoint[]', [(2, 2.2), (3, 3.3), (4, 4.4), (5, 5.5)])

print('Calling add_data via cffi')
# Interesting variation: passing invalid arguments to add_data will trigger
# a cffi type-checking exception.
dout = lib.add_data(dps, 4)
print('dout = {0}, {1}'.format(dout.num, dout.dnum))
Instead of tediously describing the C declarations to Python, cffi just consumes them directly and produces all the required glue automatically. It’s much harder to get things wrong and run into segfaults.
Note that this demonstrates what cffi calls the ABI level. There’s another, more ambitious, use of cffi which uses the system C compiler to auto-complete missing parts of declarations. I’m just focusing on the ABI level here, since it requires no C compiler. How does it work? Deep down, cffi also relies on libffi to generate the actual low-level calls. To parse the C declarations, it uses pycparser.
Another cool thing about cffi is that being part of the PyPy ecosystem, it can actually benefit from PyPy’s JIT capabilities. As I’ve mentioned in a previous post, using libffi is much slower than compiler-generated calls because a lot of the argument set-up work has to happen for each call. But once we actually start running, in practice the signatures of called functions never change. So a JIT compiler could be smarter about it and generate faster, more direct calls. I don’t know whether PyPy is already doing this with cffi, but I’m pretty sure it’s in their plans.
A more complex example
I want to show another example, which demonstrates a more involved function being called – the POSIX readdir_r (the reentrant version of readdir). This example is based on some demo/ code in the cffi source tree. Here’s code using ctypes to list the contents of a directory:
from ctypes import (CDLL, byref, Structure, POINTER, c_int,
                    c_void_p, c_long, c_ushort, c_ubyte,
                    c_char, c_char_p, c_void_p)

# CDLL(None) invokes dlopen(NULL), which loads the currently running
# process - in our case Python itself. Since Python is linked with
# libc, readdir_r will be found there.
# Alternatively, we can just explicitly load 'libc.so.6'.
lib = CDLL(None)
print('Loaded lib {0}'.format(lib))

# Describe the types needed for readdir_r.
class DIRENT(Structure):
    _fields_ = [('d_ino', c_long),
                ('d_off', c_long),
                ('d_reclen', c_ushort),
                ('d_type', c_ubyte),
                ('d_name', c_char * 256)]

DIR_p = c_void_p
DIRENT_p = POINTER(DIRENT)
DIRENT_pp = POINTER(DIRENT_p)

# Load the functions we need from the C library. Specify their
# argument and return types.
readdir_r = lib.readdir_r
readdir_r.argtypes = [DIR_p, DIRENT_p, DIRENT_pp]
readdir_r.restype = c_int
opendir = lib.opendir
opendir.argtypes = [c_char_p]
opendir.restype = DIR_p
closedir = lib.closedir
closedir.argtypes = [DIR_p]
closedir.restype = c_int

# opendir's path argument is char*, hence bytes.
path = b'/tmp'
dir_fd = opendir(path)
if not dir_fd:
    raise RuntimeError('opendir failed')

dirent = DIRENT()
result = DIRENT_p()

while True:
    # Note that byref() here is optional since ctypes can do it on its
    # own by observing the argtypes declared for readdir_r. I keep byref
    # for explicitness.
    if readdir_r(dir_fd, byref(dirent), byref(result)):
        raise RuntimeError('readdir_r failed')
    if not result:
        # If (*result == NULL), we're done.
        break
    # dirent.d_name is char[], hence we decode it to get a unicode
    # string.
    print('Found: ' + dirent.d_name.decode('utf-8'))

closedir(dir_fd)
Here I went one step farther and actually described the required argument types for imported functions. Once again, this only helps us avoid errors to some extent. You’ll have to agree that the code is tedious. Using cffi, we can just "copy paste" the C declarations and focus on actual calling:
from cffi import FFI

ffi = FFI()
ffi.cdef("""
    typedef void DIR;
    typedef long ino_t;
    typedef long off_t;

    struct dirent {
        ino_t          d_ino;       /* inode number */
        off_t          d_off;       /* offset to the next dirent */
        unsigned short d_reclen;    /* length of this record */
        unsigned char  d_type;      /* type of file; not supported
                                       by all file system types */
        char           d_name[256]; /* filename */
    };

    DIR *opendir(const char *name);
    int readdir_r(DIR *dirp, struct dirent *entry, struct dirent **result);
    int closedir(DIR *dirp);
""")

# Load symbols from the current process (Python).
lib = ffi.dlopen(None)
print('Loaded lib {0}'.format(lib))

path = b'/tmp'
dir_fd = lib.opendir(path)
if not dir_fd:
    raise RuntimeError('opendir failed')

# Allocate the pointers passed to readdir_r.
dirent = ffi.new('struct dirent*')
result = ffi.new('struct dirent**')

while True:
    if lib.readdir_r(dir_fd, dirent, result):
        raise RuntimeError('readdir_r failed')
    if result[0] == ffi.NULL:
        # If (*result == NULL), we're done.
        break
    print('Found: ' + ffi.string(dirent.d_name).decode('utf-8'))

lib.closedir(dir_fd)
I placed "copy paste" in quotes on purpose, because the man page for readdir_r doesn’t fully specify all the typedef declarations inside struct dirent. For example, you need to do some digging to discover that ino_t is long. cffi‘s other goal, the API level, is to enable Python programmers to skip such declarations and let the C compiler complete the details. But since this requires a C compiler, I see it as a very different solution from the ABI level. In fact, it’s not really a FFI at this point, but rather an alternative way for writing C extensions to Python code.
March 9th, 2013 at 10:40
IMHO one should use some C-header-to-ctypes converter to do the job (e.g. wraptypes from pyglet or ctypesgen, just to mention some), but the DRY approach taken by cffi is really interesting. Let's see how it is going to evolve…
March 10th, 2013 at 00:41
Once you start using a compiler, you might as well go with Cython which lets you “cimport” from C-header files in a more pythonic way than copying stuff out of header files into strings in python modules. With both ctypes and Cython available, it’s hard to see the gap that cffi fills.
March 28th, 2013 at 18:34
Even with Cython you still have to “cdef” every function from the header files that you intend to actually use. Cython won’t parse the headers for you. | http://eli.thegreenplace.net/2013/03/09/python-ffi-with-ctypes-and-cffi/ | CC-MAIN-2014-10 | en | refinedweb |
I run my app for Android, which I made with Flash CS 5.5 and AIR 3.0, on my emulator. Why do I see only a black screen on my emulator?
How are you installing the apk into the emulator?
I have the SDK emulator 5554 with Android 2.2. I can see my app when I choose launcher icon -> Settings -> Applications -> Manage applications, under Downloaded. I installed my app from the command line (cmd.exe) with the adb command: adb -e install myapp.apk
Also, I run my app on my emulator from Flash Pro CS 5.5 via File -> AIR Android Settings, where I choose the emulator release and then publish.
Have you quick published for device debugging from flash before? Have you ever made any other apps or is this your first?
It requires Adobe AIR installed on an Android so you will need to make sure you're putting the captive runtime in your APK or it will not run.
I have installed the Adobe AIR .apk file on the emulator, but I can't download and run any other app.
Here's a basic link from Adobe on installing AIR although it's not telling you where to get it from, so it will install whatever version of the SDK you have: 9cd0cb-7ff6.html
Your problem is your Android Emulator does not have "Adobe AIR Runtime" installed. For Android, your .apk file does not contain the Adobe AIR Runtime. So you need to get Adobe AIR Runtime installed into your Emulator or your apk will never run.
Captive Runtime is an option in Flash Builder when exporting an APK file. What it means is it embeds an Adobe AIR Runtime directly into the APK. I'm not certain if Flash Pro supports this option but if it does it will be in publish settings. You will notice your apk file size goes up drastically once you install the captive runtime. If you install it, your app will work without needing to install Adobe AIR Runtime on the Android Emulator.
Here's another link on this issue:
I can't find the apk file. Where can I find and download the Adobe AIR runtime .apk file?
I found the AIR runtime 2.7 on this site, but Adobe Flash CS has the runtime.apk (3.1) in the setup folder under Program Files, and neither of them works. Must I run another Android emulator, 2.1 or 1.8 or lower?
If you read the links I posted, the AIR SDK you have installed has a runtime it will install for you on your device.
Connect your device and make sure it's visible to the SDK, then run this line:
adt -installRuntime -platform android
It will install the AIR runtime that is located in your AIR SDK in this path:
AIRSDK\runtimes\air\android\emulator
I don't have a device, only the emulator, and I'd like to run my app on the emulator on my PC. Also, I have installed the Runtime.apk (for the emulator) on the emulator with Android 2.2. Must I run AIR before I run my app on the emulator?
You can run that to install it into emulator, the device isn't necessary just optional.
How can I include it into the emulator without downloading it?
Have you run the command I listed above?
adt -installRuntime -platform android
That should install the SDK into the emulator itself as mentioned in the links provided. If you have Android SDK version 3.0 then it should install AIR 3.0 Runtime into your emulator.
I'd suggest upgrading to AIR SDK 3.2 if you want some of the new 3d goodies.
edit:
Having the latest Android SDK helps too:
Also if your emulator is not running (run adb to see devices connected) then it will not be able to install it. By saying "make sure you have your device connected" (for clarity) I mean "run your emulator because it is considered a device".
Where is adt.exe? Must I run it with cmd.exe? I only have adb.exe in this location C:\android-sdk-windows\platform-tools, and now I have extracted AdobeAIRSDK.zip to the C:\AdobeAIRSDK location.
adt comes with any AIR SDK you have installed (there's one in Flash Pro and one in Flash Builder or you can download the latest from the link I gave).
Don't get confused. The ADT tool is for Adobe AIR, the Android Emulator adb tool is for the Android Emulator. They're 2 completely different SDKs. What I mentioned (adb) is in the Android SDK not the AIR SDK.
Yes adb is installed in the platform-tools folder and yes you should be in a cmd prompt for ease. In the command prompt just type "adb devices" and it will tell you any devices (including emulators) you have running. Before you run it, you should have the emulator open running on a device profile you already set up via the AVD Manager.
It's worth mentioning, if you already haven't, that you should run the SDK Manager.exe and make sure you have already installed the version of Android platform you want to target. Then when you run the AVD Manager you can select that platform to run the emulator on. Then 'adb devices' will see it as a connected device.
After all of that, run the command from the AIR SDK bin folder:
adt -installRuntime -platform android
That should install the AIR runtime on what it thinks is your connected device (the emulator). After install your APKs should work.
With cmd.exe and the adb command in this folder - C:\android-sdk-windows\platform-tools -
I see my Emulator-5554 device.
With cmd.exe and the adt command "adt -installRuntime -platform android" in this folder - C:\AdobeAIRSDK\bin -
it cannot install and tells me the following message:
Failure [INSTALL_FAILED_OLDER_SDK]
I tried it two times.
Moreover, I have targeted Android 2.1, platform 2.1, API Level 7, with an 80 MB SD card.
Also, this location C:\AdobeAIRSDK\bin includes adl.exe.
What happened?
Sometimes a picture is needed. Getting your device running in the emulator with a profile installed is key here.
What you see me have up is AVD. I installed the 2.3.3 platform, for example. I made a "new" device using it. I pressed the Start button and my Emulator pops up.
Then in the DOS window above you can see me navigate to my Flash Builder install (which has AIR 3.2 overlayed) and I'm in the bin folder. I run the command as I've mentioned. It takes about 3 minutes to complete and you see no errors.
I then change back to my AndroidSDK and go in platform tools and run "adb devices". You can see I have both my phone AND the Android Emulator running as devices. Yet the adt command still worked perfectly and installed in the Emulator as you see in the picture.
Look at the Emulator screenshot. After I ran the adt command, I now have Adobe AIR installed in the emulator.
Hope that helps.
I don't have Flash Builder; I have Adobe Flash CS 5.5, and I don't have an SDK folder. Must I put the downloaded AdobeAIRSDK into Flash Pro CS 5.5?
Flash CS5.5 comes preloaded with AIR 2.6. The location is here:
C:\Program Files (x86)\Adobe\Adobe Flash CS5.5\AIR2.6\bin
But 2.6 is very old. You should overlay AIR 3.2 into Flash CS5.5 for best results. Here's a link to explain this process: ional.html
Don't worry that the tutorial doesn't say the latest version of AIR 3.2, the process is pretty much identical except you change the version of AIR they mentioned (the namespace, folder name, etc) to 3.2 instead of 3.0.
Otherwise:
All you really need to do is download the latest AIR SDK with the link mentioned above and extract it anywhere on your system. You do not need to have AIR3.2 installed in Flash Pro (although I'd recommend it). It doesn't matter where you stick the AIR SDK files at all. I just happen to have 3.2 overlayed in Flash Builder as that's what I use instead of Flash Pro (but I have both).
I have done this, but the bin folder has only 2 files: adl.exe and adt.bat.
On Android 2.1 I have installed an AIR apk file, but it can't run my app. When I run my app I see a black screen.
Lets step back and make one thing clear.
AndroidSDK and AIR SDK are 2 different SDKs.
They are separate.
AIR SDK has adt and adl.
Android SDK has adb, AVD Manager and SDK Manager.
You need to have both of them.
Here's a picture of my overlayed AIR 3.2 on Flash Pro CS5.5 as well as a completely separate folder for my AndroidSDK. You can see the executables I am talking about, nested in their proper separate SDKs.
You can see how the top 2 Windows Explorers show the AndroidSDK. You can see adb as well as SDK Manager and AVD Manager.
You want to go to the location you see in the top 2 to set up your emulator to run.
The bottom Windows Explorer shows this path for me:
C:\Program Files (x86)\Adobe\Adobe Flash CS5.5\AIR2.6
This is the AIR SDK. Regardless that the name of the folder says AIR2.6, the instructions above tell you to overlay AIR 3.2 into this same folder. So this is actually my AIR 3.2 SDK. In there you can see clearly there is adt and adl.
You need to keep in mind which SDK has which executables.
If you do not follow the depicted, verbose instructions here to install the AIR SDK in your emulator and you are not producing an .apk that embeds AIR (captive runtime) then you will only ever see a black screen and your APK will never work.
I understand, and I have both SDKs. OK, you are very clear. What can I put in this location C:\Program Files\Adobe\Adobe Flash CS5.5\AIR2.6\bin
to install to my emulator?
I put adl and adt in the Android SDK -> .... -> platform-tools folder,
ran the command line adt -installRuntime........ and it tells me it has no access to the adt.jar file in android sdk platform tools\...\bin.
In the top 2 images, the "android sdk" has the emulator.
The androind SDK emulator is not a part of Flash Pro. It is separate, yet they integrate.
You run your emulator from the AndroidSDK first, then Flash Pro can see it as a device and thus let you run on the emulator publish option.
Here's a picture. I installed AIR as I outlined above. I ran the emulator from AndroidSDK's AVD Manager. So before I selected to export for emulator I already had the emulator running.
I then just made a quick fake certificate so I could publish and pressed publish. It found the emulator, ran it, and I see my text (not a black screen).
You can clearly see I have a flash document with nothing but publish for AIR for Android being used, my emulator up on the right, I choose to export to Emulator and check to install and launch on the device. I click Publish and what do you see in the emulator? Not a black screen, you see my text.
okokokoko thanks a looooooot......!!!!!!!!!
Before this, I had done the following: I put adb.exe and the two DLLs (AdbWinUsbApi.dll and AdbWinApi.dll) into this location C:\Program Files\Adobe\Adobe Flash CS5.5\AIR2.6\bin,
the bin folder of the overlaid AIR in Adobe Flash CS 5.5 (and the lib folder which includes the dx.jar file, but I suppose that's not important). Then I ran cmd.exe in AIR2.6\bin, wrote the command adt -installRuntime -platform android, and it successfully installed AIR on my emulator (20.70 MB). Then I installed my app with the command adb -e install myapp.apk, with success. Now it asks me to write my IP address, but it is not needed, because when I choose Cancel, my app runs.
Are you all set now? If so please mark the question as answered. Thanks
Yes, of course. I have another question: how can I exit my app? What should I write in ActionScript? I put a button on the third frame, and I'd like to go to the first frame and exit the app with this button, or exit the app so that it starts on the first frame when I run it again.
And what can open my mail client to write an e-mail? The emulator says "unsupported action". I hope it is possible on a real phone.
Exiting on Android? Run this:
NativeApplication.nativeApplication.exit();
For mailto you can try the old:
navigateToURL(new URLRequest("mailto:[email protected]"));
Yes, for Android. If the name of the button is ExitB, must I write ExitB.NativeApplication.nativeApplication.exit(); ?
OK, I made the buttons and wrote the code in the functions of the instance objects. Why is my picture in every frame smaller than my background when I run my app on the emulator?
I have the background at the default size, and the picture in every frame is smaller on the emulator; in the .fla file and the .swf file that I run, everything is OK.
The "name" is a tricky subject. You can only access an object on the display list by a quick name if it is placed in your timeline from your library and you assign it an instance name of ExitB. What you name it in the library is meaningless. Just to get that straight.
I'll paste an old image of what an instance name is (in the properties panel) when you have something selected that is a movieclip on stage. This picture is for another thread but you'll get the general idea.
In the case it is as you say and your instance name is ExitB then you would write a listener like so:
ExitB.addEventListener(MouseEvent.CLICK, function(e:MouseEvent):void { NativeApplication.nativeApplication.exit(); });
As far as the pictures being too small you'll need to give more information on this. Do remember the emulator tries to mimic android devices but androids have the unique issue of unknown "form factor". It means any developer can take their phone, give it any resolution screen they want and run the OS. So your app is going to need to get ready to sniff out the available resolution and resize your interface to suit it.
If you want, answer again in my second discussion with the theme "Basics stage and background", so I can give you another correct answer.
I didn't give an instance name to any picture. Must I do that to control the image? Also, on the emulator I get the "unknown form factor" issue you mentioned.
Of course, Pro PHP Programming gives a thorough survey of PHP post-5.3. You'll begin by working through an informative survey and clear guide to object-oriented PHP. Then, you'll be set for the core of the book on modern PHP applications. Now, you'll be able to start with the chapter on PHP for mobile programming and move on to sampling social media applications. You'll also be guided through new PHP programming language features like closures and namespaces. | http://www.apress.com/catalog/product/view/id/4910/s/9781430235606/category/1596/?___SID=U | CC-MAIN-2014-10 | en | refinedweb
Re: Java to write a blob to disk, does any one have the java to read a blob from disk
Your Java example is used to get data from PL/SQL to disk. If you MUST use java to go the other way, I cannot help. If you want to get from a disk file to a PL/SQL blob there is no need for java - use BFILE.
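For reference, a minimal sketch of that BFILE approach looks like this (the directory object BLOB_DIR, the table my_table and its columns are only placeholders):

-- Assumes a directory object pointing at the OS folder holding the file:
--   CREATE DIRECTORY blob_dir AS 'c:\temp';
DECLARE
  l_bfile BFILE := BFILENAME('BLOB_DIR', 'blob.txt');
  l_blob  BLOB;
BEGIN
  INSERT INTO my_table (id, data)
  VALUES (1, EMPTY_BLOB())
  RETURNING data INTO l_blob;

  DBMS_LOB.OPEN(l_bfile, DBMS_LOB.LOB_READONLY);
  DBMS_LOB.LOADFROMFILE(l_blob, l_bfile, DBMS_LOB.GETLENGTH(l_bfile));
  DBMS_LOB.CLOSE(l_bfile);
  COMMIT;
END;
/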
Garry Gillies
Database Administrator
Business Systems
Weir Pumps Ltd
149 Newlands Road, Cathcart, Glasgow, G44 4EX
T: +44 0141 308 3982 F: +44 0141 633 1147 E: g.gillies_at_weirpumps.com
From: "Juan Cachito Reyes Pacheco" <jreyes_at_dazasoftware.com>
To: <oracle-l_at_freelists.org>
Sent by: oracle-l-bounce_at_freelists.org
Date: 26/02/04 15:38
Subject: Java to write a blob to disk, does any one have the java to read a blob from disk
Please respond to oracle-l
>From Mark A. Williams from Indianapolis, IN USA
Here is the script in java to save a blob to disk (works perfectly)
Adjust for your environment and line wrapping may need to be undone...
connect / as sysdba;
grant javauserpriv to scott;
begin
dbms_java.grant_permission('SCOTT',
'java.io.FilePermission','c:\temp\blob.txt', 'write'); end;
/
connect scott/tiger;
create or replace java source named "exportBLOB" as
import java.lang.*;
import java.io.*;
import java.sql.*;

// get an input stream from the blob
InputStream l_in = p_blob.getBinaryStream();

// get buffer size from blob and use this to create buffer for stream
int l_size = p_blob.getBufferSize();
byte[] l_buffer = new byte[l_size];
int l_length = -1;

// write the blob data to the output stream
while ((l_length = l_in.read(l_buffer)) != -1) {
l_out.write(l_buffer, 0, l_length);
l_out.flush();
// close the streams
l_in.close();
l_out.close();
}
};
/
Received on Thu Feb 26 2004 - 10:16:50 CST
| http://www.orafaq.com/maillist/oracle-l/2004/02/26/2692.htm | CC-MAIN-2014-10 | en | refinedweb
#include <NOX_MultiVector.H>
Inheritance diagram for NOX::MultiVector.
| http://trilinos.sandia.gov/packages/docs/r8.0/packages/nox/doc/html/classNOX_1_1MultiVector.html | CC-MAIN-2014-10 | en | refinedweb
What is Mediator Pattern
As per the definition mediator pattern defines an object that encapsulates the logic of how objects interact with each other. Generally, in business applications we have form that contains some fields. For each action we call a controller to invoke the backend manager to execute particular logic. If any change is required in the underlying logic, same method needed to be modified. With mediator pattern, we can break this coupling and encapsulate the interaction between the objects by defining one or multiple handlers for each request in the system.
How to use MediatR in .NET Core
MediatR is the simple mediator pattern implementation library in .NET that provides support for request/response, commands, queries, notifications and so on.
To use MediatR we will simply add two packages namely MediatR and MediatR.Extensions.Microsoft.DependencyInjection into your ASP.NET Core project. Once these packages are added, we will simply add MediatR in our ConfigureServices method in Startup class as shown below
public void ConfigureServices(IServiceCollection services) { services.AddMediatR(); services.AddMvc(); }
MediatR provides two types of messages; one is of type Notification that just publishes the message executed by one or multiple handlers and the other one is the Request/Response that is only executed by one handler that returns the response of the type defined in the event.
Let’s create a notification event first that will execute multiple handlers when the event is invoked. Here is the simple LoggerNotification event that implements the INotification interface of MediatR library
public class LoggerEvent : INotification { public string _message; public LoggerEvent(string message) { _message = message; } }
And here are the three notification handlers used to log information into the database, filesystem and email.
Here is the sample implementation of DBNotificationHandler that implements the INotificationHandler of MediatR
public class DBNotificationHandler : INotificationHandler { public Task Handle(LoggerEvent notification, CancellationToken cancellationToken) { string message = notification._message; LogtoDB(message); return Task.FromResult(0); } private void LogtoDB(string message) => throw new NotImplementedException(); }
Same goes with EmailNotificationHandler for email and FileNotificationHandler to log for file as shown below.
public class EmailNotificationHandler : INotificationHandler { public Task Handle(LoggerEvent notification, CancellationToken cancellationToken) { //send message in email string message = notification._message; SendEmail(message); return Task.FromResult(0); } private void SendEmail(string message) => throw new NotImplementedException(); }
public class FileNotificationHandler : INotificationHandler { public Task Handle(LoggerEvent notification, CancellationToken cancellationToken) { string message = notification._message; WriteToFile(message); return Task.FromResult(0); } private void WriteToFile(string message) => throw new NotImplementedException(); }
Finally, we can use MediatR in our MVC Controller and call publish as shown below. It will invoke all the handlers attached to the LoggerEvent and execute the underlying logic.
[Produces("application/json")] [Route("api/Person")] public class PersonController : Controller { private readonly IMediator _mediator; public PersonController(IMediator mediator) { this._mediator = mediator; } [HttpPost] public void SavePerson(Person person) { _mediator.Publish(new LoggerEvent($"Person Id={person.PersonId}, Name ={person.FirstName + person.LastName}, Email={person.Email}")); } }
In the next post, I will show a simple example of sending a Request/Response message using MediatR.
Hope this helps! | https://ovaismehboob.com/2018/01/31/implementing-mediator-pattern-in-net-core-using-mediatr/ | CC-MAIN-2020-16 | en | refinedweb |
Secure Your Web Apps Using the Servlet API
Prior to Java Enterprise Edition (Java EE) 7, there were only a couple of ways to secure your servlets. These included HTTP Basic Authentication and OAuth. In either case, a callback mechanism was employed to prompt the user for his or her credentials. If the credentials (defined in the web.xml file) were accepted, the user would be "authenticated." On the server, his or her access rights had to be matched against the access rights required to access the resource. A successful match would allow the user to proceed; otherwise, he or she would receive a 403 error.
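For comparison, that declarative approach typically looked something like the following in web.xml (a rough sketch; the resource, realm, and role names here are illustrative only):

<security-constraint>
  <web-resource-collection>
    <web-resource-name>Protected Area</web-resource-name>
    <url-pattern>/account/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>Admin</role-name>
  </auth-constraint>
</security-constraint>
<login-config>
  <auth-method>BASIC</auth-method>
  <realm-name>Banking Realm</realm-name>
</login-config>
<security-role>
  <role-name>Admin</role-name>
</security-role>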
The Servlet API 3.1 introduced several new annotation types for use in Servlet classes, including @ServletSecurity, which is used to define access control constraints to servlets. By using annotations, we now can enforce security constraints without having to declare them in the web.xml file. In today's article, we'll configure a Banking servlet to invoke HTTP Basic Authentication using annotations.
Creating the Project
One of the best (if not the best) IDEs for Web application development is Eclipse. Its many amazing features and versatility make it a favorite among developers everywhere. We'll be using Eclipse Oxygen here today.
You'll also need a Web server that supports servlets. We'll be using Tomcat 9 in our project. It's an open source implementation of the Java Servlet, JavaServer Pages, Java Expression Language, and Java WebSocket technologies.
When you're ready:
- Launch Eclipse and create a new Dynamic Web project.
On the New Dynamic Web Project dialog:
- Enter a Project name of "AcmeBankingApp".
- Select "Apache Tomcat v9.0" for the Target runtime. If the "Apache Tomcat v9.0" item is not there, follow the steps listed in the next section, "Adding the Tomcat Runtime."
- Accept all of the defaults. The dialog should like this:
Figure 1: The Dynamic Web Project
- Click the Finish button to close the dialog and create the project.
Here's our new project in the Project Explorer:
Figure 2: The new project
Adding the Tomcat Runtime
If you haven't added your Tomcat server to the Eclipse Preferences page via Window -> Preferences -> Server Runtime Environments, you can add it from the Dynamic Web Project dialog by clicking the New Runtime...button.
On the New Server Runtime Environment dialog:
- Expand the Apache folder and select "Apache Tomcat v9.0" from the Runtime Environments list:
Figure 3: New Server Runtime Environment
- Click Next >.
- On the next page, use the Browse...button to navigate to the installation directory of your Tomcat server:
Figure 4: Tomcat Server
- Accept all of the other defaults and click Finish to add the Tomcat server runtime and close the dialog.
You now will be able to select the "Apache Tomcat v9.0" item from the Target runtime list.
The Index File
The index.jsp page will be the landing page of our app. On it, we'll place a button that invokes the account servlet's deposit() method.
- Right-click the WebContent folder in the Project Explorer and select New -> JSP File from the popup menu.
On the New JSP file dialog:
- Name the file "index.jsp" and click Finishto create the file.
Figure 5: The new JSP file
- In the Editor, paste the following code into the file:
<%@ page contentType="text/html; charset=UTF-8" %>
<html>
<head>
<title>ACME Banking App</title>
</head>
<body>
<h1>Welcome to the ACME Banking App!</h1>
<p>What would you like to do today?</p>
<form action="account" method="post">
<button type="submit">Deposit $100.00</button>
</form>
</body>
</html>
- Save the changes.
Clicking the Deposit button will send a POST request to the account servlet.
The Account Servlet
On the servlet, the doPost() method will handle the form request. To keep things simple, we'll calculate the new account balance within the doPost() and return it to the browser—first, without any security:
- Right-click the project root in the Project Explorer and select New -> Servlet from the popup menu.
On the Create Servlet dialog:
Provide a Java package name of "com.acmebanking.servlet" and a Class name of "AccountServlet":
Figure 6: Creating the servlet
- Click Finish to create the servlet.
- Add the following code to the AccountServlet.java file:
package com.acmebanking.servlet;

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(
    name = "BankAccountServlet",
    description = "Represents an ACME Bank Account and its transactions",
    urlPatterns = {"/account", "/bankAccount" })
public class AccountServlet extends HttpServlet {
    private static final long serialVersionUID = 1L;
    private double accountBalance = 1000d;

    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        this.accountBalance = accountBalance + 100d;

        PrintWriter writer = response.getWriter();
        writer.println(
            "<html> Balance of the account is: $" + this.accountBalance + "</html>");
        writer.flush();
    }
}
Securing the Servlet
At this point, anyone can access the servlet via the "/account" or "/bankAccount" path.
Let's protect it by using the @ServletSecurity annotation:
@ServletSecurity( value = @HttpConstraint(rolesAllowed = {"Member"}), httpMethodConstraints = {@HttpMethodConstraint(value = "POST", rolesAllowed = {"Admin"})})
In the preceding code, we're assigning a general HTTP Constraint as well as one that applies specifically to the POST method. The rolesAllowed element accepts an array of zero or more role names. We haven't defined any security roles yet, so when you build the project, the server will display a warning stating:
WARNING: Security role name [Admin] used in an <auth-constraint> without being defined in a <security-role>
Setting Role Names and Accounts
Tomcat maintains the list of user accounts in the tomcat-users.xml file. You can find it in this directory: CATALINA_HOME/conf (where the CATALINA_HOME environment variable is the Tomcat installation directory).
Make sure that you add the Admin user role and Jane user account:
<?xml version="1.0" encoding="UTF-8"?> <tomcat-users> <role rolename="Admin"/> <user username="Jane" password="password" roles="Admin"/> <role rolename="manager-gui"/> <user username="tomcat" password="tomcat" roles="manager-gui"/> </tomcat-users>
Running the App
Let's take the app out for a test run now and see what happens.
- Right-click the project in the Project Explorer and select Run As -> Run on Server from the context menu.
On the Run on Server dialog:
- Make sure that your Tomcat server is selected and click Next >:
Figure 7: Running on the server
- On the next page, make sure that the AcmeBankingApp appears in the Configured list, and click Finishto launch the app:
Figure 8: Ready to launch the app
- That will show the index.jsp page in the browser. Click the Deposit $100.00 buttonto invoke the servlet:
Figure 9: Invoking the servlet
- Doing so will result in an "HTTP Status 403 - Forbidden" error:
Figure 10: Access is forbidden
Don't panic; that is exactly the result that we're looking for. This error happened because we have included annotations on the servlet to only allow users with the role of "Admin" to access the servlet via POST submission.
In the next installment, we'll configure the "user" and "web" XML files to have the server present an authentication form to access the Banking Servlet.
For reference, this entire project is hosted on GitHub.
This article was originally published on June... | https://www.developer.com/java/ent/secure-your-web-apps-using-the-servlet-api.html | CC-MAIN-2020-16 | en | refinedweb |
Introduction
By getting involved with Phonon, you can choose between three different main tasks.
- Using the Phonon API, which allows you to develop your own multimedia application. This is discussed here.
- Hacking the Phonon library.
- Writing a Phonon backend; this consists of writing interfaces that allow Phonon to use a different sound/video engine. This usually requires good skills and knowledge of the engine you interface with.
Abstract
The following example lets you select a music file and plays it. To change the sound device that is used by default, use the command systemsettings.
The Code
All the code we need will be in one file, main.cpp. Create that file with the code below:
#include <phonon/mediaobject.h>
#include <phonon/audiooutput.h>
#include <QFileDialog>
#include <QtGui/QApplication>
#include <QtGui/QMainWindow>
#include <QUrl>

class MainWindow : public QMainWindow
{
    Q_OBJECT

    public:
        MainWindow();
};

MainWindow::MainWindow()
{
    Phonon::MediaObject* media = new Phonon::MediaObject(this);
    Phonon::createPath(media, new Phonon::AudioOutput(Phonon::MusicCategory, this));
    media->setCurrentSource(QUrl::fromLocalFile(QFileDialog::getOpenFileName(0, QString("Select a file to play"), QString())));
    media->play();
}

int main(int argc, char **argv)
{
    QApplication app(argc, argv);
    QApplication::setApplicationName("Phonon Tutorial 2");
    MainWindow mw;
    mw.show();
    return app.exec();
}

#include "main.moc"
Build
As usual, we use CMake for build operations:
CMakeLists.txt
You need a file CMakeLists.txt to compile the software:
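A minimal CMakeLists.txt along the following lines should work for a KDE 4 / Qt 4 era setup (the project name and variables here are only an example and may need adjusting for your KDE and CMake versions):

project(phonontutorial)

find_package(KDE4 REQUIRED)
include(KDE4Defaults)
include_directories(${KDE4_INCLUDES})

set(phonontutorial_SRCS main.cpp)

# kde4_add_executable runs automoc, which generates the main.moc file
# included at the bottom of main.cpp.
kde4_add_executable(phonontutorial ${phonontutorial_SRCS})
target_link_libraries(phonontutorial ${KDE4_KDEUI_LIBS} ${QT_QTGUI_LIBRARY} phonon)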
| https://techbase.kde.org/index.php?title=Phonon&diff=cur&oldid=8457 | CC-MAIN-2020-16 | en | refinedweb
Active Directory
about domains and groups, 1
address Property, 1
configuration properties, 1
defaultrole Property, 1
determining user authorization levels, 1
logdetail Property, 1
overview, 1
port Property, 1
purposes used for, 1
state Property, 1
strictcertmode Property, 1
timeout Property, 1, 2
typical uses, 1
User Authentication and Authorization, 1
Active Directory Web Interface, 1
Administrator account
default username and password, 1
Administrator role
defined, 1
required to launch Remote Console, 1
alerts
CLI commands for managing alerts, 1
defining an alert rule, 1, 2
delivering SNMP traps, 1
disabling an alert rule, 1
generating email notification, 1
generating test alerts, 1
modifying an alert rule, 1
specifying destination, 1
types of levels, 1
types supported, 1, 2, 3
warnings for system failures, 1
Altiris Deployment Server, 1
baud rate, setting, 1
BIOS configurations
updating, 1
blade server modules, configuring IP addresses
editing through an Ethernet connection, 1 - 2
initializing
through DHCP, 1 - 2
through static assignment, 1 - 2
set command (ILOM), table of options, 1
Chassis Monitoring Module (CMM)
managing with ILOM, 1
Chassis Monitoring Module (CMM), configuring IP addresses
editing through an Ethernet connection, 1 - 2
initializing
through DHCP, 1 - 2
through static assignment, 1 - 2
CLI command syntax
cd command, 1
create command, 1
delete command, 1
exit command, 1
help command, 1
load command, 1
reset command, 1
set command, 1
show command, 1
start command, 1
stop command, 1
version command, 1
CLI commands
alert management commands, 1
clock settings commands, 1
general commands, 1
host system commands, 1
network and serial port commands, 1
SNMP commands, 1
syntax, 1
system access commands, 1
user commands, 1
clock settings
setting using the CLI, 1
setting using the web interface, 1, 2
command-line interface (CLI)
command quick reference, 1 - 2
command reference, 1
commands syntax, 1
ILOM target types, 1
logging in to ILOM, 1
logging out of ILOM, 1
overview, 1, 2
specification based on, 1
using hierarchical architecture, 1
data network
compared to management network, 1, 2
device redirection
behavior during Remote Console session, 1
discrete sensors
obtaining readings, 1
distinguished names
used with LDAP, 1
Dynamic Host Configuration Protocol (DHCP)
requirements for assigning IP address, 1
using to assign an IP address, 1
Ethernet management port
connecting to ILOM, 1, 2
label on server, 1
event log
capturing timestamps, 1
types of events displayed, 1
viewing and clearing using the CLI, 1
viewing and clearing using the web interface, 1
fault management
monitoring and diagnosing hardware, 1
viewing faulted components, 1 - 2
field-replaceable units (FRUs)
obtaining sensor readings, 1
firmware
updating, 1
firmware update process
overview, 1
hardware
redirecting keyboard and mouse, 1
host serial console, 1
HP OpenView, 1
HP Systems Insight Manager, 1
HTTP or HTTPS web access
enabling using the CLI, 1
enabling using the web interface, 1 - 2
IBM Director, 1
IBM Tivoli, 1
ILOM
capabilities, 1
description, 1
user interfaces supported, 1
ILOM service processor
what it runs on, 1
Integrated Lights Out Manager (ILOM)
commands
set command, blades, table of options, 1
configuring for Remote Console, 1
connecting to, 1
features of, 1
initial setup, 1
interfaces, 1
logging in using the web interface, 1
new 2.0 features, 1
preconfigured Administrator account
logging in, 1
redirecting keyboard and mouse, 1
Remote Console, configuring and launching, 1
resetting SP using the web interface, 1
roles assigned to accounts, 1
root account password, 1
system monitoring features, 1
updating firmware using the CLI, 1
updating firmware using the web interface, 1 - 2
using Sun N1 System Manager, 1
using third-party tools, 1
viewing version using the web interface, 1
Intelligent Platform Management Interface (IPMI)
Baseboard Management Controller, 1
functionality, 1
overview, 1, 2
Platform Event Trap alerts, 1
using IPMItool, 1
versions compliant with ILOM, 1
internal serial port, 1
Internet Protocol (IP) address
assigning a static IP address, 1
identifying DHCP assigned address, 1
IP address assignment
editing using the CLI, 1 - 2
editing using the web interface, 1 - 2
for DHCP assigned addresses, 1 - 2
for static assigned addresses to CMM, 1 - 2
for static assigned addresses to SP, 1 - 2
IPMItool
examples of how to use, 1 - 2
functions of, 1
references for, 1
LDAP
client operations, 1
client-server model, 1
configuring ILOM for LDAP, 1 - 2
configuring the LDAP server, 1
directory structure, 1 - 2
distinguished names, 1
overview, 1
logging in to ILOM
using the CLI, 1
using the web interface, 1
logging out of ILOM
using the CLI, 1
using the web interface, 1
Management Information Base (MIB)
description of, 1
supported MIBs used with ILOM, 1 - 2
management network
assigning IP addresses, 1
compared to data network, 1
overview, 1
media access control (MAC) address
obtaining for SP or CMM, 1
namespaces
accessed by SP, 1
network management port
connecting to ILOM, 1
network port assignment
identifying for SP and CMM, 1 - 2
network settings
configuring using the CLI, 1
configuring using the web interface, 1 - 2
pending and active properties, 1
viewing using the CLI, 1
viewing using the web interface, 1
Operator role, 1
out-of-band management, 1
PC Check Diagnostic setting
configuring for Remote Console, 1
power
monitoring available power, 1
monitoring individual power supply consumption, 1
monitoring permitted power consumption, 1
monitoring system actual power, 1
monitoring system total power consumption, 1
power monitoring
actual power, 1
available power, 1
interfaces, 1
terminology, 1
RADIUS
client-server model, 1
commands, 1 - 2
configuration parameters, 1
configuring, 1
default port number, 1
overview, 1
Remote Console
adding new server session, 1
configuring remote control settings, 1 - 2
connecting using the web interface, 1 - 2
controling device redirection, 1
exiting the application, 1
installation requirements, 1
launching using the web interface, 1 - 2
network ports and protocols, 1
overview, 1, 2
redirecting keyboard and mouse, 1
redirecting storage device or ISO image, 1 - 2
remote control settings, 1
single and multiple server views, 1 - 2
using keyboard control modes, 1
root account password
changing using the CLI, 1
changing using the web interface, 1
Scalent Virtual Operating Environment, 1
sensor readings
classes supported, 1
monitoring and diagnosing faults, 1
obtaining using the CLI, 1
obtaining using the web interface, 1
types of data reported, 1
serial console connection
configuring serial settings, 1
serial management port
connecting to ILOM, 1
serial port settings
configuring using the CLI, 1
configuring using the web interface, 1
default setting, 1
displaying using the web interface, 1
internal and external ports, 1
pending and active properties, 1
viewing using the CLI, 1
serial port, external
setting baud rate, 1
serial port, internal
setting baud rate, 1
service processor (SP)
managing with ILOM, 1
set command (ILOM)
blade options, table of, 1
Simple Network Management Protocol (SNMP)
agents functions, 1
Management Information Base, 1
management station monitoring, 1
overview, 1, 2
usage examples, 1 - 2
versions supported, 1
single sign on
enabling or disabling using the CLI, 1
enabling or disabling using the web interface, 1
overview, 1
using to launch the Remote Console, 1
SNMP traps
configuring destinations using the CLI, 1
configuring destinations using the web interface, 1
example of, 1
SNMP user accounts
managing using the web interface, 1 - 2
managing with the CLI, 1 - 2
targets, properties, and values of, 1
Solaris 10 Operating System, configuring the factory-installed OS
using a Secure Shell (SSH) connection, 1
procedure, 1, 2, 3
ssh command (Solaris)
connecting to a SP, 1, 2, 3, 4, 5, 6, 7, 8
SSH settings
key encryption using the CLI, 1
static IP address
requirements for assigning, 1
system indicators
customer changeable states, 1
illuminating conditions, 1
system assigned states, 1
viewing using the CLI, 1
viewing using the web interface, 1
system monitoring features
overview, 1
threshold sensors
obtaining readings, 1
uploading SSL certificate
using the web interface, 1
user accounts
adding and setting privileges using the web interface, 1
adding using the CLI, 1
Administrator privileges, 1
configuring using the CLI, 1
deleting using the CLI, 1
deleting using the web interface, 1
modifying using the CLI, 1
modifying using the web interface, 1
number of accounts supported, 1
roles assigned, 1
specifying names for, 1
viewing a list of, 1
viewing a specific account, 1
viewing an individual session using the CLI, 1
viewing using the CLI, 1
viewing using the web interface, 1
web interface
buttons, 1
logging in, 1
overview, 1, 2
supported browsers, 1
types of access, 1
uploading SSL certificate, 1 | https://docs.oracle.com/cd/E19469-01/820-1188-12/ix.html | CC-MAIN-2020-16 | en | refinedweb |
Upgrade A Custom Add-on To Plone 5.1¶
Installation Code¶
See PLIP 1340 for a discussion of this change.
From CMFQuickInstallerTool To GenericSetup¶
The add-ons control panel in Plone 5.1 no longer supports installation or uninstallation code
in
Extensions/install.py or
Extensions/Install.py.
If you have such code, you must switch to a GenericSetup profile.
GenericSetup is already the preferred way of writing installation code since Plone 3.
If you must use the old way, you can still use the
portal_quickinstaller in the Management Interface.
In a lot of cases, you can configure
xml files instead of using Python code.
In other cases you may need to write custom installer code (setuphandlers.py).
See the GenericSetup documentation.
Default Profile¶
Historically, when your add-on had multiple profiles,their names would be sorted alphabetically and the first one would be taken as the installation profile.
It was always recommended to use
default as name of this first profile.
Since Plone 5.1, when there is a
default profile, it is always used as the installation profile,
regardless of other profile names.
Exception: when this
default profile is marked in an
INonInstallable utility,
it is ignored and Plone falls back to using the first from the alphabetical sorting.
Uninstall¶
An uninstall profile is not required, but it is highly recommended.
Until Plone 5.0 the CMFQuickInstallerTool used to do an automatic partial cleanup, for example removing added skins and CSS resources.
This was always only partial, so you could not rely on it to fully cleanup the site.
Since Plone 5.1 this cleanup is no longer done. Best practice is to create an uninstall profile for all your packages.
If you were relying on this automatic cleanup, you need to add extra files to clean it up yourself.
You need to do that when your default profile contains one of these files:
actions.xml
componentregistry.xml
contenttyperegistry.xml. This seems rarely used.
Note
The contenttyperegistry import step only supports adding, not removing.
You may need to improve that code based on the old CMFQuickInstallerTool code.
cssregistry.xml
jsregistry.xml
skins.xml
toolset.xml
types.xml
workflows.xml
When there is no uninstall profile, the add-ons control panel will give a warning.
An uninstall profile is a profile that is registered with the name
uninstall.
See
Do Not Use portal_quickinstaller¶
Old code:
qi = getToolByName(self.context, name='portal_quickinstaller')
or:
qi = self.context.portal_quickinstaller
or:
qi = getattr(self.context, 'portal_quickinstaller')
or:
qi = getUtility(IQuickInstallerTool)
New code:
from Products.CMFPlone.utils import get_installer qi = get_installer(self.context, self.request)
or if you do not have a request:
qi = get_installer(self.context)
Alternatively, since it is a browser view, you can get it like this:
qi = getMultiAdapter((self.context, self.request), name='installer')
or with
plone.api:
from plone import api api.content.get_view( name='installer', context=self.context, request=self.request)
If you need it in a page template:
tal:define="qi context/@@installer"
Warning
Since the code really does different things than before, the method names were changed and they may accept less arguments or differently named arguments.
Products Namespace¶
There used to be special handling for the Products namespace. Not anymore.
Old code:
qi.installProduct('CMFPlacefulWorkflow')
New code:
qi.install_product('Products.CMFPlacefulWorkflow')
isProductInstalled¶
Old code:
qi.isProductInstalled(product_name)
New code:
qi.is_product_installed(product_name)
installProduct¶
Old code:
qi.installProduct(product_name)
New code:
qi.install_product(product_name)
Note
No keyword arguments are accepted.
installProducts¶
This was removed. You should iterate over a list of products instead.
Old code:
product_list = ['package.one', 'package.two'] qi.installProducts(product_list)
New code:
product_list = ['package.one', 'package.two'] for product_name in product_list: qi.install_product(product_name)
uninstallProducts¶
Old code:
qi.uninstallProducts([product_name])
New code:
qi.uninstall_product(product_name)
Note that we only support passing one product name. If you want to uninstall multiple products, you must call this method multiple times.
reinstallProducts¶
This was removed. Reinstalling is usually not a good idea: you should use an upgrade step instead. If you need to, you can uninstall and install if you want.
getLatestUpgradeStep¶
Old code:
qi.getLatestUpgradeStep(profile_id)
New code:
qi.get_latest_upgrade_step(profile_id)
isDevelopmentMode¶
This was a helper method that had got nothing to with the quick installer.
Old code:
qi = getToolByName(aq_inner(self.context), 'portal_quickinstaller') return qi.isDevelopmentMode()
New code:
from Globals import DevelopmentMode return bool(DevelopmentMode)
Note
The new code works already since Plone 4.3.
All Deprecated Methods¶
Some of these were mentioned already.
Some methods are no longer supported. These methods are still there, but they do nothing:
listInstallableProducts
listInstalledProducts
getProductFile
getProductReadme
notifyInstalled
reinstallProducts
Some methods have been renamed. The old method names are kept for backwards compatibility. They do roughly the same as before, but there are differences. And all keyword arguments are ignored. You should switch to the new methods instead:
isProductInstalled, use
is_product_installedinstead
isProductInstallable, use
is_product_installableinstead
isProductAvailable, use
is_product_installableinstead
getProductVersion, use
get_product_versioninstead
upgradeProduct, use
upgrade_productinstead
installProducts, use
install_productwith a single product instead
installProduct, use
install_productinstead
uninstallProducts, use
uninstall_productwith a single product instead.
INonInstallable¶
There used to be one
INonInstallable interface in
CMFPlone (for hiding profiles) and
another one in
CMFQuickInstallerTool (for hiding products).
In the new situation, these are combined in the one from CMFPlone.
Sample usage:
In configure.zcml:
<utility factory=".setuphandlers.NonInstallable" name="your.package" />
In setuphandlers.py:
from Products.CMFPlone.interfaces import INonInstallable from zope.interface import implementer @implementer(INonInstallable) class NonInstallable(object): def getNonInstallableProducts(self): # (This used to be in CMFQuickInstallerTool.) # Make sure this package does not show up in the add-ons # control panel: return ['collective.hidden.package'] def getNonInstallableProfiles(self): # (This was already in CMFPlone.) # Hide the base profile from your.package from the list # shown at site creation. return ['your.package:base']
When you do not need them both, you can let the other return an empty list, or you can leave that method out completely.
Note
If you need to support older Plone versions at the same time, you can let your class implement the old interface as well:
from Products.CMFQuickInstallerTool.interfaces import ( INonInstallable as INonInstallableProducts) @implementer(INonInstallableProducts) @implementer(INonInstallable) class NonInstallable(object): ...
Content Type Icons¶
Since Plone 3 there have been several breaking changes relating to content type icon rendering.
Plone 3
Content type icons where rendered as HTML tags, which were rendered with methods from plone.app.layout.icon …:
<span class="contenttype-document summary"> <img width="16" height="16" src="" alt="Page"> <a href="" class="state-published url">Welcome to Plone</a> </span>
Note
Related code in plone.app.layout (especially getIcon() and IContentIcon) and other locations was more then deprecated - it is obsolete and confusing and is getting removed.
The catalog metadata item getIcon used to be a string containing the file name of the appropriate icon (unused since Plone 4).
Since Plone 5.02 the catalog metadata item getIcon is reused for another purpose. Now it is boolean and it is set to True for items which are images or have an image property (e.g. a lead image).
Plone 4
Content type icons are rendered as background images using a sprite image and css:
<span class="summary"> <a href="" class="contenttype-document state-published url">Welcome to Plone</a> </span> .icons-on .contenttype-document { background: no-repeat transparent 0px 4px url(contenttypes-sprite.png);
Plone 5
Content type icons are rendered as fontello fonts using css elements before or after.
<span class="summary" title="Document"> <a href="" class="contenttype-document state-published url" title="Document">Welcome to Plone</a> </span> body#visual-portal-wrapper.pat-plone .outer-wrapper [class*="contenttype-"]:before, .plone-modal-body [class*="contenttype-"]:before { font-family: "Fontello"; font-size: 100%; padding: 0; margin: 0; position: relative; left: inherit; display: inline-block; color: inherit; width: 20px; height: 20px; text-align: center; margin-right: 6px; content: '\e834'; }
Example from plonetheme.barceloneta/plonetheme/barceloneta/theme/less/contents.plone.less:
body#visual-portal-wrapper.pat-plone .outer-wrapper, .plone-modal-body{ [class*="contenttype-"]:before { font-family:"Fontello"; font-size: 100%; padding: 0; margin:0; position: relative; left: inherit; display: inline-block; color: inherit; width: 20px; height: 20px; text-align: center; margin-right: @plone-padding-base-vertical; content: '\e834'; } .contenttype-folder:before { content: '\e801';} .contenttype-document:before { content: '\e80e';} .contenttype-file:before { content: none;} .contenttype-link:before { content: '\e806';} .contenttype-image:before { content: '\e810';} .contenttype-collection:before {content: '\e808';} .contenttype-event:before { content: '\e809';} .contenttype-news-item:before { content: '\e80f';} }
The wildcard definition
[class*="contenttype-"]:before ....content: '\e834'
renders the default icon for dexterity content types for all dexterity items
which have no specific CSS rule (e.g. custom dexterity content types).
The rule
.contenttype-file:before { content: none;} prevents rendering
a fontello font for file type items (e.g.
*.docx, etc..).
Instead a mimetype icon (fetched from the mime type registry) is rendered as HTML tag
(there would be too many fonts needed for all the mime types) in affected templates
e.g. in
plone.app.contenttypes.browser.templates.listing.pt:
<span class="summary" tal: <a tal: <image class="mime-icon" tal: </a> <a tal:Item Title </a> ..... </span> .
The design decision to use Fontello fonts throws up the question how to easily create custom fonts for new created custom dexterity items.
A workaround for that is to use an icon URL in the :before clause.
For the custom dexterity type dx1 you might add the line
.contenttype-dx1:before {content: url('dx1_icon.png')} to your less
file and place the icon file in to the same folder.
HiDPI Image Scales¶
In the Image Handling Settings control panel in Site Setup, you can configure HiDPI mode for extra sharp images. When you enable this, it will result in image tags like this, for improved viewing on HiDPI screens:
<img src="....jpeg" alt="alt text" title="some title" class="image-tile" srcset="...jpeg 2x, ...jpeg 3x" height="64" width="48">
To benefit from this new feature in add-on code, you must use the
tag method of image scales:
<img tal:
If you are iterating over a list of image brains, you should
use the new
@@image_scale view of the portal or the navigation root.
This will cache the result in memory, which avoids waking up the objects the next time.
<tal:block <tal:results tal: <img tal: </tal:results> </tal:block>
Assimilate collective.indexing¶
With the PLIP assimilate collective.indexing the operations for indexing, reindexing and unindexing are queued, optimized and only processed at the end of the transaction.
Only one indexing operation is done per object on any transaction. Some tests and features might expect that objects are being indexed/reindexed/unindexed right away.
You can force processing the queue directly in your code with to work around this:
from Products.CMFCore.indexing import processQueue processQueue()
For an example of a test that needed a change see
You can also disable queuing alltogether by setting the environment-variable CATALOG_OPTIMIZATION_DISABLED to 1:
CATALOG_OPTIMIZATION_DISABLED=1 ./bin/instance start
It is a good idea to try this when your tests are failing in Plone 5.1.
CMFDefault removal¶
CMFDefault was removed with Plone 5.0 but some addons still depend on in. If your addon depends on CMFDefault you need to include a specific zcml snippet.
<include package="Products.CMFPlone" file="meta-bbb.zcml" />
You can either do this by putting the above snippet as first declaration into the configure.zcml of your policy addon or by including it via buildout:
[instance] ... zcml += Products.CMFPlone-meta:meta-bbb.zcml ... | https://docs.plone.org/develop/addons/upgrade_to_51.html | CC-MAIN-2020-16 | en | refinedweb |
Torch compat
A plugin to enable or disable the torch of a device that works both on Android (including Android 4.x) and ioS.
Getting started
1) Dependency setup
First import the library to your project in your
pubspec.yaml:
torch_compat: ^1.0.2
2) Import the library in your Dart code
import 'package:torch_compat/torch_compat.dart';
3) Turn on or off the flash
TorchCompat.turnOn(); TorchCompat.turnOff(); | https://pub.dev/documentation/torch_compat/latest/index.html | CC-MAIN-2020-16 | en | refinedweb |
pls am new to java and i have a problem with this particular question.... must not be equal to 0.Decimal, getNum and getDen. The getDecimal method returns the decimal value of the current rational number that is stored. If the number 3 / 4 is stored, then getDecimal will return 0.75. Method getNum returns the numerator and method getDen returns the denominator.
well here is how far i have gone with it. the coments here are questions of where i do not understand
import java.text.DecimalFormat; public class Assignment08 { public static void main(String args[]) { System.out.println("LAB08A 70 POINT VERSION"); System.out.println(); Rational r = new Rational( Integer.parseInt(args[0]),Integer.parseInt(args[1]) ); System.out.println("Two parameter constructor"); System.out.println(r.getNum() + "/" + r.getDen()); System.out.println(r.getDecimal()); System.out.println(); } } class Rational { private int Numerator; private int Denorminator; private double d; public Rational(int a, int b) { Numerator = a; Denorminator = b; //DecimalFormat d = new DecimalFormat(0.00); // where do i paste my constructor. d = a / b; } public int getNum() {return Numerator;} public int getDen() {return Denorminator;} public double getDecimal() { DecimalFormat e = new DecimalFormat(0.00); // if i put it here it still return e.format(d); // wouldn't work. } }
if i run the program i get a compile error, -- cannot find symbol constructor DecimalFormat(double) in line 48 and also incompatable types in line 49.
pls i really dont just need the correct or right codes pls i do also need some brief explanation on where am wrong.
thanks ....
*admin edit: code tags
This post has been edited by e_okandeji: 04 September 2007 - 04:56 AM | http://www.dreamincode.net/forums/topic/32681-constructors/ | CC-MAIN-2016-50 | en | refinedweb |
In Today’s Programming Praxis problem we have to solve a logic puzzle. The provided solution uses a 182-line logic programming library and then takes 36 lines to solve the problem. I didn’t feel like porting 182 lines from Scheme to Haskell, so I rolled my own solution. It’s going to be a slightly longer one than usual though, so let’s dive right in.
Our imports:
import Data.List import qualified Data.Map as M
We’re going to handle the constraints by applying them to a two-dimensional grid. One axis holds the position of the house (first, second, etc.) and the other the various properties (nationality, color, etc.). Each cell holds the remaining options for that combination of house and property. By applying constraints we’re going to remove options until each cell has only one option left. It’s a bit like sudoku puzzles if you think about it.
type Grid = M.Map String (M.Map Int [String])
In the problem we have four types of constraints, which we encode in an ADT:
data Constraint = Link (String, String) (String, String) | PosLink (String, String) Int | NextTo (String, String) (String, String) | RightOf (String, String) (String, String) deriving Eq
A convenience type to keep the type signatures a bit easier to read:
type Solver = ([Constraint], Grid)
Adding a constraint to a solver is trivial:
addConstraint :: Constraint -> Solver -> Solver addConstraint c (cs, g) = (c : cs, g)
This function abstracts out some common logic. It removes options from the grid if the conditions to do so have been met.
Like removeIf, notAt abstract out some common code. It checks if a given value is still an option for the given property in another house.
notAt :: (Int -> Int) -> String -> String -> Int -> Grid -> Bool notAt f f1 v1 i g = M.notMember (f i) (g M.! f1) || notElem v1 (g M.! f1 M.! (f i))
With that out of the way, the function to apply a constraint looks like this. Since most constraints work in two directions, we have to apply them in both directions. applies a function to all elements of a map except the given one, which we need for the next function.
adjustOthers :: Eq k => (v -> v) -> k -> M.Map k v -> M.Map k v adjustOthers f k = M.mapWithKey (\k' v -> if k' == k then v else f v)
If a house has only one option left for a property than we can remove that option from all the other houses. Similarly, if a house is the only one that still has a certain option, we can remove the other options for that property. simply runs all the constraints once.
run :: Solver -> Solver run (cs, g) = (cs, simplify $ foldr runConstraint g cs)
Once all the constraints have been run, we might have fewer options available than we did at the beginning, which might open up new possibilities for more removal. apply keeps applying all the constraints until no further progress is made.
apply :: Solver -> Solver apply = head . head . dropWhile (null . tail) . group . iterate run
If we had enough constraints to solve the problem with just constraint propagation we could stop here. Unfortunately, this doesn’t work on the problem we have to solve. While it significantly reduces the available options, it can’t give a complete solution. So we’re going to have to do what any self-respecting logician would do in such a scenario: guess. If a property still has multiple options we choose one of them and see if we can solve it then. If not, we try the next option, or we do the same thing for the next property if none of the guesses helps solve the problem.
If any property still has more than one option the problem is not solved.
solved :: M.Map k (M.Map k' [v]) -> Bool solved g = and [False | (_, v) <- M.assocs g, (_, xs) <- M.assocs v, length xs /= 1]
solve takes care of the guesswork, and also reformats the output to be more readable. ]
And there we have our constraint solver. Now for the problem. First we create the grid with all the options:"]
Next we add all our constraints.")]
And finally we print the solution.
main :: IO () main = mapM_ putStrLn $ solve problem
That brings the total to 36 lines for the solver and 23 for the problem, and it runs in about 60 ms. I’d say that will do nicely.
Tags: constraint, Haskell, kata, logic, praxis, programming, zebra
June 22, 2009 at 12:11 pm |
[…] Owns The Zebra Reloaded By Remco Niemeijer For the Who Owns the Zebra problem I initially tried a solution based on a list comprehension. I was, however, unable to get […] | https://bonsaicode.wordpress.com/2009/06/16/programming-praxis-who-owns-the-zebra/ | CC-MAIN-2016-50 | en | refinedweb |
Is it possible to set the linestyle in a matplotlib step function to dashed, dotted, etc.?
I've tried:
step(x, linestyle='--'),
step(x, '--')
As of mpl 1.3.0 this is fixed upstream
You have to come at it a bit sideways as
step seems to ignore
linestyle. If you look at what
step is doing underneath, it is just a thin wrapper for plot.
You can do what you want by talking to
plot directly:
import matplotlib.pyplot as plt plt.plot(range(5), range(5), linestyle='--', drawstyle='steps') plt.plot(range(5), range(5)[::-1], linestyle=':', drawstyle='steps') plt.xlim([-1, 5]) plt.ylim([-1, 5])
['steps', 'steps-pre', 'steps-mid', 'steps-post'] are the valid values for
drawstyle and control where the step is drawn.
Pull request resulting from this question, I personally think this is a bug. [edit: this has been pulled into master and should show up in v1.3.0]. | https://codedump.io/share/A2QsNJtNHUF/1/linestyle-in-matplotlib-step-function | CC-MAIN-2016-50 | en | refinedweb |
Content-type: text/html
setlocale - Changes or queries the program's current locale
Standard C Library (libc.so, libc.a)
#include <locale.h>
char *setlocale(
int category,
const char *locale);
Interfaces documented on this reference page conform to industry standards as follows:
setlocale(): ISO C, XPG4, POSIX.1c
Refer to the standards(5) reference page for more information about industry standards and associated tags.
Specifies the category of the locale to set or query. The category can be LC_ALL, LC_COLLATE, LC_CTYPE, LC_MESSAGES, LC_MONETARY, LC_NUMERIC, or LC_TIME. Points to a string that specifies the locale.
The setlocale() function sets or queries the appropriate portion of the program's locale as specified by the category and locale parameters. The LC_ALL value for the category parameter names the entire locale; the other values name only a portion of the program locale, as follows: Affects the behavior of collation functions and regular expressions. Affects the behavior of character classification functions, character conversion functions, and regular expressions. Affects the language used to display application program and utilities messages (when translations of the messages are available) and the strings expected as affirmative and negative responses. Affects the behavior of functions that handle monetary values. Affects the radix character for the formatted input/output functions and the string conversion functions. Affects the behavior of the time conversion functions.
The behavior of the language information function defined in the nl_langinfo() function is also affected by settings of the category parameter.
The locale parameter points to a character string that identifies the locale that is to be used to set the category parameter. The locale parameter can specify either the name of a locale, such as fr_CA.ISO8859-1, or one of the following: Sets the locale to be the minimal environment for C-language translation. If setlocale() is not invoked, the C locale is the default. Operational behavior within the C locale is defined separately for each interface function that is affected by the locale string. Equivalent to C. Specifies that the locale should be set based on the user's current values for the locale environment variables. Queries the program's current locale setting and returns the name of the locale; does not change the current setting.
If the locale parameter is set to the empty string (""), setlocale() checks the user's environment variables in the following order: First it checks the value of the LC_ALL environment variable. If it is set, setlocale() sets the specified category of the international environment to that value and returns the string corresponding to the locale set (that is, the value of the environment variable, not "", the null string). If the environment variable LC_ALL is not set or is set to the empty string, setlocale() next checks the corresponding environment variable for the category specified. If the environment variable for the category is set, setlocale() sets the specified category of the international environment to that value. If the environment variable corresponding to the specified category is not set or is set to the empty string, then setlocale() checks the LANG environment variable. If the LANG environment variable is set, then setlocale() sets the category to the locale specified by the LANG environment variable. Lastly, if the LANG environment variable is not set or is set to the empty string, the setlocale() function sets the category to the POSIX (C) locale.
If the locale parameter is a pointer to NULL, the setlocale() function returns the name of the program's current locale for the specified category but does not change the locale.
If the locale specified by the locale parameter or by the environment variable is invalid, setlocale() returns a null pointer and does not change the program's locale.
The following example sets all categories in the international environment based on the user's environment variables:
void Do_stuff(void)
{
char *test_l, *saved_l;
test_l=setlocale(LC_ALL,NULL);
saved_l=strdup(test_l);
test_l=setlocale(LC_ALL,"C");
/* Perform operations in the C locale */
/* Restore the original locale */
test_l=setlocale(LC_ALL,saved_l);
return;
}
The POSIX.1c standard specifies that there be only one locale per process. This means that applications should call setlocale() only in the main part of a program before any threads are created.
If a call to setlocale() changes the setting of the LC_MESSAGES category, this operation has no effect on any message catalogs that are currently open.
If the setlocale() function succeeds in setting the program's locale to the one specified by the locale parameter, the function returns the string associated with the specified category parameter for the new locale. Note that the locale parameter can specify the locale name explicitly or, if locale is an empty string, the locale is specified by the value of the corresponding environment variable. If the setlocale() function cannot set the program's locale as requested, the function returns a null pointer and leaves the program's locale unchanged.
If the category parameter has a value of LC_ALL, the return value is a series of locale names separated by spaces. The locale names correspond to the categories in the following order: LC_COLLATE LC_CTYPE LC_MONETARY LC_NUMERIC LC_TIME LC_MESSAGES
If the locale parameter is a null pointer, the setlocale() function returns the string associated with the category parameter for the program's current locale, and leaves the program's locale unchanged.
The string returned by the setlocale() function is such that a subsequent call with that string and its associated category restores that part of the program's locale. The string returned must not be modified by the program, but is overwritten by a subsequent call to the setlocale() function.
Functions: atof(3), catclose(3), catgets(3), catopen(3), ctype(3), localeconv(3), nl_langinfo(3), printf(3), scanf(3), strfmon(3), strftime(3), string(3), wctype(3), wprintf(3), wscanf(3)
Files: locale(4)
Others: i18n_intro(5), l10n_intro(5), standards(5)
Writing Software for the International Market delim off | http://backdrift.org/man/tru64/man3/setlocale.3.html | CC-MAIN-2016-50 | en | refinedweb |
Ralf Wildenhues <address@hidden> writes: > [ dropping libtool@ ] > > * Ian Lance Taylor wrote on Mon, Nov 01, 2010 at 09:48:03PM CET: >> Ralf Wildenhues <address@hidden> writes: >> >> > We need a bit of new notation for this, and we need to teach automake >> > about languages that shouldn't have renamed objects even in the presence >> > of per-object flags. >> >> Hmmm, no, the objects can be renamed. The agreement between source >> level package name and file level package name is by convention only. > > Misunderstanding; sorry. What I meant was that automake shouldn't by > itself rename object file names. It currently does that for example > with setups like > > foo_SOURCES = foo.c > bar_SOURCES = foo.c > bar_CPPFLAGS = -Dbar Ah, I see. Yes, it would not make sense to do this for Go. >> A package can be installable by itself, sure. > > Would that then be in the form of an object, or would you make an > archive out of it? It could be either way. If there is only the one object, it might as well just be a .o file. >> >> > If the latter is to be supported, then >> >> > things like overlapping sources become a problem (i.e., both libfoo and >> >> > libbar use baz.o). >> >> >> >> That can not happen, because baz.go can only be in one package. >> > >> > Setups like the following are not possible in theory? >> > >> > if WANT_FEATURE_IN_FOO >> > foo_lo_SOURCES += baz.go >> > else >> > if WANT_FEATURE_IN_BAR >> > bar_lo_SOURCES += baz.go >> > endif >> > endif >> >> Right, that is not possible, unless foo.lo and bar.lo define the same >> package, which would be very odd. > > Why? With system-dependent differences that doesn't seem too remote > (the conditionals don't both have to be true on the same system). > Generally, Automake users will eventually come up with some use case > even for pretty remote cases ... It's normal to have a single package which had conditionally included source files. But what you are describing seems to be the case of the same source file going into two different packages. I don't know why that would ever be useful or interesting. But, yes, it could be done, if both of the two different packages used the same name at the source code level. Ian | http://lists.gnu.org/archive/html/automake/2010-11/msg00015.html | CC-MAIN-2016-50 | en | refinedweb |
XML::MyXML - A simple-to-use XML module, for parsing and creating XML documents
version 0.9403
use XML::MyXML qw(tidy_xml xml_to_object); use XML::MyXML qw(:all); my $xml = "<item><name>Table</name><price><usd>10.00</usd><eur>8.50</eur></price></item>"; print tidy_xml($xml); my $obj = xml_to_object($xml); print "Price in Euros = " . $obj->path('price/eur')->value; $obj->simplify is hashref { item => { name => 'Table', price => { usd => '10.00', eur => '8.50' } } } $obj->simplify({ internal => 1 }) is hashref { name => 'Table', price => { usd => '10.00', eur => '8.50' } }
xml_escape, tidy_xml, xml_to_object, object_to_xml, simple_to_xml, xml_to_simple, check_xml
This module can parse XML comments, CDATA sections, XML entities (the standard five and numeric ones) and simple non-recursive
<!ENTITY>s
It will ignore (won't parse)
<!DOCTYPE...>,
<?...?> and other
<!...> special markup
All strings (XML documents, attribute names, values, etc) produced by this module or passed as parameters to its functions, are strings that contain characters, rather than bytes/octets. Unless you use the
bytes function flag (see below), in which case the XML documents (and just the XML documents) will be byte/octet strings.
XML documents to be parsed may not contain the
> character unencoded in attribute values
Some functions and methods in this module accept optional flags, listed under each function in the documentation. They are optional, default to zero unless stated otherwise, and can be used as follows:
function_name( $param1, { flag1 => 1, flag2 => 1 } ). This is what each flag does:
strip : the function will strip initial and ending whitespace from all text values returned
file : the function will expect the path to a file containing an XML document to parse, instead of an XML string
complete : the function's XML output will include an XML declaration (
<?xml ... ?>) in the beginning
internal : the function will only return the contents of an element in a hashref instead of the element itself (see "SYNOPSIS" for example)
tidy : the function will return tidy XML
indentstring : when producing tidy XML, this denotes the string with which child elements will be indented (Default is the 'tab' character)
save : the function (apart from doing what it's supposed to do) will also save its XML output in a file whose path is denoted by this flag
strip_ns : strip the namespaces (characters up to and including ':') from the tags
xslt : will add a <?xml-stylesheet?> link in the XML that's being output, of type 'text/xsl', pointing to the filename or URL denoted by this flag
arrayref : the function will create a simple arrayref instead of a simple hashref (which will preserve order and elements with duplicate tags)
bytes : the XML document string which is parsed and/or produced by this function, should contain bytes/octets rather than characters
Returns the same string, but with the
<,
>,
&,
" and
' characters replaced by their XML entities (e.g.
&).
Returns the XML string in a tidy format (with tabs & newlines)
Optional flags:
file,
complete,
indentstring,
save,
bytes
Creates an 'XML::MyXML::Object' object from the raw XML provided
Optional flags:
file,
bytes
Creates an XML string from the 'XML::MyXML::Object' object provided
Optional flags:
complete,
tidy,
indentstring,
save,
bytes
Produces a raw XML string from either an array reference, a hash reference or a mixed structure such as these examples:
{ thing => { name => 'John', location => { city => 'New York', country => 'U.S.A.' } } } [ thing => [ name => 'John', location => [ city => 'New York', country => 'U.S.A.' ] ] ] { thing => { name => 'John', location => [ city => 'New York', city => 'Boston', country => 'U.S.A.' ] } }
All the strings in
$simple_array_ref need to contain characters, rather than bytes/octets. The
bytes optional flag only affects the produced XML string.
Optional flags:
complete,
tidy,
indentstring,
save,
xslt,
bytes
Produces a very simple hash object from the raw XML string provided. An example hash object created thusly is this:
{ thing => { name => 'John', location => { city => 'New York', country => 'U.S.A.' } } }
Since the object created is a hashref, duplicate keys will be discarded. WARNING: This function only works on very simple XML strings, i.e. children of an element may not consist of both text and elements (child elements will be discarded in that case)
All strings contained in the output simple structure, will always contain characters rather than octets/bytes, regardless of the
bytes optional flag.
Optional flags:
internal,
strip,
file,
strip_ns,
arrayref,
bytes
Returns true if the $raw_xml string is valid XML (valid enough to be used by this module), and false otherwise.
Optional flags:
file,
bytes
Returns the element specified by the path as an XML::MyXML::Object object. When there are more than one tags with the specified name in the last step of the path, it will return all of them as an array. In scalar context will only return the first one. Simple CSS3-style attribute selectors are allowed in the path next to the tagnames, for example:
p[class=big] will only return
<p>'; <people> <student> <name> <first>Alex</first> <last>Karelas</last> </name> </student> <student> <name> <first>John</first> <last>Doe</last> </name> </student> <teacher> <name> <first>Mary</first> <last>Poppins</last> </name> </teacher> <teacher> <name> <first>Peter</first> <last>Gabriel</last> </name> </teacher> </people> EOB my $obj = xml_to_object($xml); my @students = $obj->path('student'); foreach my $student (@students) { print $student->path('name/last')->value, "\n"; }
...or like this...
my @last = $obj->path('student/name/last'); foreach my $last (@last) { print $last->value, "\n"; }
If you wish to describe the root element in the path as well, prepend it in the path with a slash like so:
if( $student->path('/student/name/last')->value eq $student->path('name/last')->value ) { print "The two are identical", "\n"; }
Optional flags: none
When the element represented by the $obj object has only text contents, returns those contents as a string. If the $obj element has no contents, value will return an empty string.
Optional flags:
strip
Gets/Sets the value of the 'attrname' attribute of the top element. Returns undef if attribute does not exist. If called without the 'attrname' paramter, returns a hash with all attribute => value pairs. If setting with an attrvalue of
undef, then removes that attribute entirely.
Input parameters and output are all in character strings, rather than octets/bytes.
Optional flags: none
Returns the tag of the $obj element. E.g. if $obj represents an <rss:item> element,
$obj->tag will return the string 'rss:item'. Returns undef if $obj doesn't represent a tag.
Optional flags:
strip_ns
Returns the XML::MyXML::Object element that is the parent of $obj in the document. Returns undef if $obj doesn't have a parent.
Optional flags: none
Returns a very simple hashref, like the one returned with
&XML::MyXML::xml_to_simple. Same restrictions and warnings apply.
Optional flags:
internal,
strip,
strip_ns,
arrayref
Returns the XML string of the object, just like calling
object_to_xml( $obj )
Optional flags:
complete,
tidy,
indentstring,
save,
bytes
Returns the XML string of the object in tidy form, just like calling
tidy_xml( object_to_xml( $obj ) )
Optional flags:
complete,
indentstring,
save,
bytes) 2016 by Alexander Karelas.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | http://search.cpan.org/~karjala/XML-MyXML/lib/XML/MyXML.pm | CC-MAIN-2016-50 | en | refinedweb |
> -----Original Message-----
> From: [email protected] [mailto:[email protected]]
> Sent: Thursday, 21 March 2002 8:31 AM
> To: [email protected]
> Subject: cvs commit:
> jakarta-ant/proposal/myrmidon/src/testcases/org/apache/myrmidon/componen
> ts/property/test AbstractPropertyResolverTestCase.java
>
> Index: Ant1CompatProject.java
> ===================================================================
> +
> + /**
> + * Returns the value of a property, if it is set.
> + *
> + * @param name The name of the property.
> + * May be <code>null</code>, in which case
> + * the return value is also <code>null</code>.
> + * @return the property value, or <code>null</code> for no match
> + * or if a <code>null</code> name is provided.
> + */
> + public String getProperty( String name )
> + {
> + Object value = m_context.getProperty( name );
> +
> + // In Ant1, all properties are strings.
> + if( value instanceof String )
> + {
> + return (String)value;
> + }
> + else
> + {
> + return null;
> + }
> + }
> +
How about:
public String getProperty( String name )
{
Object value = m_context.getProperty( name );
if ( value != null )
{
return value.toString();
}
else {
return null;
}
}
or, probably better:.
Some other random comments about Ant1CompatProject:
* First up, this really is very cool. Heaps simpler than I imagined a
compat layer looking (though I guess it's not done yet).
*.
Adam
--
To unsubscribe, e-mail: <mailto:[email protected]>
For additional commands, e-mail: <mailto:[email protected]> | http://mail-archives.apache.org/mod_mbox/ant-dev/200203.mbox/%[email protected]%3E | CC-MAIN-2016-50 | en | refinedweb |
Adding JPA to the Address Book Demo
Introduction #
The Vaadin address book tutorial (the one hour version, that is) does a very good job introducing the different parts of Vaadin. However, it only uses an in-memory data source with randomly generated data. This may be sufficient for demonstration purposes, but not for any real world applications that manage data. Therefore, in this article, we are going to replace the tutorial's in-memory data source with the Java Persistence API (JPA) and also utilize some of the new JEE 6 features of GlassFish 3.
Prerequisites #
In order to fully understand this article, you should be familiar with JEE and JPA development and you should also have read through the Vaadin tutorial.
System Architecture #
The architecture of the application is as follows:
In addition to the Vaadin UI created in the tutorial, we will add a stateless Enterprise Java Bean (EJB) to act as a facade to the database. The EJB will in turn use JPA to communicate with a JDBC data source (in this example, the built-in jdbc/sampledata source).
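For reference, a minimal persistence.xml wiring the persistence unit to this data source could look roughly as follows. The unit name and the DDL generation property are illustrative assumptions (GlassFish 3 uses EclipseLink as its default JPA provider), not values taken from the article's sources:
#!xml
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="AddressBookPU" transaction-type="JTA">
    <jta-data-source>jdbc/sampledata</jta-data-source>
    <properties>
      <!-- Let EclipseLink create the tables on deployment -->
      <property name="eclipselink.ddl-generation" value="create-tables"/>
    </properties>
  </persistence-unit>
</persistence>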
Refactoring the Domain Model #
Before doing anything else, we have to modify the domain model of the Address Book example.
The Person class #
In order to use JPA, we have to add JPA annotations to the Person class:
#!java
// Imports omitted

@Entity
public class Person implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Version
    @Column(name = "OPTLOCK")
    private Long version;

    private String firstName = "";
    private String lastName = "";
    private String email = "";
    private String phoneNumber = "";
    private String streetAddress = "";
    private Integer postalCode = null;
    private String city = "";

    public Long getId() {
        return id;
    }

    public Long getVersion() {
        return version;
    }

    // The rest of the methods omitted
}
As we do not need to fit the domain model onto an existing database, the annotations become very simple. We have only marked the class as being an entity and added an ID and a version field.
The PersonReference class #
There are many advantages with using JPA or any other Object Persistence Framework (OPF). The underlying database gets completely abstracted away and we can work with the domain objects themselves instead of query results and records. We can detach domain objects, send them to a client using a remote invocation protocol, then reattach them again.
However, there are a few use cases where using an OPF is not such a good idea: reporting and listing. When a report is generated or a list of entities is presented to the user, normally only a small part of the data is actually required. When the number of objects to fetch is large and the domain model is complex, constructing the object graphs from the database can be a very lengthy process that puts the users' patience to the test – especially if they are only trying to select a person's name from a list.
Many OPFs support lazy loading of some form, where references and collections are fetched on demand. However, this rarely works outside the container, e.g. on the other side of a remoting connection.
One way of working around this problem is to let reports and lists access the database directly using SQL. This is a fast approach, but it also couples the code to a particular SQL dialect and therefore to a particular database vendor.
In this article, we are going to select the road in the middle – we will only fetch the property values we need instead of the entire object, but we will use PQL and JPA to do so. In this example, this is a slight overkill as we have a very simple domain model. However, we do this for two reasons: Firstly, as Vaadin is used extensively in business applications where the domain models are complex, we want to introduce this pattern in an early stage. Secondly, it makes it easier to plug into Vaadin's data model.
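To make the difference concrete, the two approaches boil down to queries like these (a purely illustrative sketch, using the injected entityManager that is introduced with the EJB later in this article; the actual query used here is built dynamically):
#!java
// Fetching whole entities: every Person is fully materialized by the OPF
List<Person> all = entityManager.createQuery("SELECT p FROM Person p").getResultList();

// Fetching only the values needed for a list: each row is just an Object[] of scalars
List<Object[]> rows = entityManager
        .createQuery("SELECT p.id, p.firstName, p.lastName FROM Person p")
        .getResultList();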
In order to implement this pattern, we need to introduce a new class, namely PersonReference:
#!java
import com.vaadin.data.Item;
import com.vaadin.data.Property;
import com.vaadin.data.util.ObjectProperty;
// Some imports omitted

public class PersonReference implements Serializable, Item {

    private Long personId;
    private Map<Object, Property> propertyMap;

    public PersonReference(Long personId, Map<String, Object> propertyMap) {
        this.personId = personId;
        this.propertyMap = new HashMap<Object, Property>();
        // Wrap each plain value in an ObjectProperty so it can be accessed through the Item interface
        for (Map.Entry<String, Object> entry : propertyMap.entrySet()) {
            this.propertyMap.put(entry.getKey(), new ObjectProperty(entry.getValue()));
        }
    }

    public Long getPersonId() {
        return personId;
    }

    public Property getItemProperty(Object id) {
        return propertyMap.get(id);
    }

    public Collection<?> getItemPropertyIds() {
        return Collections.unmodifiableSet(propertyMap.keySet());
    }

    public boolean addItemProperty(Object id, Property property) {
        throw new UnsupportedOperationException("Item is read-only.");
    }

    public boolean removeItemProperty(Object id) {
        throw new UnsupportedOperationException("Item is read-only.");
    }
}
The class contains the ID of the actual Person object and a Map of property values. It also implements the com.vaadin.data.Item interface, which makes it directly usable in Vaadin's data containers.
The QueryMetaData class #
Before moving on to the EJB, we have to introduce yet another class, namely QueryMetaData:
#!java
// Imports omitted

public class QueryMetaData implements Serializable {

    private boolean[] ascending;
    private String[] orderBy;
    private String searchTerm;
    private String propertyName;

    public QueryMetaData(String propertyName, String searchTerm, String[] orderBy, boolean[] ascending) {
        this.propertyName = propertyName;
        this.searchTerm = searchTerm;
        this.ascending = ascending;
        this.orderBy = orderBy;
    }

    public QueryMetaData(String[] orderBy, boolean[] ascending) {
        this(null, null, orderBy, ascending);
    }

    public boolean[] getAscending() {
        return ascending;
    }

    public String[] getOrderBy() {
        return orderBy;
    }

    public String getSearchTerm() {
        return searchTerm;
    }

    public String getPropertyName() {
        return propertyName;
    }
}
As the class name suggests, this class contains query meta data such as ordering and filtering information. We are going to look at how it is used in the next section.
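The meaning of the individual fields is easiest to see from a couple of example instances (the property names and values here are only illustrative):
#!java
// Order by last name and first name, both ascending
QueryMetaData ordering = new QueryMetaData(
        new String[] { "lastName", "firstName" }, new boolean[] { true, true });

// Additionally filter on a single property (city = "Turku"), ordered by last name
QueryMetaData filtered = new QueryMetaData(
        "city", "Turku",
        new String[] { "lastName" }, new boolean[] { true });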
The Stateless EJB #
We are now ready to begin designing the EJB. As of JEE 6, an EJB is no longer required to have an interface. However, as it is a good idea to use interfaces at the boundaries of system components, we will create one nonetheless:
#!java
// Imports omitted

@TransactionAttribute
@Local
public interface PersonManager {

    public List<PersonReference> getPersonReferences(QueryMetaData queryMetaData, String... propertyNames);

    public Person getPerson(Long id);

    public Person savePerson(Person person);
}
Please note the @TransactionAttribute and @Local annotations that instruct GlassFish to use container managed transaction handling, and to use local references, respectively. Next, we create the implementation:
#!java
// Imports omitted

@Stateless
public class PersonManagerBean implements PersonManager {

    @PersistenceContext
    protected EntityManager entityManager;

    public Person getPerson(Long id) {
        // Implementation omitted
    }

    public List<PersonReference> getPersonReferences(QueryMetaData queryMetaData, String... propertyNames) {
        // Implementation omitted
    }

    public Person savePerson(Person person) {
        // Implementation omitted
    }
}
We use the @Stateless annotation to mark the implementation as a stateless session EJB. We also use the @PersistenceContext annotation to instruct the container to automatically inject the entity manager dependency. Thus, we do not have to do any lookups using e.g. JNDI.
Now we can move on to the method implementations.
#!java
public Person getPerson(Long id) {
    return entityManager.find(Person.class, id);
}
This implementation is very straight-forward: given the unique ID, we ask the entity manager to look up the corresponding Person instance and return it. If no such instance is found, null is returned.
#!java
public List<PersonReference> getPersonReferences(QueryMetaData queryMetaData, String... propertyNames) {
    // Build the SELECT clause: always fetch the id, plus the requested properties
    StringBuffer pqlBuf = new StringBuffer();
    pqlBuf.append("SELECT p.id");
    for (int i = 0; i < propertyNames.length; i++) {
        pqlBuf.append(",");
        pqlBuf.append("p.");
        pqlBuf.append(propertyNames[i]);
    }
    pqlBuf.append(" FROM Person p");
    // Add a WHERE clause if the meta data contains a filter
    if (queryMetaData.getPropertyName() != null) {
        pqlBuf.append(" WHERE p.");
        pqlBuf.append(queryMetaData.getPropertyName());
        if (queryMetaData.getSearchTerm() == null) {
            pqlBuf.append(" IS NULL");
        } else {
            pqlBuf.append(" = :searchTerm");
        }
    }
    // Add an ORDER BY clause if ordering information is available
    if (queryMetaData != null && queryMetaData.getAscending().length > 0) {
        pqlBuf.append(" ORDER BY ");
        for (int i = 0; i < queryMetaData.getAscending().length; i++) {
            if (i > 0) {
                pqlBuf.append(",");
            }
            pqlBuf.append("p.");
            pqlBuf.append(queryMetaData.getOrderBy()[i]);
            if (!queryMetaData.getAscending()[i]) {
                pqlBuf.append(" DESC");
            }
        }
    }
    String pql = pqlBuf.toString();
    Query query = entityManager.createQuery(pql);
    if (queryMetaData.getPropertyName() != null && queryMetaData.getSearchTerm() != null) {
        query.setParameter("searchTerm", queryMetaData.getSearchTerm());
    }
    // Each result row is an Object[] whose first element is the id,
    // followed by the requested property values
    List<Object[]> result = query.getResultList();
    List<PersonReference> referenceList = new ArrayList<PersonReference>(result.size());
    HashMap<String, Object> valueMap;
    for (Object[] row : result) {
        valueMap = new HashMap<String, Object>();
        for (int i = 1; i < row.length; i++) {
            valueMap.put(propertyNames[i - 1], row[i]);
        }
        referenceList.add(new PersonReference((Long) row[0], valueMap));
    }
    return referenceList;
}
This method is a little more complicated and also demonstrates the usage of the QueryMetaData class. It constructs a PQL query that fetches the values of the properties provided in the propertyNames array from the database, then uses the QueryMetaData instance to add information about ordering and filtering. Finally, it executes the query and returns the result as a list of PersonReference instances.
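As an illustration, assume a QueryMetaData that filters on city = "Turku" and orders by last name (such as the filtered instance sketched earlier), and that we ask for the first and last names. The call and the query it builds would then look roughly like this:
#!java
// Assuming the "filtered" QueryMetaData from the earlier example:
List<PersonReference> refs =
        personManager.getPersonReferences(filtered, "firstName", "lastName");
// The PQL built internally is:
//   SELECT p.id,p.firstName,p.lastName FROM Person p WHERE p.city = :searchTerm ORDER BY p.lastName
// with :searchTerm bound to "Turku"; each result row becomes one PersonReference.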
The advantage of using QueryMetaData is that additional query options can be added without having to change the interface. We could e.g. create a subclass named AdvancedQueryMetaData with information about wildcards, result size limitations, etc. Finally, let us take a look at the savePerson(..) method:
#!java
public Person savePerson(Person person) {
    if (person.getId() == null) {
        entityManager.persist(person);
    } else {
        // merge(..) returns the managed copy; return that one so the caller
        // gets the up-to-date state (including the version field)
        person = entityManager.merge(person);
    }
    return person;
}
This method checks if person is persistent or transient, merges or persists it, respectively, and finally returns it. The reason why person is returned is that this makes the method usable for remote method calls. However, as this example does not need any remoting, we are not going to discuss this matter any further in this article.
Plugging Into the UI #
The persistence component of our Address Book application is now completed. Now we just have to plug it into the existing user interface component. In this article, we are only going to look at some of the changes that have to be made to the code. That is, if you try to deploy the application with the changes presented in this article only, it will not work. For all the changes, please check the source code archive attached to this article.
Creating a New Container #
First of all, we have to create a Vaadin container that knows how to read data from a PersonManager:
#!java
// Imports omitted

public class PersonReferenceContainer implements Container, Container.ItemSetChangeNotifier {

    public static final Object[] NATURAL_COL_ORDER = new String[] {"firstName", "lastName",
        "email", "phoneNumber", "streetAddress", "postalCode", "city"};

    protected static final Collection<Object> NATURAL_COL_ORDER_COLL = Collections.unmodifiableList(
        Arrays.asList(NATURAL_COL_ORDER) );

    protected final PersonManager personManager;
    protected List<PersonReference> personReferences;
    protected Map<Object, PersonReference> idIndex;

    public static QueryMetaData defaultQueryMetaData = new QueryMetaData(
        new String[]{"firstName", "lastName"}, new boolean[]{true, true});

    protected QueryMetaData queryMetaData = defaultQueryMetaData;

    // Some fields omitted

    public PersonReferenceContainer(PersonManager personManager) {
        this.personManager = personManager;
    }

    public void refresh() {
        refresh(queryMetaData);
    }

    public void refresh(QueryMetaData queryMetaData) {
        this.queryMetaData = queryMetaData;
        personReferences = personManager.getPersonReferences(queryMetaData, (String[]) NATURAL_COL_ORDER);
        idIndex = new HashMap<Object, PersonReference>(personReferences.size());
        for (PersonReference pf : personReferences) {
            idIndex.put(pf.getPersonId(), pf);
        }
        notifyListeners();
    }

    public QueryMetaData getQueryMetaData() {
        return queryMetaData;
    }

    public void close() {
        if (personReferences != null) {
            personReferences.clear();
            personReferences = null;
        }
    }

    public boolean isOpen() {
        return personReferences != null;
    }

    public int size() {
        return personReferences == null ? 0 : personReferences.size();
    }

    public Item getItem(Object itemId) {
        return idIndex.get(itemId);
    }

    public Collection<?> getContainerPropertyIds() {
        return NATURAL_COL_ORDER_COLL;
    }

    public Collection<?> getItemIds() {
        return Collections.unmodifiableSet(idIndex.keySet());
    }

    public List<PersonReference> getItems() {
        return Collections.unmodifiableList(personReferences);
    }

    public Property getContainerProperty(Object itemId, Object propertyId) {
        Item item = idIndex.get(itemId);
        if (item != null) {
            return item.getItemProperty(propertyId);
        }
        return null;
    }

    public Class<?> getType(Object propertyId) {
        try {
            PropertyDescriptor pd = new PropertyDescriptor((String) propertyId, Person.class);
            return pd.getPropertyType();
        } catch (Exception e) {
            return null;
        }
    }

    public boolean containsId(Object itemId) {
        return idIndex.containsKey(itemId);
    }

    // Unsupported methods omitted
    // addListener(..) and removeListener(..) omitted

    protected void notifyListeners() {
        ArrayList<ItemSetChangeListener> cl = (ArrayList<ItemSetChangeListener>) listeners.clone();
        ItemSetChangeEvent event = new ItemSetChangeEvent() {
            public Container getContainer() {
                return PersonReferenceContainer.this;
            }
        };
        for (ItemSetChangeListener listener : cl) {
            listener.containerItemSetChange(event);
        }
    }
}
Upon creation, this container is empty. When one of the refresh(..) methods is called, a list of PersonReferences is fetched from the PersonManager and cached locally. Even if the database is updated, e.g. by another user, the container contents will not change before the next call to refresh(..).
To keep things simple, the container is read only, meaning that all methods that are designed to alter the contents of the container throw an exception. Sorting, optimization and lazy loading have also been left out (if you like, you can try to implement these yourself).
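As a hedged illustration only (this snippet is not part of the original demo code), each of the omitted "unsupported" mutator methods required by the Container interface can simply refuse the operation:
#!java
// Illustrative sketch: the container is read only, so mutating operations are rejected.
public Item addItem(Object itemId) throws UnsupportedOperationException {
    throw new UnsupportedOperationException("This container is read only");
}

public boolean removeItem(Object itemId) throws UnsupportedOperationException {
    throw new UnsupportedOperationException("This container is read only");
}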
Modifying the PersonForm class #
We now have to refactor the code to use our new container, starting with the PersonForm class. We begin with the part of the constructor that creates a list of all the cities currently in the container:
#!java
PersonReferenceContainer ds = app.getDataSource();
for (PersonReference pf : ds.getItems()) {
    String city = (String) pf.getItemProperty("city").getValue();
    cities.addItem(city);
}
We have changed the code to iterate over a collection of PersonReference instances instead of Person instances.
Then, we will continue with the part of the buttonClick(..) method that saves the contact:
#!java
if (source == save) {
    if (!isValid()) {
        return;
    }
    commit();
    person = app.getPersonManager().savePerson(person);
    setItemDataSource(new BeanItem(person));
    newContactMode = false;
    app.getDataSource().refresh();
    setReadOnly(true);
}
The code has actually become simpler, as the same method is used to save both new and existing contacts. When the contact is saved, the container is refreshed so that the new information is displayed in the table.
Finally, we will add a new method, editContact(..), for displaying and editing existing contacts:
#!java
public void editContact(Person person) {
    this.person = person;
    setItemDataSource(new BeanItem(person));
    newContactMode = false;
    setReadOnly(true);
}
This method is almost identical to addContact(), but uses an existing Person instance instead of a newly created one. It also makes the form read only, as the user is expected to click an Edit button to make the form editable.
Modifying the AddressBookApplication class #
Finally, we are going to replace the old container with the new one in the main application class. We will start by adding a constructor:
#!java
public AddressBookApplication(PersonManager personManager) {
    this.personManager = personManager;
}
This constructor will be used by a custom application servlet to inject a reference to the PersonManager EJB. When this is done, we move on to the init() method:
#!java
public void init() {
    dataSource = new PersonReferenceContainer(personManager);
    dataSource.refresh(); // Load initial data
    buildMainLayout();
    setMainComponent(getListView());
}
The method creates a container and refreshes it in order to load the existing data from the database – otherwise, the user would be presented with an empty table upon application startup.
Next, we modify the code that is used to select contacts:
#!java
public void valueChange(ValueChangeEvent event) {
    Property property = event.getProperty();
    if (property == personList) {
        Person person = personManager.getPerson((Long) personList.getValue());
        personForm.editContact(person);
    }
}
The method gets the ID of the currently selected person and uses it to look up the Person instance from the database, which is then passed to the person form using the newly created editContact(..) method.
Next, we modify the code that handles searches:
#!java
public void search(SearchFilter searchFilter) {
    QueryMetaData qmd = new QueryMetaData((String) searchFilter.getPropertyId(),
            searchFilter.getTerm(),
            getDataSource().getQueryMetaData().getOrderBy(),
            getDataSource().getQueryMetaData().getAscending());
    getDataSource().refresh(qmd);
    showListView();
    // Visual notification omitted
}
Instead of filtering the container, this method constructs a new QueryMetaData instance and refreshes the data source. Thus, the search operation is performed in the database and not in the container itself.
As we have removed container filtering, we also have to change the code that is used to show all contacts:
#!java
public void itemClick(ItemClickEvent event) {
    if (event.getSource() == tree) {
        Object itemId = event.getItemId();
        if (itemId != null) {
            if (itemId == NavigationTree.SHOW_ALL) {
                getDataSource().refresh(PersonReferenceContainer.defaultQueryMetaData);
                showListView();
            } else if (itemId == NavigationTree.SEARCH) {
                showSearchView();
            } else if (itemId instanceof SearchFilter) {
                search((SearchFilter) itemId);
            }
        }
    }
}
Instead of removing the filters, this method refreshes the data source using the default query meta data.
Creating a Custom Servlet #
The original tutorial used an ApplicationServlet configured in web.xml to start the application. In this version, however, we are going to create our own custom servlet. By doing this, we can let GlassFish inject the reference to the PersonManager EJB using annotations, which means that we do not need any JNDI lookups at all. As a bonus, we get rid of the web.xml file as well thanks to the new JEE 6 @WebServlet annotation. The servlet class can be added as an inner class to the main application class:
#!java
@WebServlet(urlPatterns = "/*")
public static class Servlet extends AbstractApplicationServlet {

    @EJB
    PersonManager personManager;

    @Override
    protected Application getNewApplication(HttpServletRequest request) throws ServletException {
        return new AddressBookApplication(personManager);
    }

    @Override
    protected Class<? extends Application> getApplicationClass() throws ClassNotFoundException {
        return AddressBookApplication.class;
    }
}
When the servlet is initialized by the web container, the PersonManager EJB will be automatically injected into the personManager field thanks to the @EJB annotation. This reference can then be passed to the main application class in the getNewApplication(..) method.
Classical Deployment #
Packaging this application into a WAR is no different from the Hello World example. We just have to remember to include the persistence.xml file (we are not going to cover the contents of this file in this article), otherwise JPA will not work. Note that as of JEE 6, we do not need to split the application into one bundle for the EJB and another for the UI. We also do not need any configuration files other than the persistence unit configuration file.
The actual packaging can be done using the following Ant target:
#!xml
<target name="package-with-vaadin" depends="compile">
    <mkdir dir="${dist.dir}"/>
    <war destfile="${dist.dir}/${ant.project.name}-with-vaadin.war" needxmlfile="false">
        <lib file="${vaadin.jar}"/>
        <classes dir="${build.dir}"/>
        <fileset dir="${web.dir}" includes="**"/>
    </war>
</target>
Once the application has been packaged, it can be deployed like so, using the asadmin tool that comes with GlassFish:
$ asadmin deploy /path/to/addressbook-with-vaadin.war
Note that the Java DB database bundled with GlassFish must be started prior to deploying the application. Now we can test the application by opening a web browser and navigating to the application's URL. The running application should look something like this:
OSGi Deployment Options #
The OSGi support of GlassFish 3 introduces some new possibilities for Vaadin development. These are discussed in the Hello GlassFish 3 and Creating a Modular Vaadin Application with OSGi articles, respectively, and will not be covered in this article. If you are not interested in OSGi, you can stop reading here. Otherwise, you should read the aforementioned articles first and then continue.
If the Vaadin library is deployed as an OSGi bundle, we can package and deploy the address book application without the Vaadin library. The following Ant target can be used to create the WAR:
#!xml
<target name="package-without-vaadin" depends="compile">
    <mkdir dir="${dist.dir}"/>
    <war destfile="${dist.dir}/${ant.project.name}-without-vaadin.war" needxmlfile="false">
        <classes dir="${build.dir}"/>
        <fileset dir="${web.dir}" includes="**"/>
    </war>
</target>
Although it is possible to deploy both web applications and EJBs as pure OSGi bundles, we currently cannot deploy the Address Book as an OSGi bundle. The reason for this is that JPA is not currently working with OSGi due to conflicting class loading requirements. However, according to this blog entry, the GlassFish developers are working on solving this problem, so all we can do for now is wait. Update 2010-04-12: The problem should be solved now in the GlassFish trunk.
Summary #
In this article, we have extended the Address Book demo to use JPA instead of the in-memory container, with an EJB acting as the facade to the database. Thanks to annotations, the application does not contain a single JNDI lookup, and thanks to JEE 6, the application can be deployed as a single WAR.
15. Re: JNI sample problem - 837475, Mar 1, 2011 3:32 PM (in response to EJP)
As I know, I define the type of the file that is created when I clean and build the c++/c project using the project properties.
First, on C compiler's properties I add include and win32 directories.
Then, on C compiler's properties I add in additional option field -m32 (for using 32 bit system) and -shared for creating a shared object
Last, in Linker, output field i add the path when the file will be created and its extension:
- if i said libName.so the result is shared object
-if i write .dll, the result is dll library
I use the same path written there as my VM options path in the Java project.
16. Re: JNI sample problem - jschellSomeoneStoleMyAlias, Mar 1, 2011 8:12 PM (in response to 837475)
> System.load("C:/NetBeansProjects/HelloWorldC/dist/libHelloWorldC.so");
Need to slow down a bit on this discussion.
There are only two possibilities for the load() method.
1. It loaded a library.
2. It didn't load a library and threw an exception.
The fact that the above name and path is odd doesn't mean that it has anything to do with the original problem in this thread. All that matters is that it is a library and that it did load.
If it did in fact load a library and it still doesn't find the method then one is left with
1. The method is not in that file (again if if loaded it is a library.)
2. The method signature is different.
17. Re: JNI sample problem - jschellSomeoneStoleMyAlias, Mar 1, 2011 8:18 PM (in response to 837475)
> gotqn wrote: 4. Changed the java project VM option to -Djava.library.path=C:\NetBeansProjects\HelloWorldC
Meaningless since you are using load().
18. Re: JNI sample problem - EJP, Mar 1, 2011 10:05 PM (in response to 837475)
> - if i said libName.so the result is shared object
> - if i write .dll, the result is dll library
No. In both cases the result is a linked file of the type that the linker produces. Just changing the extension doesn't change the result type.
As you are running on Windows the file should be named .dll and it should not start with 'lib'. You should locate it in '.' and you should load it with System.loadLibrary(), providing just its name without the .dll extension and without specifying any directories.
I've said all this before. You could always try it, instead of just reiterating what you have done wrong, and dreaming up specious reasons for it.
19. Re: JNI sample problem - 837475, Mar 3, 2011 10:19 AM (in response to EJP)
EJP, thanks for the advice, but it did not work for me.
I have deleted my projects and started the example again doing everything as you have told me. I have located the file in '.' and loaded it with System.loadLibrary() without the .dll extension and without specifying any directories.
The error message is the same, and in the code below this row seems to be the reason for the error:

new Main().nativePrint();

/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package jnidemojava;

/**
 *
 * @author Joro
 */
public class Main {

    static {
        System.loadLibrary("JNIDemoCdl");
    }

    public static void main(String[] args) {
        new Main().nativePrint();
    }

    private native void nativePrint();
}

I have started to doubt that there is something wrong with my method, not with the loading of the library. Can you explain what the problem with the method signature is that the other man has told me about?
20. Re: JNI sample problem - EJP, Mar 4, 2011 5:17 AM (in response to 837475)
So you are now loading the library successfully. That's progress. You now have an issue with the actual call. Provided the correct version of the .dll is being loaded and provided the C source code looks as above and provided the compile and link options are correct and provided the Java source code looks as above it should work. I would suspect the compile/link options at the moment. Can you produce a map file and post the line that lists this entry point?
21. Re: JNI sample problem - 837475, Mar 4, 2011 8:55 AM (in response to EJP)
Thanks for paying attention, EJP.
How can I produce a map file?
I want to know something more. I have been told that the problem may be an incorrect build of the .dll file, so I suspect that I have not set the correct options in the C/C++ dynamic library project before the build.
I am not sure what additional options I have to add in the C project settings. I add only -m32 and -shared, but in a few other samples a lot more were added. Maybe I have to add something more there in order to build the correct DLL file?
22. Re: JNI sample problem - EJP, Mar 4, 2011 9:01 AM (in response to 837475)
> How can I produce a map file?
It's a linker option. You'll have to look it up. I don't know what linker you're using and in any case it's not a Java question. Use the latest one, or else the one from the JVM that you are targeting.
> Maybe I have to add something more there in order to build the correct dll file?
Maybe so. Again it's not a Java question. But I will add that I have never had success trying to use a shared C runtime with JNI DLLs.
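For what it's worth, with the MinGW gcc toolchain the original poster appears to be using, both of those points translate into linker flags. The command below is only a hedged sketch: the DLL name comes from the thread, while the C source file name and include paths are assumptions that must be adapted to the actual project.

REM Hypothetical MinGW command line, run from cmd.exe:
REM   -Wl,-Map,JNIDemoCdl.map  asks the GNU linker to write a map file
REM   -static-libgcc           avoids depending on the shared GCC runtime
gcc -m32 -shared JNIDemoC.c -o JNIDemoCdl.dll ^
    -I"%JAVA_HOME%\include" -I"%JAVA_HOME%\include\win32" ^
    -Wl,-Map,JNIDemoCdl.map -static-libgcc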
23. Re: JNI sample problem - jschellSomeoneStoleMyAlias, Mar 4, 2011 6:46 PM (in response to 837475)
As posted earlier...
The h file
JNIEXPORT void JNICALL Java_helloworld_Main_nativePrint
The source file.
JNIEXPORT void JNICALL Java_jnidemojava_Main_nativePrint
Those do not match.
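The reason they must match is the JNI naming convention: the exported C symbol encodes the Java package, class and method name. A sketch of what the regenerated declaration has to look like for a class jnidemojava.Main (assuming the native method is not overloaded):

/* Symbol format: Java_<package with '_' for '.'>_<Class>_<method>.
 * A header generated for a class in package "helloworld" therefore cannot
 * match a class that now lives in package "jnidemojava". */
JNIEXPORT void JNICALL Java_jnidemojava_Main_nativePrint(JNIEnv *env, jobject obj);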
24. Re: JNI sample problem - 837475, Mar 5, 2011 9:45 AM (in response to jschellSomeoneStoleMyAlias)
That's because I tried two different examples. In fact it is:
header file code:
and the source file:
/* DO NOT EDIT THIS FILE - it is machine generated */
#include <jni.h>
/* Header for class jnidemojava_Main */

#ifndef _Included_jnidemojava_Main
#define _Included_jnidemojava_Main
#ifdef __cplusplus
extern "C" {
#endif
/*
 * Class:     jnidemojava_Main
 * Method:    nativePrint
 * Signature: ()V
 */
JNIEXPORT void JNICALL Java_jnidemojava_Main_nativePrint
  (JNIEnv *, jobject);

#ifdef __cplusplus
}
#endif
#endif
#include <jni.h>
#include <stdio.h>
#include "JNIDemoJava.h"

JNIEXPORT void JNICALL Java_jnidemojava_Main_nativePrint
  (JNIEnv *env, jobject obj)
{
    printf("\nHello World from C\n");
}
25. Re: JNI sample problem - 888281, Sep 14, 2011 6:58 PM (in response to 837475)
After getting a profile hash from the LinkedIn Ruby Gem, but I'm looking
for a way to get more information from the user, like skills, their
connections, their email, their profile pic, etc. I'm not having any
trouble accessing this info from the currently authenticated user, only
getting this kind of information from the authenticated user's connections,
and from public profiles.
I
i have a few devices that communicate through serial port. Since, they
are not always connected to the same serial port, so i need to know exactly
which device i'm communicating with when i send data. How can I check which
device is connected to which com port.
Here's what I have so far:
IService:
using
System;using System.Collections.Generic;using System.Linq;using System.Text;using System.ServiceModel;namespace
ServiceLibrary{ [ServiceContract(SessionMode =
SessionMode.Allowed, CallbackContract = typeof(IServiceCallback))]
public interface IService {
I want to share file from my Mob Phone to other Mob Phone which are
connected to the same network with which i am connected, is it can do
easily with tcp or dhcp or socket, please i don't know about all of these
also.
Whether or not the device has 3g/data activated. Any idea of what's
happening?Thank you
My code:
public
boolean isConnected3G(){ ConnectivityManager cm =
(ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE);
NetworkInfo[] networks = cm.getAllNetworkInfo(); for
(NetworkInfo ni : networks) if ("MOBILE".equal
Given the following code, which is the first code having anything to do
with my TIdTCPClient when my program runs
TIdTCPClient
try if not TCPclient.Connected then begin
TraceInfo(TCPclient.Host + ' Server not connected - reconnect');
TCPclient.Connect(); TCPclient.GetResponse(200);
ConnectedToServer(); TraceInfo(T
how to add in exception showing the input has to be enter, if the user
click on connect without entering any input? i would like the have a
message box show if the user click on the connect button without entering
the name, ip and port. [SOLVED]
using System;using
System.Collections.Generic;using System.ComponentModel;using
System.Data;using System.Drawi
sortable's connectWith seems strange. For
instance, I have a list of sortable items (orange) that I don't want them
to connect with other sortable connected list items (yellow).
sortable
connectWith
So I add a class name to those are connected
connected-sortable, but the ones (orange) are not connected
still can be dropped into the connected list.
connected-sortable
Why i
I would like to checkout a read only SVN projects (e.g. from Codeplex)
to a folder on my local disk, open the solution (let's say I am getting
Dotnetnuke and open it's solution, add some projects to the solution and
possibly changing some files, previously making them writable.I want
to regularly update the changes coming from Codeplex and do code mergers to
preserver my changes).
I am want to have controlled if my rmiport is available and I can have
my connection or if it is busy, not available or wathever. I know if it
cannot connect it is shown an exception message, but I would like to have
it under control, so if it cannot connect I want to show a message like
"Connection Failed" and stop the process. (I am working with Eclipse) is it
possible?
Thanks in advance.
I have taken Problem #12 from Project Euler as a programming exercise and to compare my (surely not optimal) implementations in C, Python, Erlang and Haskell. In order to get some higher execution times, I search for the first triangle number with more than 1000 divisors instead of 500 as stated in the original problem.
The result is the following:
C:
lorenzo@enzo:~/erlang$ gcc -lm -o euler12.bin euler12.c
lorenzo@enzo:~/erlang$ time ./euler12.bin
842161320

real    0m11.074s
user    0m11.070s
sys     0m0.000s
python:
lorenzo@enzo:~/erlang$ time ./euler12.py
842161320

real    1m16.632s
user    1m16.370s
sys     0m0.250s
python with pypy:
lorenzo@enzo:~/Downloads/pypy-c-jit-43780-b590cf6de419-linux64/bin$ time ./pypy /home/lorenzo/erlang/euler12.py
842161320

real    0m13.082s
user    0m13.050s
sys     0m0.020s
erlang:
lorenzo@enzo:~/erlang$ erlc euler12.erl
lorenzo@enzo:~/erlang$ time erl -s euler12 solve
Erlang R13B03 (erts-5.7.4) [source] [64-bit] [smp:4:4] [rq:4] [async-threads:0] [hipe] [kernel-poll:false]
Eshell V5.7.4  (abort with ^G)
1> 842161320

real    0m48.259s
user    0m48.070s
sys     0m0.020s
haskell:
lorenzo@enzo:~/erlang$ ghc euler12.hs -o euler12.hsx
[1 of 1] Compiling Main             ( euler12.hs, euler12.o )
Linking euler12.hsx ...
lorenzo@enzo:~/erlang$ time ./euler12.hsx
842161320

real    2m37.326s
user    2m37.240s
sys     0m0.080s
Summary:
I suppose that C has a big advantage as it uses long for the calculations and not arbitrary length integers as the other three. Also it doesn't need to load a runtime first (Do the others?).
Question 1:
Do Erlang, Python and Haskell lose speed due to using arbitrary-length integers, or don't they as long as the values are less than MAXINT?
EDIT:
Question 4: Do my functional implementations permit LCO (last call optimization, a.k.a tail recursion elimination) and hence avoid adding unnecessary frames onto the call stack?
I really tried to implement the same algorithm as similarly as possible in the four languages, although I have to admit that my Haskell and Erlang knowledge is very limited.
Source codes used:
#include <stdio.h>
#include <math.h>

int factorCount (long n)
{
    double square = sqrt (n);
    int isquare = (int) square;
    int count = isquare == square ? -1 : 0;
    long candidate;
    for (candidate = 1; candidate <= isquare; candidate ++)
        if (0 == n % candidate) count += 2;
    return count;
}

int main ()
{
    long triangle = 1;
    int index = 1;
    while (factorCount (triangle) < 1001)
    {
        index ++;
        triangle += index;
    }
    printf ("%ld\n", triangle);
}
#! /usr/bin/env python3.2

import math

def factorCount (n):
    square = math.sqrt (n)
    isquare = int (square)
    count = -1 if isquare == square else 0
    for candidate in range (1, isquare + 1):
        if not n % candidate: count += 2
    return count

triangle = 1
index = 1
while factorCount (triangle) < 1001:
    index += 1
    triangle += index

print (triangle)
-module (euler12).
-compile (export_all).

factorCount (Number) -> factorCount (Number, math:sqrt (Number), 1, 0).

factorCount (_, Sqrt, Candidate, Count) when Candidate > Sqrt -> Count;

factorCount (_, Sqrt, Candidate, Count) when Candidate == Sqrt -> Count + 1;

factorCount (Number, Sqrt, Candidate, Count) ->
    case Number rem Candidate of
        0 -> factorCount (Number, Sqrt, Candidate + 1, Count + 2);
        _ -> factorCount (Number, Sqrt, Candidate + 1, Count)
    end.

nextTriangle (Index, Triangle) ->
    Count = factorCount (Triangle),
    if
        Count > 1000 -> Triangle;
        true -> nextTriangle (Index + 1, Triangle + Index + 1)
    end.

solve () ->
    io:format ("~p~n", [nextTriangle (1, 1) ] ),
    halt (0).
factorCount number = factorCount' number isquare 1 0 - (fromEnum $ square == fromIntegral isquare)
    where square = sqrt $ fromIntegral number
          isquare = floor square

factorCount' number sqrt candidate count
    | fromIntegral candidate > sqrt = count
    | number `mod` candidate == 0 = factorCount' number sqrt (candidate + 1) (count + 2)
    | otherwise = factorCount' number sqrt (candidate + 1) count

nextTriangle index triangle
    | factorCount triangle > 1000 = triangle
    | otherwise = nextTriangle (index + 1) (triangle + index + 1)

main = print $ nextTriangle 1 1
Using GHC 7.0.3, gcc 4.4.6, Linux 2.6.29 on an x86_64 Core2 Duo (2.5GHz) machine, compiling using ghc -O2 -fllvm -fforce-recomp for Haskell and gcc -O3 -lm for C.
A few things stand out:
1. The runs below were compiled with optimization (gcc with -O3, GHC with the -O2 flag).
2. Your factorCount' code isn't explicitly typed and is defaulting to Integer (thanks to Daniel for correcting my misdiagnosis here!). Giving an explicit type signature (which is standard practice anyway) using Int, the time changes to 11.1 seconds.
3. In factorCount' you have needlessly called fromIntegral. A fix results in no change though (the compiler is smart, lucky for you).
4. You used mod where rem is faster and sufficient. This changes the time to 8.5 seconds.
5. factorCount' is constantly applying two extra arguments that never change (number, sqrt). A worker/wrapper transformation gives us:
$ time ./so
842161320

real    0m7.954s
user    0m7.944s
sys     0m0.004s
That's right, 7.95 seconds. Consistently half a second faster than the C solution. Without the -fllvm flag I'm still getting 8.182 seconds, so the NCG backend is doing well in this case too.
Conclusion: Haskell is awesome.
Resulting Code
factorCount number = factorCount' number isquare 1 0 - (fromEnum $ square == fromIntegral isquare)
    where square = sqrt $ fromIntegral number
          isquare = floor square

factorCount' :: Int -> Int -> Int -> Int -> Int
factorCount' number sqrt candidate0 count0 = go candidate0 count0
    where
        go candidate count
            | candidate > sqrt            = count
            | number `rem` candidate == 0 = go (candidate + 1) (count + 2)
            | otherwise                   = go (candidate + 1) count

nextTriangle index triangle
    | factorCount triangle > 1000 = triangle
    | otherwise = nextTriangle (index + 1) (triangle + index + 1)

main = print $ nextTriangle 1 1
EDIT: So now that we've explored that, let's address the questions.
Question 1: Do Erlang, Python and Haskell lose speed due to using arbitrary-length integers, or don't they as long as the values are less than MAXINT?
In Haskell, using Integer is slower than Int, but how much slower depends on the computations performed. Luckily (for 64-bit machines) Int is sufficient. For portability's sake you should probably rewrite my code to use Int64 or Word64 (C isn't the only language with a long).
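A hedged sketch of that portability tweak; the only change from the resulting code above is the type signature (Data.Int supplies Int64):

import Data.Int (Int64)

factorCount' :: Int64 -> Int64 -> Int64 -> Int64 -> Int64
factorCount' number sqrt candidate0 count0 = go candidate0 count0
    where
        go candidate count
            | candidate > sqrt            = count
            | number `rem` candidate == 0 = go (candidate + 1) (count + 2)
            | otherwise                   = go (candidate + 1) count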
That was what I answered above. The answer was 0) use optimization via -O2, 1) use fast (notably: unbox-able) types when possible, 2) rem not mod (a frequently forgotten optimization), and 3) a worker/wrapper transformation (perhaps the most common optimization).
Question 4: Do my functional implementations permit LCO and hence avoid adding unnecessary frames onto the call stack?
Yes, that wasn't the issue. Good work and glad you considered.
What is the difference between the dot (.) and the dollar sign ($)? As I understand it, they are both syntactic sugar for not needing to use parentheses.
The $ operator is for avoiding parentheses. Anything appearing after it will take precedence over anything that comes before.
For example, let's say you've got a line that reads:
putStrLn (show (1 + 1))
If you want to get rid of those parentheses, any of the following lines would also do the same thing:

putStrLn (show $ 1 + 1)
putStrLn $ show (1 + 1)
putStrLn $ show $ 1 + 1
The primary purpose of the . operator is not to avoid parentheses, but to chain functions. It lets you tie the output of whatever appears on the right to the input of whatever appears on the left. This usually also results in fewer parentheses, but works differently.
Going back to the same example:
putStrLn (show (1 + 1))
- (1 + 1) doesn't have an input, and therefore cannot be used with the . operator.
- show can take an Int and return a String.
- putStrLn can take a String and return an IO ().
You can chain show to putStrLn like this:
(putStrLn . show) (1 + 1)
If that's too many parentheses for your liking, get rid of them with the $ operator:
putStrLn . show $ 1 + 1 | http://boso.herokuapp.com/haskell | CC-MAIN-2017-26 | en | refinedweb |
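For reference, the standard Prelude type signatures of the two operators make the difference explicit: $ merely applies a function to an argument, while . composes two functions.

($) :: (a -> b) -> a -> b               -- function application
(.) :: (b -> c) -> (a -> b) -> a -> c   -- function composition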
By default, SharePoint folders do not support columns (metadata). They behave very much like Windows explorer folders with limited config options compared to document libraries. However, SharePoint folders exist inside document libraries, and document libraries have full support for columns (also called fields or metadata).
Most column types (like single line of text, choice, and number) allow setting default values. When a default value is set, any new item uploaded to the library will automatically inherit that default value for the field in question.
Scenario
Let’s say you have a document library called DocLib with multiple folders – FolderOne, FolderTwo, and FolderThree. This document library also has a column called DocumentType. Now, depending on the folder into which an item is uploaded, you may want to set the value of the DocumentType column for the newly uploaded item. So instead of a global column default value, you want your default values to be folder-specific.
Let’s see how this can be done using some custom C# code.
Setting default column values using the MetadataDefaults class
To demonstrate my solution, I will be working with the scenario described above (as in, the document library and folders setup). In addition, I will configure a “global” column default on the DocumentType column and add a fourth folder FolderFour. We won’t change MetadataDefaults on FolderFour. Instead the folder will serve as a “control” to help us see how things would work by default.
Here's how the setup looks:
I have configured the default value of the DocumentType column to be “Global Default Value”.
I will be demonstrating this solution on a SharePoint 2016 server using a C# console application. Here’s the full code we will be working with:
using Microsoft.Office.DocumentManagement;
using Microsoft.SharePoint;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace TestConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            using (SPSite site = new SPSite(""))
            {
                using (SPWeb web = site.OpenWeb())
                {
                    SPList list = web.Lists["DocLib"];

                    MetadataDefaults folderOneColumnDefaults = new MetadataDefaults(list);
                    SPFolder folderOne = list.RootFolder.SubFolders["FolderOne"];
                    folderOneColumnDefaults.SetFieldDefault(folderOne, "DocumentType", "Folder One Default Value");
                    folderOneColumnDefaults.Update();

                    MetadataDefaults folderTwoColumnDefaults = new MetadataDefaults(list);
                    SPFolder folderTwo = list.RootFolder.SubFolders["FolderTwo"];
                    folderTwoColumnDefaults.SetFieldDefault(folderTwo, "DocumentType", "Folder Two Default Value");
                    folderTwoColumnDefaults.Update();

                    MetadataDefaults folderThreeColumnDefaults = new MetadataDefaults(list);
                    SPFolder folderThree = list.RootFolder.SubFolders["FolderThree"];
                    folderThreeColumnDefaults.SetFieldDefault(folderThree, "DocumentType", "Folder Three Default Value");
                    folderThreeColumnDefaults.Update();
                }
            }
        }
    }
}
Note that you will need to add a reference to the Microsoft.Office.DocumentManagement namespace in order to use the MetadataDefaults class.
Run the above code. It only needs to be run once.
Now, manually upload documents into the different folders and note the default values that get set each time. Since our code did not configure MetadataDefaults on FolderFour, you will notice that when items are uploaded into FolderFour, they get the DocumentType default value of “Global Default Value”. This is also what happens when items are uploaded directly into the document library without being put into any named folder.
When items are uploaded in to FolderOne, or FolderTwo, or FolderThree, they get the DocumentType default value of “Folder One Default Value”, or “Folder Two Default Value” or “Folder Three Default Value” respectively.
You can view these default values by going to Document library settings > Column default value settings. Here’s how it looks for FolderOne.
A potential application of this code is inside an event receiver, to keep folder-level default column values updated when something changes elsewhere; a rough sketch follows below.
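The receiver below is only a hypothetical illustration (its class name, the folder naming scheme and the receiver registration are not part of this post); it re-applies a DocumentType default whenever a new folder is created in the library:

using Microsoft.Office.DocumentManagement;
using Microsoft.SharePoint;

// Hypothetical sketch: set a folder-level default for each newly created folder.
public class FolderDefaultsReceiver : SPItemEventReceiver
{
    public override void ItemAdded(SPItemEventProperties properties)
    {
        base.ItemAdded(properties);

        SPListItem item = properties.ListItem;
        if (item == null || item.Folder == null)
        {
            return; // only act on newly created folders
        }

        SPList list = item.ParentList;
        MetadataDefaults defaults = new MetadataDefaults(list);
        defaults.SetFieldDefault(item.Folder, "DocumentType", item.Folder.Name + " Default Value");
        defaults.Update();
    }
}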
When executing a custom email listener, the below error is displayed in the logs and the email is never sent. When I run the preview function, the email is generated correctly. I attempted to put logging statements in the templates but the import statements generated errors. How can I go about debugging this?
JIRA 6.4.14
ScriptRunner 4.1.3.24
Your request <A href="$baseurl/browse/$issue">$issue</A> has been submitted successfully. The aligned ESM <%
String esmKeyString = issue.getCustomFieldValue(componentManager.getCustomFieldManager().getCustomFieldObjectByName("ESM's Name"));
def (esmKey, value2) = esmKeyString.tokenize( '(' )
def esmName = componentManager.getUserUtil().getUserByKey(esmKey).getDisplayName()
out << esmName;
%> will be contacting you shortly.
Issue $issue: A <%
String companyName = issue.getCustomFieldValue(componentManager.getCustomFieldManager().getCustomFieldObjectByName("Company Name"));
out << companyName;
%> request has been submitted successfully
Time (on server): Fri Jan 27 2017 15:51:25 GMT-0500 (Eastern Standard Time)
The following log information was produced by this execution. Use statements like log.info("...") to record logging information.

2017-01-27 14:51:25,773 ERROR [runner.AbstractScriptListener]: *************************************************************************************
2017-01-27 14:51:25,773 ERROR [runner.AbstractScriptListener]: Script function failed on event: com.atlassian.jira.event.issue.IssueEvent, script: com.onresolve.scriptrunner.canned.jira.workflow.postfunctions.SendCustomEmail
java.lang.NullPointerException: Cannot invoke method tokenize() on null object
    at com.onresolve.scriptrunner.canned.jira.workflow.postfunctions.SendCustomEmail.constructMail(SendCustomEmail.groovy:477)
    at com.onresolve.scriptrunner.canned.jira.workflow.postfunctions.SendCustomEmail$constructMail$0.callCurrent(Unknown Source)
    at com.onresolve.scriptrunner.canned.jira.workflow.postfunctions.SendCustomEmail.doScript(SendCustomEmail.groovy:553)
You'd be better off putting this in the configuration so you can inject it in as a variable, and handle null as you wish.
The main trouble is that the variable "esmKeyString" is null, hence the NullPointerException is raised.
It is null because there is no value for the custom field "ESM's Name".
There are two options to handle this.
Dear NXP users,
For a few days I've been having a problem that I haven't been able to solve, and to which I can't find any solution.
I downloaded, after registering, the following example code from the Micrium uC-OS website: FRDM-K64F_OS3-TCPIP-HTTPs-DHCPc-KSDK-LIB | Micrium
The problem is that the project is already present only for IAR and Keil, but I need to work with KDS only.
I tested the project on a 30-days trial version of IAR, and it works perfectly.
Then, I tried adding the files I needed to an existing project in KDS, copying them in the correct folders and linking them to the KDS project.
The problem is the following: everything compiles... until I call some function that references the library! In that case I keep getting "undefined reference to..." errors for functions that are defined in .h/.c files that are included!
After calling, for instance, AppTCPIP_Init(), which is a function provided by the uC-TCP/IP library, I cannot compile anymore.
For example, inside "app_dhcp-c.c" there's a function call:
dhcp_status = DHCPc_ChkStatus(if_nbr, &err_dhcp);
The compiler tells me:
"Undefined reference to 'DHCPc_ChkStatus'"
While DHCPc_ChkStatus() is defined in "dhcp-c.h", which is included through a chain of header files. Also including it directly by adding:
#include "dhcp-c.h"
does not change the situation at all.
Please help, I really need to be able to launch soon this project in KDS.
What am I missing?
Thanks a lot in advance!
Hello Francesco Raviglione ,
How about adding the lib name and path in KDS?
You can refer to the below thread to do it:
Creating and using Libraries with ARM gcc and Eclipse | MCU on Eclipse
Hope it helps,
Alice | https://community.nxp.com/thread/465738 | CC-MAIN-2018-22 | en | refinedweb |
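For reference, in an Eclipse-based IDE like KDS the library settings ultimately become GNU -L/-l linker flags. The snippet below is only a hedged sketch: the archive name libuC_TCPIP.a and the paths are hypothetical, so substitute whatever .a files the Micrium package actually provides (or add the library's source files to the project instead).

# Hypothetical example of the flags KDS passes to the GNU ARM linker:
#   -L adds a library search path, -l<name> links lib<name>.a
arm-none-eabi-gcc <your object files> \
    -L"/path/to/micrium/libs" \
    -luC_TCPIP \
    -o project.elf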
Hello,

I'm writing this because I've run out of options. I'm trying to write a java application to access a postgresql database here on our system, but I'm getting an error that I don't understand when the application runs. I would appreciate any help that you can give as I am unsure how to solve this. Here is the beginning of the code that is causing the error:

public class QALogUpdater // Create the class
{
   public void updateLog() throws ClassNotFoundException, SQLException
   {
      // The routine to update the QA Log
      String command = new String();
      String db = new String("jdbc:postgresql://yngvi/qa_log_test");
      String usr = new String("mda");
      String pwd = new String("blah");
      Class.forName("org.postgresql.Driver"); // Load database interface
      java.sql.DriverManager.setLogStream(java.lang.System.out);
      Connection conn = DriverManager.getConnection( db, usr, pwd );
      Statement stmt = conn.createStatement();

When executing the "Connection conn = ..." statement, I get the following error:

DriverManager.getConnection("jdbc:postgresql://yngvi/qa_log_test")
    trying driver[className=org.postgresql.Driver,org.postgresql.Driver@6a9d42]
java.sql.SQLException: ERROR: MultiByte strings (MB) must be enabled to use this function
    at org.postgresql.Connection.ExecSQL(Connection.java:533)
    at org.postgresql.Connection.ExecSQL(Connection.java:400)
    at org.postgresql.Connection.openConnection(Connection.java:270)
    at org.postgresql.Driver.connect(Driver.java:122)
    at java.sql.DriverManager.getConnection(DriverManager.java:517)
    at java.sql.DriverManager.getConnection(DriverManager.java:177)
    at qa.shared.QALogUpdater.updateLog(QALogUpdater.java:50)
    at qa.shared.Result.reportResult(Result.java:127)
    at qa.analyzer.ConfigFileReader.<init>(ConfigFileReader.java:83)
    at QAAnalyzer.main(QAAnalyzer.java:156)
java.lang.NullPointerException

Our system manager assures me that he has re-configured Postgress with the multibyte option as indicated on the online help web page in the following manner:

% ./configure --enable-multibyte

I had him do this even though we are not using anything other than ASCII in our databases due to the error message. We are using Java version 1.3 on a Unix Solaris 7 OS machine. He isn't sure of which version of Postgres we are using but the pgsql application is version 6.5.3. He thinks it is fairly current but probably not the most current version.

I'm not sure what the problem is or what MultiByte has to do with anything but I really need some help on this. If there is any other information you need, just ask. And if there is someone else that you think could help me more, please let me know.

Interestingly, when I replace the Connection statement with the following:

Connection conn = DriverManager.getConnection( db );

I don't get the stack trace, but I do get the NullPointerException. The database being accessed does not require passwords normally, if that makes a difference, but I've also created it with password protection enabled but that didn't affect anything.

Thanks for any help you can offer.

---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[JDBC] Help: Postgresql/JDBC database access error
Mark D. Apolinski Wed, 16 May 2001 13:39:05 -0700
My solution is quite straightforward: any pair of two different points determines a line. So I go through all point pairs, compute the line for each pair, store each line as a key in a HashMap, and add both points to the HashSet that belongs to that line. Finally, I count how many points are on each line and select the maximum.
Here is the equation of a line:
y = k * x + c
So first I defined a class named Line, based on the equation of line in Math, y = k * x + c, in my Line class it has four properties:
- vertical, a boolean variable; if vertical is true, the line is vertical, i.e. parallel to the y-axis;
- horizontal, a boolean variable; if horizontal is true, the line is horizontal, i.e. parallel to the x-axis;
- k, an integer variable which means the slope of a line;
- c, an integer variable which means the y-intercept of a line.
Say we have two points (x1, y1) and (x2, y2); then we can calculate the slope k and the y-intercept c if and only if the line is not vertical:
1. k = (y2 - y1) / (x2 - x1);
2. c = (x2 * y1 - x1 * y2) / (x2 - x1);
One thing we need to take care of: since we use the integer type to store the slope k and the y-intercept c, we keep three decimal places of precision by multiplying both k and c by 1000 inside the Line class.
Another thing is that we need to let the HashMap know how to identify each line, so inside the Line class we override both the equals(Object obj) and hashCode() methods.
public class Solution {
    public int maxPoints(Point[] points) {
        if (points.length < 3) {
            return points.length;
        }
        int max = Integer.MIN_VALUE;
        Map<Line, Set<Point>> map = new HashMap<>();
        for (int i = 0; i < points.length; i++) {
            for (int j = i + 1; j < points.length; j++) {
                Point p1 = points[i];
                Point p2 = points[j];
                Line l = new Line(p1, p2);
                if (!map.containsKey(l)) {
                    map.put(l, new HashSet<>());
                }
                map.get(l).add(p1);
                map.get(l).add(p2);
                max = Math.max(max, map.get(l).size());
            }
        }
        return max;
    }
}

class Line {
    // y = k * x + c
    boolean vertical;
    boolean horizontal;
    int k;
    int c;

    public Line(Point p1, Point p2) {
        vertical = false;
        horizontal = false;
        if (p1.x == p2.x) {
            vertical = true;
            c = p1.x * 1000;
        } else if (p1.y == p2.y) {
            horizontal = true;
            c = p1.y * 1000;
        } else {
            k = 1000 * (p2.y - p1.y) / (p2.x - p1.x);
            c = 1000 * (p2.x * p1.y - p1.x * p2.y) / (p2.x - p1.x);
        }
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + c;
        result = prime * result + (horizontal ? 1231 : 1237);
        result = prime * result + k;
        result = prime * result + (vertical ? 1231 : 1237);
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        if (obj == null)
            return false;
        if (getClass() != obj.getClass())
            return false;
        Line other = (Line) obj;
        if (c != other.c)
            return false;
        if (horizontal != other.horizontal)
            return false;
        if (k != other.k)
            return false;
        if (vertical != other.vertical)
            return false;
        return true;
    }
}
Single sign-on (SSO) is a way for users to obtain a security token on their first login and then log into multiple applications using one set of credentials, i.e. that security token.
Adding SSO to an application makes things easier for users, because they don't need to remember login credentials for multiple applications. A user just needs to enter their login credentials the first time instead of re-entering them for every application login.
In this post, we'll see how to add single sign-on to multiple Django applications using django-simple-sso.
Using django-simple-sso, we have a single server and multiple clients.
1. The server holds all user information. It authenticates user details at login time and creates a token on the first login; afterwards, it authenticates users by their security tokens.
2. Each client application needs its public key and private key generated on the server in order to perform requests securely.
How does Django SSO work for multiple applications?
User --> application --> SSO Server --> application
1. When a user logs into an application, the client sends a request with a next GET parameter, which holds the redirect URL to use after a successful login.
2. The request details (application details: public key, private key, redirect URL) are validated at the server.
3. The server returns a user request token, which is created on the first login.
4. Using the request token, the client sends a request to the server to verify the user's authorization. On successful authorization, the server returns the user's access token. If the user is not logged in, the server asks them to enter their login details.
5. The client sends a POST request to the server to verify the user access token.
6. If the user access token is valid, the server returns a serialized Django User object.
7. The application logs the user in using the Django User received from the server.
Server Side:
1. Install django-simple-sso using the following command:
pip install django-simple-sso
2. Run the following command to create the tables that store each client's (application's) details and the user tokens:
python manage.py migrate
3. Create the application (client) details (public key, private key) on the server side in the Django shell:

from simple_sso.sso_server.models import Token, Consumer

Consumer.objects.create(public_key='your_application_public_key', private_key='your_application_private_key', name='your_application_name')
4. Add 'simple_sso.sso_server' to INSTALLED_APPS
INSTALLED_APPS = INSTALLED_APPS + (
    'simple_sso.sso_server',
)
5. Initialize the server and add the following url patterns to the urls.py file:

from simple_sso.sso_server.server import Server

test_server = Server()

urlpatterns += [
    url(r'^server/', include(test_server.get_urls())),
]
Client Side:
1. Install django-simple-sso using the following command:
pip install django-simple-sso
2. Add the public key, private key and server URL to the application settings:

SSO_PRIVATE_KEY = 'Your Private Key'
SSO_PUBLIC_KEY = 'Your Public Key'
SSO_SERVER = 'SSO SERVER URL'
3. Initialize the client and add the following client urls to the application urls:

from simple_sso.sso_client.client import Client

test_client = Client(settings.SSO_SERVER, settings.SSO_PUBLIC_KEY, settings.SSO_PRIVATE_KEY)

url(r'^client/', include(test_client.get_urls())),

Create 2 client apps with the above settings. Add a different hostname for each application using the /etc/hosts file, as sketched below.
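A hedged example of what that /etc/hosts setup might look like (the hostnames and ports below are made up for local testing):

# /etc/hosts -- hypothetical hostnames for local testing
127.0.0.1   sso-server.local
127.0.0.1   client-one.local
127.0.0.1   client-two.local

# then run each app on its own hostname, for example:
#   python manage.py runserver client-one.local:8001
#   python manage.py runserver client-two.local:8002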
When a user visits one of your applications, it will ask for their credentials if they are not logged in already. After a successful login, when they visit the other application, they will already be logged in.
Coding Step 1 - Hello World and Makefiles
Articles in this series:
- Coding Step 0 - Development Environments
- Coding Step 1 - Hello World and Makefiles
- Coding Step 2 - Source Control
- Coding Step 3 - High-Level Requirements
- Coding Step 4 - Design
Step 0 discussed how to install GCC and the make utility with the expectation of writing and compiling your first C program. In this article, I discuss how to use those tools we installed last time. Specifically, how to use GCC to compile a C program and how to write a makefile to automate the process.
While there are many other tutorials out there covering roughly similar ground, I hope that you (a novice with some background in programming) find this tutorial to be more useful and approachable than other tutorials. In addition, my goal with this whole series is to teach the basics of a Unix-like development environment. I've found that the Unix approach to developing code produces efficient programmers with more autonomy and a broader understanding of all the various pieces that contribute to turning code into useful programs. If you've developed code before but never been able to break away from the chains of an IDE then this is the tutorial for you. Additionally, the development tools discussed here have been adapted to a wide variety of embedded systems. You can use some variant of GCC to compile code for a wide variety of processors: AVRs, MSP430s, ARMs of all sizes, PowerPC - you name it and there's probably a GCC compiler for it. Learning to use Unix-like development tools has direct applicability to developing code for embedded systems and will make you a fundamentally better programmer.
For learning how to compile C files, it's best to start simple. Hello World is the simplest C program imaginable:
#include <stdio.h>

main ( )
{
    printf("Hello World!");
}
Because it’s so simple, it’s easy to verify that it’s working correctly: if you see ‘Hello World!’ printed when you run it, you can be sure you did everything correctly. The steps to compile it are verbose at first, but they'll become second nature as you work more with code.
Compiling a C File
(Note: these steps were written for Windows 7, but the same result can be achieved on nearly all versions of Windows with only minor changes to the steps).
Save the the above source code into a file called helloWorld.c in the root of the C: drive using your favorite text editor.
Open the Start menu and enter ‘cmd’ into the run dialog and hit enter:
Note: If you’re running on Windows Vista or above, you’ll have to run cmd.exe as administrator by right-clicking the cmd.exe entry:
Afterwards, agree to the UAC dialog that pops up.
This brings up a command prompt:
Type the following commands
cd C:\ ←-This will change the current directory to the root of the C: drive
dir *.c ←- This will show a listing of all of the files in the root of the C: drive with an extension of .c
The window should look something like this:
If you see something like this:
It means you didn’t save your text file in the right place.
To compile this file, type the following command:
gcc helloWorld.c
This command produces a new file in the root of the C: drive: a.exe
This is the default name given to a program compiled with GCC. To run the program, type the following command:
a.exe
You should see something like this:
That ‘Hello World!’ means it worked - congratulations!
Now, it’s kinda odd for a Hello World program to be called ‘a.exe’. Luckily, we can change this by telling GCC what the executable should be called with this command:
gcc helloWorld.c -o helloWorld.exe
You’ll run this one by typing:
helloWorld.exe
And your command window should look like this if everything went correctly:
Success!
Code Organization
Now programming is messy business. At this point you’ve got three files sitting in the root of your C: drive that weren’t there before. If the simplest C program imaginable leaves three files sitting around imagine the trash a more complex project will leave on your hard drive. Organization is key when coding, so I recommend picking a special directory to store all of your code projects, then giving each project its own directory underneath. I do this in my Dropbox so I can have quick access to all of my code across multiple computers. Each project directory gets its own organization scheme based on the one present in my article on project directory structure.
For this project I’ve created a ‘helloWorld’ project directory in my Dropbox and given it two subdirectories:
- src - All (one) of the source files are stored here.
- bin - The executable gets put here.
If you save your helloWorld.c file into the ‘src’ subdirectory for your helloWorld project, then open up another command prompt and enter a similar set of commands, you’ll get the same result:
Makefiles
You might have noticed that using the command line is a lot of typing - even for a simple C program. You might be wondering - are larger C programs simply impossible to work with?
No! Thanks to makefiles.
Basically, makefiles are a method of specifying 'recipes' for compiling source files into executables, object files, libraries and basically anything else you can think of. Makefiles automate the mundane and boring parts of compiling source files. They're actually a lot more versatile and complex than this, but for a programmer just starting off the primary thing a makefile will do is save you repetitive typing.
Let’s make a makefile to compile this source into the exe - start by saving this text as a file called ‘makefile’ in the helloWorld project directory:
all:
	gcc src/helloWorld.c -o bin/helloWorld.exe
Gotchas:
- ‘makefile’ has no extension - some text editors will try to put a .txt extension on it, so watch yourselves.
- The indent MUST be a tab character - not three or four spaces as some text editors use.
If you don’t use a real tab character you’ll get an error when you try to use the file (as I’ll show you below).
Once you’ve saved the file, type this in the command prompt to run make and then the helloWorld executable (assuming you're still in the directory you earlier created for the helloWorld project):
mingw32-make
.\bin\helloWorld.exe
And you’ll get this result:
As I mentioned earlier, you’ll get an error like this if you didn’t use a real tab character in the file:
Some explanation is in order here. Typing ‘mingw32-make’ executes the make utility. The make utility will automatically look for a file called ‘makefile’ or ‘makefile.mak’ in the current directory. If it finds this file it will try to build the target called ‘all’. What’s a target? Let’s go back to the text of the makefile:
all:
	gcc src/helloWorld.c -o bin/helloWorld.exe
The first line is the name of the target: ‘all’ with a colon after it. The next line is tabbed to the right and contains the command needed to build the ‘all’ target. This is the same command that you used to type in manually to build the executable. This is how make works: commands build targets. A target can have more than one command - as long as they’re all tabbed it will execute them in order from top to bottom whenever it rebuilds the target. By default, make will always attempt to build the ‘all’ target if you don’t specify a different one. Makefiles can contain multiple targets which can be chosen from on the command line:
mingw32-make <target>
Thus, typing in either of the following:
mingw32-make
mingw32-make all
will both run the same command - the first time because the ‘all’ target is assumed and the second time because the ‘all’ target is explicitly specified on the command line. Any target name can replace the ‘all’ on the command line.
Typically there are multiple targets in a makefile that you can specify via the command line. These targets can either be identifiers (the ‘all’ target is an identifier) or they can be file names. We can add a file target to the makefile by adding these lines at the end of the makefile:
bin\helloWorld.exe:
	gcc src/helloWorld.c -o bin/helloWorld.exe
Add these lines to the end of the makefile, save it and then type this at the command prompt:
mingw32-make bin\helloWorld.exe
And then you should get this result:
That’s pretty confusing for someone who’s never used make before. Luckily I’ve been around the block a few times. Here’s what’s happening:
Make isn’t just a glorified batch file - it has intelligence. One of the intelligent things that it does is track dependencies. For example, helloWorld.exe is generated from helloWorld.c - helloWorld.exe is dependent on helloWorld.c. Make will never go to the trouble of rebuilding an output (in this case, helloWorld.exe) if none of the dependencies of helloWorld.exe have changed. If none of the dependencies have changed, the output is up to date (as the output of make told us) and recompiling the source file would produce the same output - so why do it? It might seem like this is a case of premature optimization and for this project, yes it is. But for a large project with hundreds or thousands of source files, only rebuilding the outputs that need to be rebuilt can easily save hours!
The problem with our makefile is that we haven’t specified any dependencies for our target. Because of this, make doesn’t know what files are needed to rebuild helloWorld.exe, so it assumes that if the file exists then there must be no other work to do. The ‘all’ target doesn’t have this problem because it’s not a file - it’s a ‘phony’ target. Whenever you rebuild the ‘all’ target it always performs the command(s). To ensure that make will rebuild the helloWorld.exe file, change the target definition to this:
bin\helloWorld.exe: src\helloWorld.c
	gcc src/helloWorld.c -o bin/helloWorld.exe
We’ve added ‘src\helloWorld.c’ to the right of the colon after the target name. By doing this, we’ve specified that the helloWorld.c file is a dependency of helloWorld.exe: the one is needed to make the other. You can have as many dependencies to the right of the colon as you want - just separate them by spaces. Dependencies can be other targets (such as ‘all’) or they can be files as is shown above. By adding this dependency, make will know when it has to rebuild helloWorld.exe and when it can avoid doing so to save time.
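For instance, a hypothetical project with two source files could list both as dependencies of the executable, so that changing either one triggers a rebuild (the file names here are made up for illustration, and the command line is still tab-indented):

bin\app.exe: src\main.c src\helpers.c
	gcc src/main.c src/helpers.c -o bin/app.exe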
So, with the dependency added, re-run the previous command on the command line:
mingw32-make bin\helloWorld.exe
And now, you get this result:
Which is exactly the same as before. What gives?
The problem here is that make determines whether it has to rebuild a target from its dependencies based on timestamps. If helloWorld.exe has later timestamp than helloWorld.c, that means that helloWorld.exe must have been built from the current helloWorld.c. If there haven’t been any changes to the dependencies, there’s no reason to update the target - hence, it’s up to date. However, if we change the timestamp on helloWorld.c then make will know to rebuild helloWorld.exe. Edit helloWorld.c so that it looks like this:
#include <stdio.h>

main ( )
{
    printf("I changed it okay?");
}
Your program has suffered a change in attitude, but more importantly it’s suffered from a change in timestamp: the timestamp on helloWorld.c is now later than that of helloWorld.exe. Watch what make does now when we try to build the target (using the same command as shown before):
And now it works! But now you have a problem: if you try to build bin\helloWorld.exe again, it won't do anything because it's up to date. Sometimes you want to force the target to be rebuilt. There are two good ways to do this:
Specify a new target with the name ‘FORCE’ with no command after it. Then, make ‘FORCE’ a dependency of the file you always want to be rebuilt. The makefile would look like this thereafter:
all:
	gcc src/helloWorld.c -o bin/helloWorld.exe

bin\helloWorld.exe: src\helloWorld.c FORCE
	gcc src/helloWorld.c -o bin/helloWorld.exe

FORCE:
Now it will always rebuild helloWorld.exe every time you invoke it. This is an effective but not flexible solution. There’s an easy way to allow you to force a rebuild whenever you want without it happening all the time: the clean target.
The clean target is a common target in many makefiles. The point of the target is that it deletes all of the output files at once which forces everything to be rebuilt afterwards. Using the clean target approach, the makefile would look like this:
all:
	gcc src/helloWorld.c -o bin/helloWorld.exe

bin\helloWorld.exe: src\helloWorld.c
	gcc src/helloWorld.c -o bin/helloWorld.exe

FORCE:

clean:
	del bin\helloWorld.exe
To force a rebuild, type the following commands on the command line:
- mingw32-make clean
- mingw32-make bin\helloWorld.exe
And you’ll see this result:
Works like a charm! Now you have the capability to force a rebuild any time. You’ll notice that I left the FORCE target in the makefile - it’s a common target that will most likely be useful to you in the future. Also, unused targets don’t really have any downside as long as they don’t make the makefile harder to read - certainly that isn’t the case yet.
That’s a good amount of basic knowledge about compiling C files and writing makefiles: you can write a C file, compile it and write a makefile to handle all of those tasks for you. I’ve literally only scratched the surface of both of those topics - there’s plenty of fodder left for future articles. I don’t want to wear you out on a single topic for too long though - that’s why the next article will focus on basic source control with Git.
See you next time!
your tutorial is so clear and very helpful. Many thanks!
I downloaded and installed GCC etc.; when I went on with step 1 to compile the helloWorld.c program, a popup window told me that the libiconv procedure entry point is missing in libiconv-2.dll.
Can you please tell me what that means? Could the libiconv-2.dll itself be corrupted?
Thanks for your work anyway.
Yesterday I told you about a problem in libiconv-2.dll that would have prevented my helloWorld.c from compiling.
Well,Good news! With the help of "DLL-Files Fixer" that file got fixed and I could compile the program which became helloWorld.exe.
Now I'll try on with the Makefiles Chapter.
THANKS again Stephen!
I found (at least I guess so) a problem in your makefile. When the target is clean, I had to change the command from del bin/helloWorld.exe to rm bin/helloWorld.exe for it to work.
First, let me get this out of the way: I use a Windows system daily. The system from which I am replying right now is a Windows 8 Professional system.
With the above said, I feel it is a waste of time developing software on a Windows PC. Instead, I feel it is much more productive to spend time developing software on Linux (or UNIX) using GNU tools such as gcc, make, autotools, etc. Why? Because much of the embedded programming going on nowadays is done using Linux - embedded Linux to be precise. Not only that, many of the very useful tools are available on Linux systems for free.
Yes, yes, there are many tools which can be used on Windows as well. Some of these are very easy to use, and perhaps are even free. But what do these tools really teach us? How to do things specific to Microsoft platforms only? Or do they really teach us about the lower level intricacies of that platform? I would argue that they do not. Using various libc or POSIX APIs, on the other hand, can and often does work cross platform, and offers a much broader insight as to how things really work.
Or do we only really care about such things as WIN32_LEAN_AND_MEAN?
| https://www.embeddedrelated.com/showarticle/740.php | CC-MAIN-2018-22 | en | refinedweb
Linear velocity of the rigidbody.
The velocity is specified as a vector with components in the X and Y directions (there is no Z direction in 2D physics). The value is not usually set directly but rather by using forces. Disable drag in the Inspector to stop the gradual decay of the velocity.
See Also: AddForce, drag, angularVelocity, Rigidbody.velocity.
no example available in JavaScript
//Create a GameObject and attach a Rigidbody2D component to it (Add Component > Physics2D > Rigidbody2D)
//Attach this script to the GameObject
//This script moves a GameObject up or down when you press the up or down arrow keys

using UnityEngine;

public class Example : MonoBehaviour
{
    Rigidbody2D m_Rigidbody2D;
    float m_Speed;

    void Start()
    {
        //Fetch the RigidBody from the GameObject
        m_Rigidbody2D = GetComponent<Rigidbody2D>();
        //Set the GameObject's speed to 10
        m_Speed = 10.0f;
    }

    void Update()
    {
        //Press the Up arrow key to move the RigidBody upwards
        if (Input.GetKey(KeyCode.UpArrow))
        {
            //Move RigidBody upwards
            m_Rigidbody2D.velocity = Vector2.up * m_Speed;
        }

        //Press the Down arrow key to move the RigidBody downwards
        if (Input.GetKey(KeyCode.DownArrow))
        {
            //Move RigidBody downwards
            m_Rigidbody2D.velocity = Vector2.down * m_Speed;
        }
    }
}
| https://docs.unity3d.com/ScriptReference/Rigidbody2D-velocity.html | CC-MAIN-2018-22 | en | refinedweb
Persistent Variable Setup
Persistent Variables are variables that should not be cleared by the runtime startup code such as during a reset. If you have a variable which you don't want to have initialized upon a Power On Reset (POR), Watch Dog Timer Reset (WDT), or Master Clear Reset (MCLR), define it as Persistent. This is often needed for bootloader entry / exit, when the application needs to differentiate between a WDT, MCLR, or other reset condition. An example of how to establish a Persistent Variable is provided below:
/*******************************************************************************
 * Persistent RAM variables, which are not initialized at power up / reset.    *
 ******************************************************************************/
#if defined(__XC8__)
    #ifdef __CCI__
        __persistent unsigned char PORStatus;
    #else
        persistent unsigned char PORStatus;
    #endif
#elif defined(__XC16__)
    #ifdef __CCI__
        __persistent int PORStatus;
    #else
        int PORStatus __attribute__((persistent));
    #endif
#elif defined(__XC32__)
    #ifdef __CCI__
        __persistent int PORStatus;
    #else
        int PORStatus __attribute__((persistent));
    #endif
#endif
| https://microchipdeveloper.com/c:persistent-variables | CC-MAIN-2019-47 | en | refinedweb
Hi all. Total beginner here taking first baby steps with a Pololu USB AVR programmer 2.1 and Baby Orangutan B-328 controller.
I’ve been going through the various install processes and I believe I have installed things correctly (grab attached). I’ve got the Programmer V2 config utility running (shows “connected”), and I’ve installed the CrossPack AVR for running on my Mac. I can run avr-gcc -v, make -v, and avrdude -v and all seem to show correct installation. I then try to run the Simple-Test in libpololu-avr/examples/atmegaXXX/simple-test/ but I get the following error:
fatal error: pololu/orangutan.h: No such file or directory
#include <pololu/orangutan.h>
What I can’t figure out is that I can see the orangutan.h file in libpololu-avr/pololu (grab attached). libpololu-avr is just on my desktop. Does this need to be located somewhere else? Or is there something else going on? Sorry if this is a super-basic or obvious. Thanks for any assistance. | https://forum.pololu.com/t/newb-trying-to-run-simple-test-on-usb-avr-2-1-orangutan-h-no-such-file-error/18272 | CC-MAIN-2019-47 | en | refinedweb |
Creating dynamic, reactive UIs for Shopify without React
pbj
There are a ton of sites on the internet which rely upon Shopify to power their e-commerce needs. Almost all of those websites have interactions which are responsible for a handful of UI-based updates. Managing those UI updates between various JavaScript files can sometimes get carried away, leading to unorganized or closely-coupled code.
The most recent site I built, Colugo, uses Shopify and I ran into this thought while planning the build. I knew I’d need to share component state(s) between components and the site overall. I considered adding React but then decided to use the publisher-subscriber pattern via an events library may be a simpler approach. And as most clickbait articles would say: The results will shock you!
Using Events in Shopify
It came as a surprise to me when I found out Shopify has an events library pre-packaged within Slate. By way of Webpack, Shopify ships with an event module, "events.js", which implements the Node.js events module for browser environments.
This approach can be achieved with any other events module as well.
Dynamic, Reactive User Interfaces
My goal while building the e-commerce site for Colugo was to reduce direct coupling between my components as much as possible. Using a publisher-subscriber pattern to manage events made decoupling a breeze and ended with a much more organized code base.
Knowing I’d use this pattern up front enabled me to map out a few components which would require dynamic UI components and what some states would look like:
- When a user adds an item to the cart
- If successful, display the minicart and/or confirmation message
- If unsuccessful, display error messaging.
- When a user clicks the mobile “cart” icon to display the menu
- If the mobile menu is visible, close it and display the mini cart
- If the mobile menu is not visible, open the mini cart.
- When a user views the product detail page
- If a user selects a product color, then scrolls:
- Display current color selection
- User may update selection
- New selection reflected between shared components
- If a user updates color selection:
- Display current quantity
- When a user scrolls back up, new quantity should correctly display between shared components
If we were using React, a lot of this UI interaction would be composed via Redux or a HOC passing props and event handlers to smaller components. Since we can't easily use React with Shopify, we needed another way to share small pieces of state between components.
Enter Event.js
Using Namespaces For Organized Events
Because our events were being fired in more than one module, we created a small object which exposed the event “types”. It looks like this:
const eventTypes = {
  menuUpdate: 'colugo:menu-update',
  mobileMenuUpdate: 'colugo:mobile-nav-update',
  cartUpdate: 'colugo:cart-update',
  variantUpdate: 'colugo:variant-update',
  productNavUpdate: 'colugo:pn-udpate',
  miniCartUpdate: 'colugo:minicart-update',
  recircUpdate: 'colugo:recirc-update',
  recircCartUpdate: 'colugo:recirc-cart-add',
};

export default eventTypes;
This module isn’t special. It provides a consistent way to set up named event publications and subscriptions I know won’t collide within modules.
Payloads
Within each event, we passed in a payload object which represents the data any and all subscribers could use to update their UI. In the case of Colugo, the payloads were usually:
- Product
- Variant
- ID
- Title
- Price
- Inventory
- Color
- Cart
- Total Count
- Items Array
- Cart Update (Success or Failure)
- Display States
- Mini Cart Toggling
- Menu Toggling
- Newsletter Modals
Publishing Events
Now that we have our namespaces setup along with our payloads, we can publish an event:
import events from 'events'
import eventTypes from 'eventTypes'

events.emit(eventTypes.variantUpdate, {
  // ... payload object
});
Subscribing To Events
Our event bus now makes subscribing events super easy. Bringing together our namespace and payload, subscribing to events looks like this:
import events from 'events'
import eventTypes from 'eventTypes'

events.on(eventTypes.variantUpdate, (payload) => {
  // ... run whatever you want
});
This separates UI updates from the events themselves. For instance, within Colugo, when a user selects a different product color a few UI changes happen:
- New color is selected as active
- Product variant name is updated
- Product price is updated
- Product inventory may be updated
- Product gallery images may be updated.
In past projects, I may have written all the UI changes into one “products.js” file encapsulating all of those DOM elements which make up the UI which changes. While this worked, we also had to keep track of multiple UI components, track which one was changed, when it changed and then propagate changes to other components. In a nutshell, the old approach meant a lot of arrays and looping.
In the case of Colugo, if a user interacted with the color picker, we subscribe to the
variantUpdate event and call the component methods to update the UI.
import events from 'events'
import eventTypes from 'eventTypes'

events.on(eventTypes.variantUpdate, (product) => {
  // destructure object for props if necessary
  // colorPicker.setActiveColor(product.variant.color)
  productContent.updateTitle(product.title)
  productContent.updatePrice(product.price)
  productContent.updateInventory(product.inventoryCount)
  productGallery.updateGallery(product.variant.images)
  // ...
});

By leveraging the publisher-subscriber pattern and the built-in events library in Webpack we're able to manage UI changes in a clean, consistent and organized manner. Our footprint is kept to a minimum and UI components (existing or newly made) can easily subscribe to events with very little overhead.
Aside on “events”
Throughout this article we use the term “events”. It’s worth clarifying we are not referring to browser events like “click” or “onchange” event. With the publisher-subscriber pattern, there are no browser events being fired so it can be beneficial for performance as well.
IRL Examples
Rather than just talking about the work, feel free to check out Colugo's Product Detail Page and poke around! You’ll see how pieces of UI interact with one another. Here are a few areas you can look at which take advantage of the publisher-subscriber method and events.js
- Select a color on a product page
- This should change the image gallery and some content in the hero
- Scroll down and use the sticky color picker to update the color
- This will sync up the content and gallery to the new current selection
- Add a product to the cart
- If successful, this will open the cart and update the cart item count
Demos
Pub/Sub - Counter Example
Pub/Sub - Cart Notification
Pub/Sub - Active Element
Big thanks to Zee Spencer for reviewing and editing this post before publishing.
| https://practicaldev-herokuapp-com.global.ssl.fastly.net/pbj/creating-dynamic-reactive-uis-for-shopify-without-react-3bn1 | CC-MAIN-2019-47 | en | refinedweb |
public class ProgressEvent extends TypedEvent
A ProgressEvent is sent by a Browser to ProgressListeners when progress is made during the loading of the current URL or when the loading of the current URL has been completed.
Fields inherited from class org.eclipse.swt.events.TypedEvent: data, display, time, widget
Fields inherited from class java.util.EventObject: source
Methods inherited from class java.util.EventObject: getSource
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
public int current
public int total
public ProgressEvent(Widget widget)
Parameters: widget - the widget that fired the event
public String toString()
Overrides: toString in class TypedEvent
Copyright (c) 2000, 2018 Eclipse Contributors and others. All rights reserved. | https://help.eclipse.org/photon/topic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/swt/browser/ProgressEvent.html | CC-MAIN-2019-47 | en | refinedweb
This article will explore the Razor API and follow the lifetime of a Razor template from text to powerful templating solutions, including examples such as unit testing ASP.NET MVC views and creating a highly-maintainable email generator.
The Razor Template Lifecycle
Figure 1 shows the lifecycle of a Razor template.
Templates based on the Razor syntax combine code and content in powerful ways. The Razor API translates these plain text templates to .NET source code - the same kind of source code that you and I write every day. Just as with the source code that we developers write, Razor API-generated code compiles into new .NET types just like any other: simply create and execute a new instance to produce a rendered result!
Meet the Players
Before jumping into exactly how the API works, here is a quick overview of the relatively small number of components involved in the process of transforming a plain text template into an executable Razor template class:
- RazorEngineHost: Contains the metadata required for creating Razor Templating Engines. Things like the base class name, output class name (and namespace), as well as the assemblies and namespaces required to execute the generated template.
- RazorTemplateEngine: Using configuration data provided by a RazorEngineHost, the Template Engine accepts a stream of text and transforms this text into .NET code (represented by a CodeCompileUnit) that gets compiled into a .NET type.
- Custom Template Base Class: Though not technically part of the Razor API, the Razor Templating Engine requires a custom template base class to use as a base class for the generated template type.
- CodeDomProvider: Also not technically part of the Razor API, the CodeDomProvider class (from the System.CodeDom.Compiler namespace) compiles CodeCompileUnits into .NET Types, making them available for .NET applications to consume. The Razor Templating API offers two CodeDomProvider implementations to compile RazorTemplateEngine-generated CodeCompileUnits: The CSharpCodeProvider and VBCodeProvider. As their names indicate, these two implementations compile C#- and Visual Basic-based Razor templates respectively.
Compiling Templates with the Razor API
Consider the Razor template which renders customer order information shown below.
Customer ID: @Order.CustomerID
Customer Name: @Order.CustomerName
Order ID: @Order.ID
Items:
Quantity    Unit Price    Product
@foreach(var item in @Order.LineItems) {
    @item.Quantity    @item.Price    @item.ProductName
}
Tax: @Order.Tax
Order Total: @Order.Total
As you’ll soon see, the Razor Templating API provides the ability to transform this template into a .NET class and execute it against a model, rendering the output shown below.
Customer ID: HSIMPSON
Customer Name: Homer Simpson
Order ID: 1234
Items:
Quantity    Unit Price    Product
1           $1.50         Jelly Doughnut
4           $0.75         Glazed Doughnut
Tax: $0.10
Order Total: $4.60
Obviously, the customer order information template includes nothing inherently web related, so why should rendering it depend on the ASP.NET runtime? Luckily, it doesn’t have to. In fact, the output displayed in the previous snippet is the result of a command line application using the code snippets from this article!
Configuring the Razor Template Engine
The RazorTemplateEngine class does most of the heavy lifting to transform Razor template text into usable .NET source code. Before creating a RazorTemplateEngine, however, the application must provide a set of properties that inform the engine about how to properly translate the Razor template text it receives. These properties come in the form of a RazorEngineHost.
Creating a RazorEngineHost
The code snippet below contains an example RazorEngineHost initialization.
var language = new CSharpRazorCodeLanguage();
var host = new RazorEngineHost(language)
{
    DefaultBaseClass = "OrderInfoTemplateBase",
    DefaultClassName = "OrderInfoTemplate",
    DefaultNamespace = "CompiledRazorTemplates",
};

// Everyone needs the System namespace, right?
host.NamespaceImports.Add("System");
To begin, the RazorEngineHost’s constructor accepts a RazorCodeLanguage specifying the target template’s code language. This example produces a host that can parse Razor templates written using C#. To support templates written in Visual Basic, supply a VBRazorCodeLanguage instance instead. The additional initializer properties instruct the code generator to emit code with a particular class name, deriving from a custom template base class, and residing in a particular namespace. Finally, add the System namespace to the list of imported namespaces required for the generated class to compile just as you would import a namespace in a normal, hand-written class.
The custom template base class - in this example named OrderInfoTemplateBase - is somewhat special. Though it does not need to implement any “official” .NET interface, the base class does need to provide methods with the following signatures:
- public abstract void Execute(): Once populated with generated code, this method contains a series of calls to the Write methods to render the template contents.
- void Write(object value) and void WriteLiteral(object value): The RazorTemplateEngine populates the Execute() method with calls to the Write() and WriteLiteral() methods, much like using an HtmlTextWriter to render a Web Forms server control. While the Execute() method controls the flow of the template rendering, these two methods do the heavy lifting by converting objects and literal strings to rendered output.
This next code snippet contains the simplest possible implementation of a Razor template base class.
public abstract class OrderInfoTemplateBase
{
    public abstract void Execute();

    public virtual void Write(object value)
    { /* TODO: Write value */ }

    public virtual void WriteLiteral(object value)
    { /* TODO: Write literal */ }
}
While this implementation will, of course, do nothing to render any content, it is the minimum code required to successfully compile and execute a template class. Later sections in this article will revisit and expand upon this class, making it much more useful.
Creating the RazorTemplateEngine
Using the configuration provided in the previously-created RazorEngineHost, this next example shows how straightforward it is to instantiate and generate code with a RazorTemplateEngine.
// Create a new Razor Template Engine
RazorTemplateEngine engine = new RazorTemplateEngine(host);

// Generate code for the template
GeneratorResults razorResult = engine.GenerateCode([TextReader]);
The RazorTemplateEngine.GenerateCode() method accepts a TextReader to provide the Razor template text and produces generated code in the form of GeneratorResults. This result holds (among other things) a CodeCompileUnit representing the template’s generated source code.
To demonstrate, Listing 1 contains an example of what the GeneratorResults for the Customer Order Information template would look like represented as C# code.
Compiling a Razor Template
The final step in the process of converting Razor text into an executable .NET class is compiling the generated source code into a .NET assembly, as shown next.
CompilerResults compilerResults =
    new CSharpCodeProvider()
        .CompileAssemblyFromDom(
            new CompilerParameters(/*...*/),
            razorResult.GeneratedCode
        );
The CodeDomProvider.CompileAssemblyFromDom() method converts the CodeCompileUnit from the previous steps (razorResult.GeneratedCode) and outputs the compiled types in the form of CompilerResults. The CompilerResults object contains plenty of interesting data describing the compiled output, including a reference to the assembly with the newly-created template class type (in this example the template class type is named OrderInfoTemplate).
Executing a Razor Template
Configuring and compiling a Razor template produces a usable .NET type deriving from the base type specified in the RazorEngineHost properties. To process this template and render template output, simply create a new instance of the template type and execute it. Though there are several ways to create a new instance of a type, the Activator.CreateInstance(Type) function is the easiest (if perhaps not the most efficient) way.
Once you’ve created an instance of your custom Razor template type, simply call the .Execute() method to execute the generated code.
var template = (OrderInfoTemplateBase)Activator.CreateInstance(/*...*/);
template.Execute();
Congratulations, you have now leveraged the Razor API directly to manually create, compile, and execute your first Razor template class!
Advanced Templating Logic
Previously we discovered that, at a minimum, a valid Razor template base class must implement the Execute(), Write() and WriteLiteral() methods. However, these methods are merely a starting point. Like any other .NET base class, template base classes can expose additional properties or methods to the template classes derived from them. This is how template base classes provide data and functionality to templates that derive from them.
For example, take a look at the template containing numerous references to a variable named @Order shown in the original snippet at the beginning of this article. For this template to compile and execute properly, the custom base class specified in the RazorEngineHost.DefaultBaseClass property must expose a protected (or greater) access level “Order” property. Thus, to qualify as a base class for this template an Order property must be added to the OrderInfoTemplateBase class. The result of this change can be seen in this snippet.
public abstract class OrderInfoTemplateBase
{
    public CustomerOrder Order { get; set; }

    public abstract void Execute();
As this next snippet demonstrates, the application can now assign a CustomerOrder value to the template’s Order property prior to executing the template. With the Order property set, the template produces the rendered text originally featured.
var template = (OrderInfoTemplateBase)Activator.CreateInstance(/*...*/);
template.Order = CustomerOrders.GetOrder(1234);
template.Execute();
Putting the Razor Templating Engine to Work
Now that you understand the basics of creating and executing Razor templates outside of the ASP.NET framework, let’s take a look at a few common scenarios that might benefit from this new-found knowledge.
Unit Testing ASP.NET MVC or WebMatrix Views
The ASP.NET MVC 3 release introduces the Razor view engine, enabling developers to write ASP.NET MVC views using the Razor syntax. Though many best practices advocate keeping the logic in your views as limited and simple as possible, executing unit tests outside of the ASP.NET runtime against Razor-based MVC views can still be very beneficial.
Compiling an ASP.NET MVC Razor View without ASP.NET MVC
Take a look at the code snippet for an example of an ASP.NET MVC Razor view.
<p>
    Order ID: <span id='order-id'>@Model.OrderID</span>
</p>
<p>
    Customer:
    @(Html.ActionLink(
        @Model.CustomerName,
        "Details",
        "Customer",
        new { id = @Model.CustomerID },
        null))
</p>
The default ASP.NET MVC Razor view class exposes properties such as Model, Html, etc., that this view relies on. Thus, in order to compile and execute the view outside of the ASP.NET MVC runtime, you must create a custom template base class that implements these properties as well. This next example contains a snippet from the OrderInfoTemplateBase, modified to include the Model and Html properties so that it may be used to compile the previous view.
public abstract class OrderInfoTemplateBase
{
    public CustomerOrder Model { get; set; }
    public HtmlHelper Html { get; set; }
The OrderInfoTemplateBase class now fulfills the template’s dependencies on the ASP.NET MVC base classes, allowing the OrderInfoTemplateBase to act as a stand-in for the ASP.NET MVC base classes. Introducing custom base classes such as OrderInfoTemplateBase provides complete control over the properties and functionality provided to the template. Custom base classes also alleviate the need to execute ASP.NET MVC views within the ASP.NET MVC runtime. Listing 2 shows the power of swapping production components with mock objects. By replacing the production HtmlHelper class with a mock implementation, the unit test can easily make assertions against - and therefore confirm the validity of - code in the view without relying on the ASP.NET MVC runtime.
The ability to inject mock and stub objects to take the place of production types is a great boon for unit tests. Without this ability, most sites must resign to running all UI tests through slow and unreliable browser-based testing. In stark contrast, injecting mock and stub objects allows developers to create unit tests that execute in mere milliseconds.
Apply Razor Templates to Traditional Text Templating Scenarios
Generating template-based emails is just one more example in which Razor templates make perfect sense.
Many ASP.NET applications send email notifications. Although the .NET Framework provides an effective way to deliver emails, ASP.NET applications often need to generate the body of those emails using some sort of template. Third-party solutions can often make templating easier, however, instead of leveraging one of these solutions, many developers end up “rolling their own.” As with most do-it-yourself solutions, home-grown templating APIs often become difficult to maintain or their original creator moves on to their next opportunity leaving the next owner to scratch his/her head wondering how to maintain templates created using the proprietary techniques.
Basing templating solutions on a well-known and well documented syntax like Razor helps avoid this situation. Because Razor templates can output much more than just HTML, they are well suited to most templating tasks. To demonstrate, take a look at this next example which includes a sample email template written using Razor syntax:
Hello, @ServiceRequest.CustomerName!

Thank you for requesting more information about @ServiceRequest.ServiceName on @ServiceRequest.CreateDateDisplayValue. Please find the information you requested below and we look forward to hearing from you again!

@ServiceRequest.DetailedInformation

Sincerely,
@ServiceRequest.SenderInformation

[ Information current as of @DateTime.Now ]
Listing 3 contains the full Razor email template base class necessary to compile the template text.
Notice how the ServiceRequestEmailGeneratorBase class’s Write() methods populate a string buffer. After it’s done populating the buffer, the class then converts the buffered text into the body of a new MailMessage. This particular base class remains happily unaware of how its descendants call the Write methods. In fact, it knows nothing about the Razor Templating API at all!
Listing 4 shows the template in action in the form of an application that pulls customer service requests from a database and sends the customer a custom email generated from the Razor template.
This application retrieves template data from a database and executes an instance of the ServiceRequestEmailGeneratorBase template class against each set of data, producing the previously-discussed email message as a result. The application then sends the resulting email to the user via an SmtpClient.
Hello, Homer Simpson!

Thank you for requesting more information about Donuts on 1/15/2011 11:04:12 AM. Please find the information you requested below and we look forward to hearing from you again!

Donuts are delicious!

Sincerely,
Big Corp.

[ Information current as of 1/15/2011 11:04 AM ]
I have witnessed (and even developed) the email generation example many times, and each implementation seems drastically different than the next. Since this approach leverages standardized and well-documented components of the .NET Framework, the resulting solution becomes easy for any developer to understand and maintain.
Code Generators Generating Code Generators
Code generation is a powerful tool in any developer’s arsenal. Code generators help reduce repetitive typing (and thus human error) or even encourage a code standard across team members. Chances are you’ve been using code generation for quite some time, even if you weren’t aware of it. Ranging from generators built in to Visual Studio such as those which generate Windows Forms code-behind to the T4 templating engine and proprietary products, code generators make our lives much easier.
The last scenario we’ll explore shows an example of the Razor templating engine acting as yet one more option for code generation. Given an appropriate template written with the Razor syntax and a set of custom parameters, the Razor API can produce the same source code as any other code generator. This example shows how to use the same API functionality leveraged previously in this article to generate and even compile a C# class on the fly.
The API components required for C# code generation are no different than those used previously: a RazorEngineHost, a RazorTemplateEngine, and a custom template base class. The C# code generation base class shown in Listing 5 is almost exactly the same as the simple base class seen in previous examples, with only a few more simple properties added in order to specify the details for the generated C# class such as namespace, class name, etc.
Listing 6 contains the Razor template for the C# class we wish to generate. Notice the slightly more advanced logic which intelligently interprets the properties populated in the base class to do things such as generate using statements, and property and method declarations.
After compiling the Razor template, your application can execute the template against various sets of data to produce C# classes on the fly. Once generated, your application can manage these C# classes in many ways, such as saving the source code to disk or even passing it right back into the CSharpCodeProvider to compile and/or execute the C# code that the Razor template just generated. Listing 7 demonstrates the latter option, generating a C# class from a Razor template and immediately executing the generated code.
Take a moment to consider the power the ability to generate and execute code on the fly can provide. This ability makes many scenarios possible that simply aren’t possible using traditional static compilation and execution approaches.
Actually, this Might Be a Bad Idea…
Despite being an impressive demonstration of Razor’s power and flexibility, generating C# or Visual Basic code from Razor templates is, quite frankly, a bad idea. Razor’s ability to differentiate between code and content actually does more harm than good when producing output such as C# code. This confusion is quite understandable: how can the parser tell the difference between content and code when the content is code? Additionally, most scenarios requiring dynamic code generation and compilation would probably be better off leveraging one of the dynamic languages available via the Dynamic Language Runtime such as IronPython or IronRuby.
However unsuitable Razor templates might be for generating C# and Visual Basic code, this example proves that Razor templates can produce far more sophisticated output than simple HTML. Feel free to apply Razor templating to applications well beyond the scope of web development!
Conclusion
The Razor templating syntax is a very powerful solution to common template-based problems. Since its API is publicly exposed and very easy to use, Razor-based solutions don’t have to be restricted to the ASP.NET runtime. Properly configured, Razor templates can provide an effective and maintainable approach to solving a multitude of templating scenarios within any .NET environment.
What problems have you been facing that might be solved with Razor templates? Now you’ve got the tools to go fix them! Good luck and happy templating! | https://www.codemag.com/Article/1103081 | CC-MAIN-2019-47 | en | refinedweb |
“It is quite likely that most container runtimes are vulnerable to this flaw, unless they took very strange mitigations beforehand,” explained Aleksa Sarai, a senior software engineer at SUSE and a maintainer for runC, in an email posted on Openwall. Sarai added that the flaw is blocked by the proper implementation of user namespaces “where the host root is not mapped into the container’s user namespace.”
A patch for the flaw has been developed and is being sent out to the runC community. A number of vendor and cloud providers have already taken steps to implement the patch.
Read more at SDx Central | https://www.linux.com/news/kubernetes-docker-containerd-impacted-runc-container-runtime-bug/ | CC-MAIN-2019-47 | en | refinedweb |
Configuring Entity Framework Core into Asp.Net Core
Anna
Today I'm going to show you how to create configure project with ASP.Net core, Entity framework core and SQLite.
When I was doing this for the first time it was really difficult for me to understand, and in my opinion there are not many end-to-end tutorials on this topic.
Let's say we have an e-commerce website with products, shopping carts, customers, etc. Let's create models that will transfer the data from our back-end to the front-end, and entities that will transfer the data from our back-end to the database.
Here is the example of model and entity, representing product:
public class ProductModel
{
    public Guid Id { get; set; }
    public string Description { get; set; }
    public List<ProductPictureModel> ProductPictures { get; set; }
    public CurrencyEnum Currency { get; set; }
    public ProductTypeColorEnum Color { get; set; }
    public decimal Price { get; set; }
    public ProductTypeEnum Type { get; set; }
    public ProductTypeSizeEnum Size { get; set; }
    public bool AvailableInWarehouse { get; set; }
}
public class ProductEntity
{
    public Guid Id { get; set; }
    public string Description { get; set; }
    public virtual ICollection<ProductPictureEntity> ProductPictures { get; set; }
    public CurrencyEnum Currency { get; set; }
    public ProductTypeColorEnum Color { get; set; }
    public decimal Price { get; set; }
    public ProductTypeEnum Type { get; set; }
    public ProductTypeSizeEnum Size { get; set; }
    public virtual ICollection<TagsEntity> Tags { get; set; }
    public bool AvailableInWarehouse { get; set; }
    public DateTime RegistrationDate { get; set; }
    public DateTime LastModified { get; set; }
    public ProductStatusEnum Status { get; set; }
}
Note that the product model and entity can be different; in our case the entity has some additional fields (product status, registration date, last modified) that have nothing to do with the front-end, and thus nothing to do with the end user. Here is an example of usage of models and entities:
public async Task<IActionResult> SaveProduct([FromBody]ProductModel product, CancellationToken cancellationToken = default)
{
    var productEntity = new ProductEntity();
    productEntity = _mapper.Map<ProductModel, ProductEntity>(product);
    productEntity.Status = ProductStatusEnum.Draft;
    productEntity.AdministratorId = Guid.NewGuid();
    productEntity.RegistrationDate = DateTime.Now;
    productEntity.LastModified = DateTime.Now;

    _dbContext.Products.Add(productEntity);
    await _dbContext.SaveChangesAsync(cancellationToken);

    return Ok(productEntity.Id);
}
Entity Framework Core is the ORM that works with ASP.NET Core, keeping the database schema in sync with your source code. Here are some basic things that you need to know about EF Core:
- it helps you to create the database tables and relations based on your entities,
- it automatically tracks the changes that have been done to the entity when you request it from the database.
Start configuring the EF core by creating an ApplicationDbContext class that will represent our database.
public class ApplicationDbContext : DbContext
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    {
    }
}
Here's what's happening:
- EF Core should recognize our project's database schema; to do that we need to derive the ApplicationDbContext class from EF Core's DbContext class.
- DbContext is a class implemented with the Repository pattern that makes it easier to build and query the database, and also to track the changes in the entities.
Now we need to provide our database credentials and location via a connection string so EF Core can connect to the database engine. For this we're injecting the ASP.NET Core IConfiguration interface into ApplicationDbContext to get the connection string from the configuration in the appsettings.json file. Next we need to override the OnConfiguring() virtual method, where we also specify that our engine is SQLite.
private IConfiguration _config;

public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options, IConfiguration config)
    : base(options)
{
    _config = config;
}

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder
        .UseSqlite(_config.GetConnectionString("DefaultConnection"));

    base.OnConfiguring(optionsBuilder);
}
Now we have to tell EF core about our entities. For that we will use DbSet generic class, and we will add a new property Products.
public class ApplicationDbContext : DbContext
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    {
    }

    public DbSet<ProductEntity> Products { get; set; }
}
This tells EF core to create a new table Products with columns corresponding to fields in ProductEntity class.
Now we need to tell EF Core how to configure the relations between entities. For that we will override the virtual method OnModelCreating().
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<ProductEntity>()
        .HasMany(p => p.ProductPictures)
        .WithOne();
}
If we look at our ProductEntity class we can see that there's a field ProductPictures, which is an ICollection. In our case a single product can have a lot of pictures, but a database can't store multiple values in a single cell, and if you try to store them you'll get an error.
And lastly we need to register our ApplicationDbContext in Startup.cs:
public void ConfigureServices(IServiceCollection services)
{
    services.AddDbContext<ApplicationDbContext>();
}
Now we can inject ApplicationDbContext into controllers and query the database through it.
Hope this will be useful for beginners!
| https://dev.to/annadante/configuring-entity-framework-core-into-asp-net-core-3kd4 | CC-MAIN-2019-47 | en | refinedweb |
This tutorial introduces Documents, Corpora, Vectors and Models: the basic concepts and terms needed to understand and use gensim.
import pprint
The core concepts of
gensim are:
Corpus: a collection of documents.
Vector: a mathematically convenient representation of a document.
Model: an algorithm for transforming vectors from one representation to another.
Let’s examine each of these in slightly more detail.
In Gensim, a document is an object of the text sequence type (commonly known as
str in Python 3).
A document could be anything from a short 140 character tweet, a single
paragraph (i.e., journal article abstract), a news article, or a book.
document = "Human machine interface for lab abc computer applications"
A corpus is a collection of Document objects. Corpora serve two roles in Gensim:
Input for training a Model. During training, the models use this training corpus to look for common themes and topics, initializing their internal model parameters.
Gensim focuses on unsupervised models so that no human intervention, such as costly annotations or tagging documents by hand, is required.
Documents to organize. After training, a topic model can be used to extract topics from new documents (documents not seen in the training corpus).
Such corpora can be indexed for Similarity Queries, queried by semantic similarity, clustered etc.
Here is an example corpus. It consists of 9 documents, where each document is a string consisting of a single sentence.
text_corpus = [
    "Human machine interface for lab abc computer applications",
    "A survey of user opinion of computer system response time",
    "The EPS user interface management system",
    "System and human system engineering testing of EPS",
    "Relation of user perceived response time to error measurement",
    "The generation of random binary unordered trees",
    "The intersection graph of paths in trees",
    "Graph minors IV Widths of trees and well quasi ordering",
    "Graph minors A survey",
]
Important
The above example loads the entire corpus into memory. In practice, corpora may be very large, so loading them into memory may be impossible. Gensim intelligently handles such corpora by streaming them one document at a time. See Corpus Streaming – One Document at a Time for details.
This is a particularly small example of a corpus for illustration purposes. Another example could be a list of all the plays written by Shakespeare, list of all wikipedia articles, or all tweets by a particular person of interest.
After collecting our corpus, there are typically a number of preprocessing steps we want to undertake. We’ll keep it simple and just remove some commonly used English words (such as ‘the’) and words that occur only once in the corpus. In the process of doing so, we’ll tokenize our data. Tokenization breaks up the documents into words (in this case using space as a delimiter).
Important
There are better ways to perform preprocessing than just lower-casing and
splitting by space. Effective preprocessing is beyond the scope of this
tutorial: if you’re interested, check out the
gensim.utils.simple_preprocess() function.
# Create a set of frequent words
stoplist = set('for a of the and to in'.split(' '))
# Lowercase each document, split it by white space and filter out stopwords
texts = [[word for word in document.lower().split() if word not in stoplist]
         for document in text_corpus]

# Count word frequencies
from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
    for token in text:
        frequency[token] += 1

# Only keep words that appear more than once
processed_corpus = [[token for token in text if frequency[token] > 1] for text in texts]
pprint.pprint(processed_corpus)
Out:
[['human', 'interface', 'computer'], ['survey', 'user', 'computer', 'system', 'response', 'time'], ['eps', 'user', 'interface', 'system'], ['system', 'human', 'system', 'eps'], ['user', 'response', 'time'], ['trees'], ['graph', 'trees'], ['graph', 'minors', 'trees'], ['graph', 'minors', 'survey']]
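As a point of comparison, here is a rough sketch of what the gensim.utils.simple_preprocess() helper mentioned above does to the same raw corpus. It tokenizes and lower-cases but does not remove stopwords or rare words, so its output is not identical to processed_corpus; the snippet is illustrative only and is not used in the rest of the tutorial.

from gensim.utils import simple_preprocess

# Tokenize and lower-case each raw document; tokens shorter than 2 or longer
# than 15 characters are dropped by default.
alt_corpus = [simple_preprocess(document) for document in text_corpus]
pprint.pprint(alt_corpus[:2])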
Before proceeding, we want to associate each word in the corpus with a unique
integer ID. We can do this using the
gensim.corpora.Dictionary
class. This dictionary defines the vocabulary of all words that our
processing knows about.
from gensim import corpora

dictionary = corpora.Dictionary(processed_corpus)
print(dictionary)
Out:
Dictionary(12 unique tokens: ['computer', 'human', 'interface', 'response', 'survey']...)
Because our corpus is small, there are only 12 different tokens in this
gensim.corpora.Dictionary. For larger corpora, dictionaries that
contain hundreds of thousands of tokens are quite common.
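For such large corpora you would normally prune the dictionary before going further. Below is a sketch of the usual call; the thresholds are arbitrary example values, and you would not run this on the tiny 12-token dictionary above, because it would remove almost everything.

# Keep tokens that appear in at least 5 documents and in no more than half of
# all documents, and cap the vocabulary at 100,000 entries.
dictionary.filter_extremes(no_below=5, no_above=0.5, keep_n=100000)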
To infer the latent structure in our corpus we need a way to represent documents that we can manipulate mathematically. One approach is to represent each document as a vector of features. For example, a single feature may be thought of as a question-answer pair:
How many times does the word splonge appear in the document? Zero.
How many paragraphs does the document consist of? Two.
How many fonts does the document use? Five.
The question is usually represented only by its integer id (such as 1, 2 and 3).
The representation of this document then becomes a series of pairs like
(1, 0.0), (2, 2.0), (3, 5.0).
This is known as a dense vector, because it contains an explicit answer to each of the above questions.
If we know all the questions in advance, we may leave them implicit
and simply represent the document as
(0, 2, 5).
This sequence of answers is the vector for our document (in this case a 3-dimensional dense vector).
For practical purposes, only questions to which the answer is (or
can be converted to) a single floating point number are allowed in Gensim.
In practice, vectors often consist of many zero values.
To save memory, Gensim omits all vector elements with value 0.0.
The above example thus becomes
(2, 2.0), (3, 5.0).
This is known as a sparse vector or bag-of-words vector.
The values of all missing features in this sparse representation can be unambiguously resolved to zero,
0.0.
Assuming the questions are the same, we can compare the vectors of two different documents to each other.
For example, assume we are given two vectors
(0.0, 2.0, 5.0) and
(0.1, 1.9, 4.9).
Because the vectors are very similar to each other, we can conclude that the documents corresponding to those vectors are similar, too.
Of course, the correctness of that conclusion depends on how well we picked the questions in the first place.
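To make that comparison concrete, here is a small sketch that scores the two example dense vectors from the paragraph above with cosine similarity, using plain NumPy (NumPy is not otherwise required for this part of the tutorial).

import numpy as np

doc1 = np.array([0.0, 2.0, 5.0])
doc2 = np.array([0.1, 1.9, 4.9])

# Cosine similarity: 1.0 means identical direction, 0.0 means unrelated.
similarity = doc1.dot(doc2) / (np.linalg.norm(doc1) * np.linalg.norm(doc2))
print(similarity)  # close to 1.0, so the documents look very similar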
Another approach to represent a document as a vector is the bag-of-words
model.
Under the bag-of-words model each document is represented by a vector
containing the frequency counts of each word in the dictionary.
For example, assume we have a dictionary containing the words
['coffee', 'milk', 'sugar', 'spoon'].
A document consisting of the string
"coffee milk coffee" would then
be represented by the vector
[2, 1, 0, 0] where the entries of the vector
are (in order) the occurrences of “coffee”, “milk”, “sugar” and “spoon” in
the document. The length of the vector is the number of entries in the
dictionary. One of the main properties of the bag-of-words model is that it
completely ignores the order of the tokens in the document that is encoded,
which is where the name bag-of-words comes from.
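Here is the coffee example written out as a quick sketch. The integer IDs that Dictionary assigns may differ from the order used in the text above, which is why token2id is printed alongside the sparse pairs.

from gensim import corpora

toy_dictionary = corpora.Dictionary([['coffee', 'milk', 'sugar', 'spoon']])
print(toy_dictionary.token2id)

# "coffee milk coffee" -> coffee twice, milk once, sugar and spoon implicitly zero
print(toy_dictionary.doc2bow("coffee milk coffee".split()))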
Our processed corpus has 12 unique words in it, which means that each document will be represented by a 12-dimensional vector under the bag-of-words model. We can use the dictionary to turn tokenized documents into these 12-dimensional vectors. We can see what these IDs correspond to:
pprint.pprint(dictionary.token2id)
Out:
{'computer': 0, 'eps': 8, 'graph': 10, 'human': 1, 'interface': 2, 'minors': 11, 'response': 3, 'survey': 4, 'system': 5, 'time': 6, 'trees': 9, 'user': 7}
For example, suppose we wanted to vectorize the phrase “Human computer
interaction” (note that this phrase was not in our original corpus). We can
create the bag-of-word representation for a document using the
doc2bow
method of the dictionary, which returns a sparse representation of the word
counts:
new_doc = "Human computer interaction" new_vec = dictionary.doc2bow(new_doc.lower().split()) print(new_vec)
Out:
[(0, 1), (1, 1)]
The first entry in each tuple corresponds to the ID of the token in the dictionary, the second corresponds to the count of this token.
Note that “interaction” did not occur in the original corpus and so it was not included in the vectorization. Also note that this vector only contains entries for words that actually appeared in the document. Because any given document will only contain a few words out of the many words in the dictionary, words that do not appear in the vectorization are represented as implicitly zero as a space saving measure.
We can convert our entire original corpus to a list of vectors:
bow_corpus = [dictionary.doc2bow(text) for text in processed_corpus]
pprint.pprint(bow_corpus)
Out:
[[(0, 1), (1, 1), (2, 1)],
 [(0, 1), (3, 1), (4, 1), (5, 1), (6, 1), (7, 1)],
 [(2, 1), (5, 1), (7, 1), (8, 1)],
 [(1, 1), (5, 2), (8, 1)],
 [(3, 1), (6, 1), (7, 1)],
 [(9, 1)],
 [(9, 1), (10, 1)],
 [(9, 1), (10, 1), (11, 1)],
 [(4, 1), (10, 1), (11, 1)]]
Note that while this list lives entirely in memory, in most applications you
will want a more scalable solution. Luckily,
gensim allows you to use any
iterator that returns a single document vector at a time. See the
documentation for more details.
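A common pattern for that is a small class whose __iter__ yields one bag-of-words vector at a time, so that only a single document is ever in memory. This is only a sketch: it assumes a hypothetical file mycorpus.txt with one document per line, and it reuses the dictionary built above.

class StreamedCorpus:
    def __init__(self, path, dictionary):
        self.path = path
        self.dictionary = dictionary

    def __iter__(self):
        with open(self.path) as infile:
            for line in infile:
                # One document per line; tokens separated by whitespace.
                yield self.dictionary.doc2bow(line.lower().split())

streamed_corpus = StreamedCorpus('mycorpus.txt', dictionary)  # hypothetical file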
Important
The distinction between a document and a vector is that the former is text,
and the latter is a mathematically convenient representation of the text.
Sometimes, people will use the terms interchangeably: for example, given
some arbitrary document
D, instead of saying “the vector that
corresponds to document
D”, they will just say “the vector
D” or
the “document
D”. This achieves brevity at the cost of ambiguity.
As long as you remember that documents exist in document space, and that vectors exist in vector space, the above ambiguity is acceptable.
Important
Depending on how the representation was obtained, two different documents may have the same vector representations.
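For example, with the dictionary built above, the two different documents below end up with exactly the same bag-of-words vector, because the words that distinguish them are not in the vocabulary. The second sentence is made up for this sketch.

print(dictionary.doc2bow("Human computer interaction".lower().split()))
print(dictionary.doc2bow("Computer human symbiosis".lower().split()))
# Both print [(0, 1), (1, 1)]: "interaction" and "symbiosis" are unknown tokens.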
Now that we have vectorized our corpus we can begin to transform it using
models. We use model as an abstract term referring to a transformation from
one document representation to another. In
gensim documents are
represented as vectors so a model can be thought of as a transformation
between two vector spaces. The model learns the details of this
transformation during training, when it reads the training
Corpus.
One simple example of a model is tf-idf. The tf-idf model transforms vectors from the bag-of-words representation to a vector space where the frequency counts are weighted according to the relative rarity of each word in the corpus.
Here’s a simple example. Let’s initialize the tf-idf model, training it on our corpus and transforming the string “system minors”:
from gensim import models

# train the model
tfidf = models.TfidfModel(bow_corpus)

# transform the "system minors" string
words = "system minors".lower().split()
print(tfidf[dictionary.doc2bow(words)])
Out:
[(5, 0.5898341626740045), (11, 0.8075244024440723)]
The
tfidf model again returns a list of tuples, where the first entry is
the token ID and the second entry is the tf-idf weighting. Note that the ID
corresponding to “system” (which occurred 4 times in the original corpus) has
been weighted lower than the ID corresponding to “minors” (which only
occurred twice).
You can save trained models to disk and later load them back, either to continue training on new training documents or to transform new documents.
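A minimal sketch of that round trip is below; the file name is arbitrary.

tfidf.save('example.tfidf')                        # persist the trained model
tfidf_reloaded = models.TfidfModel.load('example.tfidf')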
gensim offers a number of different models/transformations.
For more, see Topics and Transformations.
Once you’ve created the model, you can do all sorts of cool stuff with it. For example, to transform the whole corpus via TfIdf and index it, in preparation for similarity queries:
from gensim import similarities

index = similarities.SparseMatrixSimilarity(tfidf[bow_corpus], num_features=12)
and to query the similarity of our query document
query_document against every document in the corpus:
query_document = 'system engineering'.split()
query_bow = dictionary.doc2bow(query_document)
sims = index[tfidf[query_bow]]
print(list(enumerate(sims)))
Out:
[(0, 0.0), (1, 0.32448703), (2, 0.41707572), (3, 0.7184812), (4, 0.0), (5, 0.0), (6, 0.0), (7, 0.0), (8, 0.0)]
How to read this output? Document 3 has a similarity score of 0.718=72%, document 2 has a similarity score of 42% etc. We can make this slightly more readable by sorting:
for document_number, score in sorted(enumerate(sims), key=lambda x: x[1], reverse=True):
    print(document_number, score)
Out:
3 0.7184812
2 0.41707572
1 0.32448703
0 0.0
4 0.0
5 0.0
6 0.0
7 0.0
8 0.0
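If you plan to run more queries later, the similarity index can be persisted and reloaded like any other gensim object. A sketch, with an arbitrary file name:

index.save('deerwester.index')
index = similarities.SparseMatrixSimilarity.load('deerwester.index')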
The core concepts of
gensim are:
Corpus: a collection of documents.
Vector: a mathematically convenient representation of a document.
Model: an algorithm for transforming vectors from one representation to another.
We saw these concepts in action. First, we started with a corpus of documents. Next, we transformed these documents to a vector space representation. After that, we created a model that transformed our original vector representation to TfIdf. Finally, we used our model to calculate the similarity between some query document and all documents in the corpus.
There’s still much more to learn about Corpora and Vector Spaces.
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

img = mpimg.imread('run_core_concepts.png')
imgplot = plt.imshow(img)
plt.axis('off')
plt.show()
Out:
/Volumes/work/workspace/gensim_misha/docs/src/gallery/core/run_core_concepts.py:331: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure. plt.show()
Total running time of the script: ( 0 minutes 1.265 seconds)
Estimated memory usage: 36 MB
Gallery generated by Sphinx-Gallery | https://radimrehurek.com/gensim/auto_examples/core/run_core_concepts.html | CC-MAIN-2019-47 | en | refinedweb |
gomi
Primitive macros in go, micros.
All gomi files should end with .gomi
gomi introduces 2 features: the simple text replacement micro #mi, and the shout keyword, a micro for errors.
Declaration of ALL micros should take place at the top of the file, between the package and import keywords.
package main

#mi PI 3.14159
#mi obj_type Object.content.message.GetType()

import (
    "fmt"
    "os"
)
shout
shout is a micro for errors, example usage:
shout err

// or

shout e := obj.Err()
will get converted to
if err != nil {
    panic(err)
}

// or

if e := obj.Err(); e != nil {
    panic(e)
}
The default shout error handler is set to panic(V), but you can change it by using a micro:
#shout log.Fatal(V)
How to use
setup
- install parser.go
- build it by running go build -o gomi.exe parser.go
- add the path to the gomi executable to your PATH
usage
cd into the .gomi file location and run gomi gen sample.gomi to generate the .go file,
or you can use it just like the go compiler: just change go run .... to gomi run .... | https://golangexample.com/primitive-macros-in-go-micros/ | CC-MAIN-2022-40 | en | refinedweb
---
All classes with the Cfn prefix in this module (CFN Resources) are always stable and safe to use.
This module is part of the AWS Cloud Development Kit project.
import software.amazon.awscdk.services.kafkaconnect.*;
There are no official hand-written (L2) constructs for this service yet. For now, use the automatically generated L1 constructs, in the same way you would use the CloudFormation AWS::KafkaConnect resources directly.
(Read the CDK Contributing Guide and submit an RFC if you are interested in contributing to this construct library.) | https://docs.aws.amazon.com/cdk/api/v1/java/software/amazon/awscdk/services/kafkaconnect/package-summary.html | CC-MAIN-2022-40 | en | refinedweb |
This is part two of a tutorial that describes how to use the ASTRA Toolbox to create a 3D reconstruction from 2D projection images that were taken with a cone-beam CT scanner. In part one, I created a synthetic dataset. In this part, I'll use that dataset to create a reconstruction. Note that, if you start from a real dataset instead of from this simulated one, you might still have to convert your X-ray images into projection images. This is explained in part 1 of my previous series of articles on tomography. However, apart from that, these projection images are completely equivalent to real ones, albeit that they look as if taken with a mechanically perfect (but not noiseless!) scanner.
At the risk of repeating a few things from part one, I'll list all the steps to take to go from the projection images to the final reconstruction.
Scan Parameters
The first thing to do is to gather all relevant information about the scan. This information will be in some sort of logfile that comes with the scan, since the parameters can be adjusted for each scan, even if the same scanner is used. For example, the position of the rotating stage with the sample can be shifted closer to or farther away from the detector, to change the magnification. For the purpose of this article, I'll use the Python fragment from part one of this tutorial, which is repeated below.
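A minimal sketch of that configuration fragment, using exactly the values discussed in the next paragraph; the variable names match those used in the complete program at the end of this article:

```python
import numpy as np

distance_source_origin = 300  # [mm]
distance_origin_detector = 100  # [mm]
detector_pixel_size = 1.05  # [mm]
detector_rows = 200  # Vertical size of detector [pixels].
detector_cols = 200  # Horizontal size of detector [pixels].
num_of_projections = 180
angles = np.linspace(0, 2 * np.pi, num=num_of_projections, endpoint=False)
```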
This “logfile” provides enough information to be able to create a reconstruction from the projection images. Of course, you also have to be sure that the projections were actually taken with a cone-beam scanner.
The origin of the coordinate system of the scanner is the center of the object (assuming that the object is in the center of the rotating stage, of course). From the Python code, you can see that the distance between the source and the origin, let's call that \(d_\mathrm{so}\), is 300 mm, and that the distance between the origin and the detector, \(d_\mathrm{od}\), is 100 mm. This means that the magnification is \((d_\mathrm{so}+d_\mathrm{od})/d_\mathrm{so}=4/3\) in this case. The detector has square pixels with size \(s_\mathrm{det}=1.05\,\mathrm{mm}\), and has 200 rows in the vertical direction and 200 columns in the horizontal direction. The angles are equally spaced in 180 steps over 360 degrees, which means that a projection was made every two degrees.
You should be able to adapt these values by looking at the logfiles of your own scans. Sometimes the distances are given as source–origin and source–detector, so you might have to adapt for that. In the Python code that follows, I assume that your projections are in a directory called dataset and that your files are named proj####.tif.
Loading the dataset is done in two steps. The first step is reading the dataset from disk and putting it in a 3D NumPy array, as follows.
projections = np.zeros((detector_rows, num_of_projections, detector_cols))
for i in range(num_of_projections):
    im = imread(join(input_dir, 'proj%04d.tif' % i)).astype(float)
    im /= 65535
    projections[:, i, :] = im
This is straightforward, but note that it is axis one that increases with each projection. The code assumes 16-bit TIFF files. This is quite common, but you might have to adapt this if your projection images are in a different format. I assume here that the projections use the full range of 0 to 65535. This will normally never be the case with actual data, so you will have to rescale your data. The actual maximum is not important, because we are working with relative numbers here. I have rescaled for a maximum of one in this example. The minimum, however, should be zero after you have converted the X-ray images into projection images (again, see my existing series of articles on tomography). This is because, in places where there is no object in between the source and the detector, the measured density of the (non-existent) material should be exactly zero.
Creating the Projection Geometry
The second step is then to put the NumPy variable projections into the toolbox by using the function astra.data3d.create(). To do that, you first have to describe the geometry of your projection data by creating a projection geometry through calling astra.create_proj_geom().
The creation of the projection geometry is one of the crucial steps that you need to understand well to successfully apply the ASTRA Toolbox. There is one important property of the geometries of the ASTRA Toolbox, which is that the voxels of the reconstruction area have a size of 1 (so 1×1×1 in the case of the 3D reconstruction that we are working on). This implies that you have to scale the rest of your parameters to make this so. A little “trick” to make this easier to handle is to put the detector virtually in the middle of the object. This is not mandatory, but it simplifies thinking about it. Or, at least, that’s what I think.
The parameters of the scanner as given in the Python code fragment above are the following.
\[\begin{align}
d_\mathrm{so}&=300\,\mathrm{mm}\\
d_\mathrm{od}&=100\,\mathrm{mm}\\
s_\mathrm{det}&=1.05\,\mathrm{mm}
\end{align}\]
Shifting the detector to the origin amounts to setting the origin–detector distance to zero and rescaling the size of the detector pixels to compensate for that. This results in a new set of parameters:
\[\begin{align}
d_\mathrm{so}'&=d_\mathrm{so}=300\,\mathrm{mm}\\
d_\mathrm{od}'&=0\,\mathrm{mm}\\
s_\mathrm{det}'&=\frac{d_\mathrm{so}}{d_\mathrm{so}+d_\mathrm{od}}s_\mathrm{det}\approx 0.79\,\mathrm{mm}
\end{align}\]
After shifting the detector to the origin, the parameter \(s_\mathrm{det}'\) gives you the actual size of the reconstruction voxels in real-world units. In the example scanner, each voxel in the reconstructed object will correspond to a cube of 0.79×0.79×0.79 mm. This is important information if you need to know or measure the actual dimensions of the scanned object in real-world units.
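As a quick numeric check of these formulas (my own illustration, using the values from the scan parameters above):

```python
d_so, d_od, s_det = 300.0, 100.0, 1.05   # mm
voxel_size = d_so / (d_so + d_od) * s_det
print(voxel_size)                        # 0.7875 mm per reconstruction voxel
```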
The next step is then to scale the parameters so that the reconstruction voxels get a size of 1×1×1. This results in the final set of parameters:
\[\begin{align}
d_\mathrm{so}''&=\frac{d_\mathrm{so}'}{s_\mathrm{det}'}=\frac{d_\mathrm{so}+d_\mathrm{od}}{s_\mathrm{det}}\approx 381\\
d_\mathrm{od}''&=0\\
s_\mathrm{det}''&=1
\end{align}\]
I’ve left out the units in this last set of expressions, since it seems confusing to keep including them after this scaling. In Python, this results in the following code.
proj_geom = \
    astra.create_proj_geom('cone', 1, 1, detector_rows, detector_cols, angles,
                           (distance_source_origin + distance_origin_detector) /
                           detector_pixel_size, 0)
projections_id = astra.data3d.create('-sino', proj_geom, projections)
The result of this step is that you now have a projections_id handle to the projection data in the Toolbox that you need to provide to subsequent calls to the Toolbox.
Reconstruction
The reconstruction itself is normally not the difficult part. You create a volume geometry and then create a 3D data object inside of the Toolbox to hold the reconstruction. This provides you with a reconstruction_id handle. You then choose an algorithm and specify its parameters. For a cone-beam dataset, the FDK_CUDA algorithm is the obvious one to start with. The resulting Python code is shown below.
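A minimal sketch of this step, using the standard ASTRA calls for configuring and running an FDK_CUDA reconstruction:

```python
vol_geom = astra.creators.create_vol_geom(detector_cols, detector_cols,
                                          detector_rows)
reconstruction_id = astra.data3d.create('-vol', vol_geom, data=0)
alg_cfg = astra.astra_dict('FDK_CUDA')
alg_cfg['ProjectionDataId'] = projections_id
alg_cfg['ReconstructionDataId'] = reconstruction_id
algorithm_id = astra.algorithm.create(alg_cfg)
astra.algorithm.run(algorithm_id)
reconstruction = astra.data3d.get(reconstruction_id)
```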
After this part of the code has run, the NumPy array reconstruction contains the 3D reconstructed object. You could then cut off negative values, which should be due to noise if the scaling of your dataset was done correctly. To save the reconstruction as a series of individual slices, you can convert the reconstruction volume to an 8-bit (or 16-bit) unsigned int and save the slices as PNG or TIFF files:
reconstruction[reconstruction < 0] = 0
reconstruction /= np.max(reconstruction)
reconstruction = np.round(reconstruction * 255).astype(np.uint8)
for i in range(detector_rows):
    im = reconstruction[i, :, :]
    im = np.flipud(im)
    imwrite(join(output_dir, 'reco%04d.png' % i), im)
Note the essential call to flipud(). This is necessary because of the inverted Y-axis that you get when you convert from a NumPy array to an image. The resulting PNG for the so-called central slice, which is slice 100 in this case, is shown in Figure 1.
I hope that this extensive tutorial article helps you in applying the ASTRA Toolbox to your own cone-beam CT datasets!
Complete Python Program
The complete Python program follows below. Note that you need an NVIDIA GPU to run this code, since the 3D algorithms of the ASTRA Toolbox are implemented as CUDA GPU code.
from __future__ import division

import numpy as np
from os import mkdir
from os.path import join, isdir
from imageio import imread, imwrite
import astra

# Configuration.
distance_source_origin = 300  # [mm]
distance_origin_detector = 100  # [mm]
detector_pixel_size = 1.05  # [mm]
detector_rows = 200  # Vertical size of detector [pixels].
detector_cols = 200  # Horizontal size of detector [pixels].
num_of_projections = 180
angles = np.linspace(0, 2 * np.pi, num=num_of_projections, endpoint=False)
input_dir = 'dataset'
output_dir = 'reconstruction'

# Load projections.
projections = np.zeros((detector_rows, num_of_projections, detector_cols))
for i in range(num_of_projections):
    im = imread(join(input_dir, 'proj%04d.tif' % i)).astype(float)
    im /= 65535
    projections[:, i, :] = im

# Copy projection images into ASTRA Toolbox.
proj_geom = \
    astra.create_proj_geom('cone', 1, 1, detector_rows, detector_cols, angles,
                           (distance_source_origin + distance_origin_detector) /
                           detector_pixel_size, 0)
projections_id = astra.data3d.create('-sino', proj_geom, projections)

# Create reconstruction.
vol_geom = astra.creators.create_vol_geom(detector_cols, detector_cols,
                                          detector_rows)
reconstruction_id = astra.data3d.create('-vol', vol_geom, data=0)
alg_cfg = astra.astra_dict('FDK_CUDA')
alg_cfg['ProjectionDataId'] = projections_id
alg_cfg['ReconstructionDataId'] = reconstruction_id
algorithm_id = astra.algorithm.create(alg_cfg)
astra.algorithm.run(algorithm_id)
reconstruction = astra.data3d.get(reconstruction_id)

# Limit and scale reconstruction.
reconstruction[reconstruction < 0] = 0
reconstruction /= np.max(reconstruction)
reconstruction = np.round(reconstruction * 255).astype(np.uint8)

# Save reconstruction.
if not isdir(output_dir):
    mkdir(output_dir)
for i in range(detector_rows):
    im = reconstruction[i, :, :]
    im = np.flipud(im)
    imwrite(join(output_dir, 'reco%04d.png' % i), im)

# Cleanup.
astra.algorithm.delete(algorithm_id)
astra.data3d.delete(reconstruction_id)
astra.data3d.delete(projections_id)
Hi,
Thank you very much for the example above. I have used it to test on my datasets successfully, in the sense that I managed to get the reconstruction. However, some more issues are getting me stuck.
First, even having 8 GB of memory in the GPU, I get the following error for datasets of about 300 MB:
Error: CUDA error 2: out of memory.
Error: Failed to allocate 672x356x768 GPU buffer
Error: CUDA error 11: invalid argument.
The other points are:
How do I control some other parameters, such as the center of rotation, number of iterations, output histogram limits?
I will appreciate your help ...
Kind regards,
The errors that you get seem to point to insufficient memory, even though your dataset is only 300 MB. You have to take into account that a full reconstruction volume (typically n×n×n floats for n×n projection images) has to fit in the memory of the GPU card, in addition to your dataset that takes up a volume of n×n×p floats, with p the number of projections (even if your images are, e.g., 16-bit compressed TIFFs). I would start by downscaling the projection images and creating a smaller reconstruction first. Another option is to crop the projection images, although it is essential that you do not cut off parts of the object in any of the projections.
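For illustration, downscaling can be as simple as binning the projection stack by an integer factor and adapting the geometry to the larger effective pixel size. The factor of 2 below is only an example, and it assumes the detector dimensions are divisible by it; the variable names are those of the tutorial.

```python
binning = 2  # example factor; detector_rows and detector_cols must be divisible by it

projections_small = projections.reshape(
    detector_rows // binning, binning,
    num_of_projections,
    detector_cols // binning, binning).mean(axis=(1, 4))

detector_rows_small = detector_rows // binning
detector_cols_small = detector_cols // binning
detector_pixel_size_small = detector_pixel_size * binning  # pixels are now larger

proj_geom_small = astra.create_proj_geom(
    'cone', 1, 1, detector_rows_small, detector_cols_small, angles,
    (distance_source_origin + distance_origin_detector) / detector_pixel_size_small, 0)
```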
Hi Tom,
Thank you for this tutorial.
I have the same problem as Liebert. I want to use the code for a real CBCT case: n projections of 1840x1456 pixels. In my first test, the reconstruction volume is defined as 438x438x308 voxels. When n is less than 49, everything works. But when n = 50 or more, the call to astra_mex_algorithm_run() (when calling astra_mex_algorithm('iterate', alg_id, this.IterNum)) crashes my Matlab session with the following system error: "the display driver was no longer responding and was recovered".
Looking at the GPU RAM consumption, I see that the crash occurs when the GPU RAM limit (3 GB) is reached; 37 MB is used for each projection. So, for a complete projection set of 1400 images, I would need 51 GB of GPU RAM.
Does the Astra toolbox handle the usual CBCT cases, or is it limited to small datasets?
Thanks in advance for your answer.
Kind regards,
thanks for the question, really helped
Your other questions are more diverse and more difficult to answer. To correct the center of rotation, have a look at the cone_vec geometry. The FDK_CUDA algorithm of the example is not iterative, so there is no number of iterations. See the documentation for algorithms such as SIRT_CUDA that are iterative.
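For illustration, here is a sketch of how the circular geometry of this tutorial could be expressed as a cone_vec geometry with a horizontal detector shift, which effectively moves the center of rotation. The vector layout follows the ASTRA documentation; the shift of 2.5 detector pixels is only an example value.

```python
import numpy as np
import astra

shift = 2.5  # example center-of-rotation correction, in detector pixels
d = (distance_source_origin + distance_origin_detector) / detector_pixel_size

vectors = np.zeros((len(angles), 12))
for i, a in enumerate(angles):
    vectors[i, 0:3] = [np.sin(a) * d, -np.cos(a) * d, 0]         # source position
    vectors[i, 3:6] = [shift * np.cos(a), shift * np.sin(a), 0]  # detector center, shifted along u
    vectors[i, 6:9] = [np.cos(a), np.sin(a), 0]                  # u: detector pixel (0,0) -> (0,1)
    vectors[i, 9:12] = [0, 0, 1]                                 # v: detector pixel (0,0) -> (1,0)

proj_geom_vec = astra.create_proj_geom('cone_vec', detector_rows, detector_cols, vectors)
```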
This is a wonderful tutorial and thank you very much for sharing.
I notice that this is Python code and I'm using MATLAB to reconstruct 3D images. So could you please tell me how to obtain the projection id in MATLAB? I mean, in Python we can use "projections_id = astra.data3d.create('-sino', proj_geom, projections)", however I used "proj_id=astra_mex_data3d('create', '-proj3d', proj_geom, proj_data);" in MATLAB and got a wrong reconstruction result.
Thank you in advance!
I've solved this problem. I used "proj_id=astra_mex_data3d('create', '-sino', proj_geom, proj_data);" and got a correct result finally. It seems that changing '-sino' to '-proj3d' doesn't influence the result. The reason I got the wrong result before was that the angle of my observation is not correct. I mean, it was the correct result indeed. Thank you again for sharing!
Thank you very much Tom! It's a very useful tutorial, and amazing website! I've learned a lot from your fabulous work.
I have a similar problem to Liebert's; it's also a memory issue. I just want to make sure: if my X-ray images are 2000*2000, is it necessary that my volume matrix be 2000*2000*2000? That would be a huge RAM consumption. I saw you mentioned downscaling the projection images; any suggestion how to do that? Sorry, I'm very new to tomography.
Much appreciate your kind help!
Sussi
If the object does not reach the edges of your projection images, then you could first of all crop them. However, you should carefully check that no part of the object is cut off in any of the images.
If you have placed the projections virtually in the center of the reconstruction volume, as I do in the tutorial, then you indeed need a reconstruction volume that is the size of the projection images. However, you could start by reconstructing only the central (2D) slice. This will surely fit in the GPU memory (if you also adapt the projections), and it'll give you an idea of how a full size reconstruction would look.
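As an illustration of the central-slice idea (my own sketch, not from the article): the middle detector row can be treated as a 2D fan-beam sinogram and reconstructed on its own, which needs only a tiny fraction of the memory. Strictly speaking this is only exact for the central plane of the cone beam.

```python
import astra

sinogram = projections[detector_rows // 2, :, :]  # shape (num_of_projections, detector_cols)

proj_geom_2d = astra.create_proj_geom(
    'fanflat', 1, detector_cols, angles,
    (distance_source_origin + distance_origin_detector) / detector_pixel_size, 0)
sinogram_id = astra.data2d.create('-sino', proj_geom_2d, sinogram)

vol_geom_2d = astra.create_vol_geom(detector_cols, detector_cols)
slice_id = astra.data2d.create('-vol', vol_geom_2d)

cfg = astra.astra_dict('FBP_CUDA')
cfg['ProjectionDataId'] = sinogram_id
cfg['ReconstructionDataId'] = slice_id
fbp_id = astra.algorithm.create(cfg)
astra.algorithm.run(fbp_id)
central_slice = astra.data2d.get(slice_id)
```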
Downscaling the projection images is simply making them smaller, e.g., converting your 2000x2000 images to 500x500 images. Of course, you have to adapt the geometry accordingly.
I'll update this answer with more information on reconstructions that don't fit in the memory of the GPU in the future, but I'll have to ask for your patience for now.
Hi,
Thank you very much for sharing this example! I have successfully used it and I have a question about the use of the OpTomo object with the ASTRA toolbox. I know this was not covered in your tutorial but maybe you are also familiar with it. After using astra.create_projector(..), I created an OpTomo object with astra.OpTomo(..) and I would like to get the dense projection matrix (on a toy example so that I won't run out of memory). Do you know if it's possible with this package and how to do it?
Thank you in advance,
Kind regards
I have very little experience with the OpTomo class, but I don't think that it's possible to get the projection matrix out of it. However, using the normal Python API you can use astra.projector.matrix() and astra.matrix.get() to get the projection matrix. I've used that successfully (for small problems, of course), albeit using the MATLAB API of the Toolbox.
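For a small 2D toy problem, that looks roughly like this in Python (note that explicit matrices are only available for the CPU projectors, and the matrix quickly becomes huge):

```python
import numpy as np
import astra

vol_geom = astra.create_vol_geom(32, 32)
proj_geom = astra.create_proj_geom('parallel', 1.0, 48,
                                   np.linspace(0, np.pi, 24, endpoint=False))
projector_id = astra.create_projector('line', proj_geom, vol_geom)

matrix_id = astra.projector.matrix(projector_id)
A = astra.matrix.get(matrix_id)   # SciPy sparse matrix of shape (24 * 48, 32 * 32)
A_dense = A.toarray()             # only feasible for toy sizes
```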
Thank you very much for sharing your amazing tutorial. I have a question about loading datasets and pre-processing.
Currently, I'm working on a cone-beam CT reconstruction of a sample tube containing animal tissue. The outline of the tube has been reconstructed almost clearly, but the contents of the tube have not been reconstructed; nothing shows up inside.
I think this is a pre-processing issue. When the projection images are loaded in float format, the maximum value is about 60000, but the minimum value varies from 10000 to 30000. I think some rescaling and pre-processing is necessary.
Should the minimum value be 0, as in this case? Or should I apply a pre-processing technique such as minus-log using tomopy? Could you please advise me?
If your pixels have the values that you mention, then I would guess that your images are still raw X-ray images (with the dense parts of the sample dark and the background white). These cannot be used for tomography; they must be, as you mention, "log-transformed". I explain this on Tomography, Part 1: Projections, but maybe I need to add an extra tutorial, because this question pops up regularly… You can estimate the value of I(0) from the images by averaging background pixels.
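A minimal sketch of that conversion (the file name and the background patch used to estimate I(0) are assumptions that must be adapted to your own images):

```python
import numpy as np
from imageio import imread

raw = imread('raw0000.tif').astype(float)   # hypothetical raw X-ray image

I0 = raw[:50, :50].mean()                   # average over a background-only region

projection = -np.log(np.clip(raw, 1, None) / I0)
projection[projection < 0] = 0              # background should end up at (about) zero
```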
Thank you for your message. After checking, as you say, there is a high probability that my images are still raw X-ray images. I'll try to apply the log transform and/or the flat-field correction. Thank you very much.
Yes, you need to apply both corrections. However, the "log-transform" is much more important than the flat-field correction. I'd definitely start with that.
Thank you for your reply, I will verify it.
Have you tried the Sophiabeads Datasets? I'm currently working with these public datasets, but I can't generate good reconstructed images. Following this tutorial, rescaling is done so that the reconstruction voxels get a size of 1x1x1. Does this approach apply to all cases when using the ASTRA Toolbox? Are there any geometries to which it cannot be applied?
Hi Tom,
I have TIFF images from a micro-CT scan amounting to 1000 projections, about 7 GB in memory size. I wanted to do a reconstruction of the images.
I seem to have difficulties in
1) storing the projection data as a 3D array in the Matlab environment
2) importing the projection and volume data into the ASTRA environment in Matlab
May you or anyone please assist with the necessary steps, as well as code I can use to perform this?
Thanks
Hi Tom, thanks for your expertise. My question is the following: we have a dataset matrix (x, y, z), as you illustrated above with (detector_rows, num_of_projections, detector_cols). Then, before reconstruction, we need to define vol_geom(a, b, c). I want to know the relationship between (x, y, z) and (a, b, c).
I mean, if I set vol_geom as vol_geom = astra.creators.create_vol_geom(detector_cols, detector_cols, detector_rows) and then run the algorithm, there is an error that says the dimensions did not match. Only when I set vol_geom as vol_geom = astra.creators.create_vol_geom(projections, detector_cols, detector_rows) can the algorithm run, but the resolution does not seem correct. So my question is: what is the relation between (x, y, z) and (a, b, c)?
Hi Tom, you can ignore the two questions above; maybe I did not get the point. What I really want to ask is how to compute a sinogram from real projections, not from a simulated volume. Thanks.
Hi BIG,
I'm working on the same thing. Did you figure out how to do it? I'm working with fluoroscopy images, and I'm getting an extruded kind of shape instead of a real 3D shape...
Is there a specific number of images that you need to get, for instance?
Thanks,
SRR
Thanks for the tutorial. It was incredibly useful.
I am trying to reconstruct just the central slice from the full cone data set, while keeping the full data set in memory for future reconstruction. I thought setting vol_geom(detector_cols, detector_rows, 1) would grab it, but it doesn't seem to. Any advice?
I have used ASTRA in my 3D reconstruction program, where I have 300 images taken at 1.2-degree steps and I am generating a 3D model from them. But my images are rather gray, and the tomography leaves out the gray part, which is only visible in 15-20 of the 300 images. Is there any way I can capture that kind of feature in the model?
Hi, Tom. It's a pretty nice and clear tutorial for the new developers like me to learn this toolbox.
I have a question about creating the projection geometry data. In the command "astra.data3d.create('-sino', proj_geom, projections)", you choose the object data set as "sino". Does this "sino" mean sinogram? If it means sinogram, do we need to use the command "create_backprojection3d" to change it into a sinogram before we do the reconstruction?
I ask this question because I can run your example successfully. However, when I use my own dataset, it cannot reconstruct the object; the result is all zero.
My object is a convex object which has a larger extent along the z-axis. I am wondering if that will generate some problems.
Could you give me some suggestions? Thanks so much!
The '-sino' is indeed short for sinogram. It tells the Toolbox that there is projection data in that particular data structure. The alternative is '-vol', short for volume, for the data structure that will hold the reconstruction.
The term sinogram is used here for a 3D object, although the name originally stems from the 2D sinogram that you get by projecting a single slice. The 3D volume of data is a stack of projection images in one direction and a stack of sinograms in another direction. So you don't need to use astra.creators.create_backprojection3d_gpu.
I'm currently working on a new tutorial that starts from a real dataset. I suspect that that will be useful for you when it is finished (part one is already published, but that's only the introduction...)
Hey, thank you for your tutorial.
I tried it and the code works perfectly. But with my own data, I get wrong reconstructions and I have no idea why...
I have 500 projections over 200°. The distance between the detector and the source is 500 mm, and between the specimen and the detector 300 mm. So I thought of using these parameters:
distance_source_origin = 300 # [mm]
distance_origin_detector = int(500 - 300) # [mm]
Is that right?
detector_pixel_size = 0.2 # [mm]
detector_rows = 785 # Vertical size of detector [pixels].
detector_cols = 797 # Horizontal size of detector [pixels].
detector_sapcing_x = 159.4
detector_spacing_y = 157
num_of_projections = 500
angles = np.linspace(0, 2 * (200/360) * np.pi, num=num_of_projections, endpoint=False)
I think I have a geometrical problem, but I can't find it by myself at the moment.
The data looks like your example data, with the sinogram in the X dimension and the projections in the Y dimension.
Thank you in advance
Thanks for the tutorial. It was incredibly useful.
I am using Matlab instead of Python. Therefore I translated the main part of the example (parts I and II) to Matlab, joining the two parts to avoid writing the projections to disk, and also without Poisson noise, to try to get a clean, almost perfect, reconstruction.
Here you can download my code:
But I encounter a problem of artifacts in 3D reconstructions using CUDA with "cone" geometry.
These are the central slices of the reconstruction along the x-z and y-z plane, with unexpected artifacts:
I would like to know if these artifacts also appear in the Python implementation.
Could it be because the back-projectors and forward-projectors using GPUs are not matched?
It is a pity that you cannot compare against CPU projectors in 3D, since those are not supported by the ASTRA Toolbox.
Thank you in advance!
These artifacts do not appear in the Python reconstruction, so there must still be something wrong with your Matlab code. Note that the illustration in this article is the central slice from the actual reconstruction made with the Python script.
Thank you for your prompt response!
I have installed AstraToolbox in Python and have found that the artifacts also appear with your code.
The slice to be checked is in the second or third dimension of the reconstructed volume. The first dimension (the one that is being saved to disk) is fine.
I am just adding this:
import matplotlib.pyplot as plt

im = reconstruction[:, 100, :]  # Central slice, third dimension
plt.imshow(im, cmap='gray')
If you look at the original reconstruction result, before removing values less than zero in the line (reconstruction[reconstruction < 0] = 0), the artifact is even more evident.
Without non-negativity constraint:
With non-negativity constraint:
In this article, we’ll demonstrate how to implement reCAPTCHA v2 in a React application and how to verify user tokens in a Node.js backend.
Jump ahead:
- Prerequisites
- What is CAPTCHA?
- What is reCAPTCHA?
- Implementing reCAPTCHA in React
- Setting up a sample React project
- Using the Reaptcha wrapper
Prerequisites
To follow along with the examples in the tutorial portion of this article, you should have a foundational knowledge of:
- React and its concepts
- Creating servers with Node.js and Express.js
- HTTP requests
What is CAPTCHA?
CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a type of challenge-response security measure designed to differentiate between real website users and automated users, such as bots.
Many web services use CAPTCHAs to help prevent unwarranted and illicit activities such as spamming and password decryptions. CAPTCHAs require users to complete a simple test to demonstrate they are human and not a bot before giving them access to sensitive information.
What is reCAPTCHA?
There are several types of CAPTCHA systems, but the most widely used system is reCAPTCHA, a tool from Google. Luis von Ahn, the co-founder of Duolingo, created the tool back in 2007 and is being used by more than six million websites, including BBC, Bloomberg, and Facebook.
The first version of reCAPTCHA was made up of randomly generated sequences of distorted alphabetic and numeric characters and a text box.
To pass the test, a user needs to decypher the distorted characters and type them into the text box. Although computers are capable of creating images and generating responses, they can’t read or interpret information in the same way a person can to pass the test.
reCAPTCHA generates a response token for each request a user makes and sends it to Google’s API for verification. The API returns a score that determines if the user is human or an automated program.
reCAPTCHA currently has two working versions: v2 and v3. Although v3 is the most recent version of reCAPTCHA (released in 2018), most websites still use reCAPTCHA v2, which was released in 2014.
reCAPTCHA v2 has two variants: checkbox and invisible. The checkbox variant, which is also known as “I’m not a robot”, is the most popular. one option for this variant displays a checkbox widget that users can interact with to verify their identity.
The invisible variant displays a reCAPTCHA badge, indicating that the service is running in the background.
In some cases where a user’s behavior triggers suspicion, reCAPTCHA v2 will serve up a challenge that the user must pass to prove they’re not a bot.
Implementing reCAPTCHA in React
Now that we understand what reCAPTCHA is, let’s see how we can implement it in a React app. But first, we need to sign our app up for an API key in the Google reCAPTCHA console. The key pair consists of two keys: site key and secret key.
The site key invokes the reCAPTCHA service in our app. The secret key verifies the user’s response. It does this by authorizing the communication between our app’s backend and the reCAPTCHA server.
Go ahead and create your key pair here.
First, you’ll need to sign in with your Google account. Then, you’ll be redirected to the admin console. Next, you’ll fill out the registration form on the page to generate your site’s key pair.
The registration is fairly straightforward, but for clarity, I’ll explain what each form field means and how to fill each field.
Key pair registration
Label
For the label field, provide a name to help you recognize the purpose of the key pair that you’re creating. If you have more than one key pair set up on your account, the labels will help you distinguish between them.
Type
The type selector refers to the version of reCAPTCHA that you want to use on your site. You can choose either v3 or v2. Since this tutorial will only cover v2 implementation, go ahead and choose v2 and the “I am not a robot” variant.
Domains
The domains field is where you’ll set the domain names that will work with your reCAPTCHA. You can input a valid domain name or “localhost” if your project is still in development, and click + to add the domain.
Owner
The owner field is where you can provision access to your app’s reCAPTCHA to others. By default, you’ll be the owner, but you can add more individuals by providing their Google emails.
Once you’ve completed the form fields, check the necessary boxes and click Submit.
Now you should be able to see your site key and secret key. They will look similar to the ones shown here:
Next, we‘ll set up a React sample project and implement reCAPTCHA using the key pairs we just created.
Setting up a sample React project
To verify a user’s input with reCAPTCHA we require a server that’ll communicate with Google’s API. So we‘ll need to keep that in mind when setting up the project.
First, create a folder. For this sample project, I will name the folder react-node-app, but you can use a different name of your choosing.
Next, open the folder in your preferred IDE and run the following command:
npm init -y
This will create a package.json file that will help us manage our dependencies and keep track of our scripts.
Go ahead and bootstrap a React app with create-react-app by typing the following command in your terminal:
npx create-react-app my-app
This command will create a my-app folder inside the react-node-app folder and will install React inside the my-app folder.
After installation, open the my-app folder and clean up the unnecessary boilerplate code and files in the project folder, then create a Form.js component within the src folder.
Next, add the following code into the form component and import it inside the App.js main component.
const Form = () => {
    return (
        <form>
            <label htmlFor="name">Name</label>
            <input type="text" id="name" className="input" />
            <button>Submit</button>
        </form>
    )
}

export default Form
The above code is a simple login form with an input element and a Submit button.
Styling the form component isn't necessary, but if you'd like to add a little flair, add the following CSS code inside the App.css file in the project folder.
.input {
    width: 295px;
    height: 30px;
    border: rgb(122, 195, 238) 2px solid;
    display: block;
    margin-bottom: 10px;
    border-radius: 3px;
}

.input:focus {
    border: none;
}

label {
    display: block;
    margin-bottom: 2px;
    font-family: 'Courier New', Courier, monospace;
}

button {
    padding: 7px;
    margin-top: 5px;
    width: 300px;
    background-color: rgb(122, 195, 238);
    border: none;
    border-radius: 4px;
}
Now, start the development server with the command npm start in the terminal.
You should see a form similar to this displayed on your browser:
N.B., it is recommended to use a framework that has support for SSR (server-side rendering), like Next.js or Remix, when creating something similar for production.
Installing react-google-recaptcha
The react-google-recaptcha library enables the integration of Google reCAPTCHA v2 in React. The package provides a component that simplifies the process of handling and rendering reCAPTCHA in React with the help of useful props.
To install react-google-recaptcha, type and run the following command:
npm install --save react-google-recaptcha
Adding reCAPTCHA
After installing react-google-recaptcha, head over to the Form.js component file and import it, like so:
import ReCAPTCHA from "react-google-recaptcha"
Now add the reCAPTCHA component to the form, just before or after the Submit button. The placement is up to you: the reCAPTCHA widget will appear wherever the component is placed in the form when rendered.
<form>
    <label htmlFor="name">Name</label>
    <input type="text" id="name" className="input" />
    <ReCAPTCHA />
    <button>Submit</button>
</form>
As mentioned previously, the reCAPTCHA component accepts several props. However, the sitekey prop is the only prop we need to render the component. This prop facilitates the connection between the site key we generated earlier from the reCAPTCHA key pair and the reCAPTCHA component.
Here are the other optional props of the reCAPTCHA component:
- theme: changes the widget's theme to light or dark
- size: changes the size or type of CAPTCHA
- onErrored: fires a callback function if the test returns an error
- badge: changes the position of the reCAPTCHA badge (bottomright, bottomleft, or inline)
- ref: used to access the component instance API
<ReCAPTCHA sitekey={process.env.REACT_APP_SITE_KEY} />
Here we add a sitekey prop to the reCAPTCHA component and pass it an environment variable with the reCAPTCHA site key.
To do the same in your project, create a .env file in the root folder of your project. Next, add the following code to the file:
# .env
REACT_APP_SECRET_KEY = "Your secret key"
REACT_APP_SITE_KEY = "your site key"
This way, you can use the keys in your app by referencing the variable names where they're needed; the site key is used on the client side here, while the secret key will be used by the backend server that we set up later.
Now, if you save your code and go to the browser, a reCAPTCHA box should appear where the reCAPTCHA component is placed in your code. In this example, it appears before the submit button.
After each verification, we need to reset the reCAPTCHA for subsequent checks. To accomplish this, we need to add a ref prop to the reCAPTCHA component.
To use the ref prop, first import the useRef hook from React:
import React, { useRef } from 'react';
Next, store the ref value in a variable, like so:
const captchaRef = useRef(null)
Then, add the ref prop to the reCAPTCHA component and pass it the captchaRef variable:
<ReCAPTCHA sitekey={process.env.REACT_APP_SITE_KEY} ref={captchaRef} />
Here’s the entire code in our
Form component up to this point:
import React, { useRef } from 'react';
import ReCAPTCHA from "react-google-recaptcha"

const Form = () => {
    const captchaRef = useRef(null)

    return (
        <form>
            <label htmlFor="name">Name</label>
            <input type="text" id="name" className="input" />
            <ReCAPTCHA
                sitekey={process.env.REACT_APP_SITE_KEY}
                ref={captchaRef}
            />
            <button>Submit</button>
        </form>
    )
}

export default Form
Now that we have a working widget, we just need to complete three steps to get reCAPTCHA functioning:
- Get the response token from the reCAPTCHA component
- Reset the reCAPTCHA component for subsequent checks
- Verify the response token in the backend
Getting the response token
We can also use the ref prop to get the generated token from our reCAPTCHA. All we have to do is get the value of the ref with the following code:
const token = captchaRef.current.getValue();
Resetting reCAPTCHA for subsequent checks
If we add the above code to the form component, it will actually cause an error. This is because the value of the ref is still null, since the reCAPTCHA is in an unchecked state. To solve this issue, we'll add an onSubmit event handler to the form with a function that encapsulates the code:
const handleSubmit = (e) => {
    e.preventDefault();
    const token = captchaRef.current.getValue();
    captchaRef.current.reset();
}

return (
    <form onSubmit={handleSubmit}>
    …
    </form>
)
In the above code, we created a handleSubmit function. Inside this function, we added the token variable for getting the response token from reCAPTCHA, as well as code that resets the reCAPTCHA each time the form is submitted.
This way, the getValue() method will only attempt to get the ref's value, which is the response token, when the submit button is clicked.
Now if you log the token variable to the console, check the reCAPTCHA box, and submit the form, you should see a generated response token similar to the one below in your console:
Verifying the token in the Node.js backend
The token we generated in the previous section is only valid for two minutes, which means we need to verify it before it expires. To do so, we’ll need to set up our app’s backend and send the token to Google’s API to check the user’s score.
Setting up the Node.js backend
To set up a Node.js server, navigate back to the react-node-app folder, create a new folder, and name it server. Inside the server folder, create a new file and name it index.js. This file will serve as the entry point for our Node app.
Next, cd into the server folder and run the following command to install Express.js, Axios, and a couple of helper packages (CORS and dotenv):
npm i express cors axios dotenv --save
Now, add the following code inside the index.js file:
const express = require("express");
const router = express.Router();
const app = express();
const cors = require('cors');
const axios = require('axios');
const dotenv = require('dotenv').config()
const port = process.env.PORT || 2000;

//enabling cors
app.use(cors());

//Parse data
app.use(express.json());
app.use(express.urlencoded({extended: true}));

//add router in express
app.use("/", router);

//POST route
router.post("/post", async (req, res) => {
    //Destructuring response token from request body
    const {token} = req.body;

    //sends secret key and response token to google
    const response = await axios.post(
        `https://www.google.com/recaptcha/api/siteverify?secret=${process.env.SECRET_KEY}&response=${token}`
    );

    //check the verification result and send it back to the client-side
    if (response.data.success) {
        res.send("Human 👨 👩");
    } else {
        res.send("Robot 🤖");
    }
});

app.listen(port, () => {
    console.log(`server is running on ${port}`);
});
In the above code, we set up an Express server and created a /post route. Inside the endpoint function, we destructured the request body to get the token data that will be sent from the client side.
Then we created an axios.post request to Google's API with our SECRET_KEY passed in as an environment variable, as well as the token from the client side.
To set up an environment variable in Node.js, cd back to the react-node-app folder and run the following command:
npm install dotenv --save
After installation, create a .env file within the react-node-app folder, open the file, then add your site's secret key.
Beneath the axios.post request is an if statement that checks the verification result returned by the API and sends the outcome back to the client side.
Ok, let’s move on. Navigate back to the
react-node-app folder, open the
package.json file, and replace the script command with the following:
…
"scripts": {
    "start": "node server/index.js"
},
…
The above code will let us start our server using the npm start command when we run it in the terminal.
Save the project. Then, go to your terminal, open a new terminal tab, cd into the server folder, and start the server by running npm start.
Checking the user’s score
Next, we'll send an axios.post request from the client side (React app) to our server, with the generated token as the data.
To do this, navigate back to your React app and paste the following code inside the handleSubmit function we created earlier:
const handleSubmit = async (e) => {
    e.preventDefault();
    const token = captchaRef.current.getValue();
    captchaRef.current.reset();

    await axios.post(process.env.REACT_APP_API_URL, {token})
        .then(res => console.log(res))
        .catch((error) => {
            console.log(error);
        })
}
This code is an axios.post request that sends the generated token from reCAPTCHA to the Node.js backend.
If you save your code and run the app, you should see a reCAPTCHA form similar to this:
Using the reaptcha wrapper
react.captcha (Reaptcha) is an alternative solution for implementing reCAPTCHA in React. The library shares similar features with react-google-recaptcha, but unlike the former, Reaptcha handles reCAPTCHA’s callbacks inside React components and automatically injects the reCAPTCHA script into the head DOM element.
This way, your applications would not have to depend on the library and directly communicate with the reCAPTCHA API when deployed.
To install Reaptcha, run the following command within your terminal:
npm install --save reaptcha
After installation, go to the Form.js file and import the Reaptcha component like so:
import Reaptcha from 'reaptcha';
The Reaptcha component provides several props that can be used to customize the rendering. Here is a list of the available props:
sitekey: This prop accepts the client key (site key we generated in the previous sections)
theme: an optional prop for changing the widget’s appearance (light or dark)
onLoad: an optional callback function that gets called when the Google reCAPTCHA script has been loaded
onVerify: an optional callback function that gets called when a user completes the captcha
onExpire: an optional callback function that gets called when the challenge is expired and has to be redone
explicit: an optional prop that allows the widget to be rendered explicitly, i.e., invisible
size: an optional prop that allows you to change the size of the widget to either of these: compact, normal, invisible
ref: prop used for accessing the component’s instance methods
Although most of these props look similar to the ones exposed by the react-google-recaptcha component, not all of them work as you'd expect. The ref prop, for one, doesn't have a method like getValue() for getting the response token. Instead, it uses a getResponse() instance method that returns the token with a promise.
Therefore, adding the component to the form component and retrieving the response token will be as follows:
const [captchaToken, setCaptchaToken] = useState(null);
const captchaRef = useRef(null);

const verify = () => {
    captchaRef.current.getResponse().then(res => {
        setCaptchaToken(res)
    })
}

return (
    <form onSubmit={handleSubmit}>
        <Reaptcha
            sitekey={process.env.REACT_APP_SITE_KEY}
            ref={captchaRef}
            onVerify={verify}
        />
    </form>
)
Here, we created a verify function. Inside it, we're fetching the response token from the ref variable using the getResponse() instance method. Since the method returns a promise, we chained a then method to it and passed the response to the captchaToken state variable.
We also pass the verify function to the onVerify prop on the component so that the function will only attempt to fetch the response token when a user completes the captcha.
The component's instance methods are utility functions that can be called to perform certain actions. Just as we used the getResponse method to grab the response token earlier, we can use other methods to perform different actions, like resetting the widget after every form submission. Here is a list of available instance methods:
reset
renderExplicitly
execute
getResponse
Visit the documentation to learn more about these methods and the Reaptcha library.
That’s it! You’ve successfully implemented a working Google reCAPTCHA and a backend server that verifies users’ responses in React.
Conclusion
In this article, we examined what reCAPTCHA is and how it works. We also walked through a tutorial to demonstrate how to implement reCAPTCHA in a React application and how to verify a user’s response token with a Node.js backend server.
I hope this article will help you build secure, bot-free applications.
4 Replies to “How to implement reCAPTCHA in a React application”
Great post. I ran into some trouble after naming my component in lower case, and React wasn't recognizing it as a component until I changed it to upper case.
Thank you, Mark. React component names always start with uppercase letters; React treats any component with a lowercase initial as an HTML element.
Hi, I ran into an issue where the form still submits even when the reCAPTCHA is not clicked.
Hi Rasam, you have to perform a conditional check based on the response you get from the server. If it’s positive, submit the form. If not, do otherwise. I hope this helps. | https://blog.logrocket.com/implement-recaptcha-react-application/ | CC-MAIN-2022-40 | en | refinedweb |
As a followup to the newpm -time-passes fix (D59366), this adds similar functionality to the legacy time-passes.
It enhances llvm::reportAndResetTimings to accept an optional stream for reporting output. By default it still reports into info-output-file.
It also fixes the function to actually reset after printing, as declared.
Just two inline nits.
You have a lot of namespace switches here. If you reorder slightly, that's unnecessary.
Moved the legacy test code around to remove the need for additional namespace manipulations.
Sorry, this second nit got lost: Is it clear from context what info-output-file is? Otherwise reference CreateInfoOutputFile() instead.
Is this one still necessary?
Well, both of these would require a search through sources if you do not know what it is.
info-output-file is a command-line level control, CreateInfoOutputFile is an API control.
I really do not have any preferences here.
Well... as I see it, it is quite common throughout the unit tests to use anonymous namespaces to hide test stuff (I gather to avoid any clashes with LLVM interfaces).
It's just that the weird requirement of INITIALIZE_PASS for legacy passes to be inside namespace llvm requires extra namespace busywork.
extra namespace busywork.
I'm open for any suggestions here.
Will it be more clear if I say "By default it uses file stream specified by -info-output-file" ?
Technically that's not correct though, per default it uses CreateInfoOutputFile as far as the API is concerned. That that then reads the command line option is still one more hop.
Sure, you're opening another anonymous namespace 30 lines down though.
ok
*that* is a completely different anonymous namespace, which is not nested into llvm.
this one:
namespace llvm {
namespace {
... passes here
}
}
is a namespace for Passes, which are required to be llvm::something, otherwise INITIALIZE_PASS does not seem to work.
and the next one:
namespace {
... tests here
}
is a namespace for tests, which can be whatever you want.
I can remove the latter and put everything (passes and tests) inside the former (nested inside llvm::) one.
I don't see it as a good solution, but I don't have strong feelings about that.
One final language nit, otherwise LGTM!
Maybe 'into the stream created by CreateInfoOutputFile()'? Also in order to get the doxygen linking to kick in. | https://reviews.llvm.org/D59416?id=191476 | CC-MAIN-2022-40 | en | refinedweb |
06-24-2020 11:27 AM - edited 06-24-2020 12:59 PM
I'm downloading WebEx recordings via the NBR API's DownloadNBRStorageFile call. However, it appears there is no option to stream the response -- instead it returns the entire recording and the other xml data at once. I have some very large recordings (3+hours), so holding an entire recording in memory is not an option. Normally I'd just stream the download, but I don't see how that's possible with the multi-part SOAP response. Does anyone have any guidance for this?
For reference, here's what I'm doing now:
import requests
import io
from requests_toolbelt.multipart import decoder
resp = requests.post(url, headers=headers, data=data, stream=True) # huge object that consumes lots of memory
multipart_data = decoder.MultipartDecoder.from_response(resp)
audio = multipart_data.parts[2].content
b = io.BytesIO(audio) # another huge object
self.logger.info(f"Writing audio/video data to myfile.mp4")
with open("myfile.mp4", "wb") as outfile:
outfile.write(b.getbuffer())
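One possible workaround (a sketch under assumptions, not an official NBR API feature): spool the raw multipart response to a temporary file in chunks, then locate the part boundaries in the on-disk copy with mmap and copy the recording payload out piecewise. It assumes the same url/headers/data as above and that the recording is again the third part (index 2).

```python
import mmap
import os
import re
import tempfile

import requests

with requests.post(url, headers=headers, data=data, stream=True) as resp:
    boundary = re.search(r'boundary="?([^";]+)"?',
                         resp.headers["Content-Type"]).group(1).encode()
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        for chunk in resp.iter_content(chunk_size=1024 * 1024):
            tmp.write(chunk)          # spool to disk instead of RAM
        tmp_name = tmp.name

with open(tmp_name, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
    delim = b"--" + boundary
    offsets = []                      # positions of every boundary marker
    pos = mm.find(delim)
    while pos != -1:
        offsets.append(pos)
        pos = mm.find(delim, pos + len(delim))
    start = mm.find(b"\r\n\r\n", offsets[2]) + 4   # skip the part's MIME headers
    end = offsets[3] - 2                           # drop the CRLF before the next boundary
    with open("myfile.mp4", "wb") as outfile:
        step = 8 * 1024 * 1024
        while start < end:
            outfile.write(mm[start:min(start + step, end)])
            start += step

os.remove(tmp_name)
```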