sequelize-typescript
Decorators and some other features for sequelize (v6).
- Installation
- Model Definition
- Usage
- Model association
- Indexes
- Repository mode
- Model validation
- Scopes
- Hooks
- Why () => Model?
- Recommendations and limitations
Installation
- this assumes usage of sequelize@6
- sequelize-typescript requires sequelize
- additional typings as documented here and reflect-metadata

npm install --save-dev @types/node @types/validator
npm install sequelize reflect-metadata sequelize-typescript

Your tsconfig.json needs the following flags:

"target": "es6", // or a more recent ecmascript version
"experimentalDecorators": true,
"emitDecoratorMetadata": true
Sequelize Options
SequelizeConfig renamed to SequelizeOptions.
modelPaths property renamed to models.

Scopes Options
The @Scopes and @DefaultScope decorators now take lambdas as options:

@DefaultScope(() => ({...}))
@Scopes(() => ({...}))

instead of the deprecated way:

@DefaultScope({...})
@Scopes({...})
Model definition

import { Table, Column, Model, HasMany } from 'sequelize-typescript'

@Table
class Person extends Model {
  @Column
  name: string

  @Column
  birthday: Date

  @HasMany(() => Hobby)
  hobbies: Hobby[]
}
Less strict

import { Table, Model } from 'sequelize-typescript'

@Table
class Person extends Model {}

More strict

import { Optional } from 'sequelize'
import { Table, Model } from 'sequelize-typescript'

interface PersonAttributes {
  id: number
  name: string
}

interface PersonCreationAttributes extends Optional<PersonAttributes, 'id'> {}

@Table
class Person extends Model<PersonAttributes, PersonCreationAttributes> {}
The model needs to extend the Model class and has to be annotated with the @Table decorator. All properties that should appear as a column in the database require the @Column annotation.
See a more advanced example here.
@Table
The @Table annotation can be used without passing any parameters. To specify additional define options, use an object literal (all define options from sequelize are valid):

@Table({
  timestamps: true,
  ...
})
class Person extends Model {}
Table API

Primary key
A primary key (id) will be inherited from the base class Model. This primary key is by default an INTEGER and has autoIncrement=true (this behaviour is a native sequelize thing). The id can easily be overridden by marking another attribute as the primary key. So either set @Column({primaryKey: true}) or use @PrimaryKey together with @Column.
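For illustration only (the uuid attribute below is an assumption, not something the library defines), overriding the inherited primary key could look like this:

import { Table, Column, Model, DataType, Default, IsUUID, PrimaryKey } from 'sequelize-typescript'

@Table
class Person extends Model {
  @IsUUID(4)
  @Default(DataType.UUIDV4)
  @PrimaryKey               // replaces the inherited auto-increment "id"
  @Column(DataType.UUID)
  uuid: string
}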
@CreatedAt, @UpdatedAt, @DeletedAt
Annotations to define custom and type safe createdAt, updatedAt and deletedAt attributes:

@CreatedAt
creationDate: Date;

@UpdatedAt
updatedOn: Date;

@DeletedAt
deletionDate: Date;
@Column
The @Column annotation can be used without passing any parameters, but then it is necessary that the js type can be inferred automatically (see Type inference for details).

@Column
name: string;

If the type cannot or should not be inferred, use:

import { DataType } from 'sequelize-typescript';

@Column(DataType.TEXT)
name: string;
Or for a more detailed column description, use an object literal (all attribute options from sequelize are valid):
@Column({
  type: DataType.FLOAT,
  comment: 'Some value',
  ...
})
value: number;
Column API

Shortcuts
If you're in love with decorators, sequelize-typescript provides some more of them. The following decorators can be used together with the @Column annotation to make some attribute options more easily available:
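As a hedged illustration of a few of these shortcut decorators (AllowNull, AutoIncrement, Default, PrimaryKey and Unique are decorators exported by the library; the column names below are made up):

import { Table, Column, Model, AllowNull, AutoIncrement, Default, PrimaryKey, Unique } from 'sequelize-typescript'

@Table
class Person extends Model {
  @AutoIncrement
  @PrimaryKey
  @Column
  id: number           // primaryKey: true, autoIncrement: true

  @Unique
  @Column
  email: string        // unique: true

  @AllowNull(false)
  @Default('unknown')
  @Column
  nickname: string     // allowNull: false, defaultValue: 'unknown'
}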
Type inference

Only some types can be automatically inferred from the javascript type (for example string, number, boolean and Date); others have to be defined explicitly.
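A short sketch of the difference between inferred and explicitly typed columns (the property names are illustrative):

import { Table, Column, Model, DataType } from 'sequelize-typescript'

@Table
class Post extends Model {
  @Column                 // inferred from the js type: string -> STRING
  title: string

  @Column                 // inferred from the js type: Date -> DATE
  postedAt: Date

  @Column(DataType.TEXT)  // TEXT cannot be inferred, so it is set explicitly
  body: string
}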
Accessors
Get/set accessors do work as well:

@Table
class Person extends Model {
  @Column
  get name(): string {
    return 'My name is ' + this.getDataValue('name')
  }

  set name(value: string) {
    this.setDataValue('name', value)
  }
}
Usage
Except for minor variations sequelize-typescript will work like pure sequelize. (See sequelize docs)
Configuration
To make the defined models available, you have to configure a Sequelize instance from sequelize-typescript(!).

import { Sequelize } from 'sequelize-typescript'

const sequelize = new Sequelize({
  database: 'some_db',
  dialect: 'sqlite',
  username: 'root',
  password: '',
  storage: ':memory:',
  models: [__dirname + '/models'], // or [Player, Team]
})
Before you can use your models you have to tell sequelize where they can be found. So either set models in the sequelize config or add the required models later on by calling sequelize.addModels([Person]) or sequelize.addModels([__dirname + '/models']):

sequelize.addModels([Person])
sequelize.addModels(['path/to/models'])
globs

import { Sequelize } from 'sequelize-typescript';

const sequelize = new Sequelize({
  ...
  models: [__dirname + '/**/*.model.ts']
});

// or
sequelize.addModels([__dirname + '/**/*.model.ts']);
Model-path resolving
A model is matched to a file by its filename. E.g.

// File User.ts matches the following exported model.
export class User extends Model {}
This is done by comparison of the filename against all exported members. The matching can be customized by specifying the modelMatch function in the configuration object.
For example, if your models are named user.model.ts, and your class is called User, you can match these two by using the following function:

import { Sequelize } from 'sequelize-typescript';

const sequelize = new Sequelize({
  models: [__dirname + '/models/**/*.model.ts'],
  modelMatch: (filename, member) => {
    return filename.substring(0, filename.indexOf('.model')) === member.toLowerCase();
  },
});
For each file that matches the *.model.ts pattern, the modelMatch function will be called with its exported members. E.g. for the following file

// user.model.ts
import { Table, Column, Model } from 'sequelize-typescript';

export const UserN = 'Not a model';
export const NUser = 'Not a model';

@Table
export class User extends Model {
  @Column
  nickname: string;
}

The modelMatch function will be called three times with the following arguments:

user.model UserN -> false
user.model NUser -> false
user.model User  -> true (User will be added as model)
Another way to match model to file is to make your model the default export.
export default class User extends Model {}
⚠️ When using paths to add models, keep in mind that they will be loaded during runtime. This means that the path may differ from development time to execution time. For instance, using the .ts extension within paths will only work together with ts-node.
Build and create
Instantiation and inserts can be achieved in the good old sequelize way:

const person = Person.build({ name: 'bob', age: 99 })
person.save()

Person.create({ name: 'bob', age: 99 })

but sequelize-typescript also makes it possible to create instances with new:

const person = new Person({ name: 'bob', age: 99 })
person.save()
Find and update
Finding and updating entries also works like in native sequelize. See the sequelize docs for more details.

Person.findOne().then((person) => {
  person.age = 100
  return person.save()
})

Person.update(
  { name: 'bobby' },
  { where: { id: 1 } }
).then(() => {})
Model association
Relations can be described directly in the model by the @HasMany, @HasOne, @BelongsTo, @BelongsToMany and @ForeignKey annotations.
One-to-many

@Table
class Player extends Model {
  @Column
  name: string

  @Column
  num: number

  @ForeignKey(() => Team)
  @Column
  teamId: number

  @BelongsTo(() => Team)
  team: Team
}

@Table
class Team extends Model {
  @Column
  name: string

  @HasMany(() => Player)
  players: Player[]
}
That's all, sequelize-typescript does everything else for you. So when retrieving a team by find

Team.findOne({ include: [Player] }).then((team) => {
  team.players.forEach((player) => console.log(`Player ${player.name}`))
})

the players will also be resolved (when passing include: Player to the find options).
Many-to-many

@Table
class Book extends Model {
  @BelongsToMany(() => Author, () => BookAuthor)
  authors: Author[]
}

@Table
class Author extends Model {
  @BelongsToMany(() => Book, () => BookAuthor)
  books: Book[]
}

@Table
class BookAuthor extends Model {
  @ForeignKey(() => Book)
  @Column
  bookId: number

  @ForeignKey(() => Author)
  @Column
  authorId: number
}
Type safe through-table instance access
To access the through-table instance (instance of BookAuthor in the upper example) type safely, the type needs to be set up manually. For the Author model it can be achieved like so:

@BelongsToMany(() => Book, () => BookAuthor)
books: Array<Book & { BookAuthor: BookAuthor }>;
One-to-one
For one-to-one use @HasOne(...) (foreign key for the relation exists on the other model) and @BelongsTo(...) (foreign key for the relation exists on this model).
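As a hedged sketch (not taken from the original README), a one-to-one pair could look like this, assuming an Address model that holds the foreign key:

import { Table, Column, Model, ForeignKey, BelongsTo, HasOne } from 'sequelize-typescript'

@Table
class Address extends Model {
  @ForeignKey(() => Person)
  @Column
  personId: number

  @BelongsTo(() => Person)   // foreign key lives on this model
  person: Person
}

@Table
class Person extends Model {
  @HasOne(() => Address)     // foreign key lives on the other model
  address: Address
}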
@ForeignKey, @BelongsTo, @HasMany, @HasOne, @BelongsToMany API
Note that when using AssociationOptions, certain properties will be overwritten when the association is built, based on reflection metadata or explicit attribute parameters. For example, as will always be the annotated property's name, and through will be the explicitly stated value.
Multiple relations of same models
sequelize-typescript resolves the foreign keys by identifying the corresponding class references. So if you define a model with multiple relations like

@Table
class Book extends Model {
  @ForeignKey(() => Person)
  @Column
  authorId: number

  @BelongsTo(() => Person)
  author: Person

  @ForeignKey(() => Person)
  @Column
  proofreaderId: number

  @BelongsTo(() => Person)
  proofreader: Person
}

@Table
class Person extends Model {
  @HasMany(() => Book)
  writtenBooks: Book[]

  @HasMany(() => Book)
  proofedBooks: Book[]
}
sequelize-typescript cannot know which foreign key to use for which relation. So you have to add the foreign keys explicitly:
// in class "Book":
@BelongsTo(() => Person, 'authorId')
author: Person;

@BelongsTo(() => Person, 'proofreaderId')
proofreader: Person;

// in class "Person":
@HasMany(() => Book, 'authorId')
writtenBooks: Book[];

@HasMany(() => Book, 'proofreaderId')
proofedBooks: Book[];
Type safe usage of auto generated functions
With the creation of a relation, sequelize generates some methods on the corresponding models. So when you create a 1:n relation between ModelA and ModelB, an instance of ModelA will have the functions getModelBs, setModelBs, addModelB, removeModelB, hasModelB. These functions still exist with sequelize-typescript, but TypeScript won't recognize them and will complain if you try to access getModelBs, setModelBs or addModelB. To make TypeScript happy, the Model.prototype of sequelize-typescript has $set, $get, $add functions.
@Table
class ModelA extends Model {
  @HasMany(() => ModelB)
  bs: ModelB[]
}

@Table
class ModelB extends Model {
  @BelongsTo(() => ModelA)
  a: ModelA
}
To use them, pass the property key of the respective relation as the first parameter:
const modelA = new ModelA()

modelA
  .$set('bs', [
    /* instance */
  ])
  .then(/* ... */)

modelA.$add('b' /* instance */).then(/* ... */)
modelA.$get('bs').then(/* ... */)
modelA.$count('bs').then(/* ... */)
modelA.$has('bs').then(/* ... */)
modelA.$remove('bs' /* instance */).then(/* ... */)
modelA.$create('bs' /* value */).then(/* ... */)
Indexes

@Index
The @Index annotation can be used without passing any parameters.

@Table
class Person extends Model {
  @Index // Define an index with default name
  @Column
  name: string

  @Index // Define another index
  @Column
  birthday: Date
}

To specify index and index field options, use an object literal (see indexes define option):

@Table
class Person extends Model {
  @Index('my-index') // Define a multi-field index on name and birthday
  @Column
  name: string

  @Index('my-index') // Add birthday as the second field to my-index
  @Column
  birthday: Date

  @Index({
    // index options
    name: 'job-index',
    parser: 'my-parser',
    type: 'UNIQUE',
    unique: true,
    where: { isEmployee: true },
    concurrently: true,
    using: 'BTREE',
    operator: 'text_pattern_ops',
    prefix: 'test-',
    // index field options
    length: 10,
    order: 'ASC',
    collate: 'NOCASE'
  })
  @Column
  jobTitle: string

  @Column
  isEmployee: boolean
}
Index API

createIndexDecorator()
The createIndexDecorator() function can be used to create a decorator for an index with options specified with an object literal supplied as the argument. Fields are added to the index by decorating properties.

const SomeIndex = createIndexDecorator()

const JobIndex = createIndexDecorator({
  // index options
  name: 'job-index',
  parser: 'my-parser',
  type: 'UNIQUE',
  unique: true,
  where: { isEmployee: true },
  concurrently: true,
  using: 'BTREE',
  operator: 'text_pattern_ops',
  prefix: 'test-'
})

@Table
class Person extends Model {
  @SomeIndex // Add name to SomeIndex
  @Column
  name: string

  @SomeIndex // Add birthday to SomeIndex
  @Column
  birthday: Date

  @JobIndex({
    // index field options
    length: 10,
    order: 'ASC',
    collate: 'NOCASE'
  })
  @Column
  jobTitle: string

  @Column
  isEmployee: boolean
}
Repository mode
With sequelize-typescript@1 comes a repository mode. See the docs for details.
The repository mode makes it possible to separate static operations like find, create, ... from model definitions. It also empowers models so that they can be used with multiple sequelize instances.

How to enable repository mode?
Enable repository mode by setting the repositoryMode flag:

const sequelize = new Sequelize({
  repositoryMode: true,
  ...,
});
How to use repository mode?
Retrieve the repository to create instances or perform search operations:

const userRepository = sequelize.getRepository(User)

const luke = await userRepository.create({ name: 'Luke Skywalker' })
const luke = await userRepository.findOne({ where: { name: 'luke' } })

How to use associations with repository mode?
For now one needs to use the repositories within the include options in order to retrieve or create related data:

const userRepository = sequelize.getRepository(User)
const addressRepository = sequelize.getRepository(Address)

userRepository.find({ include: [addressRepository] })
userRepository.create({ name: 'Bear' }, { include: [addressRepository] })

⚠️ This will change in the future: one will be able to refer to the model classes instead of the repositories.
Limitations of repository mode
Nested scopes and includes in general won't work when using the @Scopes annotation together with repository mode like:

@Scopes(() => ({
  // includes
  withAddress: {
    include: [() => Address]
  },
  // nested scopes
  withAddressIncludingLatLng: {
    include: [() => Address.scope('withLatLng')]
  }
}))
@Table
class User extends Model {}

⚠️ This will change in the future: simple includes will be implemented.
Model validation
Validation options can be set through the @Column annotation, but if you prefer to use separate decorators for validation instead, you can do so by simply adding the validate options as decorators: validate.isEmail=true becomes @IsEmail, validate.equals='value' becomes @Equals('value') and so on. Please notice that a validator that expects a boolean is translated to an annotation without a parameter.
See the sequelize docs for all validators.
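A brief sketch of the decorator form (the column names are illustrative; a fuller example follows below):

import { Table, Column, Model, IsEmail, Equals } from 'sequelize-typescript'

@Table
class Contact extends Model {
  @IsEmail           // equivalent to validate.isEmail = true
  @Column
  email: string

  @Equals('active')  // equivalent to validate.equals = 'active'
  @Column
  status: string
}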
Exceptions
The following validators cannot simply be translated from a sequelize validator to an annotation:

Example

const HEX_REGEX = /^#([A-Fa-f0-9]{6}|[A-Fa-f0-9]{3})$/

@Table
export class Shoe extends Model {
  @IsUUID(4)
  @PrimaryKey
  @Column
  id: string

  @Equals('lala')
  @Column
  readonly key: string

  @Contains('Special')
  @Column
  special: string

  @Length({ min: 3, max: 15 })
  @Column
  brand: string

  @IsUrl
  @Column
  brandUrl: string

  @Is('HexColor', (value) => {
    if (!HEX_REGEX.test(value)) {
      throw new Error(`"${value}" is not a hex color value.`)
    }
  })
  @Column
  primaryColor: string

  @Is(function hexColor(value: string): void {
    if (!HEX_REGEX.test(value)) {
      throw new Error(`"${value}" is not a hex color value.`)
    }
  })
  @Column
  secondaryColor: string

  @Is(HEX_REGEX)
  @Column
  tertiaryColor: string

  @IsDate
  @IsBefore('2017-02-27')
  @Column
  producedAt: Date
}
Scopes
Scopes can be defined with annotations as well. The scope options are identical to native sequelize (see the sequelize docs for more details).

@DefaultScope and @Scopes

@DefaultScope(() => ({
  attributes: ['id', 'primaryColor', 'secondaryColor', 'producedAt']
}))
@Scopes(() => ({
  full: {
    include: [Manufacturer]
  },
  yellow: {
    where: { primaryColor: 'yellow' }
  }
}))
@Table
export class ShoeWithScopes extends Model {
  @Column
  readonly secretKey: string

  @Column
  primaryColor: string

  @Column
  secondaryColor: string

  @Column
  producedAt: Date

  @ForeignKey(() => Manufacturer)
  @Column
  manufacturerId: number

  @BelongsTo(() => Manufacturer)
  manufacturer: Manufacturer
}
Hooks
Hooks can be attached to your models. All model-level hooks are supported. See the related unit tests for a summary.
Each hook must be a static method. Multiple hooks can be attached to a single method, and you can define multiple methods for a given hook.
The name of the method cannot be the same as the name of the hook (for example, a @BeforeCreate hook method cannot be named beforeCreate). That's because Sequelize has pre-defined methods with those names.

@Table
export class Person extends Model {
  @Column
  name: string

  @BeforeUpdate
  @BeforeCreate
  static makeUpperCase(instance: Person) {
    // this will be called when an instance is created or updated
    instance.name = instance.name.toLocaleUpperCase()
  }

  @BeforeCreate
  static addUnicorn(instance: Person) {
    // this will also be called when an instance is created
    instance.name += ' 🦄'
  }
}
Why () => Model?
@ForeignKey(Model) is much easier to read, so why is @ForeignKey(() => Model) so important? When it comes to circular dependencies (which are in general resolved by node for you), Model can be undefined when it gets passed to @ForeignKey. With the usage of a function, which returns the actual model, we prevent this issue.
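A hedged sketch of the circular-dependency situation the lambda protects against, assuming two model files (book.model.ts and person.model.ts) that import each other:

// book.model.ts
import { Table, Column, Model, ForeignKey, BelongsTo } from 'sequelize-typescript'
import { Person } from './person.model'

@Table
export class Book extends Model {
  // At import time Person may still be undefined because of the circular
  // import; the arrow function defers the lookup until it is actually needed.
  @ForeignKey(() => Person)
  @Column
  authorId: number

  @BelongsTo(() => Person)
  author: Person
}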
Recommendations and limitations

One Sequelize instance per model (without repository mode)
Unless you are using the repository mode, you won't be able to add one and the same model to multiple Sequelize instances with differently configured connections. So one model will only work for one connection.

One model class per file
This is not only good practice regarding design, but also matters for the order of execution. Since TypeScript creates a __metadata("design:type", SomeModel) call due to the emitDecoratorMetadata compile option, in some cases SomeModel is probably not defined (not undefined!) and would throw a ReferenceError. When putting SomeModel in a separate file, it would look like __metadata("design:type", SomeModel_1.SomeModel), which does not throw an error.
Minification
If you need to minify your code, you need to set tableName and modelName in the DefineOptions for the @Table annotation. sequelize-typescript uses the class name as the default name for tableName and modelName. When the code is minified the class name will no longer be the originally defined one (so that class User will become class b, for example).
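A minimal sketch of pinning both names explicitly (the values shown are illustrative):

@Table({
  tableName: 'users', // explicit table name survives minification
  modelName: 'User'   // explicit model name survives minification
})
class User extends Model {
  @Column
  name: string
}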
Contributing
To contribute you can:
- Open issues and participate in discussion of other issues.
- Fork the project to open up PR's.
- Update the types of Sequelize.
- Anything else constructively helpful.
In order to open a pull request please:
- Create a new branch.
- Run tests locally (npm install && npm run build && npm run cover) and ensure your commits don't break the tests.
- Document your work well with commit messages, a good PR description, comments in code when necessary, etc.
In order to update the types for sequelize please go to the Definitely Typed repo. It would also be a good idea to open a PR into sequelize so that Sequelize can maintain its own types, but that might be harder than getting updated types into Microsoft's repo. The TypeScript team is slowly trying to encourage npm package maintainers to maintain their own typings, but Microsoft still has dedicated and good people maintaining the DT repo, accepting PRs and keeping quality high.
Keep in mind sequelize-typescript does not provide typings for sequelize - these are separate things. A lot of the types in sequelize-typescript augment, refer to, or extend what sequelize already has.
from datascience import *
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

import plotly.express as px
Table.interactive_plots()
Note: We're not going to be able to work through this entire notebook in lecture; you should definitely review whatever we don't get a chance to finish.
Our dataset comes from Times Higher Education (THE)'s World University Rankings 2020. These are slightly outdated as there is a 2021 ranking now, but the data is still relevant.
world = Table.read_table('data/World_University_Rank_2020.csv')
world
... (1386 rows omitted)
It's always good to check how many schools we're dealing with:
world.num_rows
1396
Some columns ('Number_students', 'International_Students', 'Percentage_Female', 'Percentage_Male') have commas and percentage symbols, meaning they can't be stored as integers. Let's clean them.
# Notice how we use apply here!
def remove_symbol(s):
    return int(s.replace('%', '').replace(',', ''))
# Remember, the result of calling apply is an array
world.apply(remove_symbol, 'Number_students')
array([20664, 2240, 18978, ..., 15236, 17101, 9285])
world = world.with_columns(
    'Number_students', world.apply(remove_symbol, 'Number_students'),
    'International_Students', world.apply(remove_symbol, 'International_Students'),
    'Percentage_Female', world.apply(remove_symbol, 'Percentage_Female'),
    'Percentage_Male', world.apply(remove_symbol, 'Percentage_Male')
)
Now we can sort by any numeric column we want.
world.sort('Percentage_Female')
... (1386 rows omitted)
It seems like the above schools didn't report their sex breakdown, since 0% is the listed percentage of female and male students.
Let's start asking questions.
world = world.relabeled('International_Students', '% International')
world
... (1386 rows omitted)
Then, to compute the number of international students at each school, we take the number of students at each school, multiply by the percentage of international students at each school, and divide by 100.
world.column('Number_students') * world.column('% International') / 100
array([8472.24, 672. , 7021.86, ..., 457.08, 0. , 185.7 ])
We should probably round the result, since we can't have fractional humans.
num_international = np.round(world.column('Number_students') * world.column('% International') / 100, 0)
num_international
array([8472., 672., 7022., ..., 457., 0., 186.])
We can add this as a column to our table:
world = world.with_columns( '# International', num_international )
And we can sort by this column, while also selecting a subset of all columns just to focus on what's relevant:
world.select('University', 'Country', 'Number_students', '% International', '# International') \
     .sort('# International', descending = True)
... (1386 rows omitted)
This tells us that the University of Melbourne has the most international students, with 21,797. That's larger than many universities!
There are no US universities in the top 10 here. How can we find the universities in the US with the most international students?
Fill in the blanks so that the resulting table contains the 15 universities in the US with the most international students, sorted by number of international students in decreasing order.
# __(a)__ means blank a

# world.select('University', 'Country', 'Number_students', '% International', '# International') \
#      .where(__(a)__, __(b)__) \
#      .sort('# International', __(c)__) \
#      .take(__(d)__)
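One possible way to fill in the blanks (not from the original notebook; it assumes the country is labeled 'United States' in this dataset):

world.select('University', 'Country', 'Number_students', '% International', '# International') \
     .where('Country', are.equal_to('United States')) \
     .sort('# International', descending = True) \
     .take(np.arange(15))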
If you do a quick Google search for "US universities with the most international students", you'll see NYU is usually #1. Cool!
This means they come up with a 'Teaching', 'Research', 'Citations', 'International_Outlook', and 'Industry_Income' score from 0 to 100 for each school, then compute a weighted average according to the above percentages to compute a school's 'Score_Result', which is how the schools are ranked.
Let's confirm this ourselves. First, let's get a subset of the columns since they won't all be relevant here.
scores_only = world.select('Score_Rank', 'University', 'Teaching', 'Research', 'Citations', 'International_Outlook', 'Industry_Income', 'Score_Result')
scores_only
... (1386 rows omitted)
The graphic tells us that the weights for each column are:
- 'Teaching': 0.3
- 'Research': 0.3
- 'Citations': 0.3
- 'International_Outlook': 0.075
- 'Industry_Income': 0.025

(Remember, to convert from a percentage to a proportion we divide by 100.)
Let's try and apply this to the school at the very top of the table, University of Oxford.
0.3 * 90.5 + \
0.3 * 99.6 + \
0.3 * 98.4 + \
0.075 * 96.4 + \
0.025 * 65.5
95.4175
The result, 95.4175, matches what we see in the 'Score_Result' column for University of Oxford.
We can apply the above formula to all rows in our table as well.
score_result_manual_calculation = \
    0.3 * scores_only.column('Teaching') + \
    0.3 * scores_only.column('Research') + \
    0.3 * scores_only.column('Citations') + \
    0.075 * scores_only.column('International_Outlook') + \
    0.025 * scores_only.column('Industry_Income')
score_result_manual_calculation
array([95.4175, 94.5475, 94.3775, ..., 11.055 , 10.9625, 10.6875])
To confirm that the results we got match the 'Score_Result' column in scores_only, we can add the above array to our table:
scores_only.with_columns( 'Score Result Manual', score_result_manual_calculation )
... (1386 rows omitted)
This shows we've successfully reverse-engineered how the rankings work!
Now that we know how to compute 'Score_Result's using THE's percentages, we can also pick our own percentages if we want to prioritize different components in our ranking.
For instance, we may feel like THE's methodology places too much emphasis on research – together, 'Research' and 'Citations' make up 60% of the overall score.
We could choose to use the following breakdown, which we'll call "Breakdown 1":
- 'Teaching': 60%
- 'International_Outlook': 30%
- 'Industry_Income': 10%
breakdown_1 = 0.6 * scores_only.column('Teaching') \
    + 0.3 * scores_only.column('International_Outlook') \
    + 0.1 * scores_only.column('Industry_Income')

breakdown_1
array([89.77, 88.81, 89.27, ..., 18.9 , 17.09, 18.39])
This gives us new overall scores for each school; we can add this column to our table and sort by it.
scores_only = scores_only.with_columns( 'Breakdown 1', breakdown_1 )
scores_only.sort('Breakdown 1', descending = True)
... (1386 rows omitted)
scores_only.sort('Breakdown 1', descending = True).take(23)
Note that when we choose this methodology, UC Berkeley is ranked much lower (24th instead of 13th). This is likely due to:
- 'Research' and 'Citations' scores not being included in the ranking
- 'Teaching' score
- 'Industry_Income'. This component factors in the amount that the university receives in funding from industrial partners – given that it's a public school it's unsurprising that this amount is low, but also many "wealthy" universities have a relatively low score here too, so it's not clear how much this should matter (see here for more).
Maybe we want to place some emphasis on research, but not as much as was placed in the initial ranking. We could then make "Breakdown 2":
- 'Teaching': 50%
- 'Research': 15%
- 'Citations': 15%
- 'International_Outlook': 15%
- 'Industry_Income': 5%
Assign breakdown_2 to an array of overall scores for schools calculated according to our Breakdown 2 above, and add it as a column to scores_only with the label 'Breakdown 2'. Hint: start by copying our code for breakdown_1, which was:

breakdown_1 = 0.6 * scores_only.column('Teaching') \
    + 0.3 * scores_only.column('International_Outlook') \
    + 0.1 * scores_only.column('Industry_Income')
# Answer QC here before running the next cell
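One possible answer (not from the original notebook; the weights follow Breakdown 2 above), which also makes the next cell runnable:

breakdown_2 = 0.5 * scores_only.column('Teaching') \
    + 0.15 * scores_only.column('Research') \
    + 0.15 * scores_only.column('Citations') \
    + 0.15 * scores_only.column('International_Outlook') \
    + 0.05 * scores_only.column('Industry_Income')

scores_only = scores_only.with_columns(
    'Breakdown 2', breakdown_2
)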
scores_only.sort('Breakdown 2', descending = True)
"Breakdown 2" is much closer to THE's actual breakdown and the ordering here reflects that.
What do you care about in a university? Try your own breakdown!
We should note though that we haven't really thought about how THE comes up with the scores for each of the five categories (or the fact that university rankings have inherent flaws).
world
... (1386 rows omitted)
To determine the number of universities per country, we can group by 'Country':
world.group('Country')
... (82 rows omitted)
It's a good idea to sort too:
world.group('Country').sort('count', descending = True)
... (82 rows omitted)
How do we get the number of universities in each country with at least 25 universities on the list?
world.group('Country').where('count', are.above_or_equal_to(25))
... (7 rows omitted)
Run the cell below to see a bar graph of the number of universities in each country above.
world.group('Country').where('count', are.above_or_equal_to(25)).sort('count').barh('Country')
Ussuri Series Release Notes

10.2.0-66

New Features
Adds the ability to override the automatic detection of fluentd_version and fluentd_binary. These can now be defined as extra variables. This removes the dependency of having docker configured for config generation.
Adds support for collecting Prometheus metrics from RabbitMQ. This is enabled by default when Prometheus and RabbitMQ are enabled, and may be disabled by setting enable_prometheus_rabbitmq_exporter to false.
Bug Fixes
Fixes an issue with kolla-ansible bootstrap-servers if Zun is enabled where Zun-specific configuration for Docker was applied to all nodes. LP#1914378
Fix the issue when Swift deployed with S3 Token Middleware enabled. Fixes LP#1862765
Fixes the Northbound and Southbound database socket paths in OVN.
chronyd crash loop if server is rebooted (Debian) LP#1915528
Fixed an issue when Docker was configured after startup on Debian/Ubuntu, which resulted in iptables rules being created - before they were disabled. LP#1923203
A bug where sriov_agent.ini wasn't copied due to a Permission denied error was fixed. LP#1923467
Fixed an issue where docker python SDK 5.0.0 was failing due to missing six - introduced a constraint to install version lower than 5.x. LP#1928915
Fixes more-than-2-node RabbitMQ upgrade failing randomly. LP#1930293.
Fixes Swift deploy when TLS enabled. Added the missing handler and corrected the container name. LP#1931097
Fixes missing region_name in keystone_auth sections. See bug 1933025 for details.
Fixes iscsid failing in current CentOS 8 based images due to pid file being needlessly set. LP#1933033
Fixes host bootstrap on Debian not removing the conflicting packages. It now behaves in accordance with the docs. LP#1933122
Fixes potential issue with Alertmanger in non-HA deployments. In this scenario, peer gossip protocol is now disabled and Alertmanager won’t try to form a cluster with non-existing other instances. LP#1926463
Fixes an issue when generating /etc/hosts during kolla-ansible bootstrap-servers when one or more hosts has an api_interface with dashes (-) in its name. LP#1927357
Fixes some configuration issues around Barbican logging. LP#1891343
Fixes some configuration issues around Cinder logging. LP#1916752
Fixes the wrong configuration of the ovs-dpdk service, which broke the deployment of kolla-ansible. For more details please see bug 1908850.
Fixes an issue with keepalived which was not recreated during an upgrade if configuration is unchanged. LP#1928362
Fixes an issue with executing kolla-ansible when installed via pip install --user. LP#1915527
Fixes an issue where masakari.conf was generated for the masakari-instancemonitor service but not used.
Fixes an issue where masakari-monitors.conf was generated for the masakari-api and masakari-engine services but not used.
Uses a consistent variable name for container dimensions for masakari-instancemonitor - masakari_instancemonitor_dimensions. The old name of masakari_monitors_dimensions is still supported.
Fixes an issue with Octavia deployment when using a custom service auth project. If octavia_service_auth_project is set to a project that does not exist, Octavia deployment would fail. The project is now created. LP#1922100
Fixes LP#1892376 by updating deprecated syntax in the Monasca Elasticsearch template.
Removes whitespace around equal signs in zookeeper.cfg which were preventing the zkCleanup.sh script from running correctly.
10.2.0

New Features
Adds a new flag, docker_disable_default_iptables_rules, which defaults to no. Docker manipulates iptables rules by default to provide network isolation, and this might cause problems if the host already has an iptables based firewall. A common problem is that Docker sets the default policy of the FORWARD chain in the filter table to DROP. Setting docker_disable_default_iptables_rules to yes will disable Docker's iptables manipulation. This feature will be enabled by default from the Victoria 11.0.0 release.
Improves performance of the common role by generating all fluentd configuration in a single file.
Improves performance of the common role by generating all logrotate configuration in a single file.
Known Issues
Since Ussuri, there is a bug in how Ceph (RBD) is handled with Cinder: the backend_host option is missing from the generated configuration for external Ceph. The symptoms are that volumes become unmanageable until extra admin action is taken. This does not affect the data plane - running virtual machines are not affected.
There is a related issue regarding active-active cinder-volume services (single-host cinder-volume not affected), which is that they should not have been configured with backend_host in the first place but with cluster and proper coordination instead. Some users might have customised their config already to address this issue.
The Kolla team is investigating the best way to address this for all its users. In the meantime, please ensure that, before upgrading to Ussuri, the backend_host option is set to its previous value (the default was rbd:volumes) via a config override.
For more details please refer to the referenced bug. Do note this issue affects both new deployments and upgrades. LP#1904062
Upgrade Notes
When deploying Monasca with Logstash 6, any custom Logstash 2 configuration for Monasca will need to be updated to work with Logstash 6. Please consult the documentation.
The baremetal role now uses the CentOS 8 package repository for Docker CE (compared to 7 previously).
The Prometheus OpenStack exporter now uses internal endpoints to communicate with OpenStack services, to match the configuration of other services deployed by Kolla Ansible. Using public endpoints can be retained by setting the prometheus_openstack_exporter_endpoint_type variable to public.
The default value of REST_API_REQUIRED_SETTINGS was synchronized with Horizon. You may want to review settings exposed by the updated configuration.

Security Issues

Bug Fixes
Add support to use bifrost-deploy behind proxy. It uses existing container_proxy variable.
Fixes handling of /dev/kvm permissions to be more robust against host-level actions. LP#1681461
IPv6 fully-routed topology (/128 addressing) is now allowed (where applicable). LP#1848941
When deploying Elasticsearch 6, Logstash 2 was deployed by default which is not compatible with Elasticsearch 6. Logstash 6 is now deployed by default.
Fix Castellan (Barbican client) when used with enabled TLS. LP#1886615
Fixes the --configdir parameter to apply to the default passwords.yml location. LP#1887180
fluentd is now logging to /var/log/kolla/fluentd/fluentd.log instead of stdout. LP#1888852
Fixes the deploy-containers action missing for the Masakari role. LP#1889611
An issue has been fixed where the keystone container would be stuck in a restart loop with a message that the fernet key is stale. LP#1895723
Fixes the haproxy_single_service_split template to work with the default for mode (http). LP#1896591
Fixed invalid fernet cron file path on Debian/Ubuntu from /var/spool/cron/crontabs/root/fernet-cron to /var/spool/cron/crontabs/root. LP#1898765
Add with_first_found on placement for placement-api wsgi configuration to allow overwrite from users. LP#1898766
OVN will no longer schedule SNAT routers on compute nodes when neutron_ovn_distributed_fip is enabled. LP#1901960
RabbitMQ services are now restarted serially to avoid a split brain. LP#1904702
Fixes LP#1906796 by adding notice and note loglevels to monasca log-metrics drop configuration
Fixes Swift's stop action. It will no longer try to start the swift-object-updater container again. LP#1906944
Fixes an issue with the kolla-ansible prechecks command with Docker 20.10. LP#1907436
Fixes an issue with kolla-ansible mariadb_recovery when the mariadb container does not exist on one or more hosts. LP#1907658
Fixes Freezer deployment failing when kolla_dev_mode is used. LP#1888242
Fixes issues with some CloudKitty commands trying to connect to an external TLS endpoint using HTTP. LP#1888544
Fixes an issue where Docker may fail to start if iptables is not installed. LP#1899060.
Fixes an issue during deleting evacuated instances with encrypted block devices. LP#1891462
Fixes an issue where Keystone Fernet key rotation may fail due to permission denied error if the Keystone rotation happens before the Keystone container starts. LP#1888512
Fixes an issue with Keystone startup when Fernet key rotation does not occur within the configured interval. This may happen due to one of the Keystone hosts being down at the scheduled time of rotation, or due to uneven intervals between cron jobs. LP#1895723
Fixes an issue with Kibana upgrade on Debian/Ubuntu systems. LP#1901614
Reverts the arp_responder option setting to the default (‘False’) for the LinuxBridge agent, as this is known to cause problems with l2_population as well as other issues such as not being fully compatible with the allowed-address-pairs extension. LP#1892776
Fixes an issue with the Neutron Linux bridge ML2 driver where the firewall driver configuration was not applied. LP#1889455
Fixes an issue with Masakari and internal TLS where CA certificates were not copied into containers, and the path to the CA file was not configured. Depends on masakari bug 1873736 being fixed. LP#1888655
Fixes an issue where Grafana instances would race to bootstrap the Grafana DB. See LP#1888681.
Fixes LP#1892210 where the number of open connections to Memcached from neutron-server would grow over time until reaching the maximum set by memcached_connection_limit (5000 by default), at which point the Memcached instance would stop working.
Fixes an issue where, when Kafka default topic creation was used to create a Kafka topic, no redundant replicas were created in a multi-node cluster. LP#1888522. This affects Monasca which uses Kafka, and was previously masked by the legacy Kafka client used by Monasca which has since been upgraded in Ussuri. Monasca users with multi-node Kafka clusters should consult the Kafka documentation to increase the number of replicas.
Fixes an issue where the br_netfilter kernel module was not loaded on compute hosts. LP#1886796
The Prometheus OpenStack exporter now uses internal endpoints to communicate with OpenStack services, to match the configuration of other services deployed by Kolla Ansible.
Prevents adding a new Keystone host to an existing cluster when not targeting all Keystone hosts (e.g. due to --limit or --serial arguments), to avoid overwriting existing Fernet keys. LP#1891364
Reduce the use of SQLAlchemy connection pooling, to improve service reliability during a failover of the controller with the internal VIP. LP#1896635
No longer configures the Prometheus OpenStack exporter to use the prometheus Docker volume, which was never required.
Updates the default value of REST_API_REQUIRED_SETTINGS in Horizon local_settings, which enables some features such as selecting the default boot source for instances. LP#1891024
10.1.0

Upgrade Notes
Changes the default value of kibana_elasticsearch_ssl_verify from false to true. LP#1885110
Apache ZooKeeper will now be automatically deployed whenever Apache Storm is enabled.
Bug Fixes
Fixes an issue when using ip addresses instead of hostnames in Ansible inventory. OpenvSwitch role sets system-id based on inventory_hostname, which in case of ip addresses in is first ip octet. Such a deployment would result in multiple OVN chassis with duplicate name e.g. “10” connecting to OVN Southbound database - which spawns high numbers of create/delete events in Encap database table - leading to near 100% CPU usage of OVN/OVS/Neutron processes.
Fixes an issue with Manila deployment starting openvswitch and neutron-openvswitch-agent containers when enable_manila_backend_generic was set to False. LP#1884939
Fixes the Elasticsearch Curator cron schedule run. LP#1885732
Fixes an incorrect configuration for nova-conductor when a custom Nova policy was applied, preventing the nova_conductor container from starting successfully. LP#1886170
Fixes an incorrect Ceph keyring file configuration in gnocchi.conf, which prevented Gnocchi from connecting to Ceph. LP#1886711
In line with clients for other services used by Magnum, Cinder and Octavia also use endpoint_type = internalURL. In the same tune, these services also use the globally defined openstack_region_name.
Fix the configuration of the etcd service so that its protocol is independent of the value of the internal_protocol parameter. The etcd service is not load balanced by HAProxy, so there is no proxy layer to do TLS termination when internal_protocol is configured to be https.
Fixes LP#1885885 where the default chunk size in the Monasca Fluentd output plugin increased from 8MB to 256MB for file buffering which exceeded the limit allowed by the Monasca Log / Unified API.
Adds a new variable fluentd_elasticsearch_cacert, which defaults to the value of openstack_cacert. If set, this will be used to set the path of the CA certificate bundle used by Fluentd when communicating with Elasticsearch. LP#1885109
Improves error reporting in kolla-genpwd and kolla-mergepwd when input files are not in the expected format. LP#1880220.
Fixes Magnum trust operations in multi-region deployments.
Deploys Apache ZooKeeper if Apache Storm is enabled explicitly. ZooKeeper would only be deployed if Apache Kafka was also enabled, which is often done implicitly by enabling Monasca.
10.0.0

Prelude
The Kolla Ansible 10.0.0 (Ussuri) release includes the following notable changes:
- Ceph deployment support has been dropped
- configuration of external Ceph integration has been streamlined
- initial support for TLS encryption of backend API services, providing end-to-end encryption of API traffic for Barbican, Cinder, Glance, Heat, Horizon, Keystone, Nova and Placement
- support for deployment of Open Virtual Network (OVN) and integration of it with Neutron
New Features
Adds Elasticsearch Curator for managing aggregated log data.
Adds configuration variables cron_logrotate_rotation_interval and cron_logrotate_rotation_count to set the logrotate rotation interval and count.
Adds a mechanism to customize prometheus.yml. Please read the documentation for more details.
Adds support for two new Senlin services: senlin-conductor and senlin-health-manager. Both of these services are required for Senlin to be fully functional starting with the Ussuri release.
Adds a mechanism to copy user defined files via the extras directory of the prometheus config. This can be useful for certain prometheus config customizations that reference additional files. An example is setting up file based service discovery.
Adds a new variable, influxdb_datadir_volume. This allows you to control where the docker volume for InfluxDB is created. A performance tuning option is to set this to a path on a high performance flash drive.
Adds a new variable, kafka_datadir_volume. This allows you to control where the Kafka data is stored. Generally you will want this to be a spinning disk, or an array of spinning disks.
Adds a new container, zun-cni-daemon, for the Zun service. This container is a daemon service implementing the CNI plugin for Zun.
Allows operators to use custom parameters with the ceilometer-upgrade command. This is quite useful when using the dynamic pollster subsystem; that sub-system provides flexibility to create and edit pollster configs, which affects Gnocchi resource-type configurations. However, Ceilometer uses default and hard-coded resource-type configurations; if one customizes some of its default resource-types, he/she can get into trouble during upgrades. Therefore, the only way to work around it is to use the --skip-gnocchi-resource-types flag.
Adds new checks to kolla-ansible prechecks that validate that expected Ansible groups exist.
Kolla Ansible now checks that the local Ansible Python environment is coherent, i.e. that the Ansible in use can see Kolla Ansible. LP#1856346
Adds support for CentOS 8 as a host Operating System and base container image. This is the only major version of CentOS supported from the Ussuri release. The Train release supports both CentOS 7 and 8 hosts, and provides a route for migration.
Introduces user modifiable variables instead of fixed names for Ceph keyring files used by external Ceph functionality.
Configures all openstack services to use the globally defined Certificate Authority file to verify HTTPS connections. The global CA file is configured by the openstack_cacert parameter.
When kolla_copy_ca_into_containers is configured to yes, the certificate authority files in /etc/kolla/certificates/ca will be copied into service containers to enable trust for those CA certificates. This is required for any certificates that are either self-signed or signed by a private CA, and are not already present in the service image trust store. Otherwise, either CA validation will need to be explicitly disabled or the path to the CA certificate must be configured in the service using the openstack_cacert parameter.
Fluentd now buffers logs locally to file when the Monasca API is unreachable.
Adds configuration options to enable backend TLS encryption from HAProxy to the Keystone, Glance, Heat, Placement, Horizon, Barbican, and Cinder services. When used in conjunction with enabling TLS for service API endpoints, network communcation will be encrypted end to end, from client through HAProxy to the backend service.
Delegates execution of the Ansible uri module to service containers using kolla_toolbox. This will enable any certificates that are already copied and extracted into the service container to be automatically validated. This is particularly useful in the case that the certificate is either self-signed or signed by a local (private) CA.
Introduce External Ceph user IDs as variables to allow non-standard Ceph authentication IDs in OpenStack service configuration without the need to override configuration files.
Adds a --clean argument to kolla-mergepwd. It allows cleaning old (no longer used) keys from the passwords file.
Adds support for generating self-signed certificates for both the internal and external (public) networks via the kolla-ansible certificates command. If they are the same network, then the certificate files will be the same.
Self-signed TLS certificates can be used to test TLS in a development OpenStack environment. The kolla-ansible certificates command will generate the required self-signed TLS certificates. This command has been updated to first create a self-signed root certificate authority. The command then generates the internal and external facing certificates and signs them using the root CA. If backend TLS is enabled, the command will generate the backend certificate and sign it with the root CA.
HAProxy - Add the ability to define custom HAProxy services in {{ node_custom_config }}/haproxy/services.d/
Adds a new precheck for supported host OS distributions. Currently supported distributions are CentOS/RHEL 8, Debian Buster and Ubuntu Bionic. This check can be disabled by setting prechecks_enable_host_os_checks to false.
Adds support for deployment of OVN and integration of it with Neutron. This includes deployment of:
- OVN databases (ovn-sb-db and ovn-nb-db)
- Southbound and Northbound databases connector (ovn-northd)
- Hypervisor components ovn-controller and neutron-ovn-metadata-agent
Add Object Storage service (Swift) support for Ironic.
Adds support for managing Ceilometer dynamic pollster configuration in Kolla Ansible. This feature will look for configurations in {{ node_custom_config }}/ceilometer/pollster.d/ by default. If there are configs there, they are copied to the control nodes to configure the Ceilometer dynamic pollster sub-system.
Enables Galera node state checking by using the clustercheck script, which is used by HAProxy to define node up/down state.
Introduces a new configuration variable mariadb_wsrep_extra_provider_options allowing users to set additional WSREP options.
Adds support for the Neutron policy file in both .json and .yaml format.
Adds a new variable, openstack_tag, which is used as the default Docker image tag in place of openstack_release. The default value is openstack_release, with a suffix set via openstack_tag_suffix. The suffix is empty except on CentOS 8 where it is set to -centos8. This allows for the availability of images based on CentOS 7 and 8.
Prometheus server can now be disabled, allowing the exporters to be deployed without it. The default behaviour of deploying Prometheus server when Prometheus is enabled remains.
Known Issues
Python Requests library will not trust self-signed or privately signed CAs even if they are added into the OS trusted CA folder and update-ca-trust is executed. For services that rely on the Python Requests library, either CA verification must be explicitly disabled in the service or the path to the CA certificate must be configured using the openstack_cacert parameter.
Upgrade Notes
Adds a maximum supported version check for Ansible. Kolla Ansible now requires at least Ansible 2.8 and supports up to 2.9. See the blueprint for details.
Avoids unnecessary fact gathering using the setup module. This should improve the performance of environments using fact caching and the Ansible smart fact gathering policy. See the blueprint for details.
The SCSI target daemon (tgtd) has been removed for CentOS/RHEL 8. The default value of cinder_target_helper is now lioadm on CentOS/RHEL 8, but remains as tgtadm on other platforms.
For cinder (cinder-volume and cinder-backup), glance-api and manila keyrings behavior has changed and Kolla Ansible deployment will not copy those keys using wildcards (ceph.*), instead it will use newly introduced variables. Your environment may render unusable after an upgrade if your keys in /etc/kolla/config do not match the default values for the introduced variables.
The default migration_interface is moved from network_interface to api_interface, which is treated as an internal and secure network plane in most cases.
The gnocchi-statsd daemon is no longer enabled by default. If you are using the daemon, you will need to set enable_gnocchi_statsd: "yes" to continue using it in your deployment.
Erlang 22.x dropped support for HiPE so the rabbitmq_hipe_compile variable has been removed.
Changes the default value of enable_haproxy_memcached to no. Memcached has not been accessed via haproxy since at least the Rocky release. Users depending on haproxy for memcached for other software may want to change this back to yes.
Python 2.7 support has been dropped. The last release of Kolla Ansible to support Python 2.7 is OpenStack Train. The minimum version of Python now supported by Kolla Ansible is Python 3.6.
The default behavior for generating the cinder.conf template has changed. An rbd-1 section will be generated when external Ceph functionality is used, i.e. cinder_backend_ceph is set to true. Previously it was only included when the Kolla Ansible internal Ceph deployment mechanism was used.
The rbd section of nova.conf for nova-compute is now generated when nova_backend is set to "rbd". Previously it was only generated when both enable_ceph was "yes" and nova_backend was set to "rbd".
The kolla_logs Docker volume is now mounted into the Elasticsearch container to expose logs which were previously written erroneously to the container filesystem. It is up to the user to migrate any existing logs if they so desire and this should be done before applying this fix. LP#1859162
The default value for kolla_external_fqdn_cacert has been changed from "{{ node_config }}/certificates/haproxy-ca.crt" to "{{ node_config }}/certificates/ca/haproxy.crt", and the default value for kolla_internal_fqdn_cacert has been changed from "{{ node_config }}/certificates/haproxy-ca-internal.crt" to "{{ node_config }}/certificates/ca/haproxy-internal.crt". These variables set the value for the OS_CACERT environment variable in admin-openrc.sh. This has been done to allow these certificates to be copied into containers when kolla_copy_ca_into_containers is true.
Replaced kolla_external_fqdn_cacert and kolla_internal_fqdn_cacert with kolla_admin_openrc_cacert, which by default is not set. OS_CACERT is now set to the value of kolla_admin_openrc_cacert in the generated admin-openrc.sh file.
Glance deployment now uses Multi-Store support. Users that have default_stores in their service config overrides for glance-api.conf should remove it and use default_backend if needed.
The enable_cadf_notifications variable was removed. CADF is the default notification format in keystone. To enable keystone notifications, users can now set keystone_default_notifications_topic_enabled to yes or enable Ceilometer via enable_ceilometer.
Removes support for the enable_xtrabackup variable that was deprecated in favour of enable_mariabackup in the Train (9.0.0) release.
Support for deploying Ceph has been removed, after it was deprecated in Stein. Please use an external tool to deploy Ceph and integrate it with Kolla Ansible deployed OpenStack by following the external Ceph guide.
The octavia user is no longer given the admin role in the admin project. Octavia does not require this role and instead uses octavia user with admin role in service project. During an upgrade the octavia user is removed from the admin project.
For existing deployments this may cause problems, so an octavia_service_auth_project variable has been added which may be set to admin to return to the previous behaviour.
To switch an existing deployment from using the admin project to the service project, it will at least be necessary to create the required security group in the service project, and update octavia_amp_secgroup_list to this group's ID. Ideally the Amphora flavor and network would also be recreated in the service project, although this does not appear to be necessary for operation, and will impact existing Amphorae.
See bug 1873176 for details.
Support for configuration of Neutron related to integration with ONOS has been removed.
Support for deployment of OpenDaylight controller and configuration of Neutron related to integration with OpenDaylight have been removed.
Neutron Linux bridge and Open vSwitch Agents config has been split out into linuxbridge_agent.ini and openvswitch_agent.ini respectively. Please move your custom service config from ml2_conf.ini into those files.
The Monasca Log API has been removed. All logs now go to the unified Monasca API when Monasca is enabled. Any custom Fluentd configuration and inventory files will need to be updated. Any monasca_log_api containers will be removed automatically.
Deprecation Notes
Deprecates support for deploying with Hyper-V integrations. In Victoria support for these will be removed from Kolla Ansible.
This is dictated by lack of interest and maintenance.
See also the post to openstack-discuss
Deprecates support for deploying MongoDB. In Victoria support for deploying MongoDB will be removed from Kolla Ansible. Note CentOS 8 already lost support for MongoDB due to decisions made upstream.
This affects Panko as it will no longer be possible to get automatic deployment of MongoDB database for it. However, the default, SQL, backend is and will be supported via MariaDB.
MongoDB lost its position in OpenStack environment after controversial relicensing under their custom SSPL (Server Side Public License) which did not pass OSI (Open Source Initiative) validation.
The neutron-fwaas project was deprecated in the Neutron stadium and will be removed from stadium in the Wallaby cycle. The support for neutron-fwaas in the Neutron and Horizon roles is deprecated as of the Ussuri release and will be removed in the Wallaby cycle.
Deprecates support for deploying with VMware integrations. In Victoria support for these will be removed from Kolla Ansible.
This is dictated by lack of interest and maintenance.
See also the post to openstack-discuss
Deprecates support for deploying with XenAPI integrations. In Victoria support for these will be removed from Kolla Ansible.
This is dictated by lack of interest and maintenance, and upstream decision of deprecation by Nova (for the same reasons).
See also the post to openstack-discuss. And the Nova notice.
The
congressproject is no longer maintained. This has been retired since Victoria and has not been used by other OpenStack services since.
Customizing Neutron Linux bridge and Open vSwitch Agents config via
ml2_conf.iniis deprecated. The config has been split out for these agents into
linuxbridge_agent.iniand
openvswitch_agent.inirespectively. In this release (Ussuri) custom service config
ml2_conf.inioverrides will still be used when merging configs - but but that functionality will be removed in the Victoria release.
Security Issues¶
Fixes leak of RabbitMQ password into Ansible logs. LP#1865840
Bug Fixes¶
Fix that the cyborg conductor failed to communicate with placement. See bug 1873717.
Fix that cyborg agent failed to start privsep daemon. Add privileged capability for cyborg agent. See bug 1873715.
Adds necessary
region_nameto
octavia.confwhen
enable_barbicanis set to
true. LP#1867926
Adds
/etc/timezoneto
Debian/Ubuntucontainers. LP#1821592
Fixes an issue with Nova live migration not using
migration_interface_addresseven when TLS was not used. When migrating an instance to a newly added compute host, if addressing depended on
/etc/hostsand it had not been updated on the source compute host to include the new compute host, live migration would fail. This did not affect DNS-based name resolution. Analogically, Nova live migration would fail if the address in DNS/
/etc/hostswas not the same as
migration_interface_addressdue to user customization. LP#1729566
Fixes Kibana deployment with the new E*K stack (6+). LP#1799689
Reworks Keystone fernet bootstrap which had tendencies to fail on multinode setups. See bug 1846789 for details.
Fix prometheus-openstack-exporter to use CA certificate.
Changes Manila cephfs share driver to
manila.share.drivers.cephfs.driver.CephFSDriver, as the old driver was deprecated.
External Ceph: copy also cinder keyring to nova-compute. Since Train nova-compute needs also the cinder key in case rbd user is set to Cinder, because volume/pool checks have been moved to use rbd python library. Fixes LP#1859408
Fix qemu loading of ceph.conf (permission error). LP#1861513
Remove /run bind mounts in Neutron services causing dbus host-level errors and add /run/netns for neutron-dhcp-agent and neutron-l3-agent. LP#1861792
Fixes an issue where old fluentd configuration files would persist in the container across restarts despite being removed from the
node_custom_configdirectory. LP#1862211
Use more permissive regex to remove the offending 127.0.1.1 line from /etc/hosts. LP#1862739
Each Prometheus mysqld exporter points now to its local mysqld instance (MariaDB) instead of VIP address. LP#1863041
Cinder Backup has now access to kernel modules to load e.g. iscsi_tcp module. LP#1863094
Makes RabbitMQ hostname address resolution precheck stronger by requiring uniqueness of resolution to avoid later issues. LP#1863363
Fix protocol used by
neutron-metadata-agentto connect to Nova metadata service. This possibly affected internal TLS setup. Fixes LP#1864615
Fixes haproxy role to avoid restarting haproxy service multiple times in a single Ansible run. LP#1864810 LP#1875228
Fixes an issue with deploying Grafana when using IPv6. LP#1866141
Fixes elasticsearch deployment in IPv6 environments. LP#1866727
Fixes failure to deploy telegraf with monitoring of zookeeper due to wrong variable being referenced. LP#1867179
Fixes deployment of fluentd without any enabled OpenStack services. LP#1867953
Fix missing glance_ca_certificates_file variable in glance.conf. LP#1869133
Add client ca_cert file in heat LP#1869137
Adds missing
vitrage-persistorservice, required by Vitrage deployments for storing data. LP#1869319
Fixes
designate-workernot to use
etcdas its coordination backend because it is not supported by Designate (no group membership support available via tooz). LP#1872205
Fixes Octavia in internally-signed (e.g. self-signed) cert TLS deployments by providing path to CA cert file in proper config places. LP#1872404
Fixes source-IP-based load balancing for Horizon when using the “split” HAProxy service template.
Fixes issue where HAProxy would have no backend servers in its config files when using the “split” config template style.
Manage nova scheduler workers through
openstack_service_workersvariable. LP#1873753
Fixes Grafana datasource update. LP#1881890
Removing chrony package and AppArmor profile from docker host if containerized chrony is enabled. LP#1882513
Add missing “become: true” on some VMWare related tasks. Fixed on
Copying VMware vCenter CA fileand
Copying over nsx.ini.
fix deploy nova failed when use kolla_dev_mod.
Remove the meta field of the Swift rings from the default rsync_module template. Having it by default, undocumented, can lead to unexpected behavior when the Swift documentation states that this field is not processed.
Fixes the default CloudKitty configuration, which included the
gnocchi_collectorand
keystone_fetcheroptions that were deprecated in Stein and removed in Train. See bug 1876985 for details.
When
etcdis used with
cinder_coordination_backendand/or
designate_coordination_backend, the config has been changed to use the
etcd3gw(aka
etcd3+http)
toozcoordination driver instead of
etcd3due to issues with the latter’s availability and stability.
etcd3does not handle well eventlet-based services, such as cinder’s and designate’s. See bugs 1852086 and 1854932 for details. See also tooz change introducing etcd3gw.
Adds configuration to set also_notifies within the pools.yaml file when using the Infoblox backend for Designate.
Pushing a DNS NOTIFY packet to the master does not cause the DNS update to be propagated onto other nodes within the cluster. This means each node needs a DNS NOTIFY packet otherwise users may be given a stale DNS record if they query any worker node. For details please see bug 1855085
Fixes an issue with Docker client timeouts where Docker reports ‘Read timed out’. The client timeout may be configured via
docker_client_timeout. The default timeout has been increased to 120 seconds. See bug for details.
Fixes IPv6 deployment on CentOS 7. The issues with RabbitMQ and MariaDB have been worked around. For details please see the following Launchpad bug records: bug 1848444, bug 1848452, bug 1856532 and bug 1856725.
Fixes an issue with Cinder upgrades that would cause online schema migration to fail. LP#1880753
Fix cyborg api container failed to load api paste file. For details please see bug 1874028.
Fix elasticsearch schema in fluentd when
kolla_enable_tls_internalis true.
Fixes an issue where
fernet_token_expirywould fail the pre-checks despite being set to a valid value. Please see bug 1856021 for more details.
Fixes an issue with HAProxy prechecks when scaling out using
--limitor
--serial. LP#1868986.
Fixes an issue with the HAProxy monitor VIP precheck when some instances of HAProxy are running and others are not. See bug 1866617.
The
kolla_logsDocker volume is now mounted into the Elasticsearch container to expose logs which were previously written erroneously to the container filesystem. LP#1859162
Fixes MariaDB issues in multinode scenarios which affected deployment, reconfiguration, upgrade and Galera cluster resizing. They were usually manifested by WSREP issues in various places and could lead to need to recover the Galera cluster. Note these issues were due to how MariaDB was handled during Kolla Ansible runs and did not affect Galera cluster during normal operations unless MariaDB was later touched by Kolla Ansible. Users wishing to run actions on their Galera clusters using Kolla Ansible are strongly advised to update. For details please see the following Launchpad bug records: bug 1857908 and bug 1859145.
Fixes an issue with Nova when deploying new compute hosts using
--limit. LP#1869371.
Adapts Octavia to the latest dual CA certificate configuration. The following files should exist in
/etc/kolla/config/octavia/:
client.cert-and-key.pem
client_ca.cert.pem
server_ca.cert.pem
server_ca.key.pem
See the Octavia documentation for details on generating these files.
Fixes an issue with RabbitMQ where tags would be removed from the
openstackuser after deploying Nova. This prevents the user from accessing the RabbitMQ management UI. LP#1875786
Fixes an issue where a failure in pulling an image could lead to a container being removed and not replaced. See bug 1852572 for details.
Since Openstack services can now be configured to use TLS enabled REST endpoints, urls should be constructed using the {{ internal_protocol }} and {{ external_protocol }} configuration parameters.
Construct service REST API urls using
kolla_internal_fqdninstead of
kolla_internal_vip_address. Otherwise SSL validation will fail when certificates are issued using domain names.
Fixes an issue with the
kolla-ansible stopcommand where it may fail trying to stop non-existent containers. LP#1868596.
Fixes Swift volume mounting failing on kernel 4.19 and later due to removal of nobarrier from XFS mount options. See bug 1800132 for details.
Fixes an issue with fluentd parsing of WSGI logs for Aodh, Masakari, Qinling, Vitrage and Zun. See bug 1720371 for details.
Fixes gnocchi-api script name for Ubuntu/Debian binary deployments. LP#1861688
Fixes glance_api to run as privileged and adds missing mounts so it can use an iscsi cinder backend as its store. LP#1855695
When upgrading from Rocky to Stein HAProxy configuration moves from using a single configuration to assembling a file from snippets for each service. Applying the HAProxy tag to the entire play ensures that HAProxy configuration is generated for all services when the HAProxy tag is specified. For details please see bug 1855094.
Fixes an issue with the
ironic_ipxecontainer serving instance images. See bug 1856194 for details.
Fixes an issue with Kibana deployment when
openstack_cacertis unset. See bug 1864180 for details.
Fixes an issue with Monasca deployment where an invalid variable (
monasca_log_dir) is referenced. See bug 1864181 for details.
Fixes an issue where host configuration tasks (
sysctl, loading kernel modules) could be performed during the
kolla-ansible genconfigcommand. See bug 1860161 for details.
Fixes an issue with port prechecks for the Placement service. See bug 1861189 for details.
Fixes templating of Prometheus configuration when Alertmanager is disabled. In a deployment where Prometheus is enabled and Alertmanager is disabled the configuration for the Prometheus will fail when templating as the variable
prometheus_alert_rulesdoes not contain the key
files. LP#1854540
Removes the
[http]/max-row-limit = 10000setting from the default InfluxDB configuration, which resulted in the CloudKitty v1 API returning only 10000 dataframes when using InfluxDB as a storage backend. See bug 1862358 for details.
Skydive’s API and the web UI now rely on Keystone for authentication. Only users in the Keystone project defined by skydive_admin_tenant_name will be able to authenticate. See LP#1870903 <> for more details.
Fixes an issue where Elasticsearch API requests made during Kibana, Elasticsearch and Monasca deployment could have an invalid body. See bug 1864177 for details.
masakari-monitorwill now use the internal API to reach masakari-api. LP#1858431
Switch endpoint_type from public to internal for octavia communicating with the barbican service. See bug 1875618 for details. | https://docs.openstack.org/releasenotes/kolla-ansible/ussuri.html | CC-MAIN-2021-31 | en | refinedweb |
#include <wx/filedlg.h>
This class represents the file chooser dialog.
The path and filename are distinct elements of a full file pathname. If path is wxEmptyString, the current directory will be used. If filename is wxEmptyString, no default filename will be supplied. The wildcard determines what files are displayed in the file selector, and file extension supplies a type extension for the required filename.
The typical usage for the open file dialog is:
The typical usage for the save file dialog is instead somewhat simpler:
This class supports the following styles:
wxFD_OPEN.
wxFD_SAVE.
wxFD_OPEN.
wxFD_OPENstyle always behaves as if this style was specified, because it is impossible to choose a file that doesn't exist from a standard macOS file dialog.
The type of function used as an argument for SetExtraControlCreator().
Constructor.
Use ShowModal() to show the dialog.
Destructor.
Returns the path of the file currently selected in dialog.
Notice that this file is not necessarily going to be accepted by the user, so calling this function mostly makes sense from an update UI event handler of a custom file dialog extra control to update its state depending on the currently selected file.
Currently this function is fully implemented under GTK and MSW and always returns an empty string elsewhere.
Returns the file type filter index currently selected in dialog.
Notice that this file type filter is not necessarily going to be the one finally accepted by the user, so calling this function mostly makes sense from an update UI event handler of a custom file dialog extra control to update its state depending on the currently selected file type filter.
Currently this function is fully implemented under macOS and MSW and always returns
wxNOT_FOUND elsewhere.
Returns the default directory.
If functions SetExtraControlCreator() and ShowModal() were called, returns the extra window.
Otherwise returns NULL.
Returns the default filename.
wxFD_MULTIPLEstyle, use GetFilenames() instead.
Fills the array filenames with the names of the files chosen.
This function should only be used with the dialogs which have
wxFD_MULTIPLE style, use GetFilename() for the others.
Note that under Windows, if the user selects shortcuts, the filenames include paths, since the application cannot determine the full path of each referenced file by appending the directory containing the shortcuts to the filename.
Returns the index into the list of filters supplied, optionally, in the wildcard parameter.
Before the dialog is shown, this is the index which will be used when the dialog is first displayed.
After the dialog is shown, this is the index selected by the user.
Returns the message that will be displayed on the dialog.
Returns the full path (directory and filename) of the selected file.
wxFD_MULTIPLEstyle, use GetPaths() instead.
Returns the file dialog wildcard.
Sets the default directory.
Customize file dialog by adding extra window, which is typically placed below the list of files and above the buttons.
SetExtraControlCreator() can be called only once, before calling ShowModal().
The
creator function should take pointer to parent window (file dialog) and should return a window allocated with operator new.
Sets the default filename.
In wxGTK this will have little effect unless a default directory has previously been set.
Sets the default filter index, starting from zero.
Sets the message that will be displayed on the dialog.
Sets the path (the combined directory and filename that will be returned when the dialog is dismissed).
Sets the wildcard, which can contain multiple file types, for example: "BMP files (*.bmp)|*.bmp|GIF files (*.gif)|*.gif".
Note that the native Motif dialog has some limitations with respect to wildcards; see the Remarks section above. | https://docs.wxwidgets.org/3.1.5/classwx_file_dialog.html | CC-MAIN-2021-31 | en | refinedweb |
Something.
If you haven’t figured it out already, Apple’s Documentation is trash. If you find a useful piece of documentation from Apple’s website that genuinely helps you on your mission to implement push notifications, I think you should double-check that you’re not dreaming and send me a link because I’d love to see it.
Anyways, it took me a surprisingly long time to figure out what is really a pretty simple process if you know what to do.
The basic concept is:
- Generate a certificate for your server to authenticate with Apple’s Push Notification Service.
- Convert that certificate into something that can actually be used by your web service.
- Request device Push Notification tokens
- Send a notification
How do you generate an Apple Push Notification Service Certificate?
The first step in sending push notifications is getting a certificate. You’ll want to generate two certificates: one to use when you’re developing and one to use in prod. Apple is pretty clear about which kind you’ll be generating during the process.
Follow the following steps to generate your certificates.
Step 1. Log into your apple developer account
Go to Go to developer.apple.com and login.
Step 2: Edit Certificates for your App
Once you’re logged in to your Apple Developer account, click on Certificates, Identifiers & Profiles.
Step 3: Edit Your App Identifier
Click on the Identifiers item in the lefthand menu once you’re on the Certificates, Identifiers & Profiles page.
Find the app you’re working on in the table that shows up and click on it to edit it.
Step 4: Enable and Edit Push Notifications
If you haven’t enabled push notifications already, do that now. Just check the box on the left side of the Push Notifications row, then click save in the top right corner.
Once your Push Notifications have been enabled, click the Edit or Configure button which will be in the same row to the right.
Click “Create Certificate” and you’ll be ready to upload your certificate signing request and download your certificate. Keep the page that it redirects you to open and we’ll revisit this in step 6.
Step 5: Generate a Certificate Signing Request
Before you get your certificate, you’ll need to generate and upload a certificate signing request.
To do this, follow these steps:
- Open up the Keychain Access app on your Mac.
- In the menu, go to Keychain Access > Certificate Assistant > Request a Certificate from a Certificate Authority
- Fill out the form with your email address, a useful description of the certificate your requesting, and select the option to save to disk.
Step 6: Upload your Certificate Signing Request (CSR) and Download your Certificate
Go back to the page from the end of Step 4. It should look like this:
Upload the CSR you just created, then, download your new push notification certificate. This should be called something like
aps.cer or
aps_development.cer.
Step 7: Convert your Certificate from a .cer to a .pem
Most frameworks you’ll use to actually send notifications, such as PyAPNS2, will actually require you to provide them with a
.pem file for your certificate. We can transform our newly generated
.cer file into a
.p12 file and then into a
.pem file using a relatively simple process.
- First, double-click your certificate file that you downloaded. This will add it to the Keychain Access app.
- Open Keychain Access and go to the “My Certificates” category
- Find your certificate in the list.
- Right click it and select the Export option
- Save the file as a
.p12.
- Run the following command to convert the
.p12into a
.pem:
openssl pkcs12 -in /path/to/cert.p12 -out /path/to/cert.pem -nodes -clcerts
Send a Push Notification
Now that you have a certificate in the correct format, it’s pretty easy to send a push notification. As an example, I’ll show you how to do this using PyAPNs2. PyAPNs2 is a neat little library based off the original PyAPNs2 but updated to meet the latest and greatest specs for sending push notifications.
Sending a notification should be as easy as this:
from apns2.client import APNsClient from apns2.payload import Payload token_hex = 'b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b87' payload = Payload(alert="Hello World!", sound="default", badge=1) topic = 'com.example.App' client = APNsClient('key.pem', use_sandbox=False, use_alternative_port=False) client.send_notification(token_hex, payload, topic)
Which combination of Sandbox and Prod/Dev Certificates should you use?
One thing I found confusing was knowing when to use the sandbox mode and when to use my dev/prod certs. When switching my app builds between Debug/Release/Testflight versions, I got some varying results using different combinations of sandbox/certificates. This is what I was able to find out:
- To send notifications to Xcode builds loaded directly on your device, you should be using sandbox mode with the development certificate. Although sandbox mode with the production certificate also works, I wouldn’t recommend it.
- For sending push notifications to Testflight builds, you should not use sandbox mode and should be using your production certificate.
If you’re curious, these were my actual findings when testing different combinations:
This was all a bit of a mess to work through, but at the end of the day I was able to get push notifications working in my app again after migrating onto the Expo bare workflow and deciding to upgrade from the deprecated token-only method of sending push notifications that was so easy to do with Expo’s exponent_server_sdk. Now I have notifications beautifully integrated into my Django API, using Celery to handle the heavy lifting of sending/retrying notifications.
If you have any questions or more takeaways from your experience, comment with them below. | https://michaelwashburnjr.com/blog/apple-push-notification-service-certificates | CC-MAIN-2021-31 | en | refinedweb |
Answered by:
How to downgrade Visual Studio for Mac?
Question
- User2823 posted
I need to downgrade from 7.2 to 7.1.5.2 Transitive project.json dependencies are broken in 7.2Tuesday, October 10, 2017 8:46 AM
Answers
-
All replies
- User18049 posted
You should be able to get previous versions from the account page.
What is the problem with transitive project.json dependencies in VS Mac 7.2?Tuesday, October 10, 2017 9:00 AM
- User29525 posted
@mattward From what I can see the account page only has Xamarin Studio, not Visual Studio for Mac. @RobLander Did you find the installation you needed?Tuesday, October 10, 2017 10:07 AM
-
- User29525 posted
Thanks @mattwardTuesday, October 10, 2017 10:34 AM
- User2823 posted
Thanks @mattward.
About regression: - I have portable project with project.json file referencing Xamarin.Forms - I have iOS project with "empty" project.json file, that references portable project - In VS 7.1 iOS compiles fine because Xamarin.Forms dependency is transitioned to iOS project - In VS 7.2 I have tonns of "The type or namespace 'Xamarin' could not be found" errorsTuesday, October 10, 2017 10:46 AM
- User2823 posted
I've also tried to downgrade Xamarin.iOS and Mononpackages leaving VS 7.2 but that doesn't help. So it seems like it is VS related.Tuesday, October 10, 2017 10:48 AM
- User18049 posted
Looks like something has changed in NuGet 4.3 maybe. I created a new Xamarin.Forms project using VS Mac 7.2, then uninstalled the Xamarin.Forms NuGet package and added a project.json file to the PCL project (also removed the :
{ "dependencies": { "Xamarin.Forms": "2.4.0.280" }, "frameworks": { ".NETPortable,Version=v4.5,Profile=Profile111": {} } }
Then uninstalled the NuGet package from the iOS project and added a project.json file:
``` { "frameworks": { "xamarin.ios10": { "imports": "portable-net45+win81+wp80+wpa81" } }, "runtimes": {"win-x86" : {} } }
```
Then reloaded the solution. The build fails in VS Mac 7.2 due to missing Xamarin.Forms types but works in VS Mac 7.1. The problem seems to be that the generated project.lock.json file does not have any Xamarin.Forms information when the NuGet restore is run using VS Mac 7.2. The main difference between VS Mac 7.1 and 7.2 is that NuGet was updated from 4.0 to 4.3.1.
However using NuGet.exe 4.3.0 from the command line and running a restore, or using
msbuild /t:restore, both seem to generated project.lock.json files with Xamarin.Forms information in them. Just VS Mac 7.2 does not.
A potential workaround here would be to disable automatic package restore and run the restore outside VS Mac using
msbuild /t:restoreor nuget.exe.Wednesday, October 11, 2017 5:46 PM
- User2823 posted
So, it's a some kind of regression in newer nuget. Is it going to be fixed in Visual Studio for Mac updates?Wednesday, October 25, 2017 9:55 AM
- User18049 posted
Looks like a bug in VS for Mac. On upgrading to a more recent NuGet version some code in VS for Mac was removed due to a restructuring of various classes in NuGet itself. So it looks like project reference information is not being correctly setup when a restore is happening. I believe this then causes the transitively referenced NuGet packages to not be added to the project.lock.json. The lock file is also missing project references.Wednesday, October 25, 2017 10:59 AM
- User18049 posted
@RobLander - I have created a patch NuGet addin dll that should fix the problem with project.json files for VS Mac 7.2. You can download it from GitHub.
Download that MonoDevelop.PackageManagement.dll file from GitHub. You may need to unblock it:
xattr -d -r com.apple.quarantine MonoDevelop.PackageManagement.dll
Then you can replace the existing NuGet addin .dll. I tested this locally by doing the following:
// Make backup of NuGet addin .dll
cp /Applications/Visual\ Studio.app/Contents/Resources/lib/monodevelop/AddIns/MonoDevelop.PackageManagement/MonoDevelop.PackageManagement.dll /Applications/Visual\ Studio.app/Contents/Resources/lib/monodevelop/AddIns/MonoDevelop.PackageManagement/MonoDevelop.PackageManagement.dll-backup
// Replace existing NuGet addin .dll with new one.
cp MonoDevelop.PackageManagement.dll /Applications/Visual\ Studio.app/Contents/Resources/lib/monodevelop/AddIns/MonoDevelop.PackageManagement/MonoDevelop.PackageManagement.dll
Then the iOS project I was using for testing was building successfully and the project.lock.json file had the expected project references.Wednesday, October 25, 2017 9:07 PM
- User2823 posted
Btw maybe 15.4.2 service release contains this fix?Wednesday, November 1, 2017 9:46 AM
- User18049 posted
@RobLander - A fix will be in Visual Studio for Mac 7.3 at some point, not released currently. There is no fix for VS Mac 7.2 apart from using the patched binaries I linked to.Wednesday, November 1, 2017 9:48 AM
- User253803 posted
I updated to version 7.6.11 and it broke my project. I contacted support via visualstudio.com and was told there is no way to downgrade VS for MAC now. I'm posting here in the hopes that someone can point me to a repo or link somewhere to previous VS for mac versions?Wednesday, November 7, 2018 4:14 PM
- User18049 posted
@"RonanA.7363" How did it break your project?Wednesday, November 7, 2018 9:54 PM
- User253803 posted
@mattward In fairness, it didn't break the project. I can still build it, I just can't deploy it to simulator or device. The options (Project > Configuration > Device) that are usually there are disabled. If I set the start up project to the android project, the options are populated and work. If i then use the drop down to change project from android to iOS, the application crashes with the message:
"A fatal error has occurred Details of this error have been automatically sent to Microsoft for analysis. Visual Studio will now close."
I've tried updating xcode and then opening xcode and it installed some extensions. I restarted the machine but no change. Resintalled VS For mac and no change.
I tried creating a new Xamarin Forms project and the same issues as my own project.
Xcode version: 10.0 (10A255)
If you have any ideas, I'm definitively willing to try it as right now I have no way to test my project.Thursday, November 8, 2018 9:19 AM
- User18049 posted
Can you attach the IDE log after reproducing the error? Thanks.Thursday, November 8, 2018 11:23 AM
- User253803 posted
I've pulled this out of the logs that seems to be the problem:
FATAL ERROR [2018-11-09 09:59:38Z]: Visual Studio failed to start. Some of the assemblies required to run Visual Studio (for example gtk-sharp)may not be properly installed in the GAC. System.InvalidOperationException: Failed to compare two elements in the array. ---> System.ArgumentOutOfRangeException: Length cannot be less than zero. Parameter name: length at System.String.Substring (System.Int32 startIndex, System.Int32 length) [0x00087] in /Users/builder/jenkins/workspace/build-package-osx-mono/2018-02/external/bockbuild/builds/mono-x64/mcs/class/referencesource/mscorlib/system/string.cs:1257 at MonoDevelop.IPhone.IPhoneSimulatorExecutionTargetGroup.ParseIPhone (System.String name, System.String& device, System.Int32& version, System.String& subversion, MonoDevelop.IPhone.IPhoneSimulatorExecutionTargetGroup+IPhoneDeviceType& type) [0x00022] in /Users/vsts/agent/2.141.1/work/1/s/md-addins/MonoDevelop.IPhone/MonoDevelop.IPhone/Execution/IPhoneSimulatorExecutionTarget.cs:190
In case I'm wrong, I'm attached the full logFriday, November 9, 2018 10:23 AM
- User18049 posted
The crash looks like it is the same as that reported on the developer community site. This bug is fixed in Visual Studio for Mac 7.7.0.1552. There is also a workaround mentioned in the developer community forum post. I believe the problem is that more recent versions of Xcode one of the simulators is 'X' which breaks Visual Studio for Mac. As far as I am aware downgrading Visual Studio for Mac will not help in this case. A workaround is to rename the simulators in Xcode so that no iOS simulator name begins with a number (i.e. 0-9) or the letter X or S.Friday, November 9, 2018 1:47 PM
- User253803 posted
@mattward spot on sir. well played
I had a custom simulator set up called "11.4 iPhone 5". Once i renamed that to "iPhone 5 11.4", everything worked.
Thanks very much for the helpSunday, November 11, 2018 8:50 PM | https://social.msdn.microsoft.com/Forums/en-US/bcaca47d-d196-4dbc-82a6-440e3022be38/how-to-downgrade-visual-studio-for-mac?forum=xamarinvisualstudio | CC-MAIN-2021-31 | en | refinedweb |
Version 1.7.2¶
OEChem 1.7.2¶
OEPDBIFlag,
OEPDBOFlag,
OEMDLOFlag,
OEMOPACFlagconstants namespaces are deprecated.
New features¶
OEReadMDLQueryFilenow sets the title of the query molecule.
OEChem now recognizes the MDL “wavy” bond. An
OEBondBasecan be queried for its stereo type by querying its generic data for the
OEProperty.BondStereotag. The returned
unsigned intwill be from the
OEBondStereonamespace.
OEChem now supports MDL query file R group definitions. These are lines starting with
M RGP. The R group is represented as a
OEQAtomBasewith a
map indexthe same as the R group number in the file.
OEChem will now correct amidine and guanidinium functional groups where the bonds between C and N are marked as aromatic in MOL2 files.
Significant performance improvement to reading single-conformer molecules from OEB files. Other minor performance improvements were made to OEB as well (including multi-conformer molecules). This speeds up
OEDBMolcompression and uncompression as well.
Removed
OECopyHistoryfunction.
OEChem::OEReadHeaderis now smart enough to read the header into the
OEHeaderhistory if it is already populated with another header’s information.
oemolthreadsnow support reading and writing
OEHeaderobjects.
Added the
oemolthreadbase.PeekMolmethod.
Added support for
OEBitVector.ToHexStringin Java.
Major bug fixes¶
Fixed a
OEReadMDLQueryFilesegmentation fault when perceiving aromaticity with spirane rings containing generic atoms.
Fixed a bug where atom parity was ignored when reading a 3D MOL file.
Fixed a memory leak when initializing an
OEMCSSearchwith its
constructor. The workaround in 1.7.0 is to use the
OEMCSSearch.Initmethod instead.
Fixed a memory leak with
OEMCSSearchwhen
OEMCSSearch.SetMaxMatcheswas set to a number lower than the total number of matches found.
Fixed memory leak when creating and destroying multiple
OELingoSimobjects.
Some
OEOFlavor.PDBflavor combinations would generate corrupt data.
OESmilesAtomCountnow includes atoms of the form
[#6]in its count.
Fixed a bug where map indices on explicit hydrogens would be ignored when generating SMILES. For example,
[CH]would get generated even if the hydrogen had a map index specified. Now
[H:1][C]will be generated.
Fixed a crash that was introduced in 1.7.0 that would occur when using
oemolostream.flushor any
oemolostreammethod that returned an instance of itself.
Minor bug fixes¶
Energy set on an
OEGraphMolusing
OEMolBase.SetEnergywould not get written out to OEB files. Energy data will now round-trip through the OEB format.
Automatic conformer combining through the
oemolistream.SetConfTestinterface will now combine separate single-conformer molecules in an OEB into a multi-conformer molecule similar to the other file formats OEChem supports for that behavior (MOL2, SDF, etc).
OEPerceiveBondOrdersis more stable to pre-existing bond orders on the molecule.
Rare non-deterministic bug fixed for sulfur monoxide bug in
OEPerceiveBondOrders.
OEChem now recognizes the
DUresidue name and the mythical
DTresidue which is synonym for `` T``.
The new overload of
OEIsReadableto take a string would return
trueif there was no periods (‘.’) in the string and the string began with a readable file extension, for example, “MDL-FOO”.
Fixed a bug when
OELibraryGenfailed to generated products with correct Kekulé-form.
OESystem 1.7.2¶
New features¶
Added the
OEGetBitCountsfunction to efficiently get certain bit counts when comparing two
OEBitVectorobjects. This function is useful for implementing custom fingerprint similarity measures.
Added the
OEBitVector.SetDatamethod to allow a rapid method of initializing an
OEBitVectorwith binary data.
Added
OEErrorHandler.Verbosemethod to send
OEErrorLevel.Verboselevel messages.
Added
OESystem::OEHeader::operator boolto test whether the
OEHeaderhas any data in it.
Added
OEBoundedBuffer.Peekand
OEProtectedBuffer.Peekmethods.
Optimized writing generic data attached to an
OEBase. This optimized
OEDBMol::Compress.
Minor optimization to a common OEChem iterator.
Minor bug fixes¶
Renamed the
OEMakeFPfunction to
OEParseHexto be more explicit about what is does.
Valgrind would throw warnings about a change made in 1.7.0 for thread safety issues. Though this was not a true memory leak the code has been slightly changed to silence this warning.
Fixed an exponential growth bug when adding history to an
OEHeader.
Assignment operators for
OEMultiGridnow properly copy the titles of the subgrids.
OEPlatform 1.7.2¶
New features¶
Added the
OESetLicenseFilefunction to allow dynamically setting which OpenEye license file to use.
Added the
OEGetCurrentWorkingDirectoryfunction to return the current working directory for the process.
Optimized
oeigzstream.seekto only rewind the stream if seeking backwards. Therefore, the optimal way to seek in a gzipped file is in increasing
oefpos_torder.
Optimized
oeisstream.getbuffer.
Optimized
oeosstream.clear.
Minor bug fixes¶
Revised the
oestream.lengthand
oestream.sizedocumentation to be more accurate.
oefpos_tnow works for files greater than 4GB on 64-bit windows.
Fixed an infinite loop that would occur from the following code:
oeosstream sfs; sfs.open("foobar"); sfs.write("blah");
oeosstream.openwill return
false. Ignoring the
falsereturn value would cause
oeosstream.writeto loop indefinitely.
OEBio 1.7.2¶
New features¶
Introduced new OEBio classes to manage alternate locations in protein data bank structures along with a new PDB input flavor
OEIFlavor.PDB.ALTLOCto retain alternate location atoms on input.
See also
Major bug fixes¶
The crystal symmetry routines were broken in the 1.7.0 release by a corrupted matrix.
When reading symmetry from external sources (i.e. PDB files or maps) a warning is thrown when reading out of date space groups rather than fail to read symmetry.
Setting space groups will fail unless the most current space group constraints are used.
Added the following space group aliases for older style space groups.
I 1 2 1 -> C 1 2 1 P 1- -> P -1
OEGrid 1.3.3¶
New features¶
OEMakeGridFromCenterAndExtentsadded for a grid of an arbitrary template type.
Major bug fixes¶
Fixed a rare crash in
OEMakeRegularGridthat was caused by floating point round-off error. | https://docs.eyesopen.com/toolkits/java/oechemtk/releasenotes/version1_7_2.html | CC-MAIN-2021-31 | en | refinedweb |
Data Visualization is a big part of data analysis and data science. In a nutshell data visualization is a way to show complex data in a form that is graphical and easy to understand. This can be especially useful when trying to explore the data and get acquainted with it. Visuals such as plots and graphs can be very effective in clearly explaining data to various audiences. Here is a beginners guide to data visualisation using Matplotlib from a Pandas dataframe.
Fundamental design principals
All great visuals follow three key principles: less is more, attract attention, and have impact. In other words, any feature or design you include in your plot to make it more attractive or pleasing should support the message that the plot is meant to get across and not distract from it.
Matplotlib and its architecture
Let's learn first about Matplotlib and its architecture. Matplotlib is one of the most widely used, if not the most popular data visualization libraries in Python. Matplotlib tries to make basic things easy and hard things possible. You can generate plots, histograms, box plots, bar charts, line plots, scatterplots, etc., with just a few lines of code. Keep reading to see code examples.
Matplotlib's architecture is composed of three main layers: the back-end layer, the artist layer where much of the heavy lifting happens, and the scripting layer. The scripting layer is considered a lighter interface to simplify common tasks and for quick and easy generation of graphics and plots.
Import Matplotlib and Numpy.
First import Matplotlib and Matplotlib's pyplot. Note that you need to have Numpy installed for Matplotlib to work. If you work in Jupiter Notebooks you will need to write
%matplotlib inline for your matplotlib graphs to be included in your notebook, next to the code.
import pandas as pd import numpy as np
%matplotlib inline import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt mpl.style.use('ggplot')
The Pandas Plot Function
Pandas has a built in
.plot() function as part of the DataFrame class. In order to use it comfortably you will need to know several key parameters:
kind — Type of plot that you require. ‘bar’,’barh’,’pie’,’scatter’,’kde’ etc .
color — Sets color. It accepts an array of hex codes corresponding to each data series / column.
linestyle — Allows to select line style. ‘solid’, ‘dotted’, ‘dashed’ (applies to line graphs only)
x — label or position, default: None.
y — label, position or list of label, positions, default None. Allows plotting of one column against another.
legend— a boolean value to display or hide the legend
title — The string title of the plot
These are fairly straightforward to use and we’ll do some examples using .plot() later in the post.
Line plots in Pandas with Matplotlib
A line plot is a type of plot which displays information as a series of data points called 'markers' connected by straight line segments. It is a basic type of chart common in many fields. Use line plots when you have continuous data sets. These are best suited for trend-based visualizations of data over a period of time.
# Sample data for examples # Manually creating a dataframe # Source: df = pd.DataFrame({ 'Year':['1958','1963','1968','1973','1978','1983','1988', '1993', '1998', '2003', '2008', '2013', '2018'], 'Average population':[51652500, 53624900, 55213500, 56223000, 56178000, 56315000, 56916000, 57713000, 58474000, 59636000, 61823000, 64105000, 66436000] })
The
df.plot() or
df.plot(kind = 'line') commands create a line graph, and the parameters passed in tell the function what data to use. While you don't need to pass in parameter
kind = 'line' in the command to get a line plot it is better to add it for the sake of clarity.
The first parameter, year, will be plotted on the x-axis, and the second parameter, average population, will be plotted on the y-axis.
df.plot(x = 'Year', y = 'Average population', kind='line')
If you want to have a title and labels for your graph you will need to specify them separately.
plt.title('text') plt.ylabel('text') plt.xlabel('text')
Calling
plt.show() is required for your graph to be printed on screen. If you use Jupiter Notebooks and you already run line
%matplotlib inline your graph will show even without you running
plt.show() but, it will print an unwanted text message as well. This is why it is better to run
plt.show() regardless of the environment. When run, the output will be as follows:
Bar charts in Pandas with Matplotlib
A bar plot is a way of representing data where the length of the bars represents the magnitude/size of the feature/variable. Bar graphs usually represent numerical and categorical variables grouped in intervals.
Bar plots are most effective when you are trying to visualize categorical data that has few categories. If we have too many categories then the bars will be very cluttered in the figure and hard to understand. They’re nice for categorical data because you can easily see the difference between the categories based on the size of the bar.
Now lets create a dataframe for our'])
To create a bar plot we will use
df.plot() again. This time we can pass one of two arguments via
kind parameter in
plot():
kind=barcreates a vertical bar plot
kind=barhcreates a horizontal bar plot
Simmilarly
df.plot() command for bar chart will require three parameters: x values, y values and type of plot.
df.plot(x ='Country', y='GDP_Per_Capita', kind = 'bar') plt.title('GDP Per Capita in international dollars') plt.ylabel('GDP Per Capita') plt.xlabel('Country') plt.show()
Sometimes it is more practical to represent the data horizontally, especially if you need more room for labelling the bars. In horizontal bar graphs, the y-axis is used for labelling, and the length of bars on the x-axis corresponds to the magnitude of the variable being measured. As you will see, there is more room on the y-axis to label categetorical variables.
To get a horizontal bar chart you will need to change a
kind parameter in
plot() to
barh. You will also need to enter correct x and y labels as they are now switched compare to the standart bar chart.
df.plot(x ='Country', y='GDP_Per_Capita', kind = 'barh') plt.title('GDP Per Capita in international dollars') plt.ylabel(' Country') plt.xlabel('GDP Per Capita') plt.show()
The
df.plot() command allows for significant customisation. If you want to change the color of your graph you can pass in the
color parameter in your
plot() command. You can also remove the legend by passing
legend = False and adding a title using
title = 'Your Title'.
df.plot(x = 'Country', y = 'GDP_Per_Capita', kind = 'barh', color = 'blue', title = 'GDP Per Capita in international dollars', legend = False) plt.show()
Scatter plots in Pandas with Matplotlib
Scatterplots are a great way to visualize a relationship between two variables without the potential for getting a misleading trend line from a line graph. Just like with the above graphs, creating a scatterplot in Pandas with Matplotlib only requires a few lines of code, as shown below.
Let's start by creating a dataframe for the scatter plot.
# Sample dataframe # Source: # # Data for the 2015 Data = {'Country': ['United States','Singapore','Germany', 'United Kingdom','Japan'], 'GDP_Per_Capita': [52591,67110,46426,38749,36030], 'Life_Expectancy': [79.24, 82.84, 80.84, 81.40, 83.62] } df = pd.DataFrame(Data,columns=['Country','GDP_Per_Capita','Life_Expectancy'])
Now that you understand how the
df.plot() command works, creating scatterplots is really easy. All you need to do is change the
kind parameter to
scatter.
df.plot(kind='scatter',x='GDP_Per_Capita',y='Life_Expectancy',color='red') plt.title('GDP Per Capita and Life Expectancy') plt.ylabel('Life Expectancy') plt.xlabel('GDP Per Capita') plt.show()
Pie charts in Pandas with Matplotlib
A pie chart is a circular graphic that displays numeric proportions by dividing a circle into proportional slices. You are most likely already familiar with pie charts as they are widely used.
Let's use a pie chart to explore the proportion (percentage) of the population split by continents.
# sample dataframe for pie chart # source: # df = pd.DataFrame({'population': [422535000, 38304000 , 579024000, 738849000, 4581757408, 1106, 1216130000]}, index=['South America', 'Oceania', 'North America', 'Europe', 'Asia', 'Antarctica', 'Africa'])
We can create pie charts in Matplotlib by passing in the
kind=pie keyword in
df.plot() .
df.plot(kind = 'pie', y='population', figsize=(10, 10)) plt.title('Population by Continent') plt.show()
Box plots in Pandas with Matplotlib
A box plot is a way of statistically representing the distribution of the data through five main dimensions:
- Minimun: The smallest number in the dataset.
- First quartile: The middle number between the minimum and the median.
- Second quartile (Median): The middle number of the (sorted) dataset.
- Third quartile: The middle number between median and maximum.
- Maximum: The highest number in the dataset.
For the box plot, we can use the same dataframe that we used earlier for the'])
To make a box plot, we can use the
kind=box parameter in the
plot() method invoked in a pandas series or dataframe.
df.plot(kind='box', figsize=(8, 6)) plt.title('Box plot of GDP Per Capita') plt.ylabel('GDP Per Capita in dollars') plt.show()
Conclusion
We just learned 5 quick and easy data visualisations using Pandas with Matplotlib. I hope you enjoyed this post and learned something new and useful. If you want to learn more about data visualisations using Pandas with Matplotlib check out Pandas.DataFrame.plot documentation. | https://re-thought.com/how-to-visualise-data-with-python/ | CC-MAIN-2021-31 | en | refinedweb |
How to Get Active Windows with C#Hello everyone, in this article we are going to make an example application in C# that will get the title of the active windows at that time and record it time by time.
Let's get started.
Firstly we need to add below library in our namespace:
using System.Runtime.InteropServices;
Now define GetForeGroundWindow() and GetWindowsText() from user32.dll inside the class which will be called.
[DllImport("user32.dll")] static extern IntPtr GetForegroundWindow(); [DllImport("user32.dll")] static extern int GetWindowText(IntPtr hwnd, StringBuilder ss, int count);
Below code block will get the title of active window. Below method will first get active window and assign it to a pointer variable. Then via this pointer it will get the title text of active window. Then it will return the title as string.
private string ActiveWindowTitle() { //Create the variable const int nChar = 256; StringBuilder ss = new StringBuilder(nChar); //Run GetForeGroundWindows and get active window informations //assign them into handle pointer variable IntPtr handle = IntPtr.Zero; handle = GetForegroundWindow(); if (GetWindowText(handle, ss, nChar) > 0) return ss.ToString(); else return ""; }
Now I will see the active windows time by time. To perform this I have created a Timer and in Tick event of this timer, At tick event we have recorded the active titles inside a Listbox with their time values.
public Form1() { InitializeComponent(); this.TopMost = true; Timer tmr = new Timer(); tmr.Interval = 1000; tmr.Tick += Tmr_Tick; tmr.Start(); } private void Tmr_Tick(object sender, EventArgs e) { //get title of active window string title = ActiveWindowTitle(); //check if it is null and add it to list if correct if (title != "") { lbActiveWindows.Items.Add( DateTime.Now.ToString("hh:mm:ss") + " - " + title); } }
Run the program and switch between the screens. Then you will see the switched screens inside listbox.
That is all in this article.
You can reach the example project on Github via below link:
Burak Hamdi TUFAN
Hi sir, I want to get all user actions in window such as opening folder, clicking on an application ... How do it ?2021/06/20 11:08:35
This idea is similar to keylogger and I think you can perform it with windows API. But probably, an antivirus program will prevent your application to read the user activities.2021/06/28 08:28:22 | https://thecodeprogram.com/how-to-get-active-windows-with-c- | CC-MAIN-2021-31 | en | refinedweb |
So I decided to learn Python. Turns out this computer programming language isn’t so hard (well, until I got this project! :P ).
Within seconds, I fell in love with its easy, crisp syntax and its automatic indentation while writing. I was mesmerized when I learned that data structures like lists, tuples and dictionaries could be created and initialized dynamically with a single line (like so, list-name = []).
Moreover, the values held in these could be accessed with and without the use of indexes. This makes the code highly readable as the index is replaced by an English word of one’s choice.
Well, enough said about the language. Let me show you what the project demanded.
My brother gave me this project. He came across a text file containing thousands of words. Many of the words shared almost the same meaning. Each word had its definition and an example sentence next to it but in a not-so-organized manner. There were spaces and newlines between a word and its sentence. Some aspects were missing from the words. Below are the snippets of the text file which I’m talking about:
He wanted the text aspects to be uniform. For that, he needed me to neatly group all the similar-meaning words under a topic. He told me that this could be achieved by capturing all the data in the text into a dictionary in the following format:
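(Roughly, each topic was to map to a list of (word, meaning, example) tuples, as becomes clearer later in the post. The names below are just placeholders, not the real contents of the file.)

{
    'Topic 1': [('word1', 'meaning1', 'example sentence 1'),
                ('word2', 'meaning2', 'example sentence 2')],
    'Topic 2': [('word3', 'meaning3', 'example sentence 3')],
}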
and then writing them into a CSV (Comma Separated Values) File.
He asked if I could take this up as my first project, now that I had learned the fundamentals. I was thrilled to work out the logic and so I instantly agreed. When asked about the deadline, he gave me a decent time of 2 days to finish.
Alas, I ended up taking double the amount of time, for I struggled to debug the written code properly. Frankly, if it hadn't been for my brother's short visits to my room to look at the progress and hint at the wrong assumptions I had made while writing the conditions, I was destined to finish the project in eternity :P
I began by creating mini tasks within the program which I sought to finish before building up the entire program. These were as listed below:
1. Forming a Regex to match a number and the word next to it.
I examined the text file and noticed that every topic (herein referred to as ‘key’ ) had a number preceding it. So, I wrote a few lines of code for making a regex (regular expression — a powerful tool to extract text) of the pattern as follows:
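A minimal sketch of that first attempt (the file name and the exact pattern here are my guesses; the real code differed a little):

import re

# a key line looks like "12. SomeTopic": a number, a dot, then the topic word
key_pattern = re.compile(r'\d+\.\s*(\w+)')

file = open('words.txt')   # hypothetical file name
text = file.read()
keys = key_pattern.findall(text)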
However, when I ran this I got an error (a UnicodeDecodeError, to be exact), which meant Python could not properly decode the contents of the text file. I looked it up online, and after a long search with no luck, my brother came and found a solution. The error was rectified as follows:
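The fix was along these lines: telling open() explicitly how to decode the file (the exact arguments are my assumption; the point is that the default decoding was choking on some characters):

file = open('words.txt', encoding='utf-8', errors='ignore')
text = file.read()
keys = key_pattern.findall(text)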
Still, I didn't get the desired output. This was because some keys had slashes ('/') or spaces (' ') in the text which my regex couldn't match. I thought of improving the regex later and so wrote a comment next to it.
2. Obtaining a list of lines as strings from the text file
For this, I wrote just 1 line of code and fortunately, no errors showed up.
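That single line was essentially a readlines() call (again, just a sketch):

lines = open('words.txt', encoding='utf-8', errors='ignore').readlines()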
However, I obtained an unclean list. It contained newlines ('\n') and spaces (' '). I then sought to refine the list as follows:
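The refinement boiled down to stripping each line and throwing away the ones that end up empty, something like:

# strip whitespace/newlines and drop the lines that become empty
clean_list = [line.strip() for line in lines if line.strip() != '']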
3. Extracting words, meanings, and example sentences separately and adding them to corresponding lists.
This was by far the hardest part to do as it involved proper logic and proper judgment by pattern recognition.
Interestingly, while glancing over the text file, I noticed more patterns. Every word had its meaning in the same line, separated by a '=' sign. Also, every example was preceded by a ':' sign and the 'Example' keyword.
I thought of making use of regex again. But I found an alternative, more elegant solution: slicing the line (now a string in the list) according to the placement of the symbols. Slicing is another cool feature in Python. I wrote the code as follows:
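In essence, it went something like this (a sketch; the names of the three lists are mine):

words, meanings, examples = [], [], []

for line in clean_list:
    if '=' in line:
        index = line.index('=')
        word = line[:index].strip()          # part before '='
        meaning = line[index + 1:].strip()   # part after '='
        words.append(word)
        meanings.append(meaning)
    elif ':' in line:
        index = line.index(':')
        example = line[index + 1:].strip()   # part after ':'
        examples.append(example)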
The above code reads almost like English. For every line in the clean list, it checks whether it has a ‘=’ or a ‘:’ sign. If it does, then the index of the sign is found and slicing is done accordingly.
In the first 'if', the part before the '=' is stored in the variable 'word' and the part after it is stored in 'meaning'. Similarly, for the second 'if' ('elif', i.e. else-if, in this case), the part after ':' is stored in 'example'. And after each iteration, the word, meaning and example sentence are stored in the corresponding lists. In this way, all the data can be extracted.
So far so good. But, I noted that the extraction was to be done in a manner such that every word (and its aspects) of the particular key had to be accumulated together as one value for the key. This meant it was required to store each word, meaning, and example inside a tuple. Each tuple was to be stored inside a single list which would represent itself as the value for a particular key. This is depicted below:
For this, I planned to collect each word, meaning and sentence of each key inside a separate list enclosed by another list, say key-list. Again, the picture will tell you more precisely:
To do this, I added the following code to the one which I wrote for slicing:
This code's logic (the else part) turned out to be wrong, unfortunately. I had wrongly assumed that only 2 conditions ('=' and ':') existed in the text. There were many exceptions which I failed to notice. I ended up wasting hours debugging possible errors in the logic. I had assumed that the complete text file followed the same pattern. But that was simply not the case.
Unable to make progress, I moved on to the next part of the program. I thought I could use some help from my brother after completing the other parts. :P
To be continued…
4. Creating values for keys using Zip Function and Parameter Unpacking.
At this point, I wasn’t entirely sure what I would do even after achieving the above configuration of lists. I had learned about ‘Zip’ function and ‘Parameter Unpacking’ during one of my brother’s tech talks, which literally zipped the lists passed to it, like so:
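For instance (a generic illustration of zip and unpacking, not the original snippet):

>>> list(zip(['a', 'b', 'c'], [1, 2, 3]))
[('a', 1), ('b', 2), ('c', 3)]
>>> list(zip(*[('a', 1), ('b', 2), ('c', 3)]))   # * unpacks the list into arguments
[('a', 'b', 'c'), (1, 2, 3)]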
So I thought I could somehow combine those two features to achieve the desired result. After a bit of to-ing and fro-ing, testing the features and working on dummy lists, I succeeded. I created a separate file (beta) for this task, the snippet of which is given below:
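In spirit, the beta experiment looked something like this (the dummy data and the variable names are invented purely for illustration):

# one inner list per key: all its words, all its meanings, all its examples
words_per_key    = [['word1', 'word2'], ['word3']]
meanings_per_key = [['meaning1', 'meaning2'], ['meaning3']]
examples_per_key = [['example1', 'example2'], ['example3']]

key_values = []
for trio in zip(words_per_key, meanings_per_key, examples_per_key):
    # trio is a tuple holding the three inner lists of one key;
    # convert it to a list, unpack with *, and zip again to pair things up
    trio = list(trio)
    key_values.append(list(zip(*trio)))

print(key_values)
# [[('word1', 'meaning1', 'example1'), ('word2', 'meaning2', 'example2')],
#  [('word3', 'meaning3', 'example3')]]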
The working of the above code can be figured out by having a look at the output:
The zip() function zips the corresponding lists or values within the lists and encloses them in a tuple. The tuples inside the lists are then converted to lists for unpacking and further zipping. Finally, the desired output is obtained.
I felt much relieved for the code worked this time. I was happy that I could manipulate the would-be extracted data and mold it into the required format. I copied the code to the main file on which I was working and modified the variable names accordingly. Now all that was left to do was to assign values to the keys in the dictionary (and of course the extraction part!).
5. Assigning values to the keys in the dictionary.
For this, I came to this solution after some experimentation with the code:
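A sketch of that solution, assuming keys holds the topic names and key_values the per-key lists of tuples built above:

vocab = {}
for key, value in zip(keys, key_values):
    vocab[key] = value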
This produced the desired output: a dictionary with each topic mapped to its list of (word, meaning, example) tuples.
The program was almost done. The main problem lay in the data extraction part.
… continuation from section 3
After hours and hours of debugging, I grew more and more frustrated as to why the damn thing didn’t work. I called my brother and he gave me a subtle hint about the assumptions I had made while defining the conditional loops and if-else clauses. We scrutinized the text file and noticed that some words had examples in two lines instead of one.
According to my code logic, since there is no ':' sign in the second line (nor a '=' sign, for that matter), the contents of that line would not be treated as a part of the example. As a result, such a line would fall through to the last 'else' part and execute the code written in it. Considering all this, I modified the code as below:
Here, hasNumbers() is a function which checks whether a given line has numbers in it. I defined it as follows:
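A typical way to write such a check (the original implementation may have looked different):

def hasNumbers(line):
    # True if any character in the line is a digit
    return any(char.isdigit() for char in line)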
What this does is collect the second line of the example when all other conditions fail, combine it with the first line and then add it to the corresponding list as before.
To my disappointment, this didn’t work and instead showed an error that the index was out of range. I was dumbstruck, as every line of code seemed to be logically correct in my view.
After hours of madness, my brother showed me a way to fetch the line numbers where the error occurred. One of the main skills in programming is the ability to debug the program, to properly check for possible errors and maintain a continuous flow.
Interestingly, the following addition to the code reported that the error occurred at around line number 1750 of the text file.
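The quickest way is simply to print the index being processed inside the loop; a sketch:

for index, line in enumerate(clean_list):
    print(index)   # the last index printed before the crash points at the culprit
    ...            # the existing parsing logic continues here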
This meant that the program worked well till that line number and that my code was correct! The problems lay in my wrong assumptions and also the text file thanks to its heterogeneity.
This time around, I noticed that some keys did not appear with their numbers, which caused problems in the logic flow. I rectified the mistakes by further modifying the code as follows:
This worked well till line 4428 of the text file but crashed right after. I checked that line number on the text file itself but that didn’t help much. Then I realized, much to my happiness, that it must be the last line. The whole program worked on the clean list which was void of newlines and spaces. I printed the last line of the clean list and compared it with the last line of the text file. They matched!
I was extremely happy to know this as it meant the program was executed until the end. The only reason why it crashed was that after the last sentence none of the code made sense. My conditionals were designed to always check the next line as well as the current line. Since there was no line after the last line, it crashed.
So I wrote an additional line of code to cover that up:
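Any guard that stops the look-ahead at the end of the list does it; a sketch, with i as the index of the current line:

if i + 1 >= len(clean_list):   # there is no "next line" left to peek at
    break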
Everything worked now. Finally! Now all I had to do was assign the keys to their corresponding values and that’s that! I took a break at this moment, considering that my project was finally over. I would add some final touches to it later.
But before taking a break, I decided to enclose every code inside various functions so as to make the code look neat. I already had much trouble navigating up and down the lines of code. So I decided to take a break after doing this.
However, after doing so, the program started giving variable scope errors. I realized that this was because variables declared inside functions cannot be called directly from outside the function, as they are in the local namespace. Unwilling to make further changes due to that lame error, I decided to revert to the same code with which I had been hitting my head from the start.
However, to my utter disbelief, the program didn’t work in the same way as it did before. In fact, it didn’t work at all! I simply couldn’t figure out the reason (and I still can’t!). I was utterly depressed for the rest of the day. It was like experiencing a nightmare even before falling asleep!
Fortunately and miraculously, the code worked the next day after I made some careful changes. I made sure that I made many beta files (for each change made) thereafter so as to avoid such unnecessary chaos.
After a few more hours, I was able to finally complete my program (but not until I consumed 4 full days). I made a few more changes, such as:
i) modifying the ‘hasNumbers’ function to a ‘hasNumbersDot’ function and excluding the regex I made earlier in the program. This matched the keys more efficiently as it had no assumptions and hence no exceptions. The code for it is sketched right after this list.
ii) replacing the regex condition and the code for obtaining keys from the clean list.
iii) combining the ‘if’ conditions in the ‘examples extraction’ part
iv) materializing the code for dictionary key assignment
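A plausible version of the ‘hasNumbersDot’ helper from point i) (a reconstruction, not the exact code) checks for a digit immediately followed by a dot, which is how the keys were numbered:

def hasNumbersDot(line):
    # True if the line contains a digit directly followed by a '.' (e.g. the "12." of a key)
    return any(char.isdigit() and nxt == '.' for char, nxt in zip(line, line[1:]))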
Also, after some trial and error, I was able to convert the data obtained into a beautifully structured CSV file:
You can check out the GitHub repository on my profile to view the full code for the program, including the text file and the CSV file.
Overall, it was a great experience. I got to learn so much out of this project. I also gained more confidence in my skills. Despite some unfortunate events (programming involves such things :P), I was finally able to complete the given task.
One last thing! Recently, I came across a hilarious meme regarding the stages of debugging which is so relatable to my experience that I can’t resist sharing. xD
Thanks for making it all the way until here (even if you skipped most of it to check out the final result :P). | https://www.freecodecamp.org/news/my-first-python-project-converting-a-disorganized-text-file-into-a-neatly-structured-csv-file-21f4c6af502d/ | CC-MAIN-2021-31 | en | refinedweb |
sahajBERT
Collaboratively pre-trained model on Bengali language using masked language modeling (MLM) and Sentence Order Prediction (SOP) objectives.
Model description
sahajBERT is a model composed of 1) a tokenizer specially designed for Bengali and 2) an ALBERT architecture collaboratively pre-trained on a dump of Wikipedia in Bengali and the Bengali part of OSCAR.
Intended uses & limitations
You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to be fine-tuned on a downstream task that uses the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering.
We trained our model on 2 of these downstream tasks: sequence classification and token classification
How to use
You can use this model directly with a pipeline for masked language modeling:
from transformers import AlbertForMaskedLM, FillMaskPipeline, PreTrainedTokenizerFast

# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT")

# Initialize model
model = AlbertForMaskedLM.from_pretrained("neuropark/sahajBERT")

# Initialize pipeline
pipeline = FillMaskPipeline(tokenizer=tokenizer, model=model)

raw_text = "ধন্যবাদ। আপনার সাথে কথা [MASK] ভালো লাগলো"  # Change me
pipeline(raw_text)
Here is how to use this model to get the features of a given text in PyTorch:
from transformers import AlbertModel, PreTrainedTokenizerFast

# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT")

# Initialize model
model = AlbertModel.from_pretrained("neuropark/sahajBERT")

text = "ধন্যবাদ। আপনার সাথে কথা বলে ভালো লাগলো"  # Change me
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
Limitations and bias
WIP
Training data
The tokenizer was trained on the Bengali part of OSCAR, and the model on a dump of Wikipedia in Bengali and the Bengali part of OSCAR.
Training procedure
This model was trained in a collaborative manner by volunteer participants.
Contributors leaderboard
Hardware used
Eval results
We evaluated the quality of sahajBERT against 2 benchmark models (XLM-R-large and IndicBert) by fine-tuning each pre-trained model 3 times on two downstream tasks in Bengali:
NER: a named entity recognition on Bengali split of WikiANN dataset
NCC: a multi-class classification task on news Soham News Category Classification dataset from IndicGLUE
BibTeX entry and citation info
The first thing you need to do is create a connection to the database using the connect method. After that, you will need a cursor that will operate with that connection.
Use the execute method of the cursor to interact with the database, and every once in a while, commit the changes using the commit method of the connection object.
Once everything is done, don't forget to close the cursor and the connection.
Here is a Dbconnect class with everything you'll need.
import MySQLdb

class Dbconnect(object):

    def __init__(self):
        self.dbconection = MySQLdb.connect(host='host_example',
                                           port=int('port_example'),
                                           user='user_example',
                                           passwd='pass_example',
                                           db='schema_example')
        self.dbcursor = self.dbconection.cursor()

    def commit_db(self):
        self.dbconection.commit()

    def close_db(self):
        self.dbcursor.close()
        self.dbconection.close()
Interacting with the database is simple. After creating the object, just use the execute method.
db = Dbconnect()
db.dbcursor.execute('SELECT * FROM %s' % 'table_example')
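A note on the design: interpolating values into the SQL string yourself (as with the table name above) is only reasonable for identifiers you control. For user-supplied values, let the driver escape them by passing a parameter tuple as the second argument to execute; the table and column names below are just placeholders for the example:

db = Dbconnect()
# the driver fills in the %s placeholders and escapes the values for you
db.dbcursor.execute('SELECT * FROM users WHERE id = %s AND status = %s', (42, 'active'))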
If you want to call a stored procedure, use the following syntax. Note that the parameters list is optional.
db = Dbconnect()
db.dbcursor.callproc('stored_procedure_name', [parameters])
After the query is done, you can access the results in multiple ways. The cursor object is iterable, so you can either fetch all the results at once or loop over it.
results = db.dbcursor.fetchall()
for individual_row in results:
    first_field = individual_row[0]
If you want to loop directly over the cursor:

for individual_row in db.dbcursor:
    first_field = individual_row[0]
If you want to commit changes to the database:
db.commit_db()
If you want to close the cursor and the connection:
db.close_db() | https://riptutorial.com/python/example/14849/accessing-mysql-database-using-mysqldb | CC-MAIN-2021-31 | en | refinedweb |
It's been more than 2 years since Hooks API was added to React. Many projects already adopted the new API and there was enough time to see how the new patterns work in production. In this article I am going to walk you through my list of learnings after maintaining a large hooks-based codebase.
Learning #1. All standard rules apply
Hooks require developers to learn new patterns and follow some rules of hooks. This sometimes makes people think that new pattern dismisses all previous good practices. However, hooks are just yet another way of creating reusable building blocks. If you are creating a custom hook, you still need to apply basic software development practices:
- Single-responsibility principle. One hook should encapsulate a single piece of functionality. Instead of creating a single super-hook, it is better to split it into multiple smaller and independent ones
- Clearly defined API. Similar to normal functions/methods, if a hook takes too many arguments, it is a signal that this hook needs refactoring to be better encapsulated. There were recommendations of avoiding React components having too many props, same for React hooks – they also should have minimal number of arguments.
- Predictable behavior. The name of a hook should correspond to its functionality, no additional unexpected behaviours.
Even though these recommendations may look very obvious, it is still important to ensure that you follow them when you are creating your custom hooks.
Learning #2. Dealing with hook dependencies.
Several React hooks introduce a concept of "dependencies" – a list of things which should cause a hook to update. Most often this can be seen in useEffect, but also in useMemo and useCallback. There is an ESLint rule to help you manage the array of dependencies in your code; however, this rule can only check the structure of the code, not your intent. Managing hook dependencies is the trickiest concept and requires a lot of attention from a developer. To make your code more readable and maintainable, you could reduce the number of hook dependencies.
Your hooks-based code could become easier with this simple trick. For example, let's consider a custom hook useFocusMove:
function Demo({ options }) {
  const [ref, handleKeyDown] = useFocusMove({
    isInteractive: (option) => !option.disabled,
  });

  return (
    <ul onKeyDown={handleKeyDown}>
      {options.map((option) => (
        <Option key={option.id} option={option} />
      ))}
    </ul>
  );
}
This custom hook takes a dependency on isInteractive, which can be used inside the hook implementation:
function useFocusMove({ isInteractive }) {
  const [activeItem, setActiveItem] = useState();

  useEffect(() => {
    if (isInteractive(activeItem)) {
      focusItem(activeItem);
    }
    // update focus whenever active item changes
  }, [activeItem, isInteractive]);

  // ...other implementation details...
}
The ESLint rule requires the isInteractive argument to be added to the useEffect dependencies, because the rule does not know where this custom hook is used and whether this argument ever changes or not. However, as developers, we know that once defined this function always has the same implementation, and adding it to the dependencies array only clutters the code. The standard "factory function" pattern comes to the rescue:
function createFocusMove({ isInteractive }) {
  return function useFocusMove() {
    const [activeItem, setActiveItem] = useState();

    useEffect(() => {
      if (isInteractive(activeItem)) {
        focusItem(activeItem);
      }
    }, [activeItem]); // no ESLint rule violation here :)

    // ...other implementation details...
  };
}

// usage
const useFocusMove = createFocusMove({
  isInteractive: (option) => !option.disabled,
});

function Demo({ options }) {
  const [ref, handleKeyDown] = useFocusMove();
  // ...other code unchanged...
}
The trick here is to separate run-time and develop-time parameters. If something is changing during component lifetime, it is a run-time dependency and goes to the dependencies array. If it is once decided for a component and never changes in runtime, it is a good idea to try factory function pattern and make hooks dependencies management easer.
Learning #3. Refactoring useEffect
The useEffect hook is the place for imperative DOM interactions inside your React components. Sometimes these effects become very complex, and adding a dependencies array on top of that makes the code more difficult to read and maintain. This can be solved by extracting the imperative DOM logic out of the hook code. For example, consider a hook useTooltipPosition:
function useTooltipPosition(placement) {
  const tooltipRef = useRef();
  const triggerRef = useRef();

  useEffect(() => {
    if (placement === "left") {
      const triggerPos = triggerRef.current.getBoundingClientRect();
      const tooltipPos = tooltipRef.current.getBoundingClientRect();

      Object.assign(tooltipRef.current.style, {
        top: triggerPos.top,
        left: triggerPos.left - tooltipPos.width,
      });
    } else {
      // ... and so on for other placements ...
    }
  }, [tooltipRef, triggerRef, placement]);

  return [tooltipRef, triggerRef];
}
The code inside useEffect is getting very long and hard to follow, and it is difficult to track whether the hook dependencies are used properly. To make this simpler, we could extract the effect content into a separate function:
// here is the pure DOM-related logic
function applyPlacement(tooltipEl, triggerEl, placement) {
  if (placement === "left") {
    const triggerPos = triggerEl.getBoundingClientRect();
    const tooltipPos = tooltipEl.getBoundingClientRect();

    Object.assign(tooltipEl.style, {
      top: triggerPos.top,
      left: triggerPos.left - tooltipPos.width,
    });
  } else {
    // ... and so on for other placements ...
  }
}

// here is the hook binding
function useTooltipPosition(placement) {
  const tooltipRef = useRef();
  const triggerRef = useRef();

  useEffect(() => {
    applyPlacement(tooltipRef.current, triggerRef.current, placement);
  }, [tooltipRef, triggerRef, placement]);

  return [tooltipRef, triggerRef];
}
Our hook has become one line long, and it is easy to track the dependencies. As a side bonus, we also got a pure DOM implementation of the positioning which can be used and tested outside of React :)
Learning #4. useMemo, useCallback and premature optimisations
useMemo hook documentation says:
You may rely on useMemo as a performance optimisation
For some reason, developers read this part as "you must" instead of "you may" and attempt to memoize everything. This may sound like a good idea at first glance, but it turns out to be trickier when it comes to the details.
To benefit from memoization, you need to use React.memo or PureComponent wrappers to prevent components from unwanted updates. It also needs very fine tuning and validation that no properties change more often than they should. A single incorrect property might break all the memoization like a house of cards.
This is a good time to recall the YAGNI approach and focus memoization efforts only on the few hottest places of your app. In the remaining parts of the code, it is not worth adding extra complexity with useMemo/useCallback. You will benefit from writing simpler and more readable code using plain functions, and you can apply memoization patterns later, when their benefits become more obvious.
Before going the memoization path, I could also recommend you checking the article "Before You memo()", where you can find some alternatives to memoization.
Learning #5. Other React API still exist
If you have a hammer, everything looks like a nail
The introduction of hooks made some other React patterns obsolete. For example, the useContext hook appeared to be more convenient than the Consumer component.
However, other React features still exist and should not be forgotten. For example, let's take this hook code:
function useFocusMove() {
  const ref = useRef();

  useEffect(() => {
    function handleKeyDown(event) {
      // actual implementation is extracted outside as shown in learning #3 above
      moveFocus(ref.current, event.keyCode);
    }

    ref.current.addEventListener("keydown", handleKeyDown);
    return () => ref.current.removeEventListener("keydown", handleKeyDown);
  }, []);

  return ref;
}

// usage
function Demo() {
  const ref = useFocusMove();
  return <ul ref={ref} />;
}
It may look like a proper use-case for hooks, but why couldn't we delegate the actual event subscription to React instead of doing it manually? Here is an alternative version:
function useFocusMove() {
  const ref = useRef();

  function handleKeyDown(event) {
    // actual implementation is extracted outside as shown in learning #3 above
    moveFocus(ref.current, event.keyCode);
  }

  return [ref, handleKeyDown];
}

// usage
function Demo() {
  const [ref, handleKeyDown] = useFocusMove();
  return <ul ref={ref} onKeyDown={handleKeyDown} />;
}
The new hook implementation is shorter and has an advantage: hook consumers can now decide where to attach the listener, in case they have a more complex UI.
This was only one example; there could be many other scenarios, but the primary point remains the same – many React patterns (higher-order components, render props, and others) still exist and make sense even when hooks are available.
Conclusion
Basically, all the learnings above come down to one fundamental point: keep the code short and easy to read, so that you will be able to extend and refactor it later. Follow standard programming patterns and your hook-based codebase will live long and prosper.
Discussion (2)
Good read! Question on number 2: Would we not want to keep the function as a dependency and let the user solve it, either with useCallback or by letting the user move the isInteractive function outside of the component:
With this, you can still close over a prop in isInteractive if you want to.
The answer depends on the use-case. Some things are not supposed to change at runtime, and it is nice to move them out of the dependency-tracking logic altogether.
For example, you can see this pattern in react-redux hook: react-redux.js.org/api/hooks#custo... | https://practicaldev-herokuapp-com.global.ssl.fastly.net/justboris/popular-patterns-and-anti-patterns-with-react-hooks-4da2 | CC-MAIN-2021-31 | en | refinedweb |
Hello, readers! In this article, we will be focusing on 4 Easy Ways to Perform Random Sampling in Python NumPy.
So, let us get started! 🙂
Random sampling, to give an overview, is the process of selecting random values from a given type of data and making them available for further use.
In the course of this topic, we will be having a look at the below functions–
- NumPy random_sample() method
- NumPy ranf() method
- NumPy random_integers() method
- NumPy randint() method
1. NumPy random_sample() method for Random Sampling
With the random_sample() method, we can sample data values and pick random data with ease. It selects random samples from the interval [0.0, 1.0) only. We can build a single sample as well as an entire array of random values.
Have a look at the below syntax!
random.random_sample()
Example:
In the below example, at first, we have performed random sampling and generated a single random value. Further, we have created a 2-dimensional array with random values by passing size as a parameter to the random_sample() function.
Note that the random values range between 0.0 and 1.0 only. Also, the random_sample() function generates random values of float type.
import numpy as np

ran_val = np.random.random_sample()
print("Random value : ", ran_val)

ran_arr = np.random.random_sample(size=(2, 4))
print("Array filled with random float values: ", ran_arr)
Output:
Random value :  0.3733413809567606
Array filled with random float values:  [[0.45421908 0.34993556 0.79641287 0.56985183]
 [0.88683577 0.91995939 0.16168328 0.35923753]]
2. The random_integers() function
With the random_integers() function, we can generate a single random value or even a multi-dimensional array of random values of integer type. That is, it generates random values of type integer. Further, it gives us the liberty to choose the range of integer values from which the random numbers are selected.
Syntax:
random_integers(low, high, size)
- low: The lowest scale/limit for the random values to be chosen. The random values would not have a value below the low value mentioned.
- high: The highest scale/limit for the random values to be chosen. The random values would not have a value beyond the high value mentioned.
- size: The number of rows and columns for the array to be formed.
Example:
In this example, we have created a one-dimensional array of random values in the range 5–10 only. Further, we have set up a multi-dimensional array using the same concept.
import numpy as np

ran_val = np.random.random_integers(low=5, high=10, size=3)
print("Random value : ", ran_val)

ran_arr = np.random.random_integers(low=5, high=10, size=(2, 4))
print("Array filled with random float values: ", ran_arr)
Output:
Random value :  [10 5 9]
Array filled with random float values:  [[ 8 8 9 6]
 [ 6 10 8 10]]
3. The randint() function
The randint() function works in a similar fashion to the random_integers() function. It creates an array of random values within the specified range of integers, except that here the high value is exclusive.
Example:
import numpy as np

ran_val = np.random.randint(low=5, high=10, size=3)
print("Random value : ", ran_val)
Output:
Random value : [5 8 9]
4. The ranf() function
Again, the ranf() function resembles the random_sample() method in terms of functionality. It generates random float numbers between 0.0 and 1.0 only.
Example:
import numpy as np

ran_val = np.random.ranf()
print("Random value : ", ran_val)
Output:
Random value : 0.8328458165202546
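As a side note beyond the four functions above: newer NumPy versions also provide the same functionality through a Generator object, which is the interface generally recommended today. A quick sketch:

import numpy as np

rng = np.random.default_rng()            # optionally pass a seed, e.g. default_rng(42)
print(rng.random())                      # float in [0.0, 1.0), like random_sample()/ranf()
print(rng.integers(5, 10, size=(2, 4)))  # random integers with exclusive high, like randint()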
Conclusion
Feel free to comment below, in case you come across any questions. For more such posts related to Python programming, Stay tuned with us! Till then, Happy Learning! 🙂 | https://www.askpython.com/python/random-sampling-in-numpy | CC-MAIN-2021-31 | en | refinedweb |
NAME
Tk_CreateTimerHandler, Tk_DeleteTimerHandler - call a procedure at a given time
SYNOPSIS
#include <tk.h>
Tk_TimerToken Tk_CreateTimerHandler(milliseconds, proc, clientData)
Tk_DeleteTimerHandler(token)
ARGUMENTS
- int milliseconds (in)
How many milliseconds to wait before invoking proc.
- Tk_TimerProc *proc (in)
Procedure to invoke after milliseconds have elapsed.
- ClientData clientData (in)
Arbitrary one-word value to pass to proc.
- Tk_TimerToken token (in)
Token for previously-created timer handler (the return value from some previous call to Tk_CreateTimerHandler).
DESCRIPTION

Tk_CreateTimerHandler arranges for proc to be invoked, with clientData as its argument, once approximately milliseconds have elapsed, and returns a token identifying the handler. Tk_DeleteTimerHandler removes a previously created handler, identified by its token, so that proc is never called.
KEYWORDS
callback, clock, handler, timer | https://metacpan.org/dist/Tk/view/pod/pTk/TimerHndlr.pod | CC-MAIN-2021-31 | en | refinedweb |
Hide/Unhide Non-Bookmarked Lines
Hello,
Is there any way to hide and unhide all non-bookmarked lines?
Thank you
- Ekopalypse last edited by
afaik, not as builtin feature.
A possible solution might be to use a scripting language plugin
and some code similar to the pythonscript below.
from Npp import editor

bookmark_mask = 1 << 24
line = 0
while True:
    line = editor.markerNext(line, bookmark_mask)
    if line == -1:
        break
    editor.hideLines(line, line)
    line += 1
This is just some demo code demonstrating the feature.
- Alan Kilborn last edited by Alan Kilborn
The hide lines feature in Notepad++ is “underdeveloped”; I’d stay away from it unless and until it is made better by the developers. BUT…I can see how what you want to do is valuable.
@Ekopalypse OP wanted to hide NON bookmarked lines but AFAICT at a quick look, your P.S. will hide the bookmarked lines instead?
- Ekopalypse last edited by
@Alan-Kilborn
you are correct, and I did ask myself: is hiding the bookmarked lines really what is wanted?
Well, non-bookmarked makes sense :-)
Let’s see if OP wants to go that way.
@Ekopalypse, @Alan-Kilborn Thank you both for your input.
I do indeed wish to hide non-bookmarked lines. Although the code above seems to hide bookmarked lines only - and I don’t know how to unhide them either.
The reasoning is that I have a large database which contains over 24,000 download links - one per line, and I need to go through the painful task of editing each one of them. (I can’t see any other way to modify filenames on one server & modifying the respective link on another server at the same time!)
So to assist with the work, I can highlight all the download links by bookmarking them, and then hiding all other information, which would allow me to sift through the links easier.
- Alan Kilborn last edited by
@Mike-Smith said in Hide/Unhide Non-Bookmarked Lines:
I need to go through the painful task of editing each one of them
24000 things to examine and edit is a huge manual task. Perhaps if you elaborate a bit more and/or show some data, someone here might have some automation hints for you? Maybe it isn’t possible…but hopefully something could be done.
and I don’t know how to unhide them either.
As far as I know, Notepad++'s menus only offers a “Hide Lines”. After you’ve done that one or more times, you’ll see some arrows in the margin, example:
So the way to “unhide” these lines is to click on one of the green arrows. If you have a script that hides a lot of lines, showing them all again when desired is problematic because you’d have to click on a lot of green arrows. At that point the better way would be to simply restart Notepad++ (which doesn’t remember the status of hidden lines when exited and re-run).
I do indeed wish to hide non-bookmarked lines
It seems like you could run your bookmarking operation, then do a “Inverse Bookmark” command, and then run the script @Ekopalypse provided…to get what you want?
It seems like you could run your bookmarking operation, then do a “Inverse Bookmark” command, and then run the script @Ekopalypse provided…to get what you want?
Yes, that’s a good idea! I did just that, and it hid everything I didn’t need to edit. Thank you.
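For reference, the combined effect (hiding every line that is not bookmarked) can also be scripted directly with the same PythonScript API; an untested sketch:

from Npp import editor

bookmark_mask = 1 << 24
total_lines = editor.getLineCount()
line = 0
while line < total_lines:
    next_marked = editor.markerNext(line, bookmark_mask)
    if next_marked == -1:
        next_marked = total_lines                # no more bookmarks: hide up to the end
    if next_marked > line:
        editor.hideLines(line, next_marked - 1)  # hide the non-bookmarked gap
    line = next_marked + 1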
24000 things to examine and edit is a huge manual task.
I don’t think it’s something that can be automated though. The issue is that I have all the download links in one database (which I’m editing through Notepad++), and the files are stored on a separate server. The task I am currently processing is to randomize all filenames:
So not only do I need to complete the task of randomizing the filenames (A task I’m achieving using Bulk Rename Utility), I have to then ensure the download links are changed to represent the relevant filename. | https://community.notepad-plus-plus.org/topic/19019/hide-unhide-non-bookmarked-lines | CC-MAIN-2021-31 | en | refinedweb |
expo-battery provides battery information for the physical device (such as battery level, whether or not the device is charging, and more) as well as corresponding event listeners.
expo install expo-battery
If you're installing this in a bare React Native app, you should also follow these additional installation instructions.
import * as React from 'react';
import * as Battery from 'expo-battery';
import { StyleSheet, Text, View } from 'react-native';

export default class App extends React.Component {
  state = {
    batteryLevel: null,
  };

  componentDidMount() {
    this._subscribe();
  }

  componentWillUnmount() {
    this._unsubscribe();
  }

  async _subscribe() {
    const batteryLevel = await Battery.getBatteryLevelAsync();
    this.setState({ batteryLevel });

    this._subscription = Battery.addBatteryLevelListener(({ batteryLevel }) => {
      this.setState({ batteryLevel });
      console.log('batteryLevel changed!', batteryLevel);
    });
  }

  _unsubscribe() {
    this._subscription && this._subscription.remove();
    this._subscription = null;
  }

  render() {
    return (
      <View style={styles.container}>
        <Text>Current Battery Level: {this.state.batteryLevel}</Text>
      </View>
    );
  }
}
import * as Battery from 'expo-battery';
Battery.isAvailableAsync() resolves to true on Android and physical iOS devices and false on iOS simulators. On web, it depends on whether the browser supports the web battery API.
Battery.getBatteryLevelAsync() returns a Promise that resolves to a number between 0 and 1 representing the battery level, or -1 if the device does not provide it.
await Battery.getBatteryLevelAsync(); // 0.759999
Battery.getBatteryStateAsync() tells the battery's current state; on web, this always resolves to BatteryState.UNKNOWN. It returns a Promise that resolves to a Battery.BatteryState enum value indicating which of the four states the device is in.
await Battery.getBatteryStateAsync(); // BatteryState.CHARGING
Battery.isLowPowerModeEnabledAsync() returns a Promise that resolves to a boolean value of either true or false, indicating whether low power mode is enabled or disabled, respectively. On web, it always resolves to false, even if the device is actually in low-power mode.
await Battery.isLowPowerModeEnabledAsync(); // true
Battery.getPowerStateAsync() returns a Promise that resolves to an object with the current battery level, a Battery.BatteryState enum value, and a lowPowerMode flag that is true if low power mode is on and false if it is off:
await Battery.getPowerStateAsync();
// {
//   batteryLevel: 0.759999,
//   batteryState: BatteryState.UNPLUGGED,
//   lowPowerMode: true,
// }
"android.intent.action.BATTERY_LOW"or rises above
"android.intent.action.BATTERY_OKAY"from a low battery level. See here to read more from the Android docs.
batteryLevelkey.
EventSubscriptionobject on which you can call
remove()to unsubscribe from the listener.
Battery.addBatteryStateListener(callback) subscribes to changes of the battery state, reported as a Battery.BatteryState enum value for whether the device is in any of the four states. On web, the event never fires. The callback is invoked with an object containing the batteryState key, and the method returns an EventSubscription object on which you can call remove() to unsubscribe from the listener.
Battery.addLowPowerModeListener(callback) works the same way for low power mode changes: the callback is invoked with an object containing the lowPowerMode key, and the method returns an EventSubscription object on which you can call remove() to unsubscribe from the listener.
BatteryState.UNKNOWN - if the battery state is unknown or unable to access
BatteryState.UNPLUGGED - if battery is not charging or discharging
BatteryState.CHARGING - if battery is charging
BatteryState.FULL - if the battery level is full
When Python programs create threads, each thread shares the same global memory as every other thread. Usually, but not always, multiple threads can safely read from shared resources without issue. Threads writing to shared resources are a different story because one thread could potentially overwrite the work of another thread.
This post demonstrates an example program shown in Programming Python: Powerful Object-Oriented Programming, where threads acquire and release locks in the program.
Code
Here is an example program with my own comments added.
import _thread as thread, time

# This mutex object is created by calling
# thread.allocate_lock()
# The mutex is responsible for synchronizing threads
mutex = thread.allocate_lock()


def counter(tid, count):
    for i in range(count):
        time.sleep(1)

        # The standard out is a shared resource
        # Unless the program controls access to the standard out
        # multiple threads can print to standard out at the same time
        # which results in garbage output

        # Acquire a lock
        mutex.acquire()

        # Now only the current thread can print to the console
        print('[{}] => {}'.format(tid, i))

        # Make sure to release the lock for other threads when finished
        mutex.release()


if __name__ == '__main__':
    for i in range(5):
        thread.start_new_thread(counter, (i, 5))

    time.sleep(6)
    print('Main thread exiting...')
Explanation
The program creates five threads, each of which needs access to the standard output stream. The standard output stream is a global object that all of the threads share, which means that each thread can call print at the same time. That isn’t ideal because we can get garbage output printed to the console if two threads call the print() statement at the same time.
The solution is to lock access to the standard output stream so that only one thread may use it at a time. We do this by creating a mutex object on line 6 in the program by using thread.allocate_lock(). When a thread needs a lock, it calls acquire() on the mutex. At that point, all other threads that need protected resources have to sit and wait for mutex.release().
It’s important to keep the operations between mutex.acquire() and mutex.release() as brief as possible. Only one thread can hold a lock at a time, so the longer one thread holds a lock, the longer other threads need to wait for their turn to use the lock. That naturally impacts the performance of the overall program.
References
Lutz, Mark. Programming Python. Beijing: O’Reilly, 2013.
The namespace for Wt. More...
The namespace for Wt.
List of indexes.
The list is defined as std::vector<WModelIndex>.
Enumeration that specifies a horizontal or a vertical alignment.
The vertical alignment flags are AlignBaseline, AlignSub, AlignSuper, AlignTop, AlignTextTop, AlignMiddle, AlignBottom and AlignTextBottom. The horizontal alignment flags are AlignLeft, AlignRight, AlignCenter and AlignJustify.
When used with setVerticalAlignment(), this applies only to inline widgets and determines how to position itself on the current line, with respect to sibling inline widgets.
When used with WTableCell::setContentAlignment(), this determines the vertical alignment of contents within the table cell.
When used with WPainter::drawText(), this determines the horizontal and vertical alignment of the text with respect to the bounding rectangle.
When used with WContainerWidget::setContentAlignment(), this specifies how contents should be aligned horizontally within the container.
Not all values are applicable in all situations. The most commonly used values are AlignLeft, AlignCenter, AlignRight, AlignBottom, AlignMiddle and AlignTop.
Enumeration that specifies where the target of an anchor should be displayed.
Enumeration that indicates a character encoding.
Character encodings are used to represent characters in a stream of bytes.
Enumeration for a cursor style.
Enumeration for a DOM element type.
For internal use only.
Enumeration for the role of a DOM element (for theme support)
Enumeration that indicates a Wt entrypoint type.
An entry point binds a behavior to a public URL. Only the wthttpd connector currently supports multiple entry points.
An enumeration describing an event's type.
Enumeration that indicates a standard icon.
An enumeration describing a layout direction.
Enumeration that indicates a meta header type.
Enumeration that indicates a direction.
Enumeration that indicates how to change a selection.
Enumeration that specifies a layout mechanism for a widget.
The layout mechanism determines how the widget positions itself relative to the parent or sibling widgets.
Enumeration for a DOM property.
This is an internal API, subject to change.
Enumeration that indicates a regular expression option.
Enumeration that indicates what is being selected.
Enumeration that indicates how to change a selection.
Enumeration that indicates how items may be selected.
Enumeration that indicates a relative location.
Values of CenterX, CenterY, and CenterXY are only valid for WCssDecorationStyle::setBackgroundImage()
Enumeration that indicates a standard button.
Multiple buttons may be specified by logically or'ing these values together.
Enumeration that specifies the way text should be printed.
Enumeration that indicates the text format.
Enumeration for the role of a css class (for theme support)
Enumeration that indicates what validation styles are to be applied.
Enumeration that specifies an option for rendering a view item.
Enumeration for the role of a subwidget (for theme support) | https://webtoolkit.eu/wt/wt3/doc/reference/html/namespaceWt.html | CC-MAIN-2021-31 | en | refinedweb |
PseudoObjectExpr - An expression which accesses a pseudo-object l-value. More...
#include "clang/AST/Expr.h"
PseudoObjectExpr - An expression which accesses a pseudo-object l-value.
A pseudo-object is an abstract object, accesses to which are translated to calls. The pseudo-object expression has a syntactic form, which shows how the expression was actually written in the source code, and a semantic form, which is a series of expressions to be executed in order which detail how the operation is actually evaluated. Optionally, one of the semantic forms may also provide a result value for the expression.
If any of the semantic-form expressions is an OpaqueValueExpr, that OVE is required to have a source expression, and it is bound to the result of that source expression. Such OVEs may appear only in subsequent semantic-form expressions and as sub-expressions of the syntactic form.
PseudoObjectExpr should be used only when an operation can be usefully described in terms of fairly simple rewrite rules on objects and functions that are meant to be used by end-developers. For example, under the Itanium ABI, dynamic casts are implemented as a call to a runtime function called __dynamic_cast; using this class to describe that would be inappropriate because that call is not really part of the user-visible semantics, and instead the cast is properly reflected in the AST and IR-generation has been taught to generate the call as necessary. In contrast, an Objective-C property access is semantically defined to be equivalent to a particular message send, and this is very much part of the user model. The name of this class encourages this modelling design.
Definition at line 5096 of file Expr.h.
Definition at line 5203 of file Expr.h.
References clang::cast_away_const(), and clang::children().
Definition at line 5215 of file Expr.h.
References clang::Stmt::getStmtClass().
Definition at line 4001 of file Expr.cpp.
References clang::ASTContext::Allocate(), clang::Expr::containsUnexpandedParameterPack(), clang::Expr::Expr(), clang::Stmt::ExprBits, clang::Expr::getObjectKind(), clang::if(), clang::Expr::isInstantiationDependent(), clang::Expr::isTypeDependent(), clang::Expr::isValueDependent(), clang::OK_Ordinary, clang::Stmt::PseudoObjectExprBits, clang::ast_matchers::type, clang::VK_RValue, and clang::ASTContext::VoidTy.
Return the result-bearing expression, or null if there is none.
Definition at line 5151 of file Expr.h.
Referenced by emitPseudoObjectExpr(), and shouldEmitSeparateBlockRetain().
Return the index of the result-bearing expression into the semantics expressions, or PseudoObjectExpr::NoResult if there is none.
Definition at line 5145 of file Expr.h.
Return the syntactic form of this expression, i.e.
the expression it actually looks like. Likely to be expressed in terms of OpaqueValueExprs bound in the semantic form.
Definition at line 5140 of file Expr.h.
Referenced by getBestPropertyDecl(), getSyntacticFromForPseudoObjectExpr(), isImplicitThis(), clang::Expr::isUnusedResultAWarning(), and clang::Sema::recreateSyntacticForm().
Definition at line 5164 of file Expr.h.
Referenced by emitPseudoObjectExpr(), and shouldEmitSeparateBlockRetain().
Definition at line 5170 of file Expr.h.
Referenced by emitPseudoObjectExpr(), and shouldEmitSeparateBlockRetain(). | http://clang.llvm.org/doxygen/classclang_1_1PseudoObjectExpr.html | CC-MAIN-2018-43 | en | refinedweb |
This class encapsulates the compliant contact model force computations as described in detail in Compliant Contact in Drake. More...
#include <drake/attic/multibody/rigid_body_plant/compliant_contact_model.h>
This class encapsulates the compliant contact model force computations as described in detail in Compliant Contact in Drake.
Instantiated templates for the following kinds of T's are provided:
Instantiates a CompliantContactModel.
Scalar-converting copy constructor. See System Scalar Conversion.
Given two collision elements (each with its own defined compliant material properties), computes the derived parameters for the contact.
Returns the portion of the squish attributable to Element a (sₐ). Element b's squish factor is simply 1 - sₐ. See contact_model_doxygen.h for details.
Computes the generalized forces on all bodies due to contact.
Defines the default material property values for this model instance.
All elements with default-configured values will use the values in the provided property set. This can be invoked before or after parsing SDF/URDF files; all fields that were left unspecified will default to these values. See drake_contact and CompliantMaterial for elaboration on these values.
Configures the model parameters – these are the global model values that affect all contacts.
If values are outside of valid ranges, the program aborts. (See CompliantContactParameters for details on valid ranges.) | http://drake.mit.edu/doxygen_cxx/classdrake_1_1systems_1_1_compliant_contact_model.html | CC-MAIN-2018-43 | en | refinedweb |
Registers a default library pathname with a runtime.
Syntax
#include <prlink.h> PRStatus PR_SetLibraryPath(const char *path);
Parameters
The function has this parameter:
path
- A pointer to a character array that contains the directory path that the application should use as a default. The syntax of the pathname is not defined, nor whether that pathname should be absolute or relative.
Returns
The function returns one of the following values:
- If successful,
PR_SUCCESS.
- If unsuccessful,
PR_FAILURE. This may indicate that the function cannot allocate sufficient storage to make a copy of the path string.
Description
This function registers a default library pathname with the runtime. This allows an environment to express policy decisions globally and lazily, rather than hardcoding and distributing the decisions throughout the code. | https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR/Reference/PR_SetLibraryPath | CC-MAIN-2018-43 | en | refinedweb |
Alternative Logging Frameworks for Application Servers: WildFly
Introduction
Welcome to the third part of our blog series on configuring application servers to use alternative logging frameworks. This time we turn our gaze to WildFly, the open source application server from JBoss. Again, we'll be configuring it to use Log4j2 and SLF4J with Logback.
For those of you who are looking for parts 1 and 2 of this series, find them here:
Part 1 - GlassFish
Part 2 - WebLogic
For reference, the environment used for this blog was: a 64-bit Linux VM, JDK 8u20, Log4j 2.0.2, Logback 1.1.2, SLF4J 1.7.7, WildFly 8.1, and building/programming done using NetBeans 8.0.
As with the previous entries in this series, I'll be assuming you have set up WildFly before attempting to follow this blog. For a simple guide on this, check out Jboss' one on clustering; it's pretty easy to follow and will get you set up with a basic environment if you don't have one already.
Following the norm of this series, let's begin with Log4j2.
Log4j2
Log4J2 is pretty easy to set up with WildFly on a per deployment basis; WildFly does not officially support using your own loggers on a domain wide basis, meaning that the configuration detailed in this blog will only apply to this application in particular (just as when using SLF4J and Logback with WebLogic as described in my previous blog).
As we are configuring this on a "per deployment" basis (or per application if you prefer), we do not need to import any jars into WildFly; we package them with the application. With this in mind, let's create a test application (if you've read the previous two blogs in this series, you'll recognise the program):
- Create a Server in NetBeans that maps to your WildFly installation (you'll need the WildFly plugin: tools, plugins, Available Plugins, WildFly Application Server)
- Create a Java Web application
- Create a Servlet in this project.
- Download Log4j and add the following two JARs into the project:
- log4j-api-2.0.2.jar
- log4j-core-2.0.2.jar
- Add the following import statement into your servlet:
import org.apache.logging.log4j.*;
- Declare and initialise the logger:
private static Logger logger = LogManager.getLogger(TestServlet.class.getName());
- Edit the processRequest method to this:
protected void processRequest(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    response.setContentType("text/html;charset=UTF-8");
    try (PrintWriter out = response.getWriter()) {
        logger.trace("Tracing!");
        logger.debug("Debugging!");
        logger.info("Information!");
        logger.warn("Warnings!");
        logger.error("Oh noes!");
    }
}
The index.html page just needs a button to call the servlet:

<html>
    <head>
        <title>Testing</title>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
    </head>
    <body>
        <form name="testForm" action="TestServlet">
            <input type="submit" value="push me!" name="testybutton" />
        </form>
    </body>
</html>
With the programming done, we need to create a configuration file for Log4j to use. Shaking it up a little from the previous blogs in the series, we'll create a configuration file, log4j2.xml, that prints log messages of any level to a file in my home directory (replace the /home/andrew directory with wherever you want the log file to be stored):
- In the WEB-INF folder of your NetBeans project, create a folder called classes
- Create an xml file in here named log4j2.xml, and populate it with this:
<?xml version="1.0" encoding="UTF-8"?> <Configuration status="WARN"> <Appenders> <File name="FileLogger" fileName="/home/andrew/wildfly.log"> <PatternLayout pattern="%d{HH:mm} [%t] %-5level %logger{36} - %msg%n"/> </Console> </Appenders> <Loggers> <Root level="trace"> <AppenderRef ref="FileLogger"/> </Root> </Loggers> </Configuration>
SLF4J and Logback
Just as before, to use our own SLF4J and Logback binding and configuration, we'll configure for a per deployment basis. First things first, let's get our application to use Logback:
- Download SLF4J and Logback, and add the following JARs to your web application:
- slf4j-api
- logback-core
- logback-classic
- Remove the Log4j package and instead import:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
- Alter the logger initialisation to:
private static Logger logger = LoggerFactory.getLogger(TestServlet.class.getName());
Create an xml file in the WEB-INF/classes directory called logback.xml, and fill it with this configuration (it performs the same function as the Log4j2 one):
<configuration>
    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>/home/andrew/wildfly.log</file>
        <append>true</append>
        <encoder>
            <Pattern>%d{HH:mm} [%thread] %-5level %logger{52} - %msg%n</Pattern>
        </encoder>
    </appender>
    <root>
        <level value="TRACE"/>
        <appender-ref
    </root>
</configuration>
Navigate to the bin directory of WildFly, and run the following command (replacing $ip and $port with your own specific values):
jboss-cli.sh -c controller=$ip:$port --gui
This will bring up a window, through which you can modify global WildFly settings. If you hadn't realised, WildFly will need to be running for you to run this command successfully. The setting we are after is in the logging subsystem. If you're in domain mode (like the guide I linked to at the top leads you to be in), you'll need to make the changes to the server group that you're deploying the application to.
Right click on the add-logging-api-dependencies setting, select write-attribute, and untick the value box. The cmd box at the top may catch your eye when you click OK as it fills with a command; this is the command that you would have to fill in if you were using the command line. Click submit in the top right hand corner, and the output should display "success".
Restart WildFly and lo! You're ready to deploy!
Wrapping Up
You should now have a grounding in how to use alternative logging with WildFly, affording you more logging flexibility in your work and endeavours. The final entry in this series will cover Apache Tomcat, so check back for it soon. In the meantime, read some of our other blogs on WildFly and JBoss Server:
WildFly 8.0.0.Final is Released
Securing JBoss EAP 6 - Implementing SSL
How to controll *all* JTA properties in JBoss AS 7 / EAP 6
Published at DZone with permission of Andrew Pielage . See the original article here.
On Sun, 23 Nov 1997, Roy T. Fielding wrote:
> >LocationMatch/Location do collapse double slashes, but I consider this to
> >be a bug. They are documented to work in the URI space, not in the
> >filespace.
>
> Yep, that's a bug. Dean's analysis matches what I would have said.
> >RFC1738, RFC1808, and Roy's new draft appear silent on the issue.
>
> "/" never equals "//". The only reason we collapse them for matches
> against Directory sections is security within the filesystem mapping.
> If the string is modified, the result should be a redirect or rejection.
> A "//" is meaningful for all resource namespaces not aligned with the
> filesystem, and that's the case for what mod_rewrite is doing.
> '?' :-)
Dw. | http://mail-archives.apache.org/mod_mbox/httpd-dev/199711.mbox/%[email protected]%3E | CC-MAIN-2018-43 | en | refinedweb |
TestCafe provides automated cross-browser testing for the modern web development stack. It is a pure Node.js solution for testing web applications and takes care of every stage: starting browsers, running tests, gathering test results and generating reports.
v0.21.0
AndreyBelym released this
v0.21.0 (2018-8-2)
Enhancements
⚙️
Test Web Pages Served Over HTTPS (#1985)
Some browser features (like Service Workers, Geolocation API, ApplePaySession, or SubtleCrypto) require a secure origin. This means that the website should use the HTTPS protocol.
Starting with v0.21.0, TestCafe can serve proxied web pages over HTTPS. This allows you to test pages that require a secure origin.
To enable HTTPS when you use TestCafe through the command line, specify the --ssl flag followed by the HTTPS server options. The most commonly used options are described in the TLS topic in the Node.js documentation.
testcafe --ssl pfx=path/to/file.pfx;rejectUnauthorized=true;...
When you use a programming API, pass the HTTPS server options to the createTestCafe method.
'use strict';

const createTestCafe        = require('testcafe');
const selfSignedSertificate = require('openssl-self-signed-certificate');

let runner = null;

const sslOptions = {
    key:  selfSignedSertificate.key,
    cert: selfSignedSertificate.cert
};

createTestCafe('localhost', 1337, 1338, sslOptions)
    .then(testcafe => {
        runner = testcafe.createRunner();
    })
    .then(() => {
        return runner
            .src('test.js')

            // Browsers restrict self-signed certificate usage unless you
            // explicitly set a flag specific to each browser.
            // For Chrome, this is '--allow-insecure-localhost'.
            .browsers('chrome --allow-insecure-localhost')
            .run();
    });
See Connect to TestCafe Server over HTTPS for more information.
⚙️
Construct Screenshot Paths with Patterns (#2152)
You can now use patterns to construct paths to screenshots. TestCafe provides a number of placeholders you can include in the path, for example, ${DATE}, ${TIME}, ${USERAGENT}, etc. For a complete list, refer to the command line --screenshot-path-pattern flag description.
You specify a screenshot path pattern when you run tests. Each time TestCafe takes a screenshot, it substitutes the placeholders with actual values and saves the screenshot to the resulting path.
The following example shows how to specify a screenshot path pattern through the command line:
testcafe all test.js -s screenshots -p "${DATE}_${TIME}/test-${TEST_INDEX}/${USERAGENT}/${FILE_INDEX}.png"
When you use a programming API, pass the screenshot path pattern to the runner.screenshots method.
runner.screenshots('reports/screenshots/', true, '${TEST_INDEX}/${OS}/${BROWSER}-v${BROWSER_VERSION}/${FILE_INDEX}.png');
⚙️
Add Info About Screenshots and Quarantine Attempts to Custom Reports (#2216)
Custom reporters can now access screenshots' data and the history of quarantine attempts (if the test run in the quarantine mode).
The following information about screenshots is now available:
- the path to the screenshot file,
- the path to the thumbnail image,
- the browser's user agent,
- the quarantine attempt number (if the screenshot was taken in the quarantine mode),
- whether the screenshot was taken because the test failed.
If the test was run in the quarantine mode, you can also determine which attempts failed and passed.
Refer to the reportTestDone method description for details on how to access this information.
Bug Fixes
- HTML5 drag events are no longer simulated if event.preventDefault is called for the mousedown event (#2529)
- File upload no longer causes an exception when there are several file inputs on the page (#2642)
- File upload now works with inputs that have the required attribute (#2509)
- The load event listener is no longer triggered when added to an image (testcafe-hammerhead/#1688)
v0.21.0-alpha.1
AndreyBelym released this
Bump version (v0.21.0-alpha.1) (#2679)
v0.20.5
AndreyBelym released this
v0.20.5 (2018-7-18)
Bug fixes
- The buttons property was added to the MouseEvent instance (#2056)
- Response headers were converted to lowercase (#2534)
- Updated flow definitions (#2053)
- An AttributesWrapper instance is now updated when the element's property specifies the disabled attribute (#2539)
- TestCafe no longer hangs when it redirects from a tested page to the 'about:error' page with a hash (#2371)
- TestCafe now reports a warning for a mocked request if CORS validation failed (#2482)
- Prevented situations when a request logger tries to stringify a body that is not logged (#2555)
- The Selector API now reports NaN instead of integer when type validation fails (#2470)
- Enabled noImplicitAny and disabled skipLibCheck in the TypeScript compiler (#2497)
- Pages with rel=prefetch links no longer hang during test execution (#2528)
- Fixed the TypeError: this.res.setHeader is not a function error in Firefox (#2438)
- The formtarget attribute was overridden (testcafe-hammerhead/#1513)
- fetch.toString() now equals function fetch() { [native code] } (testcafe-hammerhead/#1662)
v0.20.5-alpha.1
AndreyBelym released this
Bump version (v0.20.5-alpha.1) (#2611)
v0.20.4
AndreyBelym released this
v0.20.4 (2018-6-25)
Enhancements
TestCafe now takes screenshots using browsers' debug protocols (#2492)
Bug fixes
- fetch requests are now correctly proxied in a specific case (testcafe-hammerhead/#1613)
- Resources responding with a 304 HTTP status code and with the 'content-length: ' header are proxied correctly now (testcafe-hammerhead/#1602)
- The transfer argument of window.postMessage is passed correctly now (testcafe-hammerhead/#1535)
- Incorrect focus events order in IE has been fixed (#2072)
v0.20.3
AndreyBelym released this
v0.20.3 (2018-6-6)
Enhancements
⚙️
Add TS definitions to the Docker image (#2481)
Bug fixes
- Selection in a contenteditable div now works properly in a specific scenario (#2365)
- A collision related to several moment-duration-format package versions is now fixed (#1750)
- TestCafe now reports a warning when saving several screenshots at the same path (#2213)
- A regression related to wrongly processed document.write in IE11 is now fixed (#2469)
- An out of memory crash on calling console methods is now fixed (testcafe-hammerhead/#1546)
- The Click action for an element with 1px height or width works properly now (#2020)
- Touch emulation for the latest Google Chrome was fixed (#2448)
v0.20.2
AndreyBelym released this
v0.20.1
AndreyBelym released this
v0.20.1 (2018-6-21)
⚙️
Typescript definitions for new features from v0.20.0 have been added (#2428)
Bug fixes
- Now sites with the overridden Element.prototype.matches method work properly (#2241)
- window.Blob now returns a correct result when an Array of ArrayBuffer is passed as a parameter (testcafe-hammerhead/#1599)
- Firefox Shield popup is not shown during test execution now (#2421)
v0.20.1-alpha.1
AndreyBelym released this
Bump version (alpha) (#2422)
v0.20.0-alpha.4
AndreyBelym released this
0.20.0-alpha.4 (close #2384, close #2382, close #2070) (#2390)
v0.20.0-alpha.3
AndreyBelym released this
update hammerhead (#2368) * update hammerhead * bump version * Downgrade Edge version in client tests * renaming
v0.20.0-alpha.2
AndreyBelym released this
Bump version (alpha) (#2357)
v0.20.0-alpha.1
AndreyBelym released this
0.20.0-alpha.1 (#2328)
v0.19.2-alpha3
AndreyBelym released this
Bump version (alpha) (#2300)
v0.19.2-alpha2
AndreyBelym released this
0.19.2-alpha2 (#2283)
v0.19.2-alpha1
AndreyBelym released this
Update legacy api; Bump version (alpha) (#2269)
v0.19.2-dev20180323
AndreyBelym released this
0.19.2-dev20180323 (#2246)
v0.19.2-dev20180316
AndreyBelym released this
0.19.2-dev20180316 (close #2123) (#2223)
v0.19.1
AndreyBelym released this
v0.19.1 (2018-3-13)
Backward compatibility with the legacy test syntax has been restored (#2210)
Bug Fixes
- The document.all property is now overridden (testcafe-hammerhead/#1046)
- Proxying properties in async class methods is now supported (testcafe-hammerhead/#1510)
- Fixed wrong proxying of a localStorage check in WebWorkers (testcafe-hammerhead/#1510)
- Function wrappers no longer break is-defined checks (testcafe-hammerhead/#1496)
v0.19.1-dev20180305
AndreyBelym released this
0.19.1-dev20180305 (#2192)
v0.19.0
AndreyBelym released this
v0.19.0 (2018-3-1)
TestCafe Live: See instant feedback when working on tests (#1624)
We have prepared a new tool for rapid test development.
TestCafe Live provides a service that keeps the TestCafe process and browsers opened the whole time you are working on tests. Changes you make in code immediately restart the tests. That is, TestCafe Live allows you to see test results instantly.
For more information, see TestCafe Live.
Enhancements
⚙️
Taking Screenshots of Individual Page Elements (#1496)
We have added the t.takeElementScreenshot action that allows you to take a screenshot of an individual page element.
import { Selector } from 'testcafe';

fixture `My fixture`
    .page ``;

test('Take a screenshot of a fieldset', async t => {
    await t
        .click('#reusing-js-code')
        .click('#continuous-integration-embedding')
        .takeElementScreenshot(Selector('fieldset').nth(1), 'my-fixture/important-features.png');
});
This action provides additional customization that allows you to position the center of the screenshot or crop it. For more information, see the documentation.
Note that if the screenshot directory is not specified with the runner.screenshots API method or the screenshots command line option, the
t.takeElementScreenshot action will be ignored.
⚙️
Filtering Elements by Their Visibility (#1018)
You can now filter the selector's matching set to leave only visible or hidden elements. To do this, use the filterVisible and filterHidden methods.
import { Selector } from 'testcafe';

fixture `My fixture`
    .page ``;

test('Filter visible and hidden elements', async t => {
    const inputs        = Selector('input');
    const hiddenInput   = inputs.filterHidden();
    const visibleInputs = inputs.filterVisible();

    await t
        .expect(hiddenInput.count).eql(1)
        .expect(visibleInputs.count).eql(11);
});
⚙️
Finding Elements by the Exact Matching Text (#1292)
The current selector's withText method looks for elements whose text content contains the specified string. With this release, we have added the withExactText method that performs search by strict match.
import { Selector } from 'testcafe';

fixture `My fixture`
    .page ``;

test('Search by exact text', async t => {
    const labels       = Selector('label');
    const winLabel     = labels.withExactText('Windows');
    const reusingLabel = labels.withText('JavaScript');

    await t
        .expect(winLabel.exists).ok()
        .expect(reusingLabel.exists).ok();
});
⚙️
Using Decorators in TypeScript Code (#2117) by @pietrovich
TestCafe now allows you to use decorators when writing tests in TypeScript.
Note that decorators are still an experimental feature in TypeScript.
Bug Fixes
- TestCafe can now scroll a webpage when body has a scroll bar (#1940)
- Firefox no longer hangs with a dialog asking to set it as the default browser (#1926)
- Legacy API no longer freezes because of an unexpected error (#1790)
- Click on an element that was hidden and then recreated on timeout now works correctly (#1994)
- TestCafe now correctly finds browsers in headless mode on macOS when tests are executing concurrently (#2035)
- When roles are switched using the preserveUrl flag, the local storage now restores correctly (#2015)
- TestCafe progress bar is no longer visible on screenshots (#2076)
- Window manipulations now wait for page loading (#2000)
- All toolbars are now hidden when taking screenshots (#1445)
- TestCafe now works normally with the latest version of CucumberJS (#2107)
- Fixed an error connected to file permissions on Ubuntu (#2144)
- Browser manipulations can now be executed step-by-step (#2150)
- Fixed a bug where a page wouldn't load because of an error in generateCallExpression (testcafe-hammerhead/#1389)
- Now the overridden Blob constructor doesn't process data unnecessarily (testcafe-hammerhead/#1359)
- Now the target attribute is not set for a button after a click on it (testcafe-hammerhead/#1437)
- The sandbox, target and style attributes are now cleaned up (testcafe-hammerhead/#1448)
- A RangeError with the message Maximum call stack size exceeded is no longer raised (testcafe-hammerhead/#1452)
- A script error is no longer raised on pages that contain a beforeunload handler (testcafe-hammerhead/#1419)
- Fixed wrong overriding of an event object (testcafe-hammerhead/#1445)
- Illegal invocation error is no longer raised when calling the FileListWrapper.item method (testcafe-hammerhead/#1446) by @javiercbk
- A script error is no longer raised when Node.nextSibling is null (testcafe-hammerhead/#1469)
- The isShadowUIElement check is now performed for Node.nextSibling when a node is not an element (testcafe-hammerhead/#1465)
- The toString function is now overridden for anchor elements (testcafe-hammerhead/#1483)
v0.19.0-alpha2
AndreyBelym released this
Bump version (alpha) (#2175)
v0.19.0-alpha1
AndreyBelym released this
Bump version (alpha) (#2138)
v0.18.7-dev20180206
AndreyBelym released this
Bump version (dev) (#2102) * Bump version (dev) * set firefox version
v0.18.7-dev20180201
AndreyBelym released this
Bump version (dev) (#2094)
v0.18.7-dev20180124
AlexanderMoskovkin released this
0.18.7-dev20180124 (close #1959) (#2068) * 0.18.7-dev20180124 * fix test
v0.18.7-dev20180117
AndreyBelym released this
0.18.7-dev20180117 (close #1897) (#2049)
0.18.7-dev20180112
AndreyBelym released this
update hammerhead and remove unnecessary util methods (#2041) * update hammerhead and remove unnecessary util methods * fix tests * update version
v0.18.6
AndreyBelym released this
v0.18.6 (2017-12-28)
Enhancements
Chrome DevTools are opened in a separate window during test execution (#1964)
Bug Fixes
- In Chrome, disabled showing the 'Save password' prompt after typing text in the input of the password type (#1913)
- Now TestCafe correctly scrolls a page to an element when this page has scrollbars (#1955)
- Fixed the 'Cannot redefine property %testCafeCore%' script error (#1996)
- TestCafe now rounds off dimension values when it calculates scrolling (#2004)
- In Chrome, the 'Download multiple files' dialog no longer prevents test execution process (#2017)
- TestCafe now closes a connection to the specified resource if the destination server hangs up (testcafe-hammerhead/#1384)
- Proxying the location's href property now works correctly (testcafe-hammerhead/#1362)
- The proxy now supports https requests for node 8.6 and higher (testcafe-hammerhead/#1401)
- Added support for pages with the super keyword (testcafe-hammerhead/#1390)
- The proxy now properly emulates native browser behavior for non-success status codes (testcafe-hammerhead/#1397)
- The proxied ServiceWorker.register method now returns a rejected Promise for unsecure urls (testcafe-hammerhead/#1411)
- Added support for javascript protocol expressions applied to the location's properties (testcafe-hammerhead/#1274)
v0.18.6-dev20171222
AndreyBelym released this
0.18.6-dev2017122 (#2001)
v0.18.6-dev20171211
AndreyBelym released this
0.18.6-dev20171211 (#1995)
[docs] Add a topic about Jenkins integration (#1953) * Add a topic about Jenkins integration * Address Boris' remarks * Address Alexander's remarks * Add VCS integration and test results * Change the test results screenshot
Downloads
0.18.6-dev20171129 (#1970) * dev * update browsers * revert update browsers
Downloads
v0.18.5 (2017-11-23): Security Update
Vulnerability Fix (testcafe-legacy-api/#26)
We have fixed a vulnerability related to the dependency on uglify-js v1.x. We used it in our testcafe-legacy-api module that provides backward compatibility with old API from the paid TestCafe version.
Thus, this vulnerability affected only those who run old tests created with the commercial version of TestCafe in the new open-source TestCafe.
Downloads
v0.18.4 (2017-11-17)
Enhancements
⚙️
WebSockets support (testcafe-hammerhead/#911)
TestCafe now provides full-featured WebSockets support (wss and ws protocols, request authentication, etc.).
Bug Fixes
- TestCafe can now click on elements that are located under the Status bar and have the transition css property specified (#1934)
- Added support for pages with the rest and default parameter instructions (testcafe-hammerhead/#1336)
- Pages with several base tags are supported now (testcafe-hammerhead/#1349)
- Redirects from cross-domain to same-domain pages are processed now (#1922)
- Contenteditable custom elements are correctly recognized now (testcafe-hammerhead/#1366)
- Internal headers for fetch requests are set correctly now (testcafe-hammerhead/#1360)
Downloads
* prepared dev version to publish * update hammerhead
Downloads
v0.18.3 (2017-11-08)
Bug Fixes
- Readonly instrumented DOM properties are now set correctly for plain objects (testcafe-hammerhead/#1351).
- The HTMLElement.style property is proxied on the client side now (testcafe-hammerhead/#1348).
- The Refresh response header is proxied now (testcafe-hammerhead/#1354).
Downloads
v0.18.2 (2017-10-26)
Bug Fixes
- Screenshots are now captured correctly when using High DPI monitor configurations on Windows (#1896)
- Fixed the Cannot read property 'getItem' of null error which is raised when a console message was printed in an iframe before it's loaded completely (#1875)
- Fixed the Content iframe did not load error which is raised if an iframe reloaded during the switchToIframe command execution (#1842)
- Selector options are now passed to all derivative selectors (#1907)
- Fixed a memory leak in IE related to live node collections proxying (testcafe-hammerhead/#1262)
- DocumentFragment nodes now are correctly processed (testcafe-hammerhead/#1334)
Downloads
v0.18.1 (2017-10-17): a recovery release following v0.18.0
--reporter flag name fixed (#1881)
In v0.18.0, we have accidentally changed the --reporter CLI flag to
--reporters. In this recovery release, we roll back to the previous flag name.
Compatibility with RequireJS restored (#1874)
Changes in v0.18.0 made TestCafe incompatible with RequireJS. It is fixed in this recovery release.
We apologize for any inconvenience.
Downloads
v0.18.0 (2017-10-10)
Enhancements
⚙️
Testing in headless Firefox
In addition to Chrome headless, we have added support for testing in headless Firefox (version 56+).
testcafe firefox:headless tests/sample-fixture.js
runner
    .src('tests/sample-fixture.js')
    .browsers('firefox:headless')
    .run()
    .then(failedCount => {
        // ...
    });
⚙️
Outputting test results to multiple channels (#1412)
If you need a report to be printed in the console and saved to a
.json file,
you can now do this by specifying multiple reporters when running tests.
testcafe all tests/sample-fixture.js -r spec,json:report.json
const stream = fs.createWriteStream('report.json');

runner
    .src('tests/sample-fixture.js')
    .browsers('chrome')
    .reporter('spec')
    .reporter('json', stream)
    .run()
    .then(failedCount => {
        stream.end();
    });
⚙️
Entering the debug mode when a test fails (#1608)
TestCafe can now automatically switch to the debug mode whenever a test fails. Test execution will be paused, so that you can explore the tested page to determine the cause of the fail.
To enable this behavior, use the
--debug-on-fail flag in the command line or the
debugOnFail option in the API.
testcafe chrome tests/fixture.js --debug-on-fail
runner.run({ debugOnFail: true });
⚙️
Interacting with the tested page in debug mode (#1848)
When debugging your tests, you can now interact with the tested page. Click the Unlock page button in the page footer to enable interaction.
After that, you can do anything with the webpage. This gives you additional powers to detect problems in your tests.
Click Resume to continue running the test or click Next Step to step over.
⚙️
Chrome and Firefox are opened with clean profiles by default (#1623)
TestCafe now opens Chrome and Firefox with empty profiles to eliminate the influence of profile settings and extensions on test running.
However, you can return to the previous behavior by using the
:userProfile browser option.
testcafe firefox:userProfile tests/test.js
runner .src('tests/fixture1.js') .browsers('firefox:userProfile') .run();
⚙️ Customizable timeout to wait for the window.load event (#1645)
Previously, TestCafe started a test when the
DOMContentLoaded event was raised. However, there are many pages that execute some kind of initialization code on the
window.load event (which is raised after
DOMContentLoaded because it waits for all stylesheets, images and subframes to load). In this instance, you need to wait for the
window.load event to fire before running tests.
With this release, TestCafe waits for the
window.load event for
3 seconds.
We have also added a
pageLoadTimeout setting that allows you to customize this interval.
You can set it to
0 to skip waiting for
window.load.
The following examples show how to use the
pageLoadTimeout setting from the command line and API.
testcafe chrome test.js --page-load-timeout 0
runner.run({ pageLoadTimeout: 0 });
You can also use the
setPageLoadTimeout method in test API to set the timeout for an individual test.
fixture `Page load timeout`
    .page ``;

test(`Page load timeout`, async t => {
    await t
        .setPageLoadTimeout(0)
        .navigateTo('');
});
⚙️
Access messages output by the tested app to the browser console (#1738)
You can now obtain messages that the tested app outputs to the browser console. This is useful if your application or the framework it uses posts errors, warnings or other informative messages into the console.
Use the
t.getBrowserConsoleMessages method that returns the following object.
{
    error: ["Cannot access the 'db' database. Wrong credentials.", '...'], // error messages
    warn:  ['The setTimeout property is deprecated', '...'],               // warning messages
    log:   ['[09:12:08] Logged in', '[09:25:43] Changes saved', '...'],    // log messages
    info:  ['The application was updated since your last visit.', '...']   // info messages
}
Note that this method returns only messages posted via the
console.error,
console.warn,
console.log and
console.info methods. Messages output by the browser (like when an unhandled exception occurs on the page) will not be returned.
For instance, consider the React's typechecking feature, PropTypes. You can use it to check that you assign valid values to the component's props. If a
PropTypes rule is violated, React posts an error into the JavaScript console.
The following example shows how to check the React prop types for errors using the
t.getBrowserConsoleMessages method.
// check-prop-types.js
import { t } from 'testcafe';

export default async function () {
    const { error } = await t.getBrowserConsoleMessages();

    await t.expect(error[0]).notOk();
}

// test.js
import { Selector } from 'testcafe';
import checkPropTypes from './check-prop-types';

fixture `react example`
    .page `` // .afterEach(() => checkPropTypes());

test('test', async t => {
    await t
        .typeText(Selector('.form-control'), 'devexpress')
        .click(Selector('button').withText('Go'))
        .click(Selector('h4').withText('Organizations'));
});
⚙️
Defining drag end point on the destination element (#982)
The
t.dragToElement action can now drop a dragged element at any point inside the destination element.
You can specify the target point using the
destinationOffsetX and
destinationOffsetY options.
import { Selector } from 'testcafe';

const fileIcon      = Selector('.file-icon');
const directoryPane = Selector('.directory');

fixture `My Fixture`
    .page ``;

test('My Test', async t => {
    await t
        .dragToElement(fileIcon, directoryPane, {
            offsetX: 10,
            offsetY: 10,
            destinationOffsetX: 100,
            destinationOffsetY: 50,
            modifiers: {
                shift: true
            }
        });
});
⚙️
TestCafe exits gracefully when the process is interrupted (#1378)
Previously, TestCafe left browsers open when you exited the process by pressing
Ctrl+C in the terminal.
Now TestCafe exits gracefully closing all browsers opened for testing.
Bug Fixes
- Tests no longer hang in Nightmare (#1493)
- The focus event is now raised when clicking links with tabIndex="0" (#1803)
- Headless Chrome processes no longer hang after test runs (#1826)
- setFilesToUpload no longer throws a RangeError on websites that use Angular (#1731)
- Fixed a bug where an iframe got wrong origin (#1753)
- document.open now doesn't throw an error if document.defaultView is null (testcafe-hammerhead/#1272)
- No error is thrown when the handler passed to addEventListener is undefined (testcafe-hammerhead/#1251)
- An error is no longer raised if the processed element is not extendible (testcafe-hammerhead/#1300)
- Fixed a bug where an onclick handler did not work after click on a Submit button (testcafe-hammerhead/#1291)
- Images with style = background-image: url("img.png"); are now loaded correctly (testcafe-hammerhead/#1212)
- Documents can now contain two ShadowUI roots (testcafe-hammerhead/#1246)
- HTML in an overridden document.write function is now processed correctly (testcafe-hammerhead/#1311)
- Elements processing now works for a documentFragment as it is added to the DOM (testcafe-hammerhead/#1334)
Downloads
Update hammerhead; Bump version (#1823)
Downloads
[prerelease-0.17.3-dev20170913] Bump version (#1788)
Downloads
Bug Fixes
- Taking a screenshot on teamcity agent works correctly now (#1625)
- It is possible to run tests on remote devices from a docker container (#1728)
- TestCafe compiles TypeScript tests correctly now if Mocha or Jest typedefs are included in the project (#1537)
- Running on remote devices works correctly on MacOS now (#1732)
- A target directory is checked before creating a screenshot (#1551)
- TypeScript definitions allow you to send any objects as dependencies for ClientFunctions now. (#1713)
- The second MutationObserver callback argument is not missed now (testcafe-hammerhead/#1268)
- Link's href property with an unsupported protocol is set correctly now (testcafe-hammerhead/#1276)
- The document.documentURI property is now processed correctly in IE (testcafe-hammerhead/#1270)
- JSON.stringify and Object.keys functions now work properly for a MessageEvent instance (testcafe-hammerhead/#1277)
Downloads
v0.17.2-dev20170831 Fix EOL in TS files (#1748)
Downloads
Bug Fixes
- The hover action no longer fails for elements that hide on mouseover (#1679)
- SelectText and SelectTextAreaContent TypeScript definitions now match the documentation (#1697)
- TestCafe now finds browsers installed for the current user on Windows (#1688)
- TestCafe can now resize MS Edge 15 window (#1517)
- Google Chrome Canary has a dedicated chrome-canary alias now (#1711)
- Test no longer hangs when takeScreenshot is called in headless Chrome Canary on Windows (#1685)
- Tests now fail if the uncaughtRejection exception is raised (#1473)
- TypeScript tests now run on macOS with no errors (#1696)
- Test duration is now reported accurately (#1674)
- XHR requests with an overridden setRequestHeader function returned by the XhrSandbox.openNativeXhr method are now handled properly (testcafe-hammerhead/#1252)
- HTML in an overridden document.write function is now processed correctly (testcafe-hammerhead/#1218)
- Object.assign is now overridden (testcafe-hammerhead/#1208)
- Scripts with async functions are processed correctly now (testcafe-hammerhead/#1260)
Downloads
Enhancements
⚙️
Testing Electron applications (testcafe-browser-provider-electron)
We have created a browser provider that allows you to test Electron applications with TestCafe.
Getting it to work is simple. First, install the browser provider plugin from npm.
npm install testcafe-browser-provider-electron
We assume that you have a JavaScript application that you wish to run in Electron.
Create a
.testcafe-electron-rc file that contains configurations for the Electron plugin.
The only required setting here is
mainWindowUrl. It's a URL (or path) to the main window page relative to the application directory.
{ "mainWindowUrl": "./index.html" }
Place this file into the application root directory.
At the next step, install the Electron module.
npm install electron@latest
Now you are ready to run tests. Specify the
electron browser name and the application path
at the test launch.
testcafe "electron:/home/user/electron-app" "path/to/test/file.js"
testCafe
    .createRunner()
    .src('path/to/test/file.js')
    .browsers('electron:/home/user/electron-app')
    .run();
Note that you can also test Electron app's executable files. To learn more about the Electron browser provider, see the plugin readme.
⚙️
Concurrent test execution (#1165)
We've added concurrent test launch. This makes a test batch complete faster.
By default TestCafe launches one instance of each specified browser. Tests run one by one in each of them.
Enable concurrency and TestCafe will launch multiple instances of each browser. It will distribute the test batch among them. The tests will run in parallel.
To enable concurrency, add -c in the command line. Or use the runner.concurrency() API method.
Specify the number of instances to invoke for each browser.
testcafe -c 3 chrome tests/test.js
var testRunPromise = runner
    .src('tests/test.js')
    .browsers('chrome')
    .concurrency(3)
    .run();
For details, see Concurrent Test Execution.
⚙️
Further improvements in automatic waiting mechanism (#1521)
We have enhanced the waiting mechanism behavior in certain scenarios where you still used to need
wait actions.
Now automatic waiting is much smarter and chances that you need to wait manually are diminished.
⚙️
User roles preserve the local storage (#1454)
TestCafe now saves the local storage state when switching between roles. You get the same local storage content you left when you switch back.
This is useful for testing websites that perform authentication via local storage instead of cookies.
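For example, a role that signs in through a form and relies on local storage for its session might be exercised like this (the URL, selectors, and credentials below are placeholders, not taken from the release notes):
import { Role } from 'testcafe';

const regularUser = Role('http://example.com/login', async t => {
    await t
        .typeText('#login', 'TestUser')
        .typeText('#password', 'testpass')
        .click('#sign-in'); // the app stores its session token in local storage here
});

fixture `Local storage and roles`
    .page `http://example.com/app`;

test('Switch between roles', async t => {
    await t
        .useRole(regularUser)          // local storage state is saved for the role
        .useRole(Role.anonymous())     // switch away...
        .useRole(regularUser);         // ...and back: the saved local storage content is restored
});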
Bug Fixes
- Selector's withAttribute method supports search by strict match (#1548)
- Description for the path parameter of the t.takeScreenshot action has been corrected (#1515)
- Local storage is now cleaned appropriately after the test run. (#1546)
- TestCafe now checks element visibility with a timeout when the target element's style.top is negative (#1185)
- Fetching an absolute CORS URL now works correctly. (#1629)
- Add partial support for proxying live node collections (the GetElementsByTagName method) (#1442)
- TypeScript performance has been enhanced. (#1591)
- The right port is now applied to a cross-domain iframe location after redirect. (testcafe-hammerhead/#1191)
- All internal properties are marked as non-enumerable. (testcafe-hammerhead/#1182)
- Support proxying pages with defined referrer policy. (testcafe-hammerhead/#1195)
- WebWorker content is now correctly proxied in FireFox 54. (testcafe-hammerhead/#1216)
- Code instrumentation for the document.activeElement property works properly if it is null. (testcafe-hammerhead/#1226)
- length, item and namedItem are no longer own properties of LiveNodeListWrapper. (testcafe-hammerhead/#1222)
- The scope option in the serviceWorker.register function is now processed correctly. (testcafe-hammerhead/#1233)
- Promises from a fetch request are now processed correctly. (testcafe-hammerhead/#1234)
- Fix transpiling for the for..of loop to support browsers without window.Iterator. (testcafe-hammerhead/#1231)
Downloads
Bump version (#1600) \cc @AlexanderMoskovkin
Downloads
Bug Fixes
- Typing text now raises the onChange event in latest React versions. (#1558)
- Screenshots can now be taken when TestCafe runs from the Docker image. (#1540)
- The native value property setters of HTMLInputElement and HTMLTextAreaElement prototypes are now saved. (testcafe-hammerhead/#1185)
- The name and namedItem methods of an HTMLCollection are now marked as non-enumerable. (testcafe-hammerhead/#1172)
- Code instrumentation of the length property runs faster. (testcafe-hammerhead/#979)
Downloads
Bug Fixes
- A typo in RoleOptions typedefs was fixed (#1541)
- TestCafe no longer crashes on node 4 with an unmet dependency (#1547)
- Markup imported via meta[rel="import"] is now processed. (testcafe-hammerhead/#1161)
- The correct context is passed to MutationObserver. (testcafe-hammerhead/#1178)
- The innerHtml property is no longer processed for elements that don't have this property. (testcafe-hammerhead/#1164)
Downloads
Enhancements
⚙ TypeScript support (#408)
In this release, we have added an ability to write tests in TypeScript. Using of TypeScript brings you all the advantages of strongly typed languages: rich coding assistance, painless scalability, check-as-you-type code verification and much more.
TestCafe bundles TypeScript declaration file with the npm package, so you have no need to install any additional packages.
Just create a .ts file with the import { Selector } from 'testcafe'; and write your test.
For details, see TypeScript Support
⚙ Support running in Chrome in headless mode and in device emulator (#1417)
Now TestCafe allows you to run your tests in Google Chrome in headless and device emulation modes.
Headless mode allows you to run tests in Chrome without any visible UI shell. To run tests in headless mode, use the
:headless postfix:
testcafe "chrome:headless" tests/sample-fixture.js
Device emulation mode allows you to check how your tests works on mobile devices via Chrome's built-in device emulator. To run tests in device emulation mode, specify
emulation: and device parameters:
testcafe "chrome:emulation:device=iphone 6" tests/sample-fixture.js
For details, see Using Chrome-specific Features.
⚙ Support HTML5 Drag and Drop (#897)
Starting with this release, TestCafe supports HTML5 drag and drop, so you can test elements with the draggable attribute.
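A minimal test for an element with the draggable attribute might look like this (the page and selectors are placeholders, not taken from the release notes):
import { Selector } from 'testcafe';

fixture `HTML5 drag and drop`
    .page `http://example.com/kanban`;

test('Move a card to another column', async t => {
    const card       = Selector('.card').withText('Write release notes'); // element with the draggable attribute
    const doneColumn = Selector('.column').withText('Done');

    await t
        .dragToElement(card, doneColumn)
        .expect(doneColumn.find('.card').withText('Write release notes').exists).ok();
});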
⚙ Fixed URL for opening remote browsers (#1476)
We have simplified the format of links that TestCafe generates when you run tests on remote browsers.
Now, you have no need to type a unique link for each test run, all the links became constant. So, it is easier now to run tests on a remote device repeatedly: you can run them by navigating a link from your browser history.
Bug Fixes
- No TestCafe UI on screenshots created during testing (#1357)
- mouseenter and mouseleave events are not triggered during cursor moving (#1426)
- The runner's speed option affects the speed of doubleClick action (#1486)
- Press action shortcuts work wrong if input's value ends with '.' or starts with '-.' (#1499)
- A test report has too small line length on Travis (#1469)
- Service messages with cookies do not have enough time to come to server before a new page is loaded (testcafe-hammerhead/#1086)
- The window.history.replaceState function is overridden incorrectly (testcafe-hammerhead/#1146)
- Hammerhead crashes if a script file contains a sourcemap comment (testcafe-hammerhead/#1052)
- The proxy should override the DOMParser.parseFromString method (testcafe-hammerhead/#1133)
- The fetch method should emulate the native behaviour on merging headers (testcafe-hammerhead/#1116)
- The EventSource requests are broken when used via proxy (testcafe-hammerhead/#1106)
- The code processing may cause syntax errors in some cases because of wrong location property wrapping (testcafe-hammerhead/#1101)
- When calling the fetch function without parameters, we should return its native result instead of window.Promise.reject (testcafe-hammerhead/#1099)
- The querySelector function is overridden incorrectly (testcafe-hammerhead/#1131)
Downloads
Update testcafe-hammerhead. Bump version. (#1494)
v0.21.1
Assets
v0.21.1 (2018-8-8)
Bug fixes
- RequestLogger.clear method no longer raises an error if it is called during a long running request (#2688)
- fetch request (#2686)
- document.implementation instance (testcafe-hammerhead/#1673)
- respond function are lower-cased now (testcafe-hammerhead/#1704)
| https://www.ctolib.com/article/releases/19651 | CC-MAIN-2018-43 | en | refinedweb
deadlydog (Member)
Content Count: 446
Joined
Last visited
Community Reputation: 170 Neutral
About deadlydog
- Rank: Member
Can't See Any Smoke Using Dynamic Particle System Framework (DPSF)
deadlydog replied to CoOlDud3's topic in General and Gameplay Programming
I have posted a response to your question at.
Recommended particle engine?
deadlydog replied to DvDmanDT's topic in Graphics and GPU Programming
Check out DPSF (Dynamic Particle System Framework). It's free, actively maintained, has great help docs and support, tutorials, and is super easy to integrate into any XNA project. Also, it supports both 2D and 3D particles, and works on Windows, Xbox 360, Windows Phone, and the Zune.
C# [NonSerializable] and Formatters namespace cannot be found [SOLVED]
deadlydog replied to deadlydog's topic in General and Gameplay ProgrammingHmmmm, yes, I tried creating a new test XNA 3.1 Game project, and it compiled and serialized fine. So the problem does seem to be specific to my one project. ....aahhhh, I just figured it out. I didn't notice, but in the compiler warning it says it is for the XBox 360 project. I have the compiler set to mixed mode, so it builds both the Windows and XBox 360 copy of the project, and it is only the XBox 360 copy the rejects the serializable attribute. I'm not sure why they wouldn't define the NonSerialized attribute and Formatters namespace for the XBox 360 assemblies, as I'm sure they could be used to transfer data to XBox Live and whatnot, but that was my problem. If I just do a Windows build then everything works perfectly. Thanks.
C# [NonSerializable] and Formatters namespace cannot be found [SOLVED]
deadlydog posted a topic in General and Gameplay ProgrammingHi there, I'm having 2 problems trying to get serialization working in C#, both of which are compile time errors: 1 - The first problem I'm having is I'm trying to make my C# class serializable. I have put the [Serializable] attribute in front of my public class declarations, and in front of all the classes/enumerations defined within them. If I do this alone, everything compiles fine. However, I store a handle to an XNA graphics device which I do not want serialized, so I have put the [NonSerialized] attribute in front of it, but when I compile it says "The type or namespace name 'NonSerialized' could not be found (are you missing a using directive or an assembly reference?)" and "The type or namespace name 'NonSerializedAttribute' could not be found (are you missing a using directive or an assembly reference?)". I have tried using [NonSerialized], [NonSerialized()], and [NonSerializedAttribute], but they all give me the same error. Here is a sample code snippet: using System; using System.Collections.Generic; using System.Diagnostics; // Used for Conditional Attributes using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Content; using System.IO; using System.Runtime.Serialization; namespace MyNamespace { ... [Serializable] public class MyClass : MyInterface { ... [NonSerialized] private GraphicsDevice mcGraphicsDevice = null; // Handle to the Graphics Device to draw to ... } } I have also included the System.Runtime.Serialization and System.Runtime.Serialization.Formatters.SOAP .dlls in my project, but it does not seem to make any difference. Also, I'm not sure if it makes a difference or not, but I'm compiling this project into a .dll, not an executable. 2 - My second problem is that when I try to test out serializing and deserializing, the compiler complains that it cannot find the System.Runtime.Serialization.Formatters namespace, it just gives me the error "The type or namespace name 'Formatters' does not exist in the namespace 'System.Runtime.Serialization' (are you missing an assembly reference?)". I get this error whether I try and include System.Runtime.Serialization.Formatters.SOAP or .Binary in the using statements at the top of the file, or if I declare the full namespace path when declaring my variables. Again, I have included the System.Runtime.Serialization and System.Runtime.Serialization.Formatters.SOAP .dlls in my project but it doesn't seems to make a difference. Here's a sample code snippet of this problem: DPSF; using DPSF.ParticleSystems; using System.IO; using System.Runtime.Serialization; //using System.Runtime.Serialization.Formatters; //using System.Runtime.Serialization.Formatters.Binary; //using System.Runtime.Serialization.Formatters.Soap; namespace TestNamespace { public class Game1 : Microsoft.Xna.Framework.Game { ... 
protected override void Initialize() { // TODO: Add your initialization logic here mcParticleSystem = new DefaultPointSpriteParticleSystemTemplate(null); mcParticleSystem.AutoInitialize(this.GraphicsDevice, this.Content); base.Initialize(); Stream stream = File.Open("ExplosionFlash.dat", FileMode.Create); System.Runtime.Serialization.Formatters.Binary.BinaryFormatter bformatter = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter(); bformatter.Serialize(stream, mcParticleSystem); stream.Close(); mcParticleSystem = null; stream = File.Open("ExplosionFlash.dat", FileMode.Open); bformatter = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter(); mcParticleSystem = (DefaultPointSpriteParticleSystemTemplate)bformatter.Deserialize(stream); stream.Close(); } } } Any help or suggestions would be greatly appreciated. Thanks in advance. [Edited by - deadlydog on March 7, 2010 3:32:53 PM]
[.net] Particle Systems in C#. (Optimizations)
deadlydog replied to Jmelgaard's topic in General and Gameplay Programming
I know this topic is pretty old, but if you're looking for a particle system framework to use instead of creating your own, check out DPSF (Dynamic Particle System Framework) at. It's for creating particle systems in C# and XNA.
XNA Redistribution / Packaging
deadlydog replied to Andy474's topic in Graphics and GPU Programming
If you want to put the game on your laptop (not the source code), then you can package your game into a self-installing package that will also install the XNA redistributables if required. To do this, from Visual Studio go into the Build menu and select Publish [application name]. This will create a folder, an .applicationdata file, and a .exe file in the destination you specify. Just zip those up, copy them to your laptop, and then run the .exe file and it will install your game for you. Not sure about your question 2. I haven't heard anything before about being able to run XNA apps on the web. You might have more luck posting your XNA questions on the XNA Creators Club Forums.
Allocating particles in memory at run-time not slow?
deadlydog replied to deadlydog's topic in General and Gameplay ProgrammingQuote:Original post by Adam_42 Quote:So as the particle system simulation runs you can notice that particles suddenly jump to the front of the array and are drawn on top of other particles. The standard way of making particles look right regardless of rendering order is to use additive blending instead of standard alpha blending. How well this works depends on what the particles look like. It's best for bright particles on a dark background. If you don't do that you probably need to depth sort, because when the camera moves round the particle system it will usually make the draw order look wrong. Yeah, I'm designing a general purpose particle system framework so I would like it to be able to display both alpha-blended and non-alpha-blended particles correctly. Turning on depth sorting (i.e. RenderState.DepthBufferWriteEnable = true) fixes the problem for opaque particles, but introduces a new problem for semi-transparent ones; the world is shown through the transparent part of the particles, instead of showing the other particles behind it. See a screenshot of the problem here. And this is what it should look like (depth sorting disabled) (still has original problem of particles being suddenly drawn overtop of other particles though). I don't think this problem is related to my issue of the particles being moved around in the array and hence being drawn in a different order (since we have depth sorting enabled here), but I'm not sure what is going on. I'm hoping it's something simple like an additional renderstate property needs to be set or something. If anybody has any ideas why this is happening or how to fix it I would appreciate any suggestions. Thanks Original post by Antheus Quote: Quote: Original post by deadlydog Can anyone think of a simple solution to this problem? For rendering optimizations, read this. Thanks for the tips. nice article. [Edited by - deadlydog on May 7, 2009 10:28:29 AM]
Allocating particles in memory at run-time not slow?
deadlydog replied to deadlydog's topic in General and Gameplay ProgrammingQuote:Original post by deadlydog But now that I've implemented method 4 it seems to be the all around winner, winning with both static and random lifetimes for all active particle amounts almost every time. Number 4 is definitely wins when it comes to performance, but after actually implementing it in a real particle system I've realized one thing that I overlooked with this approach. When a particle dies it gets moved to the end of the list and the position in the array where that particle was gets replaced with the last active particle in the list, and the total number of Active particles is decremented. When the particles are copied into the vertex buffer, I loop over all of the active particles, starting from the end of the Active Particles section of the array to the start of the array (newest particles are drawn first so that older particles are drawn over top of them). Also, most particle systems have the RenderState.DepthWriteEnabled set to false to avoid sorting thousands of particles by depth, saving on execution time and increasing performance. The problem is that since depth sorting is not enabled, the particles are drawn in the order that they appear in the vertex buffer. So when a particle dies it's position in the array is replaced by the newest particle added to the particle system (since new particles are added to the end of the Active particles list). When this happens that new particle which used to be behind most of the other particles is suddenly drawn on top of all of the particles, since its position in the array moved from the end to (possibly) near the start of the array. So as the particle system simulation runs you can notice that particles suddenly jump to the front of the array and are drawn on top of other particles. Can anyone think of a simple solution to this problem? One approach would be to resort the active particles based on age after each update. However this would kill performance and I would be better off just using approach number 8, which gave the 2nd best average performance. Any ideas on how to solve this problem easily would be appreciated. If I can't find any I will likely end up sticking with approach number 8. Thanks.
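A minimal C# sketch of that swap-to-the-end pool (method 4 above) might look like the following; the Particle type, its fields, and the reset step are illustrative assumptions, not code from the thread:
public class ParticlePool
{
    private readonly Particle[] particles;
    private int activeCount;

    public ParticlePool(int capacity)
    {
        particles = new Particle[capacity];
        for (int i = 0; i < capacity; i++)
            particles[i] = new Particle();
    }

    // Grab the next free particle; it sits right after the active section.
    // The caller is expected to (re)initialize the returned particle.
    public Particle Activate()
    {
        return activeCount < particles.Length ? particles[activeCount++] : null;
    }

    public void Update(float elapsedSeconds)
    {
        for (int i = 0; i < activeCount; i++)
        {
            particles[i].Update(elapsedSeconds);

            if (particles[i].IsDead)
            {
                // Swap the dead particle with the last active one and shrink the active section.
                activeCount--;
                Particle dead = particles[i];
                particles[i] = particles[activeCount];
                particles[activeCount] = dead;
                i--; // re-process the particle that was swapped into slot i
            }
        }
    }
}

public class Particle
{
    public float Lifetime;
    public bool IsDead { get { return Lifetime <= 0f; } }
    public void Update(float elapsedSeconds) { Lifetime -= elapsedSeconds; }
}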
Allocating particles in memory at run-time not slow?
deadlydog replied to deadlydog's topic in General and Gameplay ProgrammingQuote:Original post by KulSeran Have an array of MAX particles. When you need a particle grab par_array[count]. Increment count. When a particle dies decrement count, swap the particle with par_array[count]. Update only count particles. Yup, I actually just implemented this method last night and it seems to give the best performance all around on average, for both static and random lifetimes. In total I implemented 8 different methods. 1 - Active Linked List with no pre-allocated memory. Just add particles to the linked list when they are needed, and delete them when they die. 2 - Array with linear search for inactive particles. Process entire array. 3 - Array with circular search for inactive particles. Process entire array. 4 - Array with swapping dead particles to the end of the active particles (method mentioned by KulSeran). Process only active particles. 5 - Array with an inactive queue to hold the inactive particle indexes. Process entire array. 6 - Array with an inactive queue and active list. Process only active particles. 7 - active LinkedList and inactive LinkedList holding the particle instances (no array, but memory is still pre-allocated since the linked list nodes are never "deleted", they are just moved back and forth between the linked lists). Process only active particles. 8 - Array with an active LinkedList and inactive LinkedList, where the linked lists hold pointers to the particles in the array. Process only active particles. Before implementing method 4 last night, method 8 performed best with static lifetimes (but not very well with random lifetimes), method 1 with random lifetimes, and method 5 for both static and random lifetimes (but only when the number of active particles = the number of particles allocated in memory). Approach 1 was the best happy medium, however, like it was mentioned since my program isn't allocating / deallocating memory for other application resources at run-time, as would happen in a real application, this methods memory was staying sequential, where in a real application it would likely become fragmented and not perform as well. But now that I've implemented method 4 it seems to be the all around winner, winning with both static and random lifetimes for all active particle amounts almost every time. Quote:Original post by raigan I might be really confused, but in C# don't structs become read-only when you store them in an array/list? For instance the following code doesn't actually do anything to the ith particle: particles.x = 20; The value of particles.x will remain unchanged, because indexing into a list of structs returns the value and not a reference to the object that's in the list. Yup, you're right; which is why I use a Particle class, not a struct. If you used a struct you would need to do something like: Particle sTempParticleStruct = particles; // Copy values into a new struct sTempParticleStruct.x = 20; // Update values particles = sTempParticleStruct; // Replace old particle with updated one
Allocating particles in memory at run-time not slow?
deadlydog replied to deadlydog's topic in General and Gameplay ProgrammingQuote:Original post by Antheus The reason for your results sounds like you're using classes, in which case there would be no difference Yes, my Particle objects are classes, not structs
Allocating particles in memory at run-time not slow?
deadlydog posted a topic in General and Gameplay ProgrammingI'm doing some performance tests on different particle system memory management techniques in C#. I've created a particle simulation (no rendering though) to see how many times each different approach is able to update all of the particles. Three of the approaches I'm using are: 1 - Use a linked list to hold the active particles. When a new particle is required I create a new instance of the Particle class (allocates memory at run-time) and add it to the linked list. When a particle dies remove the particle from the linked list (de-allocated from memory). To update the active particles just update all of the particles in the linked list. 2 - Create a pool of particles in memory using an array of Particle instances and store the inactive particle indexes in an inactive queue. When a new particle is needed grab the next index from the inactive queue, and when it dies return it to the inactive queue. When updating particles loop through the entire particle pool array and update the active particles. 3 - Create a pool of particles in memory using an array of Particle instances and use both an inactive queue and active particle list. When a new particle is needed grab the next index from the inactive queue and add it to the active list, and when it dies remove it from the active list and return it to the inactive queue. To update the active particles just update all of the particles in the active particle list. I test the approaches both with static particle lifetimes (all particles have the same lifetime) and with random lifetimes. I do two tests, the first where 10,000 particles are allocated in memory (for approach 2 and 3) and I test with 10, 100, 1000, 5000, and 10000 active particles. For the second test I allocate 100,000 particles in memory, and test with 10, 100, 1000, 5000, 10000, 50000, and 100000 active particles. Also, I run these tests 5 times and then take the average of the results. I implemented approach #1 just as a simple baseline for comparison (100%). I thought it would be the slowest since it allocates and de-allocates memory at run-time, which I've always heard is expensive and a no-no in particle systems. However, it turns out that this approach gives the best results in the average case (static and random lifetimes, and when some particles are active, and many particles are active). Approach #2, which seems to be the most popular from the particle systems I've downloaded, gives the best performance (~150%) when all of the allocated particles are active, but gives the worst performance when less than half of the particles are active (10% - 50%). Approach #3, which I thought would be the fastest, gives about 90% - 100% performance on average using static particle lifetimes, but around 60% - 70% using random particle lifetimes. So if you know how many particles your effect will require (and allocate only that much memory), and that number remains constant through out the entire effect (which is not always the case), then approach #2 is the best. However, approach #1 seems to give the best performance in the average case. I tested this with a small Particle class (6 floats and 5 ints) and with a large particle class (36 floats and 5 ints), but it did not make much of a difference. So is there something I am missing here? How come approach #1 isn't horribly slow like most people say it should be? My only guess is that it's because I'm using C# and it must be handling the memory management like a champ hehe. 
Any thoughts?
What is a casual game
deadlydog replied to roychr's topic in Game Design and Theory
To me a casual game should be one that players don't have to commit to for long periods of time. Players should be able to pick it up and play for only 5 or 10 minutes if they want, and have their progress saved. So if your game requires users to play for half an hour or an hour before reaching the next "save" point, I wouldn't consider it a casual game.
GDI + XNA
deadlydog replied to Sambori's topic in Graphics and GPU Programming
Do you just want to draw text to the screen, or are you trying to draw text onto a texture wrapped around a 3D object? If you are just trying to draw text to the screen it is pretty easy in XNA, and there are lots of tutorials. Basically you will create a new SpriteFont and draw it using a SpriteBatch. Here is the MSDN tutorial of how to do it.
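A minimal sketch of that approach (the font asset name and text are placeholders, and the SpriteFont asset must exist in the Content project):
// Inside your Game class
SpriteBatch spriteBatch;
SpriteFont font;

protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);
    font = Content.Load<SpriteFont>("MyFont"); // "MyFont" is a SpriteFont asset in the Content project
}

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);

    spriteBatch.Begin();
    spriteBatch.DrawString(font, "Hello, world!", new Vector2(50, 50), Color.White);
    spriteBatch.End();

    base.Draw(gameTime);
}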
calculating Framesper seconds
deadlydog replied to tnutty's topic in For Beginners's Forum
If you want you can use fraps to display the frames per second of your game. Just open up fraps and then run your game and it will display the FPS in your game screen.
Order of frame progression in animation
deadlydog replied to Mybowlcut's topic in General and Gameplay Programming
Why choose; Why not offer both of them. Just have a boolean/enumeration indicating what should happen when the last frame is reached (i.e. wrap, reverse, etc.). The more flexible your classes are the better in my opinion. I like your second suggestion as well. I can't believe I never thought to include that in my animation class :)
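For instance, the end-of-animation behaviour could be captured with a small enum that is checked when advancing frames; this is a rough C# sketch with made-up names, not code from the thread:
public enum AnimationEndBehavior { Stop, Wrap, Reverse }

public class Animation
{
    public int FrameCount;
    public AnimationEndBehavior EndBehavior = AnimationEndBehavior.Wrap;

    private int currentFrame;
    private int direction = 1;

    public void AdvanceFrame()
    {
        currentFrame += direction;

        if (currentFrame >= FrameCount || currentFrame < 0)
        {
            switch (EndBehavior)
            {
                case AnimationEndBehavior.Stop:
                    currentFrame = direction > 0 ? FrameCount - 1 : 0;
                    break;
                case AnimationEndBehavior.Wrap:
                    currentFrame = (currentFrame + FrameCount) % FrameCount;
                    break;
                case AnimationEndBehavior.Reverse:
                    direction = -direction;
                    currentFrame += 2 * direction;
                    break;
            }
        }
    }
}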
| https://www.gamedev.net/profile/29587-deadlydog/ | CC-MAIN-2016-30 | en | refinedweb
Is it possible to pass a function name as a template parameter? Here's an abstraction of what I'm trying to do. Another possible complication is that I'm trying to assign a default function that is part of a namespace:
Code:
namespace foo
{
    void fighter();
}

template <function = foo::fighter>
class a_class;
| http://cboard.cprogramming.com/cplusplus-programming/109170-function-name-template-parameter.html | CC-MAIN-2016-30 | en | refinedweb
package org.apache.commons.jelly.tags.bsf;

/*** Describes the Taglib. This class could be generated by XDoclet
 *
 * @author <a href="mailto:[email protected]">James Strachan</a>
 * @version $Revision: 155420 $
 */
public class JythonTagLibrary extends BSFTagLibrary {

    public JythonTagLibrary() {
        setLanguage( "jython" );
    }
}
| http://commons.apache.org/proper/commons-jelly/libs/bsf/xref/org/apache/commons/jelly/tags/bsf/JythonTagLibrary.html | CC-MAIN-2016-30 | en | refinedweb
is analogous to the Oracle JDeveloper Property Inspector. You can create and register custom property panels for a component, populate them with component properties, and display them as tabs along with the default tabs in the Component Properties dialog.
The process of configuring custom property panels includes creating them as task flows, packaging them into JAR files, and defining them in the catalog. By default, a drop handler is configured for each resource in the catalog. If you want to provide complete control of the drop action to the resource, you can create additional drop handlers for that resource. The Add link then displays a context menu with different options for adding the resource to the page. You can create one or more drop handlers to handle different flavors for resources in your catalog.
You can customize the toolbar by adding, deleting, or rearranging elements. You can also override existing elements with custom elements. For example, you can remove the message showing the page name if you do not want users to see the name of the page they are editing.
Composer extension file (pe_ext.xml)
The Composer extension file (pe_ext.xml) is used to register add-ons and custom property panels, selectively render panels, register event handlers, and define property filters. The pe_ext.xml file is not available in your application by default. You must create it the first time you perform such tasks as including add-ons, property panels, or event handlers. Create this file in the META-INF directory under the project's Web context root or in the application_home/project/src/META-INF directory. When you run the application, the pe_ext.xml file is picked up from the JAR file included in the application classpath. Your application can include more than one extension file. However, you must ensure that the JARs containing the extension files are available on the application classpath so that the pe_ext.xml files are picked up for processing. Every JAR with a pe_ext.xml in its META-INF folder is processed, and the Composer extensions are loaded and combined. For information about the different elements you can define in pe_ext.xml to extend Composer capabilities, see Section B.2.1, "pe_ext.xml."
Application's adf-config.xml file
The adf-config.xml file specifies application-level settings that are usually determined at deployment and often changed at runtime. When you perform such tasks as registering new add-ons and custom property panels in Composer, or creating customization layers, you must add appropriate entries in the adf-config.xml file. The adf-config.xml file is created automatically when you create an application, and when you add a Page Customizable component to the page, certain configurations are added to this file.
Example 21-1 Sample Code in the JSFF Fragment
Note:
At runtime, the add-on panel is automatically sized to fit the content in this fragment.
Create a task flow definition called
custom-panel-task-flow:
From the File menu, choose New.
In the New Gallery dialog, expand Web Tier, select JSF, then ADF Task Flow.
Click OK.
Drop the
custompanelview.jsff fragment that you created onto the task flow definition.
See Also:
"Getting Started with ADF Task Flows" in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework for information about creating task flows.
Save the task flow definition file.
Optionally, if you create the task flow in one application but want to consume it in another application, you must first package the task flow in an ADF library and add the resulting JAR in the consuming application.
To package the task flow in an ADF library:
Create a deployment profile for the task flow:
Right-click Portal project):
From the File menu, select New.
In the New Gallery dialog, expand General, select XML, then XML Document.
Click OK.
Name the file
pe_ext.xml.
Add an
<addon-config> element in the file, with a nested
<panels> element.
Add one
<panel> element for each task flow that you want to register as an add-on.
Any number of panels can be declared under the
<panels> element in the extension file.
Example.
To register an add-on, you must add a reference to it in the application's
adf-config.xml file. Add the
addon-panels entry to define new add-ons.
To register add-ons in the
adf-config.
Composer displays the properties of a component in the Component Properties dialog when a user clicks the Edit icon on the component. The Component Properties dialog provides a series of tabs. Each tab displays a group of related attributes. The attributes have associated values that control a component's behavior and visual style properties. For example, the Style tab displays the component's style-related properties, such as width, height, and background color.
Similarly, when a user clicks the Page Properties button, a Page Properties dialog opens with its own series of tabs. These tabs contain display-related page properties, page parameters, and security settings.
You can create and register custom property panels to render along with the tabs displayed in the Component Properties or Page Properties dialog. In addition, you can remove the default panels or replace them with custom property panels. For example, you can develop a friendlier property panel for an Image component by displaying a picker for its Source property. This would make it easier for users to select an image from the available options.
This section describes how to create custom property panels. It also describes how to exclude, override, and selectively render default property panels.
You can configure a custom property panel to display in the Component Properties dialog always. Alternatively, you can configure the panel to display only when a particular component or task flow is selected for editing.
This section describes how to create and register a custom property panel. It contains the following subsections:
Section 21.3.1.1, "Creating a Custom Property Panel"
Section 21.3.1.2, "Registering a Custom Property Panel for a Component"
Section 21.3.1.3, "Registering a Custom Property Panel for a Task Flow"
Property panels provide a means of editing page or component properties. For example, a user can click the Edit icon on a selected task flow and modify its parameter values and change its visual attributes in the Component Properties dialog.
Add a <property-panels> element inside the <addon-config> section in the pe_ext.xml file.
Add a <property-panel> declaration within the <property-panels> element. You can have multiple <property-panel> entries.
Within the <property-panel> element, add a <component> element to specify the runtime class name of the component (optional) and a <panel> element to specify the name you used to declare the panel in the <addon-config> section of the file.
Example 21-8 shows a custom property panel that is associated with a Command Button component by specifying the component's fully qualified class name. For information about Oracle ADF components and their runtime classes, see Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.
Example 21-8 Code to Register a Property Panel for a Component
<pe-extension
  <addon-config>
    <property-panels>
      <property-panel
        <component>oracle.adf.view.rich.component.rich.nav.RichCommandButton</component>
        <panel name="prop.panel.cmdbtn" />
        <panel name="prop.panel.generic" />
      </property-panel>
    </property-panels>
  </addon-config>
  . . .
</pe-extension>
Note:
When registering a property panel, if you do not associate it with a component or task flow, the panel is rendered for all pages, task flows, and components. Example 21-9 shows the sample code used to register a custom property panel for a task flow.
A custom property panel registered for a specific task flow appears only when its associated task flow is selected. Otherwise, default property panels appear.
Note:
Use task flow-specific custom property panels only to customize task flow parameters or other aspects of the selected task flow.
Add a property-panel declaration within the property-panels element.
You can have multiple
property-panel entries.
Add
taskflow-id and panel elements within the property-panel element.
Add the
taskflow-id element to specify the task flow name. Add the
panel element to specify the name you used to declare the property panel in the
addon-config section of the file.
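The registration for a task flow mirrors Example 21-8, with a taskflow-id element in place of the component element; in the sketch below the task flow ID and panel name values are illustrative only.
<pe-extension>
  <addon-config>
    <property-panels>
      <property-panel>
        <taskflow-id>/WEB-INF/my-task-flow.xml#my-task-flow</taskflow-id>
        <panel name="prop.panel.mytaskflow" />
      </property-panel>
    </property-panels>
  </addon-config>
  . . .
</pe-extension>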
If you do not associate the panel with a specific task flow or component, then the registered panel is rendered for all pages, task flows, and components in the Component Properties and Page Properties dialogs.
You can register multiple
panel elements within a
property-panel element. For more information about the
property-panels element and its nested elements, see Section B.2.1.2, "property-panels."
If you associated the custom property panel with a specific component or task flow, at runtime the panel renders as a tab in the Component Properties dialog invoked from the specified component or task flow. If you did not associate the custom property panel with a specific component or task flow, at runtime the custom property panel renders as a tab in both the Page Properties and Component Properties dialogs for all pages, components, and task flows.
Note:
In the Component Properties and Page Properties dialogs, custom panels are sized by the tab component containing the task flows. The size of the tab component itself is determined by a rule in the currently applied skin. This rule is called af|panelTabbed.ComposerTab.
Example: Using the rendered Attribute for a Property Panel
<pe-extension>
  <addon-config>
    <property-panels>
      <property-panel>
        <component>oracle.rich.CommandButton</component>
        <panel name="prop.panel.cmdbtn" />
        <panel name="oracle.adf.pageeditor.pane.content-style-editor" rendered="false" />
      </property-panel>
    </property-panels>
  </addon-config>
  . . .
</pe-extension>
Custom property panels registered in the application are rendered in the Component Properties dialog when users click the Edit icon on a specified component. You can configure your application to render a property panel selectively based on different criteria like the role of a logged-in user, the page being viewed, and so on. To display property panels selectively, you can use an EL value in the
property-panel's
rendered attribute as shown in the following example:
<property-panel rendered="#{...}">
A Save event is generated when a user clicks the Save button on the Composer toolbar or the Apply or OK button in the Component Properties or Page Properties dialog. A Save event handler must implement oracle.adf.view.page.editor.event.SaveListener. The isCommit method of the Save event can be used to differentiate between changes made to the page and to component properties; a value of true implies that the user clicked Save to save page changes.
To create a Save event handler:
In your application project, from the File menu, choose New.
In the New Gallery dialog, expand General, select Java, then Java Class, and click OK.
In the Create Java Class dialog, specify a name for the class, for example,
SaveHandler.
Under the Optional Attributes section, add the
oracle.adf.view.page.editor.event.SaveListener interface.
Click OK.
The Java class source looks like the following:
package view;

import javax.faces.event.AbortProcessingException;
import oracle.adf.view.page.editor.event.SaveEvent;
import oracle.adf.view.page.editor.event.SaveListener;

public class SaveHandler implements SaveListener {

    public SaveHandler() {
        super();
    }

    public void processSave(SaveEvent saveEvent) throws AbortProcessingException {
        // Your implementation goes here
    }
}
You must declare the
processSave method as
throws AbortProcessingException because the method may throw this exception if the event must be canceled. You can include the reason for canceling this event in the Exception object when you create it.
On throwing this exception, further processing of this event is canceled and the listeners that are in the queue are skipped.
You can create event handlers for all supported events by performing steps similar to these.
Tip:
You can use the
SaveEvent.isCommit."
To register an event handler:
In the application's
pe_ext.xml file, add the following entries:
<event-handlers>
  <event-handler event="save">view.SaveHandler</event-handler>
</event-handlers>
The values you provide for the
event attribute and between the
event-handler tags are unique to the type of event being entered and the name you specified for the event class.
Save the file.
At runtime, registered event handlers are called according to the sequence in the extension file and according to the order in which they were found on the class path. Composer's native event handler is called last.
On invocation of an event handler's processEventName method, if an event handler throws AbortProcessingException, then the event is canceled and no further event handlers are called, including Composer's native event handlers. If, however, an error occurs while instantiating an event handler, then that event handler is skipped.
Specify a Sequence Number for an Event Handler
By specifying the sequence for event handlers, you can decide on the order in which the event handlers, and therefore the listeners, are called. You can assign a sequence number to a listener or modify the default value by defining a
sequence attribute against the registered event handler in the
pe_ext.xml file.
To specify a sequence number for a listener:
In the application's
pe_ext.xml file, locate the
event-handler element for the handler that you want to sequence.
Add a
sequence attribute as follows:
<event-handlers>
  <event-handler event="save" sequence="101">view.SaveHandler</event-handler>
</event-handlers>
The value for the
sequence attribute must be a positive integer. If you do not define this attribute, the event handler is internally assigned a default sequence number of
100.
Example: Sample DeletionListener Implementation
public class DeleteHandler implements DeletionListener {
    ...
    public void processDeletion(DeletionEvent delEvent) throws AbortProcessingException {
        // Get the component that must be deleted
        UIComponent comp = delEvent.getComponent();
        if (comp != null) {
            try {
                // Assuming that a custom method, handleDelete(comp), handles deletion of
                // a component and returns true on successful deletion and false in case
                // of failure
                boolean deleteSucceeded = handleDelete(comp);
                // If deletion failed, then notify Composer that the delete event
                // has been handled. No further events are processed.
                if (!deleteSucceeded)
                    delEvent.setEventHandled(true);
            } catch (Exception e) {
                // Catch Exception thrown by handleDelete(comp) and handle it by throwing
                // AbortProcessingException to stop processing of delete events.
                throw new AbortProcessingException(e);
            }
        }
    }
    ...
}
Drop handlers are Java classes registered with Composer. When a user adds a resource to a page, Composer checks the registered drop handlers to see if they can handle the flavor. If only one drop handler can handle that flavor, then control is passed to that drop handler and the resource is added to the page immediately. If more than one drop handler can handle the flavor, then a context menu displays available drop handlers to users.
To create a drop handler:
In your application project, from the File menu, choose New.
In the New Gallery dialog, expand General, select Java, then Java Class, and click OK.
In the Create Java Class dialog, specify a name for the class, for example,
TestDropHandler.
In the Extends field, enter or browse to select the DropHandler class, oracle.adf.view.page.editor.drophandler.DropHandler.
Add the required import statements and click OK.
The Java class source must look like the following:
package test;

import java.awt.datatransfer.DataFlavor;
import java.awt.datatransfer.Transferable;
import java.io.ByteArrayInputStream;
import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import oracle.adf.rc.component.XmlComponentFactory;
import oracle.adf.view.page.editor.drophandler.DropEvent;
import oracle.adf.view.page.editor.drophandler.DropHandler;
import org.apache.myfaces.trinidad.change.AddChildDocumentChange;
import org.apache.myfaces.trinidad.change.ChangeManager;
import org.apache.myfaces.trinidad.change.DocumentChange;
import org.apache.myfaces.trinidad.context.RequestContext;
import org.w3c.dom.Document;
import org.w3c.dom.DocumentFragment;

public class TestDropHandler extends DropHandler {

    public TestDropHandler() {
        super();
    }

    public String getName() {
        return null;
    }

    public DataFlavor[] getAcceptableFlavors() {
        return new DataFlavor[0];
    }

    public boolean handleDrop(DropEvent dropEvent) {
        return false;
    }
}
Implement the
getName() method as follows to return the name of the drop handler:
public class TestDropHandler extends DropHandler {

    public TestDropHandler() {
        super();
    }

    public String getName() {
        return "Custom XML";
    }
    . . .
This value (
Custom XML) appears in the context menu on the Add link next to the XML component in the Resource Catalog.
Implement the
getAcceptableFlavors() method as follows to get a list of supported data flavors for the XML component:
public class TestDropHandler extends DropHandler {

    private static final DataFlavor[] ACCEPTABLE_FLAVORS = {
        XmlComponentFactory.XML_STRING_FLAVOR
    };
    . . .

    public DataFlavor[] getAcceptableFlavors() {
        return ACCEPTABLE_FLAVORS;
    }
    . . .
Implement the
handleDrop(DropEvent) method as shown in the following sample file to handle the drop event and add the XML component to the page.
public class TestDropHandler extends DropHandler {
    . . .
    public boolean handleDrop(DropEvent de) {
        Transferable transferable = de.getTransferable();
        UIComponent container = de.getContainer();
        int index = de.getDropIndex();
        try {
            FacesContext context = FacesContext.getCurrentInstance();
            RequestContext rctx = RequestContext.getCurrentInstance();
            String fragMarkup = null;

            // Get the TransferData from the Transferable (expecting a String)
            Object data = getTransferData(transferable);
            if (data instanceof String) {
                fragMarkup = "<?xml version='1.0' encoding='UTF-8'?>" + (String)data;
            } else {
                return false;
            }

            // Get a DocumentBuilder
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            factory.setNamespaceAware(true);
            factory.setValidating(false);
            DocumentBuilder builder = factory.newDocumentBuilder();

            // Parse the xml string into a Document object using DocumentBuilder
            byte[] markupBytes = fragMarkup.getBytes();
            Document newDoc = builder.parse(new ByteArrayInputStream(markupBytes));

            // Transform the Document into a DocumentFragment
            DocumentFragment docFrag = newDoc.createDocumentFragment();
            docFrag.appendChild(newDoc.getDocumentElement());

            // Create an "add child" document change, that is, insert the fragment
            DocumentChange change = null;
            if (index < container.getChildCount()) {
                // Get the ID of the component we'll be adding just before
                String insertBeforeId = container.getChildren().get(index).getId();
                change = new AddChildDocumentChange(insertBeforeId, docFrag);
            } else {
                change = new AddChildDocumentChange(docFrag);
            }

            // Apply the "add child" DocumentChange using ChangeManager
            ChangeManager changeManager = rctx.getChangeManager();
            changeManager.addDocumentChange(context, container, change);

            // Refresh the target container using PPR
            rctx.addPartialTarget(container);

            // Mark the drop as completed.
            return true;
        } catch (Exception e) {
            return false;
        }
    }
In this example, a
DropEvent parameter is passed to the
handleDrop method. This parameter has three attributes that can be described as follows:
transferable, like
DataFlavor, is a standard Java class that contains the data being added.
container is the container into which the component must be dropped.
index is the position of the component inside the container. For example, first, second, and so on.
Save the
TestDropHandler.java file.
After implementing the drop handler, you must register it with Composer.
To register a drop handler:
In the
pe_ext.xml file, add the following entries:
<drop-handlers>
  <drop-handler>test.TestDropHandler</drop-handler>
</drop-handlers>
where
TestDropHandler is the name of the drop handler implementation.
Save the file.
You can register any number of drop handlers in the extension file by adding that many
<drop-handler> elements.
Since you created a drop handler for XML components generated by the
XmlComponentFactory class, you can test how the drop handler works at runtime by adding an XML component to the Resource Catalog and then adding that component to your page at runtime.
To add an XML component to the catalog:
Open the default catalog definition file,
default-catalog.xml.
For information about the location of this file, see "Default Resource Catalog Configuration".
Add the following code within the <contents> section of the file:
<component id="pc" factoryClass="oracle.adf.rc.component.XmlComponentFactory">
  <attributes>
    <attribute attributeId="Title" value="Test"/>
    <attribute attributeId="Description" value="XML content you can add to application pages"/>
    <attribute attributeId="IconURI" value="/adf/pe/images/elementtext_qualifier.png"/>
  </attributes>
  <parameters>
    <parameter id="xml">
    . . .
    </parameter>
  </parameters>
</component>
Some component attributes do not appear in Composer's Component Properties dialog because they are filtered out by default. The default filters are defined in the <filter-config> section of the Composer's extension file (/META-INF/pe_ext.xml).
Global filters filter attributes for all components. They are defined using the
<global-attribute-filter> tag. Tag-level filters filter attributes for a specified component only. They are defined using the
<taglib-filter> tag.
Note:
In an extension file, you can have any number of
<taglib-filter> tags under
<filter-config>, but you can have only one
<global-attribute-filter> tag to define all global attribute filters.
You can define additional filters to hide more properties in the Component Properties dialog. This section describes how.
To define property filters, add <attribute> entries within the <filter-config> section of the pe_ext.xml file, as shown in the following example:
<filter-config>
  <global-attribute-filter>
    <attribute name="accessKey" />
    <attribute name="attributeChangeListener" />
    <attribute name="autoSubmit" />
    <attribute name="binding" />
  </global-attribute-filter>
  <taglib-filter>
    <tag name="commandButton">
      <attribute name="text" />
      <attribute name="icon" />
    </tag>
  </taglib-filter>
</filter-config>
Save the file.
At runtime, when you edit a component's properties, the properties that were filtered out are not rendered in the Component Properties dialog.
You can remove global and tag-level filters so that previously filtered properties are now rendered in the Component Properties dialog. This is useful for displaying properties that are filtered out by Composer's built-in filters or by another extension file defined elsewhere in the application.
Note:
After you remove a property filter, it is rendered in the Component Properties dialog even if a filter is defined for that property in another extension file.
To remove a property filter:
Edit the Composer extension file,
pe_ext.xml, available in the
META-INF directory.
You can remove property filters by editing entries in this file.
Search for the attribute from which to remove the filter, and set
filtered to
false in the
<attribute> tag as shown in the following example:
<pe-extension>
  . . .
  <filter-config>
    <global-attribute-filter>
      <attribute name="accessKey" filtered="false" />
      <attribute name="attributeChangeListener" />
      . . .
    </global-attribute-filter>
    <taglib-filter>
      <tag name="activeCommandToolbarButton">
        . . .
        <attribute name="windowWidth" filtered="false"/>
      </tag>
    </taglib-filter>
  </filter-config>
</pe-extension>
Save the file. For related information, see "How to Override a Toolbar Section to Display Custom Content."
Use the
toolbarLayout attribute on a
Page Customizable component to control which toolbar sections are displayed and the order in which they appear.
To customize the toolbar for all editable pages in the application, you can create a template, add a
Page Customizable component to the template, and specify the
toolbarLayout attribute against the
Page Customizable component. You can then base all the pages in the application on that template.
If you do not specify a value for
toolbarLayout, this attribute is internally set to
message stretch statusindicator newline menu addonpanels stretch help button, which is the default layout for the Composer toolbar sections.
To customize the toolbar on an application page:
Open your customizable JSPX page in JDeveloper and select the
Page Customizable component in the Structure window.
In the Property Inspector, specify space-separated values for the
toolbarLayout attribute.
The section names in Table 21-3 are valid values for this attribute.
Note:
You can add only one
stretch value per row on the toolbar. If you add more than one
stretch value, only the first one is displayed; all others are ignored.
In Source view, the
toolbarLayout attribute appears as shown in the following example:
<pe:pageCustomizable <f:facet <pe:pageEditorPanel </f:facet> </pe:pageCustomizable>
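A fuller sketch of that source view is shown below; the id values and the facet name are assumptions, and the toolbarLayout string shown is simply the default layout listed above.
<pe:pageCustomizable id="pageCustomizable1"
                     toolbarLayout="message stretch statusindicator newline menu addonpanels stretch help button">
  <f:facet name="editor">
    <pe:pageEditorPanel id="pep1"/>
  </f:facet>
</pe:pageCustomizable>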
Save the JSPX file.
At runtime, the toolbar displays the sections you specified in the order you specified them (Figure 21-20).
Figure 21-20 Customized Composer Toolbar
You can add custom sections by creating facets and specifying the facet names in the
toolbarLayout attribute. Populate the new facets with elements you want to display on the toolbar. The following example shows you how to create a section; specifically, how to add a Report a Bug button that opens a popup that enables users to report a bug.
To add a custom section:
Open the JSPX page for which you want to customize the toolbar.
Add a
facet inside the
Page Customizable and name it
bugreport.
Add a
Command Toolbar Button component inside the facet and set the
text attribute to
Report a Bug.
To add an image to the button, specify the path to the image using the
icon attribute.
Add the
Show Popup Behavior component to invoke a popup on clicking the
Command Toolbar Button.
In Source view, the
Page Customizable would appear as follows:
<pe:pageCustomizable <af:commandToolbarButton <af:showPopupBehavior </af:commandToolbarButton> </f:facet> . . . <af:popup <af:dialog <af:panelFormLayout <af:inputText <af:inputText <af:inputText <af:inputText </af:panelFormLayout> </af:dialog> </af:popup> . . . </pe:pageCustomizable>
Save the page.
At runtime, the Composer toolbar displays a Report a Bug button, as shown in Figure 21-21.
Figure 21-21 Composer Toolbar with Custom Section
Clicking the Report a Bug button displays the File a Bug dialog that enables users to report a bug.
You can display custom content in a default toolbar section by adding a facet of the same name as the section and populating it with custom content. The facet content overrides the content of the default section. For example, to display a custom message to users in place of the
Editing Page:
Page_Name message, you can create a custom facet named
message and include an
Output Text component with the text you want to display to users. The
Output Text component then displays in place of the default message. The following example shows a
Page Customizable component with a custom toolbar message:
<pe:pageCustomizable <f:facet <af:outputText </f:facet> <f:facet <pe:pageEditorPanel </f:facet> </pe:pageCustomizable>
Problem
You created an add-on, but it does not appear on the Composer toolbar.
Solution
Problem
You have registered a custom property panel. However, it does not appear in the Component Properties dialog when you select a component to display its properties.
Solution
Ensure the following:
The
pe_ext.xml file is in
/META-INF folder in a JAR file or in the application, and is available in the class path.
The task flow binding ID specified while registering the panel in
pe_ext.xml is correct.
The
property-panel registration is correct and is specified against the component you want the panel to appear against. Further, the fully qualified class name of the component is correctly specified using the
component node.
A duplicate property panel registration is not overriding your panel entry.
If the panel is configured to use the rendered attribute, ensure that the value or EL evaluates to
true.
If the add-on is in a JAR file, ensure that the JAR file is created as an ADF Library.
For information about custom property panels, see Section 21.3, "Creating Custom Property Panels."
Problem
You do not see some properties of a component in the Component Properties dialog.
Solution
Ensure that the component properties are not filtered or restricted. For more information, see Section 21.8.1, "How to Define Property Filters" and Section 24.4, "Applying Attribute-Level Security." | http://docs.oracle.com/cd/E25178_01/webcenter.1111/e10148/jpsdg_page_editor_adv.htm | CC-MAIN-2016-30 | en | refinedweb |
import org.apache.http.annotation.Immutable;

/**
 * Signals failure to establish connection using an unknown protocol scheme.
 *
 * @since 4.3
 */
@Immutable
public class UnsupportedSchemeException extends IOException {

    private static final long serialVersionUID = 3597127619218687636L;

    /**
     * Creates a UnsupportedSchemeException with the specified detail message.
     */
    public UnsupportedSchemeException(final String message) {
        super(message);
    }

}
| http://hc.apache.org/httpcomponents-client-4.3.x/httpclient/xref/org/apache/http/conn/UnsupportedSchemeException.html | CC-MAIN-2016-30 | en | refinedweb
PostGIS is a geospatial extension to PostgreSQL which gives a bunch of functions to handle geospatial data and queries, e.g. to find points of interest near a certain location, or storing a navigational route in your database. You can find the PostGIS documentation here.
In this example, I’ll show how to create a location aware website using Ruby on Rails, PostgreSQL, and PostGIS. The application, when finished, will be able to store your current location – or a check-in – in the database, show all your check-ins on a map, and show check-ins nearby check-ins.
This app is written in Rails 3.1 but it could just as well be written in another version. As of writing, the current version of the spatial_adapter gem has an issue in Rails 3.1 but we will create a workaround for this until it gets fixed.
You can view the complete source code or see the final application in action.
We will first create our geospatially enabled database. First, check out my post on installing PostgreSQL and PostGIS on Mac OS X.
Create your database:
$ createdb -h localhost my_checkins_development
Install PostGIS in your database:
$ cd /opt/local/share/postgresql90/contrib/postgis-1.5/
$ psql -d my_checkins_development -f postgis.sql -h localhost
$ psql -d my_checkins_development -f spatial_ref_sys.sql -h localhost
Your database is now ready for geospatial queries.
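If you want to confirm that the extension is in place before moving on, one quick check from the shell (output will vary with your PostGIS version):
$ psql -d my_checkins_development -h localhost -c "SELECT postgis_version();"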
Create your app:
$ rails new my_checkins
The spatial_adapter gem is a plugin that adds geospatial functionality to Rails when using a PostgreSQL and PostGIS. It uses GeoRuby for data types. Add this and the pg (Postgres) gem to your Gemfile:
gem 'spatial_adapter'
gem 'pg'
Run bundle install:
$ bundle install
Setup your config/database.yml:
development:
adapter: postgresql
database: my_checkins_development
host: localhost
And your app is geospatially enabled.
Let’s create some scaffold code to handle our check-ins:
$ rails g scaffold checkin title:string location:point
Take notice of the point data type – that’s a geospatial type.
Before running your migrations, edit db/migrate/create_checkins.rb, replacing this:
t.point :location
with this:
t.point :location, :geographic => true
This tells your migration to add a geographic column that is set up to handle geographic coordinates, also known as latitudes and longitudes.
Run your migrations:
$ rake db:migrate
We are now ready to store our check-ins.
The Checkin model now contains a location field which is a data type of GeoRuby::SimpleFeatures::Point. This data type has properties of x and y. We will expose these as properties directly on the model. In app/models/checkin.rb:
class Checkin < ActiveRecord::Base
def latitude
(self.location ||= Point.new).y
end
def latitude=(value)
(self.location ||= Point.new).y = value
end
def longitude
(self.location ||= Point.new).x
end
def longitude=(value)
(self.location ||= Point.new).x = value
end
end
Latitude and longitude are now exposed.
In app/views/checkins/_form.html.erb, replace this:
<div class="field">
<%= f.label :location %><br />
<%= f.text_field :location %>
</div>
With this:
<div class="field">
<%= f.label :latitude %><br />
<%= f.text_field :latitude %>
</div>
<div class="field">
<%= f.label :longitude %><br />
<%= f.text_field :longitude %>
</div>
If it wasn’t for a little bug in spatial_adapter under Rails 3.1, we would now be able to save locations from our Rails app. However, what the bug does is that it cannot create records when the location field is set. It can update them so what we will do is to make sure it first creates the check-in with a location set to nil and then updates it with the correct location. Like this, in app/controllers/checkins_controller.rb in the create method, replace this:
def create
...
if @checkin.save
...
def create
...
if @checkin.valid?
location = @checkin.location
@checkin.location = nil
@checkin.save!
@checkin.location = location
@checkin.save!
...
And it should work.
Try and fire up your server:
$ rails s
And go to in your browser.
Next, in app/views/checkins/show.html.erb, replace this:
<p>
<b>Location:</b>
<%= @checkin.location %>
</p>
<p>
<b>Location:</b>
<%= @checkin.latitude %>, <%= @checkin.longitude %>
</p>
And it will show the latitude and longitude you just entered.
We would like to be able to create check-ins from our current location. Modern browsers expose this functionality via a JavaScript API. Create app/assets/javascripts/checkins.js and add this:
function findMe() {
if(navigator.geolocation) {
navigator.geolocation.getCurrentPosition(function(position) {
document.getElementById('checkin_latitude').value = position.coords.latitude;
document.getElementById('checkin_longitude').value = position.coords.longitude;
}, function() {
alert('We couldn\'t find your position.');
});
} else {
alert('Your browser doesn\'t support geolocation.');
}
}
And a button in the top of app/views/checkins/_form.html.erb:
<input type="button" value="Find me!" onclick="findMe();" />
Try it in your browser. If it gives you a JavaScript error saying the findMe method isn’t defined, try restarting your server to get the new javascript loaded. You should now be able to get your current location by clicking the Find me! button.
Let’s create a method for finding nearby check-ins. PostGIS has a function named ST_DWithin which returns true if two locations are within a certain distance of each other. In app/models/checkin.rb, add the following to the top of the class:
class Checkin < ActiveRecord::Base
scope :nearby_to,
lambda { |checkin, max_distance|
where("ST_DWithin(location, ?, ?) AND id != ?", checkin.location, max_distance, checkin.id)
}
...
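Because the location column was created with :geographic => true, ST_DWithin treats max_distance as a distance in meters. The SQL generated by this scope should look roughly like the following (the point literal and values are illustrative):
SELECT "checkins".* FROM "checkins"
WHERE (ST_DWithin(location, '<point geometry>', 1000) AND id != 5)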
In app/controllers/checkins_controller.rb, add the following:
def show
@checkin = Checkin.find(params[:id])
@nearby_checkins = Checkin.nearby_to(@checkin, 1000)
...
In app/views/checkins/show.html.erb, add the following just before the links in the bottom:
<h2>Nearby check-ins</h2>
<ul>
<% @nearby_checkins.each do |checkin| %>
<li><%= link_to checkin.title, checkin %></li>
<% end %>
</ul>
It now shows all nearby checkins. Try adding a couple more based on your current location and see it in action.
Wouldn’t it be nice to show all our check-in on a map? We will do this using the Google Maps API.
In app/views/checkins/index.html.erb, clear out the table and list, and add the following:
<script type="text/javascript" src=""></script>
That loads the Google Maps JavaScript API functionality.
Create a div for the map:
<div id="map" style="width: 600px; height: 500px;"></div>
And add the following script at the bottom:
<script type="text/javascript">
// Create the map
var map = new google.maps.Map(document.getElementById("map"), {
mapTypeId: google.maps.MapTypeId.ROADMAP
});
// Initialize the bounds container
var bounds = new google.maps.LatLngBounds();
<% @checkins.each do |checkin| %>
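    <%# Sketch of the loop body; the marker options and variable names are assumptions %>
    var position = new google.maps.LatLng(<%= checkin.latitude %>, <%= checkin.longitude %>);
    // Drop a marker for this check-in and grow the bounds to include it
    var marker = new google.maps.Marker({ map: map, position: position, title: "<%= checkin.title %>" });
    bounds.extend(position);
  <% end %>
  // Zoom and centre the map so every check-in is visible
  map.fitBounds(bounds);
</script>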
There’s our map Check it out at. Try creating some check-ins around the world to see the map expand.
That’s a location aware app that stores check-ins based on our current location, shows nearby check-ins, and displays check-ins on a map.
View the complete source code or see the final application in action.
I'd love to hear your thoughts!
Write a comment, Follow me on Twitter, or Subscribe to my. | http://www.codeproject.com/Articles/312398/Creating-a-location-aware-website-using-Ruby-on-Ra?fid=1677668&df=10000&mpp=10&sort=Position&spc=None&tid=4444296&noise=1&prof=True&view=Expanded | CC-MAIN-2016-30 | en | refinedweb |
I bet this will be an easy diagnosis but I can't seem to find why I am getting a NullPointerException. I used to make things like this all the time so it's extra weird. Any help will be appreciated.
import java.util.Scanner;

public class Main {

    private static Scanner input = new Scanner(System.in);
    private static String cmd;

    public static void main(String[] args){
        while(true/*!cmd.equals("quit")*/){
            cmd.equals(input.nextLine());
            processCommand();
        }
    }

    //processes commands. Just testing right here ATM
    public static void processCommand(){
        if(cmd.equals("hi")){
            System.out.println("boo");
        }
    }
}
| http://www.javaprogrammingforums.com/whats-wrong-my-code/30934-nullpointerexception-how.html | CC-MAIN-2016-30 | en | refinedweb
C Sharp for Beginners/Inheritance
As this is about getting started programming, we don't want to confuse you with too much complicated stuff.
You may sometimes see classes declared like this:
class MyClass : Form { /* ... */ }
instead of just like this
class MyClass { /* ... */ }
People Inheritance
It is common for you to inherit characteristics from your parents. You may have your mother's way of talking or your father's nose. This doesn't mean you are identical to your parents, but certain characteristics come "built in" when you're born.
Code Inheritance
When we write code, it can be useful to inherit a whole bunch of abilities from an existing class. Let's go with an example. There are two defined classes, "Animal" and "Bird", but the "Bird" class inherits from the "Animal" class.
class Animal
{
    public string kindOfAnimal;
    public string name;
    // ...
}

class Bird : Animal // "Bird" class inherits from "Animal" class
{
    public string featherColor;
    // ...
}
When we write the colon after "Bird", we're really saying "I'm defining a new class called 'Bird', but it must inherit everything from the 'Animal' class too."
When to Use Inheritance
Inheritance is best used in cases where what you're trying to achieve can mostly be done by an existing class and you just want to extend it or customize it. In the following example, the class "Guitarist" inherits three fields from the class "Musician" and adds two fields of its own. The colon “:” is the part that tells the computer to make the new class (Guitarist) inherit from the class written to the right of the colon.
public class Musician
{
    public string name;
    public int ageInYears;
    // ...
}

public class Guitarist : Musician
{
    public string guitarType;
    public string guitarBrand;
}

Guitarist g = new Guitarist();
g.name = "JOHN ABC";
g.ageInYears = 25;
g.guitarType = "Acoustic";
g.guitarBrand = "Gibson";
When we create an instance of a “Guitarist”, we can immediately address the fields of both an Musician and a Guitarist (as long as they’re not private). | https://en.wikibooks.org/wiki/C_Sharp_for_Beginners/Inheritance | CC-MAIN-2016-30 | en | refinedweb |
> Sorry for being picky, but while your derivative factory
> function follows the mathematical definition, it is not the
> "best" way to do it numerically. The preferred way is:
>
> def derivative(f):
>     """
>     Factory function: accepts a function, returns a closure
>     """
>     def df(x, h=1e-8):
>         return (f(x + h/2) - f(x - h/2))/h
>     return df
>
> This is best seen by doing a graph (say a parabola) and drawing
> the derivative with a large "h" using both methods near a local
> minimum.
>
> André

Hey good to know André and thanks for being picky. Fresh from yesterday's example, I posted something about *integration* to a community college math teacher list last night, which I think is maybe closer to your better way (which I hadn't seen yet).

===

def integrate(f, a, b, h=1e-3):
    """
    Definite integral with discrete h
    Accepts whatever function f, runs x from a to b using increments h
    """
    x = a
    sum = 0
    while x <= b:
        sum += h*(f(x-h)+f(x+h))/2.0  # average f(x)*h
        x += h
    return sum

>>> def g(x): return pow(x,2)   # so this is what we want to investigate
>>> integrate(g, 0, 3, h=1e-4)  # h = 0.0001, close to 9
8.9995500350040558
>>> integrate(g, 0, 3, h=1e-6)  # h = 0.000001, even closer to 9
8.9999954998916039

def defintegral(intexp, a, b):
    return intexp(b) - intexp(a)

>>> def intg(x): return (1./3)*pow(x,3)  # limit function
>>> defintegral(intg, 0, 3)  # exactly 9
9.0

===

The post (which includes the above but is longer) hasn't showed up in the Math Forum archives yet, or I'd add a link for the record. Maybe later.

Kirby
| https://mail.python.org/pipermail/edu-sig/2005-March/004593.html | CC-MAIN-2016-30 | en | refinedweb
pyexpect 1.0.2
Matchers also have many aliases defined to enable you to write the expectations in a natural way:
expect(True).is_.true()
expect(True).is_true()
expect(True).is_equal(True)
expect(True) == True
expect(raising_callable).raises()
expect(raising_callable).to_raise()
Choose whatever makes sense for your specific test to read well so that reading the test later feels natural and transports the meaning of the code as best as possible. Should an important alias be missing, pull requests are welcome.
Simplicity of extension: all the other Python packages I've looked at make this harder than it needs to be. With pyexpect, adding a matcher looks like this:
def is_falseish(self):
    # whatever you have to do. For helpers and available values see expect() source
    self._assert(bool(self._expected) is False, "to be falseish")

expect.is_falseish = is_falseish
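Once attached, the new matcher is used like any built-in one; a minimal usage sketch:
expect(None).is_falseish()
expect(0).is_falseish()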
Done!
| https://pypi.python.org/pypi/pyexpect/1.0.2 | CC-MAIN-2016-30 | en | refinedweb
What am I doing wrong here? I've tried this in the main, I've tried it in a function using a return statement, and here is my attempt to pass a pointer to a character (a math operator--this is supposed to be a calculator program) into a loop. It goes in the first time, but then after that it ignores the input of the character and just prints out the statements without waiting for input. Can you look at this program fragment and tell me what to do to fix it?
Thanks a lot.
Donna
#include <stdio.h>
void get_operator(char *lop);
int get_operand();
void main(){
// int num1=0;
char op;
int num2=0;
// int accum=0;
int i;
for (i =1; i < 4; ++i){
get_operator(&op);
printf("the operator is %c\n", op);
num2 = get_operand();
printf("operand is %d\n", num2);
}
printf("the final result is %d\n", num2);
}
void get_operator( char *lop){
printf("enter a math operator or q to quit> ");
scanf("%c", lop);
}
int get_operand(){
int lnum2; /* local variable for operand */
printf("enter an integer >");
scanf("%d", &lnum2);
return lnum2;
}
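A likely explanation, for what it's worth: the scanf("%d") call in get_operand leaves the newline from the Enter key in the input buffer, and the next scanf("%c") in get_operator reads that leftover newline instead of waiting for a new character. One common remedy among several is a leading space in the format string:
scanf(" %c", lop);  /* the space makes scanf skip whitespace, including the leftover newline */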
| http://cboard.cprogramming.com/c-programming/3202-scanf-char-doesn't-work-loop-printable-thread.html | CC-MAIN-2016-30 | en | refinedweb
java.lang.Object
  com.bea.content.loader.bulk.BulkLoader
public class BulkLoader
The Content Manager bulk loader application.
This class will scan the local file system for files to load via the content manager.
BulkLoader has limited use when loading against a Library Services
Enabled Repository. If the content is new (not currently loaded into the WLP Repository)
and Library Services is enabled, the content may be loaded.
In this case, the workflowstatus property must be specified in the md.properties, which
defines the workflow status id used when the content is checked in. All lifecycle actions will
operate as if the bulkloader user is a user in the admin tools. So, for example
if a content item is checked in as "Ready" then the assignements will occur. Please review
the
Workflow javadoc for
more information.
The type (ObjectClass) for the file along with the values of any required properties must be specified in the metata data properties file. Thus, a type must be defined in the content repository before the bulkloader may load files from the file system.
A folder will be loaded as a Hierarchy Node and a file will be loaded as a Content Node.
The actual bytes will be loaded into the primary property (must be defined in the type) and must be of type Binary.
In order for bulkloader to run, the repository and the application must be passed in as arguments and the server must be running.
This class is mainly designed to run as a command-line application, via a "java com.bea.content.loader.bulk.BulkLoader" command-line. To see a usage, give it a -h flag or read the Usage.txt in this package.
Additionally, BulkLoader objects can be created and used to provide the functionality in other places. The lifecycle of a BulkLoader is as follows:
Call parseArgs() if the arguments were not passed in to the constructor.
Call validateArgs() to make sure the loader has valid arguments.
Call doLoad() to execute the load.
The base directory that will be loaded may be passed in using the -d parameter. If it is not specified then the current directory "." will be used. Any additional argument will be considered a file/folder to load relative to the base directory, or if an absolute path is specified then it will be used.
For FileSystemRepositories, the "-d" parameter must be the same as the "cm_fileSystem_dir" property in content-config.xml unless the repository is managed.
Folders (directories) will automatically be assigned an ObjectClass of type ObjectClass.FOLDER.
If manually constructing and utilizing a BulkLoader object, be certain to synchronize all access to the object. Since the command-line program is single-threaded, BulkLoader objects are not thread-safe by design.
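Pulling those steps together, embedding the loader might look roughly like the sketch below; the -d value is illustrative, the repository and application arguments described above still need to be supplied in whatever form the usage report shows, and the enclosing method simply propagates the checked exceptions.
// Sketch only: paths are illustrative and error handling is omitted.
public static void runLoad() throws Exception {
    String[] args = { "-d", "/content/toLoad" };
    BulkLoader loader = new BulkLoader(args); // parses args; or use new BulkLoader() followed by loader.parseArgs(args)
    loader.validateArgs();                    // throws IllegalStateException if the configuration is invalid
    loader.doLoad();                          // walks the base directory and loads matching files
    loader.finished();                        // release resources when done
}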
To load the default
LoaderFilters,
the BulkLoader looks for the com/bea/content/loader/bulk/loader.properties
file in the CLASSPATH. From that it reads the list of default LoaderFilter
class names from the "loader.defFilters" property. To not use any of the
default filters, specify +filters in the command-line args.
MetaParser,
FileCache
public static final String JNDI_FACTORY
public static final String DEF_MD_FILE_EXT
public static final String DEF_WLS_PROPS_PATH
public static final String DEF_MIME_TYPE
public static final String BEA_BINARY_CHECKSUM
public static final String WLP_BINARY_CHECKSUM
public static final String BEA_BINARY_SIZE
public static final String WLP_BINARY_SIZE
public boolean verbose
public boolean recurse
public boolean doMetaParse
public boolean includeHidden
public boolean inheritProps
public boolean ignoreErrors
public String baseDirectory
public String mdFileExt
This should start with a ".".
public String fileEncoding
public List matchList
Empty to include all.
public List ignoreList
public List htmlMatchList
public List fileList
public List loaderFilters
protected Collection metadataNames
protected long numDocsLoaded
protected String repository
protected String batchFileName
protected final String DEFAULT_PWD_FILE
protected String pwdFileName
protected String DEF_ENCODE_PREFIX
protected String url
protected final String T3S
protected String jndiName
protected String application
protected String user
protected String password
protected String deletePath
protected boolean isFileSystem
public BulkLoader()
public BulkLoader(String[] args) throws IllegalArgumentException, RepositoryException
IllegalArgumentException- thrown on invalid args
RepositoryException
parseArgs(java.lang.String[])
public boolean accept(File dir, String name)
Specified by: accept in interface
FilenameFilter
public void parseArgs(String[] args) throws IllegalArgumentException
args- the input arguments.
BulkLoader.ShowUsageException- thrown if the caller should show a usage report.
IllegalArgumentException- thrown on bad arguments.
protected void initRepoFileDir()
public IRepositoryConfig getRepositoryConfig() throws RepositoryException
RepositoryException
public void usage()
public void usage(PrintWriter out)
public void printArgs()
public void validateArgs() throws IllegalStateException
This does not validate that the arguments are valid. That will be done in initialize().
IllegalStateException
public void finished() throws RemoteException, javax.ejb.RemoveException, Exception
RemoteException
javax.ejb.RemoveException
Exception
public void doDelete() throws Exception
Exception
public void processBatchProperties() throws Exception
BulkLoader.ShowUsageException
Exception
public void processPwdProperties() throws Exception
Exception
public void doLoad() throws Exception
Exception
public void doLoad(File baseDir, String path, Properties mdProperties) throws Exception
If path is a directory, all files underneath it that match our patterns will be included. If path is a file, it will be loaded.
baseDir- the base directory (can be used to get absolute file paths).
path- the path to the file or directory (this can be multi-part, not just name).
mdProperties- the base md properties for file (this should be a clone this method can modify as needed).
SQLException- thrown on a database error.
Exception
public void loadIndividualFile(File f, String path, Properties mdProperties) throws Exception
Exception
public Properties getMetadataProperties(File base, Properties p) throws IOException
This does not do a META data parse.
base- the file or directory base path.
p- the properties to load into (null to create new).
IOException- on an error reading the properties file.
public boolean checkFileAttributes(File f)
public void inspectCurrentDirectory(File f, String path, Properties mdProperties) throws Exception
Exception
public Properties getLoaderFilterProperties(File f, Properties p)
f- the file.
p- the properties object to add to (null to create new one).
public boolean shouldInclude(String name)
public boolean shouldIgnore(String name)
public boolean isHtmlFile(String name)
public String fixPath(String path)
public void debug(String mesg)
Subclasses can override this method to change where messages go.
public void warning(String mesg, Throwable ex)
Subclasses can override this method to change where messages go.
public void warning(String mesg)
public void error(String mesg, Throwable ex)
Subclasses can override this method to change where messages go.
public void error(String mesg)
public static boolean isReadableDirectory(String name)
public static boolean isHidden(File f)
Under UNIX, the File.isHidden() reports that "/weblogicCommerce/dmsBase/." is a hidden file, which it is not. So, this fixes that problem by getting canonicals paths for directories before calling isHidden(). That seems to do the trick.
public static int main(BulkLoader loader, String[] args)
This will take a BulkLoader through the bulk loading steps. Output
will be sent via the BulkLoader's
debug(),
warning(), and
error() methods.
This will not call System.exit().
args- the command-line args.
parseArgs(java.lang.String[]),
validateArgs()
public static void main(String[] args)
This will call System.exit() on invalid args or error. To invoke a bulk load from your own code, create and manipulate a BulkLoader object. You can use the other main method, which does not exit.
args- the command-line args.
main(com.bea.content.loader.bulk.BulkLoader,java.lang.String[]) | http://docs.oracle.com/cd/E26806_01/wlp.1034/e14255/com/bea/content/loader/bulk/BulkLoader.html | CC-MAIN-2016-30 | en | refinedweb |
To recap, I am in the process of migrating this site from ASP.NET Web Forms to the MVC platform. Along with the change in server-side approach, I am also applying the Entity Framework to help with my data access layer. The previous article looked at functionality within the content management system to add articles to the site, which includes applying an Article Type (one-to-many relationshhip) and Category tags (many-to-many). The Entity diagram is as follows:
Within the database, there is a bridging table between Articles and Categories (ArticleCategories). This contains just the ArticleID and CategoryID as foreign keys. A Composite Primary Key has been created on ArticleCategories which includes both of the fields in the table. Entity Framework needs a unique constraint to work nicely, and this makes sense. It means that I cannot have two entries where the same article is linked with the same category. If such a constraint were not put in place, EF will make the table read-only.
The workflow for modifying an existing Article entry is straightforward - I will present a list of Categories, and once one of those is selected, I will be given a list of article titles that appear in that category. Selecting one of those will present the article itself in an editable form, and a button for submitting the changes. Dependant, or cascading select lists immediately click my AJAX switches, so I will use jQuery to do this. The EditArticle View therefore contains references to jQuery, and the SelectBoxes plugin that featured in this article:
<script type="text/javascript" src="../../scripts/jquery-1.3.2.js"></script>
<script type="text/javascript" src="../../scripts/jquery.selectboxes.min.js"></script>
.........
<h2>Edit Article</h2>
<div>
  <%= Html.DropDownList(
        "CategoryID",
        new SelectList(
          (IEnumerable<Category>)ViewData["categories"],
          "CategoryID",
          "CategoryName"
        ),
        string.Empty) %>
</div>
<div>
  <select name="ArticleID" id="ArticleID"></select>
</div>
<div id="articleform"></div>
There are two Select Lists - one built via an HtmlHelper extension method, and an empty one in html (ArticleID). There is also an empty div called articleform, which will get populated by an edit form via AJAX. The first Select list is populated with data by the Controller when the view is first requested. The controller action takes no arguments and just gets data from the Repository to list the Categories:
[ValidateInput(false)]
public ActionResult EditArticle()
{
  ViewData["Categories"] = repository.GetCategoryList();
  return View();
}
And the Repository method that gets called:
public IQueryable<Category> GetCategoryList()
{
  return (de.CategorySet.OrderBy(c => c.CategoryName)).AsQueryable();
}
All quite easy so far, and the result when running the page is this (as you would expect):
We need some AJAX help to get the list of articles when one of the Categories is selected, and to display them into the second select list. We also need some code on the server-side to respond to the AJAX request, so that involves another Action on the controller, and a data access method in the Repository. The controller action will return JSON so that we can work with it easily within jQuery:
public JsonResult GetArticlesByCategory(int categoryid)
{
  return Json(repository.GetArticlesByCategory(categoryid));
}
And the Repository method is as follows:
public IQueryable<ArticleTitle> GetArticlesByCategory(int id)
{
  return (de.ArticleSet.Where(a => a.Categories
            .Any(c => c.CategoryID == id))
            .OrderByDescending(a => a.DateCreated)
            .Select(a => new ArticleTitle
            {
              ID = a.ArticleID,
              Head = a.Headline,
            }))
            .AsQueryable();
}
If you have read previous ramblings of mine about migrating across to MVC and EF, you will already know that I created a small class called ArticleTitle, which just contains two properties - the Article Title and its ID. This means I can list articles without having to bring back all of the text content, date created, etc etc which I don't need to display. This lightweight class is perfect for the second select list. So I have a method that gets the items, and an action that turns the result into JSON. Now for some jQuery to fit the two together:
<script type="text/javascript"> $(document).ready(function() { $("#CategoryID").change(function() { $.ajaxSetup({ cache: false }); $.getJSON("/Admin/GetArticlesByCategory/" + $("#CategoryID").val(), null, function(data) { $("#ArticleID").removeOption(/./).addOption("", "", false); for (var i = 0; i < data.length; i++) { var val = data[i].ID; var text = data[i].Head; $("#ArticleID").addOption(val, text, false); } }); }); $("#ArticleID").change(function() { $("#articleform").load("/Admin/GetArticleForEdit/" + $("#ArticleID").val()); }); }); </script>
This goes into the <head> area of the EditArticle view. It applies an event handler to the onchange event of the Categories select list, which fires an AJAX request to the controller action just detailed. Before it does that, an ajaxSetup option is set to prevent IE from caching the results in the select list. Having obtained the JSON from the controller action, the SelectBoxes plug in clears the ArticleID select list of any data that was populated by a previous request, and then adds an empty string as a default option, before iterating over the JSON and populating the ArticleID select list. Finally, it adds an event handler to the onchange event of the ArticleID select list, which loads the result of a call to another controller action - GetArticleForEdit, which takes the ID of the article as a parameter:
public ActionResult GetArticleForEdit(int id)
{
  var article = repository.GetArticle(id);
  var selectedvalue = article.ArticleTypes.ArticleTypeID;
  ViewData["ArticleTypes"] = new SelectList(
                               repository.GetArticleTypes(),
                               "ArticleTypeID",
                               "ArticleTypeName",
                               selectedvalue
                             );
  ViewData["Categories"] = repository.GetCategoryList();
  ViewData["Article"] = article;
  return View("EditArticlePartial");
}
The action adds some data to the ViewDataDictionary - a SelectList populated with SelectListItems for the select list of article types, together with the current Article's ArticleTypeID as the selected item (the last argument in the parameter list), a list of categories and the details of the article to be edited, and returns it to a Partial View, EditArticlePartial, which then makes use of the data to provide the html for a populated edit form. The repository.GetArticle() method deserves a quick look:
public Article GetArticle(int id)
{
  return (de.ArticleSet
            .Include("ArticleTypes")
            .Include("Categories")
            .Where(a => a.ArticleID == id))
            .First();
}
An Article object is returned together with its collection of ArticleTypes and its collection of Categories through the use of the Include() extension method. This ensures that the collections are populated and loaded. The string in the parameter to the Include() method is the navigation property that features in the Entity Diagram at the beginning of the this article. We need the collections so that we can map checked checkboxes and select lists' selected items back when the edit form is displayed:
<%@ Control <table> <tr> <td>Headline</td> <td><%= Html.TextBox("Headline", article.Headline, new {</td> <td><input type="submit" name="action" id="action" value="Submit" /></td> </tr> </table> </form>
Given the lack of helpers for multiple checkboxes within the MVC Framework, I have resorted to a classic ASP style of code. It's quite simple: as each option element is written to the browser, its value is compared to the CategoryID values in the collection of Categories that come with the article that was passed in via ViewData. If a match is found, checked="checked" is added to the option as an attribute. I've spaced the code out here in the sample, but written it all in one line in the original. The reason for that is that the html that gets rendered will retain the linebreaks that occur as a result of formatting the code to a more readable form. Just like the old ColdFusion pages, where if you viewed source, you would see large chunks of white space where server-side code was embedded. The real answer is to write your own Html.Helper for the CheckBoxList.
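A rough sketch of such a helper is shown below. This is not the code used on this site; the class and method names, the signature and the markup are assumptions, included only to illustrate the extension-method approach.
using System.Collections.Generic;
using System.Text;
using System.Web.Mvc;

public static class CheckBoxListExtensions
{
  // Renders one checkbox per item, checking those whose Selected flag is set
  public static string CheckBoxList(this HtmlHelper helper, string name, IEnumerable<SelectListItem> items)
  {
    var sb = new StringBuilder();
    foreach (var item in items)
    {
      sb.AppendFormat("<input type=\"checkbox\" name=\"{0}\" value=\"{1}\"{2} />{3}",
                      name, item.Value, item.Selected ? " checked=\"checked\"" : "", item.Text);
    }
    return sb.ToString();
  }
}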
You will notice in the example shown in the image that my articles contain html tags. If I click submit at the moment, I will get a YSOD (Yellow Screen of Death) telling me that potentially dangerouss values were posted. So I prevent that by attributing the controller action that will take care of updating the article with ValidateInput(false):
[ValidateInput(false)]
[AcceptVerbs("POST")]
public ActionResult EditArticle(Article article)
{
  var articleTypeId = Request.Form["ArticleTypeID"];
  var categoryId = Request["CategoryID"].Split(',');
  repository.EditArticle(article, articleTypeId, categoryId);
  return Content("Updated");
}
The final part of code to look at is the actual repository.EditArticle() method that persists the changed values in the database:
public void EditArticle(Article article, string articleTypeId, string[] categoryId)
{
  var id = 0;
  Article art = de.ArticleSet
                  .Include("ArticleTypes")
                  .Include("Categories")
                  .Where(a => a.ArticleID == article.ArticleID)
                  .First();

  // Remove the existing categories, working from the end so removals don't shift unvisited items
  var count = art.Categories.Count;
  for (var i = count - 1; i >= 0; i--)
  {
    art.Categories.Remove(art.Categories.ElementAt(i));
  }

  foreach (var c in categoryId)
  {
    id = int.Parse(c);
    Category category = de.CategorySet.Where(ct => ct.CategoryID == id).First();
    art.Categories.Add(category);
  }

  art.Headline = article.Headline;
  art.Abstract = article.Abstract;
  art.Maintext = article.Maintext;
  art.DateAmended = DateTime.Now;
  art.ArticleTypesReference.EntityKey = new EntityKey(
    "DotnettingEntities.ArticleTypeSet",
    "ArticleTypeID",
    int.Parse(articleTypeId)
  );
  de.SaveChanges();
}
The first section of the code gets the article that is to be updated. Updating the many-to-many relationship that exists between the Article and its Categories is next. If I was using a stored procedure, I would include some SQL that simply removes the existing rows from the bridging table and replaces them with whatever was posted as part of the updated article. This will execute no matter whether there were any changes to the categories or not. The Entity Framework appears to be a lot more clever than that. The code does indeed remove the categories from the article's collection and marks them for deletion. Then the code adds whichever categories were posted from the form. However, if you use Sql Profiler to check, you will see that a SQL command to delete and insert items into the ArticleCategories bridging table only occurs if there have indeed been any changes to the categories that the article was related to. Nevertheless, an SQL command is executed for each of the Category objects that are fetched before being added to the collection.
The rest of the code is mostly self-explanatory. The items that could be edited in the form are updated, along with the DateAmended property. Finally, the relationship between the Article and the ArticleType is updated by setting the value of the key for the potentially changed ArticleType. An alternative method of updating a one-to-many relationship like this is to query for the ArticleType object that has the ArticleTypeID that was posted, and then apply that to the ArticleTypes property of the Article:
int id = int.Parse(articleTypeId);
ArticleType at = de.ArticleTypeSet.Where(a => a.ArticleTypeID == id).First();
article.ArticleTypes = at;
However, doing this causes an SQL query to be executed against the database. Setting the EntityKey value does not.
The result is that it all works. But I'm not 100% happy with the way that the many-to-many relationship is updated. I will find some time to do some more investigation - unless someone comes up with a cleaner way to manage this element of the process in the meantime.
9 Comments
- Rahul Gupta
- Jim
A suggestion would be to add links where you reference other posts so you can easily follow to the post you refer to.
Thanks for the post. It has been very helpful.
- venkateshsan
- Mike
All the code that is relevant to the article is in the article. I've done my best to try to simplify the concepts. If that is not enough, then I'm sorry. But I will not be making the source code of this application available.
- Me12
EntityReference<FinanceModel.DocumentDestinations> dd = new EntityReference<FinanceModel.DocumentDestinations>();
dd.EntityKey = DocumentDestination(documentDestination_ID).EntityKey;
document.DocumentDestinationsReference = dd;
- nairit
- ahmed Zubair
- Max
- Mike
All of the relevant code is included in the article with detailed explanations. If you are having problems adapting the concepts to your particular projects, I suggest you post a question to the forums at and explain clearly what you are trying to achieve, what you have done so far and what problems/error messages you get. | http://www.mikesdotnetting.com/article/110/asp-net-mvc-entity-framework-modifying-one-to-many-and-many-to-many-relationships | CC-MAIN-2016-30 | en | refinedweb |
Annotation
The article will familiarize developers of application software with tasks set before them
On default by operating system in the article Windows is meant. By 64-bit systems x86-64 (AMD64) architecture is understood. By the development environment – Visual Studio 2005/2008. You may download demo sample which will be touched upon in the article from this address:.
Introduction
Parallel computing and large RAM size are now available not only for large firmware complexes meant for large-scale scientific computing, but are being used also for solving everyday tasks related to work, study, entertainment and computer games.
The possibility of paralleling and large RAM size, on the one hand, make the development of resource-intensive applications easier, but on the other hand, demand more qualification and knowledge in the sphere of parallel programming from a programmer. Unfortunately, a lot of developers are far from possessing such qualification and knowledge. And this is not because they are bad developers but because they simply haven’t come across such tasks. This is of no surprise as creation of parallel systems of information processing has been until recently carried out mostly in scientific institutions while solving tasks of modeling and forecasting. Parallel computer complexes with large memory size were used also for solving applied tasks by enterprises, banks etc, but until recently specificity of development, testing and debugging of such systems by themselves. By resource-intensive software we mean program code which uses efficiently abilities of multiprocessor systems and large memory size (2GB and more). That’s why we’d like to bring some knowledge to developers who may find it useful while mastering modern parallel 64-bit systems in the nearest future.
It will be fair to mention that problems related to parallel programming have been studied in detail long ago and described in many books, articles and study courses. That’s why this article will devote most attention to the sphere of organizational and practical issues of developing high-performance applications and to the use of 64-bit technologies.
While talking about 64-bit systems we’ll consider that they use LLP64 data model (see table 1). It is this data model that is used in 64-bit versions of Windows operating system. But information given here may be as well useful while working with systems with a data model different from LLP64.
Table 1. Data models and their use in different operating systems.
Be brave to use parallelism and 64-bit technology
Understanding the conservatism in the sphere of developing large program systems, we would like, though, to advise you to use those abilities which are provided by multicore 64-bit processors. It may become a large competitive advantage over similar systems and also become a good reason for news in advertisement companies.
It is senseless to delay 64-bit technology and parallelism as their mastering is inevitable. You may ignore all-round passion for a new programming language or optimizing a program for MMX technology. But you cannot avoid increase of the size of processed data and slowing down of clock frequency rise. Let’s touch upon this statemzent picture 1). There is an article on this topic which is rather interesting: "The Free Lunch Is Over. A Fundamental Turn Toward Concurrency in Software" [1].
Picture 1. Rise of clock frequency and the number of transistors on a dice. During last 30 years productivity has been determined by clock frequency, optimization of command execution and cache enlarging. In next years it will be determined by the number of cores. Development of parallel programming means will become the main direction of programming technologies’ development.
Parallel programming will allow not only to solve the problem of slowing down of clock frequency rise speed but in general come to creation of scalable software which will use fully abort because of memory shortage after several hours of work. Thirdly, this is an opportunity to easily work with data arrays of several GB. Sometimes it results in amazing rise of productivity through means of excluding access operations to the hard disk. give “Yes” answer at least for one.
Provide yourself with good hardware
So, you decided to use parallelism and 64-bit technologies in your program developments. Perfect. So, let’s at first touch upon some organizational questions.
Despite that you have to face the development of complex programs processing large data sizes, can feel deeply all the troubles caused by the lack of RAM memory working it so that those processes which took 10 minutes take less than 5 now.
I want you once again to pay your attention that the aim is not to occupy a programmer with a useful task in his leisure-time but to speed up all the processes in general. Installation of a second computer (dual-processor system) with the purpose that the programmer will switch over to other tasks while waiting is wrong at all. A programmer’s labor is not that of a street cleaner, who can clear a bench from snow during a break while breaking ice. A programmer’s labor needs concentration on the task and keeping in mind a lot of its elements. Don’t try to switch over a programmer (this try will be useless), try to make it so that he could continue to solve the task he’s working at as soon as possible. According to the article “Stresses of multitask work: how to fight them” [2] to go deeply into some other task or an interrupted one a person needs 25 minutes. If you don’t provide continuity of the process, half the time will be wasted on this very switching over. It doesn’t matter what it is – playing ping-pong or searching for an error in another program.
Don’t spare money to buy some more GB of memory. This purchase will be repaid after several steps of debugging a program allocating large memory size. Be aware that lack of RAM memory causes swapping and can slow down the process of debugging from minutes to hours.
Don’t spare money to provide the machine with RAID subsystem. Not to be a theorist, here you are an example from our own experience (table 2).
Configuration (pay attention to RAID) Time of building of an average project using large number of exterior libraries. AMD Athlon™ 64 X2 Dual Core Processor 3800+, 2 GB of RAM,2 x 250Gb HDD SATA - RAID 0 95 minutes AMD Athlon™ 64 X2 Dual Core Processor 4000+, 4 GB of RAM,500 Gb HDD SATA (No RAID) 140 minutes Table 2. An example of how RAID influences the speed of an application’s building. Dear managers! Trust me that economizing on hardware is compensated by delays of programmers’ work. Such companies as Microsoft provide the developers with latest models of hardware not because of generosity and wastefulness. They do count their money and their example shouldn’t be ignored.
At this point the part of the article devoted to managers is over, and we would like to address again creators of program solutions. Do demand for the equipment you consider to be necessary for you. Don’t be shy, after all your manager is likely just not to understand that it is profitable for everybody. You should enlighten him. Moreover, in case the plan isn’t fulfilled it is you who will seem to be guilty. It is easier to get new machinery than try to explain on what you waste your time. Imagine yourself how can your excuse projecting a new system and even not the study of theory, this task is to demand to buy all the necessary hardware and software in good time. Only after that you may begin to develop resource-intensive program solutions efficiently. It is impossible to write and check parallel programs without multicore processors. And it is impossible to write a system for processing large data sizes without necessary RAM memory size.
Before we switch over to the next topic, we would like to share some ideas with you which will help you to make your work test packs) for which usual machines are not productive enough, may be solved by using several special high-performance machines with remote access. Such an example of remote access is Remote Desktop or X-Win. Usually simultaneous test launches are carried out only by few developers. And for a group of 5-7 developers 2 dedicated high-performance machines are quite enough. It won’t be the most convenient solution but it will be rather saving in comparison to providing every developer with such workst a lot of effort and time.
Causes why the debugger is not so attractive
Bad applicability of the debugger while working debugging variant there occurs allocation of larger memory size for control of going out of the arrays’ limits, memory fill during allocation/deletion etc. This slows down the work even more.
One can truly notice that a program may be debugged not necessarily at large working data sizes and one may manage with testing tasks. Unfortunately, this is not so. An unpleasant surprise consists in that while developing 64-bit systems you cannot be sure of the correct work of algorithms, testing them at small data sizes instead of working sizes of many GB.
Here you are another simple example demonstrating the problem of necessary testing at large data sizes.
#include #include #include #include©; } std::cout << "Array size=" << buffer.size() << std::endl; return 0; } This program reads the file and saves in the array all the symbols related to capital English letters. was used correctly on the 32-bit system – with taking into consideration this limit and no errors occurred. On the 64-bit system we’d like to process files of larger size as there is no limit of the array’s size of 2 GB. Unfortunately, the program is written incorrectly from the point of view of LLP64 data model (see table 1) used in the 64-bit Windows version. The loop contains int type variable whose size is still 32 bits. If the file’s size is 6 GB, condition "i != fileSize" will never be fulfilled and an infinite loop will occur.
This code is mentioned to show how difficult it is to use the debugger while searching for errors which occur only at a large memory size. On getting an eternal loop while processing the file on the 64-bit system you may take a file of 50 bites for processing and watch how the functions works under the debugger. But an error won’t occur at such data size and to watch the processing of 6 billion elements under the debugger is impossible.
Of course, you should understand that this is only an example and that it can be debugged easily and the cause of the loop may be found. Unfortunately, this often becomes practically impossible in complex systems because of the slow speed of the processing of large data sizes.
To learn more about such unpleasant examples see articles “Forgotten problems of 64-bit program development” [3] and “20 issues of porting C++ code on the 64-bit platform” [4].
2) Multi-threading
The method of several instruction threads executed simultaneously for speeding up the processing of large data size has been used for a long time and rather successfully in cluster systems and high-performance servers. But only with the appearance of multicore processors on market, the possibility of parallel data processing is being widely used by application software. And the urgency of the parallel system development will only increase in future.
Unfortunately, it is not simple to explain what is difficult about debugging of parallel programs. Only on facing the task of searching and correcting errors in parallel systems one may feel and understand the uselessness of such a tool as a debugger. But in general, all the problems may be reduced to the impossibility of reproduction of many errors and to the way the debugging process influences the sequence of work of parallel algorithms.
To learn more about the problems of debugging parallel systems you may read the following articles: “Program debugging technology for machines with mass parallelism” [5], "Multi-threaded Debugging Techniques" [6], "Detecting Potential Deadlocks" [7].
The difficulties described are solved by using specialized methods and tools. You may handle 64-bit code by using static analyzers working with the input program code and not demanding its launch. Such an example is the static code analyzer Viva64 [8].
To debug parallel systems you should pay attention to such tools as TotalView Debugger (TVD) [9]. TotalView is the debugger for languages C, C++ and Fortran which works at Unix-compatible operating system and Mac OS X. It allows to control execution threads, show data of one or all the threads, can synchronize the threads through breakpoints. It supports also parallel programs using MPI and OpenMP.
Another interesting application is the tool of multi-threading analysis Intel® Threading Analysis Tools [10].
Use of a logging system
All the tools both mentioned and remaining undiscussed are surely useful and may be of great help while developing high-performance applications. But one shouldn’t forget about such time-proved methodology as the use of logging systems. Debugging by logging method hasn’t become less urgent for several decades and still remains a good tool about which we’ll speak in detail. The only change concerning logging systems is growing demands towards them. Let’s try to list the properties a modern logging system should possess for high-performance systems:
- The code providing logging of data in the debugging version must be absent in the output version of a software product. Firstly, this is related to the increase of performance and decrease of the software product’s size. Secondly, it doesn’t allow to use debugging information for cracking of an application and other illegal actions.
- The logging system’s interfaces should be compact not the number of its details. carry out realized. This allows to include the debugging information into Release-versions what is important when carrying out debugging at large data size. Unfortunately, when compiling the un pairs of brackets what is often forgotten. That’s why let’s bring some improvement: is turned off this code doesn’t matter at all and you can safely use it in critical code sections.
enum E_LogVerbose { Main, Full }; #ifdef DEBUG_MODE void WriteLog(E_LogVerbose, const char *strFormat, ...) { ... } #else ... #endif WriteLog (Full, "Coordinate = (%d, %d)\n", x, y); This is convenient in that way that you can decide whether to filter unimportant messages or not after the program’s shutdown by using a special utility. The disadvantage of this method is that all the information is shown – both important and unimportant, what may influence the productivity badly. That’s why you may create several functions of WriteLogMain, WriteLogFull type and so on, whose realization will depend upon the mode of the program’s building. We mentioned that the writing of the debugging information must not influence the speed of the algorithm’s work too much. We can reach this by creating a system of gathering messages, the writing of which occurs in the thread executed simultaneously. The outline of this mechanism is shown on picture 2.
Picture 2. Logging system with lazy data write. As you can see on the picture the next data portion is written into an intermediate array with strings of fixed length. The fixed size of the array and its strings allows to exclude expensive operations of memory allocation. of anticipatory writing of information into the file.
The described mechanism provides practically instant execution of WriteLog function. If there are offloaded processor’s cores in the system the writing into the file will be virtually transparent for the main program code.
The advantage of the described system is that it can function practically without changes while debugging the parallel program, when several threads are being written into the log simultaneously. You need just to add a process identifier so that you can know later from what threads the messages were received (see picture 3).
Picture 3. Logging system while debugging multithread applications. The last improvement we’d like to offer is organization to show this by an example.
Program code:
class NewLevel { public: NewLevel() { WriteLog("__BEGIN_LEVEL__\n"); } {.
The use of right data types from the viewpoint of 64-bit technologies
The use of. To such types int, unsigned, long, unsigned long, ptrdiff_t, size_t and pointers can be referred. Unfortunately, there are practically no popular literature and articles which touch upon the problems of choosing types. And those sources which do, for example "Software Optimization Guide for AMD64 Processors" [12], are seldom read by application programmers.
The urgency of right choice of base types for processing data is determined by two important causes: the correct code’s work and its efficiency.
Due to the historical development the base and the most often used integer type in C and C++ languages is int or unsigned int. It is accepted to consider int type the most optimal as its size coincides with the length of the processor’s computer word.
The computer word is a group of RAM memory bits taken by the processor at one call (or processed by it as a single group) and usually contains 16, 32 or 64 bits.
The tradition to make data models LLP64 and LP64 which are used in 64-bit Windows operating system and most Unix systems (Linux, Solaris, SGI Irix, HP UX 11).
It is a bad decision to leave int type size of 32-bit due to many reasons, but it is really a reasonable way to choose the lesser of two evils. First of all, it is related to the problems of providing backward compatibility. To learn more about the reasons of this choice you may read the blog "Why did the Win64 team choose the LLP64 model?" [13] and the article "64-Bit Programming Models: Why LP64?" [14].
For developers of 64-bit applications all said above is the reason to follow two new recommendations in the process of developing software.
Recommendation 1. Use ptrdiff_t and size_t types for the loop counter and address arithmetic’s counter instead of int and unsigned.
Recommendation 2. Use ptrdiff_t and size_t types for indexing in arrays instead of int and unsigned.
In other words you should use whenever possible data types whose size is 64 bits in a 64-bit system. Consequently you shouldn’t use constructions like:
for (int i = 0; i != n; i++) array[i] = 0.0; Yes, this is a canonical code example. Yes, it is included in many programs. Yes, with it learning C and C++ languages begins. But it is recommended not to use it anymore. Use either an iterator or data types ptdriff_t and size_t). Consequently it cannot be used instead of ptrdiff_t and size_t types.
2) The use by examples why we are so insistent asking you to use ptrdiff_t/size_t type instead of usual int/unsigned type.
We’ll begin with an example illustrating the typical error of using unsigned type for the loop counter in 64-bit code. We have already described a similar example before but let’s see it once again as this error is widespread:
size_t Count = BigValue; for (unsigned Index = 0; Index != Count; ++Index) { ... } This is typical code variants of which size_t type instead of unsigned. The next example shows the error of using int type for indexing large arrays:
double *BigArray; int Index = 0; while (...) BigArray[Index++] = 3.14f; This code doesn’t seem suspicious to an application developer accustomed to the practice of using variables of int or unsigned types as arrays’ indexes. Unfortunately, this code won’t work in a 64-bit system if the size of the processed array BigArray becomes more than four billion items. In this case an overflow of Index variable will occur and the result of the program’s work will be incorrect (not the whole array will be filled). Again, the correction of the code is to use ptrdiff_t or size_t types for indexes. As the last example we’d like to demonstrate the potential danger of mixed use of 32-bit and 64-bit types, which you should avoid whenever possible. Unfortunately, few developers think about the consequences of inaccurate mixed arithmetic and the next example is absolutely unexpected for many (the results are received with the use of ptrdiff_t and size_t types from the viewpoint of productivity. For demonstration we’ll take a simple algorithm of calculating the minimal length of the path in the labyrinth. You may see the whole code of the program through this link:.
In this article we place only the text of functions FindMinPath32 and FindMinPath64.)[eight)[eight; } FindMinPath32 function is written in classic 32-bit style with the use of unsigned types. FindMinPath64 function differs from it only in that all the unsigned types in it are replaced Table 2. Execution time size_t type instead of unsigned allows the compiler to construct more efficient code working 8% faster!
It is a simple and clear example of how the use of data not equal to the computer word’s size decreases the algorithms productivity. Simple replacement of int and unsigned types with ptrdiff_t and size_t may give great productivity increase. First of all this refers to the use of these data types for indexing arrays, address arithmetic and organization of loops.
We hope that having read all said above you will think if you should continue to write:
for (int i = 0; i !=n; i++) array[i] = 0.0; To automate the error search in 64-bit code developers of Windows-application may take into consideration the static code analyzer Viva64 [8]. Firstly, its use will help to find most errors. Secondly, while developing programs under its control you will use 32-bit variables more seldom, avoid mixed arithmetic with 32-bit and 64-bit data types what will at once increase productivity of your code. For developers of Unix-system such static analyzer may be of interest as Gimpel Software PC-Lint [15] and Parasoft C++test [16].].
Additional ways of increasing productivity of program systems
In the last part of this article we’d like to touch upon some more technologies which may be useful for you while developing resource-intensive program solutions.
Intrinsic- functions
Intrinsic-functions are special system-dependent functions which execute actions impossible to be executed on the level of C/C++ code or which execute these actions much more efficiently. As the matter of fact they allow to avoid using inline-assembler because it is often impossible or undesirable.
Programs may use intrinsic-functions for creating faster code due to the absence of overhead costs on the call of a usual function type. In this case, of course, the code’s size will be a bit larger. In MSDN the list of functions is given which may be replaced with their intrinsic-versions. For example, these functions are memcpy, strcmp etc.
In Microsoft Visual C++ compiler there is a special option «/Oi» which allows to automatically replace calls of some functions with intrinsic-analogs.
Beside automatic replacement of usual functions with intrinsic-variants we can explicitly use intrinsic-functions in the code. It in compilers, while assembler code has to be updated manually.
- Inline optimizer doesn’t work with assembler code, that’s why you need exterior linking of the module, while intrinsic-code doesn’t need this.
- Intrinsic-code is easier to port than assembler code.
- The use of intrinsic-functions in automatic mode (with the help of the compiler’s key) allows to get some per cent of productivity increase free and “manual” mode even more. That’s why the use of intrinsic-functions is justified.
- To learn more about the use of intrinsic-functions you may see Visual C++ group’s blog [21].
- Data alignment doesn’t influence the code productivity so greatly as it was 10 years ago. But sometimes you can get a little profit in this sphere too saving some memory and productivity.
struct foo_original {int a; void *b; int c; }; This structure takes 12 bytes in 32-bit mode but in 64-bit mode it takes 24 bytes. In order to make it so that this structure takes prescribed 16 bytes in 64-bit mode you should change the sequence order of fields: struct foo_new { void *b; int a; int c; }; In some cases it is useful to help the compiler explicitly by defining the alignment manually in order to increase productivity. For example, SSE); Sources "Porting and Optimizing Multimedia Codecs for AMD64 architecture on Microsoft Windows" [19], "Porting and Optimizing Applications on 64-bit Windows for AMD64 Architecture" [20] offer detailed review of these problems.
Files mapped into memory
With the appearance of 64-bit systems the technology of mapping of files into memory became more attractive because the data access hole increased. to 32-bit architectures. Only a region of the file can be mapped into the address space, and to access such a file by memory mapping, those regions will have to be mapped into and out of the address space as needed. On 64-bit windows you have much larger address space, so you may map whole file at once.
Keyword __restrict
One of the most serious problems for a compiler is aliasing. When the code reads and writes memory it is often impossible at the step of compilation to determine whether more than one index is provided with access to this memory space, i.e. whether more than one index can be a "synonym" for one and the same memory space. That's why the compiler should be very careful working inside a loop in which memory is both read and written while storing data in registers and not in memory. This insufficient use of registers may influence the performance greatly.
The keyword __restrict is used to make it easier for the compiler to make a decision. It "tells" the compiler to use registers widely.
Keyword __restrict allows the compiler not to consider the marked pointers aliased, i.e. referring to one and the same memory area. In this case the compiler can provide more efficient optimization. Let’s look at the example:
int * __restrict a; int *b, *c; for (int i = 0; i < 100; i++) { *a += *b++ - *c++ ; // no aliases exist } In this code the compiler can safely keep the sum in the register related to variable “a” avoiding writing into memory. MSDN is a good source of information about the use of __restrict keyword.
SSE- instructions
Applications executed on 64-bit processors (independently of the mode) will work more efficiently if SSE-instructions are used in them instead of MMX/3DNow. This is related to the capacity of processed data. SSE/SSE2 instructions operate with 128-bit data, while MMX/3DNow only with 64-bit data. That’s why it is better to rewrite the code which uses MMX/3DNow with SSE-orientation.
We won’t dwell upon SSE-constructions in this article offering the readers who may be interested to read the documentation written by developers of processor architectures.
Some particular rules of using language constructions
64-bit architecture gives new opportunities for optimizing the programming language on the level of separate operators. These are the methods (which have become traditional already) of “rewriting” pieces of a program for the compiler to optimize them better. Of course we cannot recommend these methods for mass use but it may be useful to learn about them.
On the first place of the whole list of these optimizations is manual unrolling of the loops. The essence of this method is clear (/fp:fast key for Visual C++) but not always.
- Another syntax optimization is the use of array notation instead of pointer one.
Conclusion
Despite that you’ll have to face many difficulties while creating program systems using hardware abilities of modern computers efficiently, it is worthwhile. Parallel 64-bit systems provide new possibilities in developing real scalable solutions. They allow to enlarge the abilities of modern data processing software tools, be it games, CAD-systems or pattern recognition. We wish you luck in mastering new technologies!
References.
- herb Sutter. The Free Lunch Is Over. A Fundamental Turn Toward Concurrency in Software. <a href="" target="_blank">
-. | http://www.gamedev.net/page/resources/_/technical/general-programming/development-of-resource-intensive-applications-r2496 | CC-MAIN-2016-30 | en | refinedweb |
To enable more interoperability scenarios, Microsoft has released today.
In this post, we’ll take a look at the latest releases of two open source tools that help PHP developers implement OData producer support quickly and easily on Windows and Linux platforms:
- The OData Producer Library for PHP, an open source server library that helps PHP developers expose data sources for querying via OData. (This is essentially a PHP port of certain aspects of the OData functionality found in System.Data.Services.)
- The OData Connector for MySQL, an open source command-line tool that generates an implementation of the OData Producer Library for PHP from a specified MySQL database.
These tools are written in platform-agnostic PHP, with no dependencies on .NET.
OData Producer Library for PHP
Last September, my colleague Claudio Caldato announced the first release of the Odata Producer Library for PHP, an open-source cross-platform PHP library available on Codeplex. This library has evolved in response to community feedback, and the latest build (Version 1.1) includes performance optimizations, finer-grained control of data query behavior, and comprehensive documentation.
OData can be used with any data source described by an Entity Data Model (EDM). The structure of relational databases, XML files, spreadsheets, and many other data sources can be mapped to an EDM, and that mapping takes the form of a set of metadata to describe the entities, associations and properties of the data source. The details of EDM are beyond the scope of this blog, but if you’re curious here’s a simple example of how EDM can be used to build a conceptual model of a data source.
The OData Producer Library for PHP is essentially an open source reference implementation of OData-relevant parts of the .NET framework’s System.Data.Services namespace, allowing developers on non-.NET platforms to more easily build OData providers. To use it, you define your data source through the IDataServiceMetadataProvider (IDSMP) interface, and then you can define an associated implementation of the IDataServiceQueryProvider (IDSQP) interface to retrieve data for OData queries. If your data source contains binary objects, you can also implement the optional IDataServiceStreamProvider interface to handle streaming of blobs such as media files.
Once you’ve deployed your implementation, the flow of processing an OData client request is as follows:
- The OData server receives the submitted request, which includes the URI to the target resource and may also include $filter, $orderby, $expand and $skiptoken clauses to be applied to the target resource.
- The OData server parses and validates the headers associated with the request.
- The OData server parses the URI to resource, parses the query options to check their syntax, and verifies that the current service configuration allows access to the specified resource.
- Once all of the above steps are completed, the OData Producer for PHP library code is ready to process the request via your custom IDataServiceQueryProvider and return the results to the client.
These processing steps are the same in .NET as they are in the OData Producer Library for PHP, but in the .NET implementation a LINQ query is generated from the parsed request. PHP doesn’t have support for LINQ, so the producer provides hooks which can be used to generate the PHP expression by default from the parsed expression tree. For example, in the case of a MySQL data source, a MySQL query expression would be generated.
The net result is that PHP developers can offer the same querying functionality on Linux and other platforms as a .NET developer can offer through System.Data.Services. Here are a few other details worth nothing:
- In C#/.NET, the System.Linq.Expressions namespace contains classes for building expression trees, and the OData Producer Library for PHP has its own classes for this purpose.
- The IDSQP interface in the OData Producer Library for PHP differs slightly from .NET’s IDSQP interface (due to the lack of support for LINQ in PHP).
- System.Data.Services uses WCF to host the OData provider service, whereas the OData Producer Library for PHP uses a web server (IIS or Apache) and urlrewrite to host the service.
- The design of Writer (to serialize the returned query results) is the same for both .NET and PHP, allowing serialization of either .NET objects or PHP objects as Atom/JSON.
For a deeper look at some of the technical details, check out Anu Chandy’s blog post on the OData Producer Library for PHP or see the OData Producer for PHP documentation available on Codeplex.
OData Connector for MySQL
The OData Producer for PHP can be used to expose any type of data source via OData, and one of the most popular data sources for PHP developers is MySQL. A new code generator tool, the open source OData Connector for MySQL, is now available to help PHP developers implement OData producer support for MySQL databases quickly and simply.
The OData Connector for MySQL generates code to implement the interfaces necessary to create an OData feed for a MySQL database. The syntax for using the connector is simple and straightforward:
php MySQLConnector.php /db=mysqldb_name /srvc=odata_service_name /u=db_user_name /pw=db_password /h=db_host_name
The MySQLConnector generates an EDMX file containing metadata that describes the data source, and then prompts the user for whether to continue with code generation or stop to allow manual editing of the metadata before the code generation step.
EDMX is the Entity Data Model XML format, and an EDMX file contains a conceptual model, a storage model, and the mapping between those models. In order to generate an EDMX from a MySQL database, the OData Connector for MySQL needs to be able to do database schema introspection, and it does this through the Doctrine DBAL (Database Abstraction Layer). You don’t need to understand the details of EDMX in order to use the OData Connector for MySQL, but if you’re curious see the .edmx File Overview article on MSDN.
If you’re familiar with EDMX and wish to have very fine-grained control of the exposed OData feeds, you can edit the metadata as shown in the diagram, but this step is not necessary. You can also set access rights for specific entities in the DataService::InitializeService method after the code has been generated, as described below.
If you stopped the process to edit the EDMX, one additional command is needed to complete the generation of code for the interfaces used by the OData Producer Library for PHP:
php MySQLConnector.php /srvc=odata_service_name
Note that the generated code will expose all of the tables in the MySQL database as OData feeds. In a typical production scenario, however, you would probably want to fine-tune the interface code to remove entities that aren’t appropriate for OData feeds. The simplest way to do this is to use the DataServiceConfiguration object in the DataService::InitializeService method to set the access rights to NONE for any entities that should not be exposed. For example, you may be creating an OData provider for a CMS, and you don’t want to allow OData queries against the table of users, or tables that are only used for internal purposes within your CMS.
For more detailed information about working with the OData Connector for MySQL, refer to the user guide available on the project site on Codeplex.
These tools are open-source (BSD license), so you can download them and start using them immediately at no cost, on Linux, Windows, or any PHP platform. Our team will continue to work to enable more OData scenarios, and we’re always interested in your thoughts. What other tools would you like to see available for working with OData?
Wow – especially the MySql OData connector is great news. I am sure it will benefit the OData ecosystem greatly.
Thanks!
By MySql Odata Connector, you can connect from sql to php development as well as .net.
Doug Mahugh
It looks like Kellerman Software has a MySQL LINQ Provider:
This is really great news for PHP developers. OData itself is phenomenonal. | https://blogs.msdn.microsoft.com/interoperability/2012/02/09/open-source-odata-tools-for-mysql-and-php-developers/ | CC-MAIN-2016-30 | en | refinedweb |
adammw Wrote:Hi,
Reading though the general Internet and these forums I have seen many references to different patches, modifications and problems related to RTMP.
My questions are:
How currently is rtmp implemented in xbmc (or at least will be by 10.05)?
Is it based on rtmpdump's librtmp or a different librtmp?
Is it available to plugins/addons/scripts/etc.?
Is it built as a static or shared library (e.g. can a drop-in replacement be used)?
Does it support swf verification or not?
What needs to be changed (by the end-user) to support swf verification or rtmpe?
Does it support seeking on non-live streams?
Sorry for all the questions but I'd like to define exactly the current state in the SVN head (or the plan for 10.05) so I can go ahead and work out what I need to do to develop a plugin to play online videos utilising rtmp.
Thanks.
Quote:Yes, Yes, Yes
"""
Plugin for testing RTMP support
"""
import xbmc, xbmcgui, xbmcplugin
if ( __name__ == "__main__" ):
rtmp_url="rtmp://localhost/vod"
item = xbmcgui.ListItem("RTMPLocal")
item.setProperty("PlayPath", "Get Smart")
xbmc.Player(xbmc.PLAYER_CORE_AUTO).play(rtmp_url, item)
adammw Wrote:EDIT: I feel really stupid, it seems I didn't read the comments well enough. The problem was the format changed to have the playpath inside the url (space separated) rather than with that setProperty thing. You can ignore this reply.
Basje Wrote:Are there any directions on how to use the RTMPE?
"""
Plugin for testing RTMP support
"""
import xbmc, xbmcgui, xbmcplugin
if ( __name__ == "__main__" ):
rtmp_url="rtmpe://ten-flashope-e.vx.kitd.com/ondemand/19056/1395/geo/ausonly/2010/Q1/fox-glee-122-080710-sg1_700.flv"
item = xbmcgui.ListItem("RTMPLocal")
xbmc.Player(xbmc.PLAYER_CORE_AUTO).play(rtmp_url, item)
kulprit Wrote:And the changes to support swf verification required along with rtmpe on windows would be great as well.
import xbmc, xbmcgui
rtmp_url = "rtmp://127.0.0.1/myapp playpath=foobar swfurl= swfvfy=true"
item = xbmcgui.ListItem("RTMPLocal")
xbmc.Player(xbmc.PLAYER_CORE_DVDPLAYER).play(rtmp_url, item) | http://forum.kodi.tv/showthread.php?tid=76914&pid=566671 | CC-MAIN-2016-30 | en | refinedweb |
In Oracle Communications Network Integrity, processor entities are the building-blocks for actions, as they implement atomic sub-functions for actions.
For example, an SNMP processor is included in an action to poll network devices; a modeler processor is included in an action to model raw SNMP data from a network device and add it to a database. Combined, these two processors comprise a discovery action that polls SNMP-enabled network devices and persists the modeled SNMP data.
By adding multiple processors to an action, the action performs several complex function by executing the processors according to the sequence in which they were added to the action.
Processors are of different types:
Discovery Processor: part of a discovery action.
Import Processor: part of an import action.
Assimilation Processor: part of an assimilation action.
Discrepancy Detection Processor: part of a discrepancy detection processor action.
Discrepancy Resolution Processor: part of a discrepancy resolution action.
File Transfer Processor: used to retrieve files from local or remote directories. For more information, see Network Integrity File Transfer and Parsing Guide.
File Parsing Processor: used to parse data retrieved by the File Transfer processor so that the data is available to other processors. For more information, see Network Integrity File Transfer and Parsing Guide.
Unlike actions, processors are not visible in Network Integrity.
To create a processor, see the following:
To configure a processor, see the following:
To configure the input and output parameters for the processor, see the following:
To configure property groups and properties for a processor, see the following:
About Properties and Property Groups
To view an outline of code generation for processors, see the following:
To view an outline of processor implementation, see the following:
About Processor Implementation
You can create a processor independently or create a processor in the process of adding it to an action. The latter method is recommended because it automatically adds the processor to the list of processors that the action uses. And it also ensures that you can create only the supported types of processors for the current action.
To create a processor, see the Design Studio Help.
The main steps in configuring a processor using the processor editor in Design Studio include:
Using the Properties tab to define properties that are passed to the processor.
Using the Context Parameters tab to define the processor's inputs and outputs.
Using the Details tab to specify the implementation class.
After you configure an action and its processors, complete the action by coding the implementations for the processors.
The processor editor has a Context Parameters tab that you can use to configure the input and output parameters for the processor. Both input and output parameters are optional for a processor.
For extensibility, configure the processor to produce an output parameter that is available to other processors to continue data processing. Typically, the output parameter should be the Oracle Communications Information Model entity that the processor models: for example, LogicalDevice or PhysicalDevice.
See Oracle Communications Information Model Reference and Network Integrity Information Model Reference for further information about the Information Model.
After adding input and output parameters for the processor using Oracle Communications Design Studio, these parameters appear in tabular format in the Context Parameters area of the processor editor. Design Studio generates the request and response Java classes based on the input and output parameters.
A property group is a logical container configured on a processor. A property group can be added to multiple processors. Property group names must be unique within a processor.
Properties are added to property groups and are assigned property values to pass to the processor, either hard-coded or at run time.
Property groups do not inherently pass any values to the processor other than the values belonging to its properties.
Property groups and properties are configured on processors on the Properties tab of the processor editor.
Property groups can be configured as Managed groups, where the values for the properties it contains can be set at run time using the MBean interface. See Network Integrity System Administrator's Guide for more information.
Property groups can be configured as Map groups, where the property group produces a simplified API for properties that are used as maps.
A Java class is generated for the property group so that you can extend a cartridge to access the property values it contains using a generated interface.
A property consists of a name-value pair that is passed to the processor through the property group. Depending on how the property group is configured, the property value is either hard-coded, or provided at run time through the MBean interface. Property names must be unique within the property group.
Properties can be configured with the following options:
Property values can be set using a cartridge model variable, where you can specify the value of the variable at deployment time. To set a property value with a cartridge model variable, the value string must begin and end with a percentage (%) symbol, as in the following example:
%Property_Value%
Properties can be configured as Secret values, to pass encrypted values at deployment time using the MBean interface. The property value must be encrypted before it can be entered in the MBean interface. See Network Integrity System Administrator's Guide for more information.
For more information on adding property groups to a processor, adding properties to a property group, and setting cartridge model variables, see the Design Studio Help.
This section describes code generation for processors in Network Integrity:
About the Location for Generated Code
About the Processor Interface
About the PropertyGroup and Properties Classes
Design Studio code-generates the relevant Java classes for the processor. The generated code is located at:
Studio_Workspace\NI_Project_Root\generated\src\Project_Default_Package\Processor_Type\Processor_Implementation_Prefix
where:
Studio_Workspace is the Eclipse Workspace root
NI_Project_Root is the Network Integrity project root
Project_Default_Package is the default package configured at the cartridge editor
Processor_Type is run time following action types:
discoveryprocessors
importprocessors
assimilationprocessors
detectionprocessors
resolutionprocessors
Processor_Implementation_Prefix is the action implementation prefix in lowercase.
Every processor has a generated interface. The generated processor interface class is named Processor_NameProcessorInterface.java.
In general, the generated processor interface has the
invoke method defined. The interface has two forms of
invoke methods, depending on whether there is an output parameter defined for the processor.
// Signature for processor which does not have output parameters public void invoke(<Processor_Specific_Context> context, ExampleProcessorRequest request) throws ProcessorException { // TODO Auto-generated method stub // Signature for processor which has output parameters public ExampleProcessorResponse invoke(<Processor_Specific_Context> context, ExampleProcessorRequest request) throws ProcessorException { // TODO Auto-generated method stub return null; }
The generated processor interface has a slightly different signature, depending on the type of processor: for example, Processor_Specific_Context differs between processor types. See individual chapters on specific processors for more information.
A properties class is always code-generated for the processor, whether the processor has property groups and properties configured or not. The properties class is used as an input parameter for the constructor of the generated request class.
The generated properties class is named Processor_NameProcessorProperties.java.
The generated properties class has a public method,
String[] getValidProperties(). This method returns a string array that contains a list of valid property group names configured for this processor. If the processor has no property groups configured, this method returns an empty array.
If the processor has property groups and properties configured, for each property group a PropertyGroup class is code-generated.
The generated PropertyGroup class is named PropertyGroup_NamePropertyGroup.java.
The generated PropertyGroup represents the configured property group and all of its properties. The generated properties class has the getter methods to get each PropertyGroup directly, and has all the setter methods to modify the property values.
The generated PropertyGroup class has a public method,
String[] getValidProperties(). This method returns a string array that contains a list of valid properties names configured for this property group. If the property group has no property configured, this method returns an empty array.
If the property group is not configured as a Map group, the generated PropertyGroup class provides getter methods for all the properties configured in this property group.
If the property group is configured as a Map group, the generated PropertyGroup class does not provide getter methods for all the properties configured in this property group. Instead, the API for the property group resembles a Java Map, where the property values are retrieved and set using the property name passed as a value.
Implementation of the processor is done in the processor editor using the Details tab. See the Design Studio Help for specific configuration details.
You can click the Implementation Class link to open the Java editor for this implementation Java class. Design Studio auto-generates the skeleton Java implementation class, which implements the processor interface with an empty implementation method.
You must decide whether to complete implementing the method. Sometimes, if the processor was changed later (for example, by adding output parameters or removing parameters) the implementation class displays a compiling error. This is expected because the skeleton implementation class is regenerated. You must modify the implementation class to match the changed processor interface.
For information about how to implement a processor, see the individual processor section.
When a processor deals with resources (for example, sockets and files), it is necessary to clean up the resources used or created while the processor executes. Using a finalizer on the processor ensures that the used or created resources get cleaned up, whether the action fails or is successful. When implemented, the finalizer cleans up the resources used or created by the processor. It is not mandatory to implement the finalizer if the processor does not deal with a resource, or if the resource is used only within the processor (in which case the processor implementation should make sure the local resource is closed properly). The processor must implement the finalizer if the processor allocates a resource that is to be output for use by other processors.
Finalizers that are not inside a For Each loop are called by the action controller class (code-generated) before it completes. Finalizers that are inside a For Each loop are called by the action controller class at the end of the For Each loop. In all cases, finalizers are called in the reverse order to which they are registered (finalizers registered first are called last; finalizers registered last are called first).
The processor implementation class must implement the interface oracle.communications.sce.integrity.sdk.processor.ProcessorFinalizer to have the action controller clean up the resources that are used or created by the processor. If a processor does not use or create a resource, it does not implement the ProcessorFinalizer interface.
The processor defines only one method:
public void close(boolean failed);
The processor that implements the ProcessorFinalizer interface must implement this method to close all the resources used or created during the execution of this processor. This method takes an input parameter as Boolean. If there is an exception during the execution of the processors, the action controller calls the finalizer by passing True to this method; otherwise the action controller calls the finalizer by passing False to the method, in the successful case. The processor might implement the close logic differently for both successful and failed scenarios: for example, if it is a failed scenario, the close method might log an error message before closing the resources.
The following code shows how to implement the ProcessorFinalizer for a sample processor:
public class SampleProcessorImpl implements SampleProcessorInterface, ProcessorFinalizer { public SampleProcessorResponse invoke(SampleProcessorRequest request) throws ProcessorException { // Implement the Processor here… } public void close(boolean failed) { if(failed) { // something is failed, log extra error message here. } // close the InputStream here. try { myInputStream.close() } catch(IOException ioe) { // log the IOException here… } } }
The action controller class calls the finalizers for both successful and failed scenarios. The finalizers that are not inside a For Each loop do not begin until the end of the action. The finalizers that are inside a For Each loop do not begin until the end of the loop. When a processor that implements the ProcessorFinalizer completes the execution, it is still in the scope of the action. The processor does not get purged by the garbage collector to release the memory.
If a processor implements the ProcessorFinalizer, it is a good practice to limit the number of member variables for that processor and ensure that the processor is not using a large amount of memory. If the processor uses a lot of memory, it is a good practice to release the memory as soon as it is no longer required. For example, if a processor is using a large HashMap, and it also implements the ProcessorFinalizer, the processor should clear the contents of the HashMap when it is done using it and assign the null pointer to this HashMap. | http://docs.oracle.com/cd/E23717_01/doc.71/e23701/dev_proc_general.htm | CC-MAIN-2016-30 | en | refinedweb |
Pier Fumagalli wrote:
>).
try clicking on ""
> For blocks, though, do we want to have them to point at something (like
> the block descriptor, or the home page) or shall we ignore for now?
we *MUST* be able to serve stuff from our URI block identifiers from the
future. it's actually the whole point in having http: based identifiers.
I think that blocks URI should be starting with because that doesn't change and also
signifies that these URI don't represent namespaces (which will be still) and will also make it easier for us to control
that URL space when we'll need to publish the metadata in it.
--
Stefano. | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200403.mbox/%[email protected]%3E | CC-MAIN-2016-30 | en | refinedweb |
Sending e-mails in the .NET Framework 2.0 is about the same as in
version 1.x. There are just a couple of variations. First, all the
functionality is within the new
System.Net.Mail namespace. The
System.Web.Mail namespace, wich was used in the 1.x frameworks is now considered obsolete.
Lets get right to the code. It's really straight forward and self explanatory:
MailMessage oMsg = new MailMessage();
// Set the message sender
oMsg.From = new MailAddress("[email protected]", "Xavier Larrea");
// The .To property is a generic collection,
// so we can add as many recipients as we like.
oMsg.To.Add(new MailAddress("[email protected]","John Doe"));
// Set the content
oMsg.Subject = "My First .NET email";
oMsg.Body = "Test body - .NET Rocks!";
oMsg.IsBodyHtml = true;
SmtpClient oSmtp = new SmtpClient("smtp.myserver.com");
//You can choose several delivery methods.
//Here we will use direct network delivery.
oSmtp.DeliveryMethod = SmtpDeliveryMethod.Network;
//Some SMTP server will require that you first
//authenticate against the server.
NetworkCredential oCredential = new NetworkCredential("myusername","mypassword");
oSmtp.UseDefaultCredentials = false;
oSmtp.Credentials = oCredential;
//Let's send it already
oSmtp.Send(oMsg);
Very easy, right? Remember always to use the
Try-Catch
block when sending emails because lot of things can cause an exception:
bad email addresses, authentication errors, network failure, etc.
I hope you find this code useful. Happy coding! | http://www.developerfusion.com/code/5456/sending-authenticated-emails-in-net-20/ | CC-MAIN-2016-30 | en | refinedweb |
/* pwd.c - Try to approximate UN*X's getuser...() functions under MS-DOS. Copyright (C) 1990 by Thorsten Ohl, [email protected]. */ /* This 'implementation' is conjectured from the use of this functions in the RCS and BASH distributions. Of course these functions don't do too much useful things under MS-DOS, but using them avoids many "#ifdef MSDOS" in ported UN*X code ... */ /* Stripped out stuff - MDLadwig <[email protected]> --- Nov 1995 */ #include "mac_config.h" #include <pwd.h> #include <stdio.h> #include <stdlib.h> #include <string.h> static char *home_dir = "."; /* we feel (no|every)where at home */ static struct passwd pw; /* should we return a malloc()'d structure */ static struct group gr; /* instead of pointers to static structures? */ pid_t getpid( void ) { return 0; } /* getpid */ pid_t waitpid(pid_t, int *, int) { return 0; } /* waitpid */ mode_t umask(mode_t) { return 0; } /* Umask */ /* return something like a username in a (butchered!) passwd structure. */ struct passwd * getpwuid (int uid) { pw.pw_name = NULL; /* getlogin (); */ pw.pw_dir = home_dir; pw.pw_shell = NULL; pw.pw_uid = 0; return &pw; } /* Misc uid stuff */ struct passwd * getpwnam (char *name) { return (struct passwd *) 0; } int getuid () { return 0; } int geteuid () { return 0; } int getegid () { return 0; } | http://opensource.apple.com/source/cvs/cvs-30/cvs/macintosh/pwd.c | CC-MAIN-2016-30 | en | refinedweb |
Version Introduced: ODBC 1.0
Standards Compliance: Open Group
SQLColumns returns the list of column names in specified tables. The driver returns this information as a result set on the specified StatementHandle.
SQLRETURN SQLColumns(
SQLHSTMT StatementHandle,
SQLCHAR * CatalogName,
SQLSMALLINT NameLength1,
SQLCHAR * SchemaName,
SQLSMALLINT NameLength2,
SQLCHAR * TableName,
SQLSMALLINT NameLength3,
SQLCHAR * ColumnName,
SQLSMALLINT NameLength4);
StatementHandle
[Input] Statement handle.
CatalogName
[Input] Catalog name. If a driver supports catalogs for some tables but not for others, such as when the driver retrieves data from different DBMSs, an empty string ("") indicates those tables that do not have catalogs. CatalogName cannot contain a string search pattern.
NameLength1
[Input] Length in characters of *CatalogName.
SchemaName
[Input] String search pattern for schema names. If a driver supports schemas for some tables but not for others, such as when the driver retrieves data from different DBMSs, an empty string ("") indicates those tables that do not have schemas.
NameLength2
[Input] Length in characters of *SchemaName.
TableName
[Input] String search pattern for table names.
If the SQL_ATTR_METADATA_ID statement attribute is set to SQL_TRUE, TableName is treated as an identifier and its case is not significant. If it is SQL_FALSE, TableName is a pattern value argument; it is treated literally, and its case is significant.
NameLength3
[Input] Length in characters of *TableName.
ColumnName
[Input] String search pattern for column names.
If the SQL_ATTR_METADATA_ID statement attribute is set to SQL_TRUE, ColumnName is treated as an identifier and its case is not significant. If it is SQL_FALSE, ColumnName is a pattern value argument; it is treated literally, and its case is significant.
NameLength4
[Input] Length in characters of *ColumnName.
SQL_SUCCESS, SQL_SUCCESS_WITH_INFO, SQL_STILL_EXECUTING, SQL_ERROR, or SQL_INVALID_HANDLE.
The value of one of the CatalogName, SchemaName, TableName, or ColumnName length arguments was less than 0 but not equal to SQL_NTS.
The value of one of the name length arguments exceeded the maximum length value for the corresponding catalog or name. The maximum length of each catalog or name can be obtained by calling SQLGetInfo with the InfoType values. (See "Comments.")
HYC00
Optional feature not implemented
A catalog name was specified, and the driver or data source does not support catalogs.
A schema name was specified, and the driver or data source does not support schemas.
A string search pattern was specified for the schema name, table name, or column name, and the data source does not support search patterns for one or more of those arguments.
This function typically is used before statement execution to retrieve information about columns for a table or tables from the data source's catalog. SQLColumns can be used to retrieve data for all types of items returned by SQLTables. In addition to base tables, this may include (but is not limited to) views, synonyms, system tables, and so on. By contrast, the functions SQLColAttribute and SQLDescribeCol describe the columns in a result set and the function SQLNumResultCols returns the number of columns in a result set. For more information, see Uses of Catalog Data.
For more information about the general use, arguments, and returned data of ODBC catalog functions, see Catalog Functions.
SQLColumns returns the results as a standard result set, ordered by TABLE_CAT, TABLE_SCHEM, TABLE_NAME, and ORDINAL_POSITION.
When an application works with an ODBC 2.x driver, no ORDINAL_POSITION column is returned in the result set. As a result, when working with ODBC 2.x drivers, the order of the columns in the column list returned by SQLColumns is not necessarily the same as the order of the columns returned when the application performs a SELECT statement on all columns in that table.
SQLColumns might not return all columns. For example, a driver might not return information about pseudo-columns, such as Oracle ROWID. Applications can use any valid column, whether or not it is returned by SQLColumns.
Some columns that can be returned by SQLStatistics are not returned by SQLColumns. For example, SQLColumns does not return the columns in an index created over an expression or filter, such as SALARY + BENEFITS or DEPT = 0012.
The lengths of VARCHAR columns are not shown in the table; the actual lengths depend on the data source.
The following columns have been renamed for ODBC 3.x. The column name changes do not affect backward compatibility because applications bind by column number.
ODBC 2.0 column -> ODBC 3.x column
TABLE_QUALIFIER -> TABLE_CAT
TABLE_OWNER -> TABLE_SCHEM
PRECISION -> COLUMN_SIZE
LENGTH -> BUFFER_LENGTH
SCALE -> DECIMAL_DIGITS
RADIX -> NUM_PREC_RADIX
The following columns have been added to the result set returned by SQLColumns for ODBC 3.x:
CHAR_OCTET_LENGTH
ORDINAL_POSITION
COLUMN_DEF
SQL_DATA_TYPE
IS_NULLABLE
SQL_DATETIME_SUB
The following table lists the columns in the result set. Additional columns beyond column 18 (IS_NULLABLE) can be defined by the driver. An application should gain access to driver-specific columns by counting down from the end of the result set instead of specifying an explicit ordinal position. For more information, see Data Returned by Catalog Functions.
TABLE_CAT (ODBC 1.0)
1
Varchar
Catalog name; NULL if not applicable to the data source. If a driver supports catalogs for some tables but not for others, such as when the driver retrieves data from different DBMSs, it returns an empty string ("") for those tables that do not have catalogs.
TABLE_SCHEM (ODBC 1.0)
2
Varchar
Schema name; NULL if not applicable to the data source. If a driver supports schemas for some tables but not for others, such as when the driver retrieves data from different DBMSs, it returns an empty string ("") for those tables that do not have schemas.
TABLE_NAME (ODBC 1.0)
3
Varchar not NULL
Table name.
COLUMN_NAME (ODBC 1.0)
4
Column name. The driver returns an empty string for a column that does not have a name.
DATA_TYPE (ODBC 1.0)
5
Smallint not NULL
SQL data type. This can be an ODBC SQL data type or a driver-specific SQL data type. For datetime and interval data types, this column returns the concise data type (such as SQL_TYPE_DATE or SQL_INTERVAL_YEAR_TO_MONTH, instead of the nonconcise data type such as SQL_DATETIME or SQL_INTERVAL). For a list of valid ODBC SQL data types, see SQL Data Types in Appendix D: Data Types. For information about driver-specific SQL data types, see the driver's documentation.
The data types returned for ODBC 3.x and ODBC 2.x applications may be different. For more information, see Backward Compatibility and Standards Compliance.
TYPE_NAME (ODBC 1.0)
6
Data source–dependent data type name; for example, "CHAR", "VARCHAR", "MONEY", "LONG VARBINAR", or "CHAR ( ) FOR BIT DATA".
COLUMN_SIZE (ODBC 1.0)
7
Integer
If DATA_TYPE is SQL_CHAR or SQL_VARCHAR, this column contains the maximum length in characters of the column. For datetime data types, this is the total number of characters required to display the value when it is converted to characters. For numeric data types, this is either the total number of digits or the total number of bits allowed in the column, according to the NUM_PREC_RADIX column. For interval data types, this is the number of characters in the character representation of the interval literal (as defined by the interval leading precision, see Interval Data Type Length in Appendix D: Data Types). For more information, see Column Size, Decimal Digits, Transfer Octet Length, and Display Size in Appendix D: Data Types.
BUFFER_LENGTH (ODBC 1.0)
8
The length in bytes of data transferred on an SQLGetData, SQLFetch, or SQLFetchScroll operation if SQL_C_DEFAULT is specified. For numeric data, this size may differ from the size of the data stored on the data source. This value might differ from COLUMN_SIZE column for character data. For more information about length, see Column Size, Decimal Digits, Transfer Octet Length, and Display Size in Appendix D: Data Types.
DECIMAL_DIGITS (ODBC 1.0)
9
Smallint
The total number of significant digits to the right of the decimal point. For SQL_TYPE_TIME and SQL_TYPE_TIMESTAMP, this column contains the number of digits in the fractional seconds component. For the other data types, this is the decimal digits of the column on the data source. For interval data types that contain a time component, this column contains the number of digits to the right of the decimal point (fractional seconds). For interval data types that do not contain a time component, this column is 0. For more information about decimal digits, see Column Size, Decimal Digits, Transfer Octet Length, and Display Size in Appendix D: Data Types. NULL is returned for data types where DECIMAL_DIGITS is not applicable.
NUM_PREC_RADIX (ODBC 1.0)
10
For numeric data types, either 10 or 2. If it is 10, the values in COLUMN_SIZE and DECIMAL_DIGITS give the number of decimal digits allowed for the column. For example, a DECIMAL(12,5) column would return a NUM_PREC_RADIX of 10, a COLUMN_SIZE of 12, and a DECIMAL_DIGITS of 5; a FLOAT column could return a NUM_PREC_RADIX of 10, a COLUMN_SIZE of 15, and a DECIMAL_DIGITS of NULL.
If it is 2, the values in COLUMN_SIZE and DECIMAL_DIGITS give the number of bits allowed in the column. For example, a FLOAT column could return a RADIX of 2, a COLUMN_SIZE of 53, and a DECIMAL_DIGITS of NULL.
NULL is returned for data types where NUM_PREC_RADIX is not applicable.
NULLABLE (ODBC 1.0)
11
SQL_NO_NULLS if the column could not include NULL values.
SQL_NULLABLE if the column accepts NULL values.
SQL_NULLABLE_UNKNOWN if it is not known whether the column accepts NULL values.
The value returned for this column differs from the value returned for the IS_NULLABLE column. The NULLABLE column indicates with certainty that a column can accept NULLs, but cannot indicate with certainty that a column does not accept NULLs. The IS_NULLABLE column indicates with certainty that a column cannot accept NULLs, but cannot indicate with certainty that a column accepts NULLs.
REMARKS (ODBC 1.0)
12
A description of the column.
COLUMN_DEF (ODBC 3.0)
13
The default value of the column. The value in this column should be interpreted as a string if it is enclosed in quotation marks.
If NULL was specified as the default value, this column is the word NULL, not enclosed in quotation marks. If the default value cannot be represented without truncation, this column contains TRUNCATED, without enclosing single quotation marks. If no default value was specified, this column is NULL.
The value of COLUMN_DEF can be used in generating a new column definition, except when it contains the value TRUNCATED.
SQL_DATA_TYPE (ODBC 3.0)
14
SQL data type, as it appears in the SQL_DESC_TYPE record field in the IRD. This can be an ODBC SQL data type or a driver-specific SQL data type. This column is the same as the DATA_TYPE column, except for datetime and interval data types. This column returns the nonconcise data type (such as SQL_DATETIME or SQL_INTERVAL), instead of the concise data type (such as SQL_TYPE_DATE or SQL_INTERVAL_YEAR_TO_MONTH) for datetime and interval data types. If this column returns SQL_DATETIME or SQL_INTERVAL, the specific data type can be determined from the SQL_DATETIME_SUB column. For a list of valid ODBC SQL data types, see SQL Data Types in Appendix D: Data Types. For information about driver-specific SQL data types, see the driver's documentation.
SQL_DATETIME_SUB (ODBC 3.0)
15
The subtype code for datetime and interval data types. For other data types, this column returns a NULL. For more information about datetime and interval subcodes, see "SQL_DESC_DATETIME_INTERVAL_CODE" in SQLSetDescField.
CHAR_OCTET_LENGTH (ODBC 3.0)
16
The maximum length in bytes of a character or binary data type column. For all other data types, this column returns a NULL.
ORDINAL_POSITION (ODBC 3.0)
17
Integer not NULL
The ordinal position of the column in the table. The first column in the table is number 1.
IS_NULLABLE (ODBC 3.0)
18
"NO" if the column does not include NULLs.
"YES" if the column could include NULLs.
This column returns a zero-length string if nullability is unknown.
ISO rules are followed to determine nullability. An ISO SQL–compliant DBMS cannot return an empty string.
The value returned for this column differs from the value returned for the NULLABLE column. (See the description of the NULLABLE column.)
In the following example, an application declares buffers for the result set returned by SQLColumns. It calls SQLColumns to return a result set that describes each column in the EMPLOYEE table. It then calls SQLBindCol to bind the columns in the result set to the buffers. Finally, the application fetches each row of data with SQLFetch and processes it.
// SQLColumns_Function.cpp
// compile with: ODBC32.lib
#include <windows.h>
#include <sqlext.h>
#define STR_LEN 128 + 1
#define REM_LEN 254 + 1
// Declare buffers for result set data
SQLCHAR szSchema[STR_LEN];
SQLCHAR szCatalog[STR_LEN];
SQLCHAR szColumnName[STR_LEN];
SQLCHAR szTableName[STR_LEN];
SQLCHAR szTypeName[STR_LEN];
SQLCHAR szRemarks[REM_LEN];
SQLCHAR szColumnDefault[STR_LEN];
SQLCHAR szIsNullable[STR_LEN];
SQLINTEGER ColumnSize;
SQLINTEGER BufferLength;
SQLINTEGER CharOctetLength;
SQLINTEGER OrdinalPosition;
SQLSMALLINT DataType;
SQLSMALLINT DecimalDigits;
SQLSMALLINT NumPrecRadix;
SQLSMALLINT Nullable;
SQLSMALLINT SQLDataType;
SQLSMALLINT DatetimeSubtypeCode;
SQLHSTMT hstmt = NULL;
// Declare buffers for bytes available to return
SQLINTEGER cbCatalog;
SQLINTEGER cbSchema;
SQLINTEGER cbTableName;
SQLINTEGER cbColumnName;
SQLINTEGER cbDataType;
SQLINTEGER cbTypeName;
SQLINTEGER cbColumnSize;
SQLLEN cbBufferLength;
SQLINTEGER cbDecimalDigits;
SQLINTEGER cbNumPrecRadix;
SQLINTEGER cbNullable;
SQLINTEGER cbRemarks;
SQLINTEGER cbColumnDefault;
SQLINTEGER cbSQLDataType;
SQLINTEGER cbDatetimeSubtypeCode;
SQLINTEGER cbCharOctetLength;
SQLINTEGER cbOrdinalPosition;
SQLINTEGER cbIsNullable;
int main() {
int i = 0;
SQLHENV henv;
SQLHDBC hdbc;
SQLHSTMT hstmt = 0;
SQLRETURN retcode;
SQLPOINTER rgbValue = &i;
retcode = SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv);
retcode = SQLSetEnvAttr(henv, SQL_ATTR_ODBC_VERSION, (SQLPOINTER*)SQL_OV_ODBC3, 0);
retcode = SQLAllocHandle(SQL_HANDLE_DBC, henv, &hdbc);
retcode = SQLSetConnectAttr(hdbc, SQL_LOGIN_TIMEOUT, (SQLPOINTER)(rgbValue), 0);
retcode = SQLConnect(hdbc, (SQLCHAR*) "Northwind", SQL_NTS, (SQLCHAR*) NULL, 0, NULL, 0);
retcode = SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);
retcode = SQLColumns(hstmt, NULL, 0, NULL, 0, (SQLCHAR*)"CUSTOMERS", SQL_NTS, NULL, 0);
if (retcode == SQL_SUCCESS || retcode == SQL_SUCCESS_WITH_INFO) {
// Bind columns in result set to buffers
SQLBindCol(hstmt, 1, SQL_C_CHAR, szCatalog, STR_LEN,&cbCatalog);
SQLBindCol(hstmt, 2, SQL_C_CHAR, szSchema, STR_LEN, &cbSchema);
SQLBindCol(hstmt, 3, SQL_C_CHAR, szTableName, STR_LEN,&cbTableName);
SQLBindCol(hstmt, 4, SQL_C_CHAR, szColumnName, STR_LEN, &cbColumnName);
SQLBindCol(hstmt, 5, SQL_C_SSHORT, &DataType, 0, &cbDataType);
SQLBindCol(hstmt, 6, SQL_C_CHAR, szTypeName, STR_LEN, &cbTypeName);
SQLBindCol(hstmt, 7, SQL_C_SLONG, &ColumnSize, 0, &cbColumnSize);
SQLBindCol(hstmt, 8, SQL_C_SLONG, &BufferLength, 0, &cbBufferLength);
SQLBindCol(hstmt, 9, SQL_C_SSHORT, &DecimalDigits, 0, &cbDecimalDigits);
SQLBindCol(hstmt, 10, SQL_C_SSHORT, &NumPrecRadix, 0, &cbNumPrecRadix);
SQLBindCol(hstmt, 11, SQL_C_SSHORT, &Nullable, 0, &cbNullable);
SQLBindCol(hstmt, 12, SQL_C_CHAR, szRemarks, REM_LEN, &cbRemarks);
SQLBindCol(hstmt, 13, SQL_C_CHAR, szColumnDefault, STR_LEN, &cbColumnDefault);
SQLBindCol(hstmt, 14, SQL_C_SSHORT, &SQLDataType, 0, &cbSQLDataType);
SQLBindCol(hstmt, 15, SQL_C_SSHORT, &DatetimeSubtypeCode, 0, &cbDatetimeSubtypeCode);
SQLBindCol(hstmt, 16, SQL_C_SLONG, &CharOctetLength, 0, &cbCharOctetLength);
SQLBindCol(hstmt, 17, SQL_C_SLONG, &OrdinalPosition, 0, &cbOrdinalPosition);
SQLBindCol(hstmt, 18, SQL_C_CHAR, szIsNullable, STR_LEN, &cbIsNullable);
while (SQL_SUCCESS == retcode) {
retcode = SQLFetch(hstmt);
/*
if (retcode == SQL_ERROR || retcode == SQL_SUCCESS_WITH_INFO)
0; // show_error();
if (retcode == SQL_SUCCESS || retcode == SQL_SUCCESS_WITH_INFO)
0; // Process fetched data
else
break;
*/
}
}
}
Binding a buffer to a column in a result set
SQLBindCol Function
Canceling statement processing
SQLCancel Function
Returning privileges for a column or columns
SQLColumnPrivileges Function
Fetching a block of data or scrolling through a result set
SQLFetchScroll Function
Fetching multiple rows of data
SQLFetch Function
Returning columns that uniquely identify a row, or columns automatically updated by a transaction
SQLSpecialColumns Function
Returning table statistics and indexes
SQLStatistics Function
Returning a list of tables in a data source
SQLTables Function
Returning privileges for a table or tables
SQLTablePrivileges Function | http://msdn.microsoft.com/en-us/library/ms711683(VS.85).aspx | crawl-002 | en | refinedweb |
The Metafile class provides methods for recording and saving metafiles. The Encoder class enables users to extend GDI+ to support any image format. The PropertyItem class provides methods for storing and retrieving metadata in image files.
Classes within the System.Drawing.Imaging namespace are not supported for use within a Windows or ASP.NET service. Attempting to use these classes from within one of these application types may produce unexpected problems, such as diminished service performance and run-time exceptions. | http://msdn.microsoft.com/en-us/library/system.drawing.imaging(VS.80).aspx | crawl-002 | en | refinedweb |
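As a small illustration of how the Encoder-related pieces fit together, saving a JPEG with an explicit quality setting might look like this (file names are placeholders):
using System.Drawing;
using System.Drawing.Imaging;

class SaveJpegSample
{
    static void Main()
    {
        using (Bitmap bitmap = new Bitmap("input.png"))
        {
            // Find the JPEG codec among the installed encoders
            ImageCodecInfo jpegCodec = null;
            foreach (ImageCodecInfo codec in ImageCodecInfo.GetImageEncoders())
            {
                if (codec.MimeType == "image/jpeg")
                    jpegCodec = codec;
            }

            // Encoder.Quality takes a value between 0 and 100
            EncoderParameters parameters = new EncoderParameters(1);
            parameters.Param[0] = new EncoderParameter(Encoder.Quality, 75L);

            bitmap.Save("output.jpg", jpegCodec, parameters);
        }
    }
}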
XML Glossary
December 1999
Because XML is so important across the industry, Microsoft Corp. put together this quick cheat sheet for you to keep handy when you're writing about XML and related technologies. The following definitions are designed to give you an understanding of what these products and technologies are and why they're important. We have also included some links that can provide more information.
Extensible Markup Language (XML): XML is a universal method for representing structured information (spreadsheets, address books, technical information, etc.) in a way that is especially well-suited to moving data in a distributed computing environment. With XML, developers can specify the structure of a document, for example, the document's title, its author or a list of related links. Most important, XML provides a way of separating data from the methods that act on it and the way it is presented.
Document Object Model (DOM): The Document Object Model is a platform- and language-neutral interface that allows programs and scripts to dynamically access and update the content, structure and style of XML (and HTML) documents. The goal of the DOM specification is to define a programmatic interface for XML and HTML.
What are style sheets? Cascading style sheets (CSSs) and the Extensible Stylesheet Language (XSL) are ways to achieve more control in the presentation of HTML and XML documents.
Cascading style sheets: CSSs are a style sheet mechanism that has been specifically developed to meet the needs of Web designers and users. Put simply, they give designers control over typefaces, font sizes, leading (line spacing), position and other aspects of presentation.
Extensible Stylesheet Language: XSL is a language for expressing style sheets. It consists of two parts: XSLT, a language for transforming XML documents, and an XML vocabulary for specifying formatting semantics.
What are XSL transformations (XSLT) and XML Path Language (XPath)? XSLT is language for describing rules for transforming XML documents into other XML documents. The new XML document can be completely different from the original. In addition, in constructing the new document, elements from the original document tree can be filtered and reordered, and arbitrary structure can be added.
XPath is a language for addressing parts of an XML document, designed to be used by both XSLT and a future specification called XPointer, which further defines how to address the text contents of XML documents.
Document type definitions (DTDs): A DTD is a set of syntax rules for tags. It tells which tags can be used in a document, the order in which they should appear, which tags can appear inside others, which tags have attributes, and so on. Because XML is not a language itself but rather a system for defining languages, it doesn't have a universal DTD the way HTML does. Instead, each industry or organization that wants to use XML for data exchange can define its own DTDs.
XML schemas: A schema, like a DTD, is a way to define a set of XML elements and attributes and the rules for their correct combination. However, XML schemas provide a superset of capabilities found in DTDs. In particular, XML schemas provide support for datatypes and namespaces.
XML namespaces: XML namespaces provide a simple method for uniquely qualifying element and attribute names used in XML documents by associating them with namespaces identified by URI references. This allows developers to combine information from different data structures in a single XML document without "collisions" between element names. For example, if one attempts to combine book information and author information in a single document, a conflict between the "title" of the book and the "title" of the author could result. XML namespaces solve this problem.
How is XML related to Standard Generalized Markup Language (SGML) and HTML? SGML is a way of expressing data in text-processing applications. Both XML and HTML are document formats derived from SGML. HTML is an application of SGML, whereas XML is a subset of SGML. The distinction is important: HTML can't be used to define new applications, but XML can. HTML, SGML and XML will continue to be used where appropriate; none of them will render the others obsolete.
Simple Object Access Protocol (SOAP): SOAP enables applications to talk to Web services as though they were applications. The specification will allow Web sites to become sophisticated applications that can be accessed programmatically as well as through a browser. SOAP is Internet-savvy, open standards-based and easy to work with because it is easily readable by humans.
Microsoft BizTalk Framework: The Microsoft® BizTalk™ Framework is a comprehensive XML-based implementation framework developers can use to design and implement solutions based on a Web Services Architecture. It helps establish a set of guidelines for the publishing of schemas in XML and the use of XML messages to easily integrate software programs to build rich, new Web-based solutions.
Microsoft BizTalk Server: BizTalk Server provides the tools and infrastructure companies require to exchange business documents among various platforms and operating systems, regardless of the application being used to process the documents. Using BizTalk Server, companies can easily exchange documents between applications within their own organization. BizTalk Server also provides a standard gateway for sending and receiving documents via the Internet. By taking advantage of BizTalk-compatible messages and compliant schemas, BizTalk Server enables organizations to conduct business online effectively and efficiently.
XML parser: An XML parser is the piece of software that reads XML files and makes the information from those files available to applications and programming languages, usually through a known interface like the DOM (see above). The XML parser is responsible for testing whether a document is well-formed and, if given a DTD or XML schema, it will also check for validity (i.e., it determines if the document follows the rules of the DTD or schema). Microsoft includes a validating XML parser with the Windows® 2000 operating system and Microsoft Internet Explorer 5 browser software.
More Information Sources
Press Release:
Microsoft Announces Finalized BizTalk Framework - December 6, 1999
Windows DNA 2000 Provides Pervasive XML Support for Next-Generation Web Development - September 13, 1999
Microsoft Announces Windows DNA 2000 - September 13, 1999
Microsoft Windows 2000 DNA to Fundamentally Transform the Way People Build and Use Web Sites - January 3, 2000
Tod Nielsen Helps Microsoft Usher in a New Era of Internet Computing - November 29, 1999
Friction Free Software: A Foundation for Building Tomorrow's Applications - November 8, 1999
Microsoft Embarks on New Era of Enabling Web Development with Windows DNA 2000 - September 13, 1999
MSDN Online: XML Developer Center
Microsoft Visual Studio Interoperability Center
XML in 10 Points
Understanding XML
WWWC Consortium
XML.org
XML.com
BizTalk Web Site
Open Applications Group
Microsoft, BizTalk and Windows are either registered trademarks or trademarks of Microsoft Corp. in the United States and/or other countries/regions.
Other product and company names herein may be trademarks of their respective owners. | http://www.microsoft.com/presspass/features/2000/01-03xmlglossary.mspx | crawl-002 | en | refinedweb |
Windows Communication Foundation From the Inside
Some tips for building support for versioning into the naming of data contracts.
First, the primary route for versioning should be through the namespace part of the contract rather than the member name part of the contract. Versioning the contract through member names tends to leak across the service boundary more forcefully. The programming experience of the service often makes a member name directly visible while a namespace is more or less invisible.
Second, choose a single consistent scheme for identifying the version. Two popular schemes are the date of the contract and a sequential numbering system of major and minor versions. Both schemes provide the basic element required of a versioning identity, which is an unambiguous total order among the different versions. However, multiple schemes should not be mixed together for a single contract and preferably not for a single system as well.
The date scheme, such as a namespace ending in the year and month of the contract, has issues around granularity but can be very evocative since you probably already associate dates in your mind with other events. The issue with granularity is that you have to plan ahead for a maximum update frequency. With a year-and-month namespace, two updates in the same month would collide with the same name, suggesting that a contract that is updated frequently might include additional levels of refinement, such as the day of the month. However, unnecessarily fine granularity makes the name cumbersome.
The numbering scheme, such as a namespace ending in a major and minor version number, gives less of a clue about what the version corresponds to but has fewer issues with granularity. Updates can happen as frequently as you want since you can just keep picking new numbers. However, you still have to give some thought to granularity when deciding how many numbering components to include. For example, a single version number may be sufficient if no distinction is needed between major and minor updates.
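To make this concrete, here is a minimal sketch of a data contract whose version lives entirely in its namespace; the URI is only an illustration of the date scheme, not a required format:
using System.Runtime.Serialization;

[DataContract(Namespace = "http://schemas.example.com/orders/2008/07")]
public class Order
{
    [DataMember]
    public int OrderId;

    [DataMember]
    public string Customer;
}
A numbered variant would simply end the namespace with something like /v2 instead of the date; either way, the member names stay untouched when the contract is revised.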
Next time: Finding a Client Channel
Simplicity is elusive. A few weeks ago I learned that part of transaction flow, propagating information | http://blogs.msdn.com/drnick/archive/2008/07/11/naming-contracts-for-versioning.aspx | crawl-002 | en | refinedweb |
[Blog Map]
Start with a paragraph that contains the text "abcdefghi" in a single run:
<w:p>
<w:r>
<w:t>abcdefghi</w:t>
</w:r>
</w:p>
If we select “def” in the above text, and add a comment, the markup changes to look like this:
<w:p>
<w:r>
<w:t>abc</w:t>
</w:r>
<w:commentRangeStart w:
<w:r>
<w:t>def</w:t>
</w:r>
<w:commentRangeEnd w:
<w:r>
<w:rPr>
<w:rStyle w:
</w:rPr>
<w:commentReference w:
</w:r>
<w:r>
<w:t>ghi</w:t>
</w:r>
</w:p>
There are many .NET Memory Performance Counters and this is meant to give you some guidelines in interpreting the counter data and how to correlate them. This assumes you have a basic understanding of GC.
First thing you may want to look at is "% Time in GC". This is the percentage of the time spent in GC since the end of the last GC. For example, if it's been 1 million cycles since the last GC ended and we spent 0.3 million cycles in the current GC, this counter will show 30%. If this number is 10%, it's probably better to look elsewhere in your app because even if you could get rid of half of that, you would only be saving 5% - most likely not very worthwhile.
If you think you are spending too much time in GC, it's a good time to look at "Allocated Bytes/sec", which shows you the allocation rate. This counter is not exactly accurate when the allocation rate is too low – that is, when the sampling frequency is higher than the GC frequency – because the counter is only updated at the beginning of each GC.
When each GC begins, there could be 2 cases:
1) Gen0 is basically full (meaning that it wasn’t large enough to satisfy the last small object allocation request);
2) The LOH (Large Object Heap) is basically full (meaning that it wasn’t large enough to satisfy the last large object allocation request);
When the allocation request can’t be satisfied, it triggers a GC. So when this GC begins we update the value this counter uses by adding the number of allocated bytes in gen0 and LOH to it, and since it’s a rate counter the actual number you see is the difference between the last 2 values divided by the time interval.
Let's say you are sampling every second (which is the default in PerfMon) and during the 1st second a Gen0 GC occured which allocated 250k. So at the end of the 1st second PerfMon samples the counter value which will be (250k-0k) / 1 second = 250k/sec. Then no GCs happened during the 2nd and the 3rd second so we don't change the value we recorded which will still be 250k, now you get (250k-250k) / 1 second = 0k/sec. Then let's say a Gen0 GC happened during the 4th second and we recorded that we allocated 505k total, so at the end of the 4th second, PerfMon will show you (505k-250k) / 1 second = 255k/sec.
This means when GC doesn't happen very frequently, you get these 0k/sec counter values. But if say there's always at least one GC happening during each second, you will see an accurate value for the allocation rate (when you are sampling every second, that is).
We also have a counter that’s just for large objects – “Large Object Heap size”. This is updated at the same time “Allocated Bytes/sec” is updated and it just counts the bytes allocated in LOH. If you suspect that you are allocating a lot of large objects (in Whidbey this means 85000 bytes or more. But the runtime itself uses the LOH so if you see the LOH size less than 85000 bytes that's due to the runtime allocation [Editted on 12/31/2004]), you can look at this counter along with “Allocated Bytes/sec” to verify it.
Allocating at a high rate definitely is a key factor that causes GC to do a lot of work. But if your objects usually die young, ie, mostly Gen0 GCs, you shouldn’t observe a high percentage of time spend in GC. Ideally if they all die in Gen0 then you could be doing a lot of Gen0 GCs but not much time will be spent in GC as Gen0 GCs take little time.
Gen2 GC requires a full collection (Gen0, Gen1, Gen2 and LOH! Large objects are GC’ed at every Gen2 GC even when the GC was not triggered by lack of space in LOH. Note that there isn’t a GC that only collects large objects.) which takes much longer than younger generation collections. A healthy ratio is for every 10 Gen0 GC we do a Gen1 GC; and for every 10 Gen1 GC we do a Gen2 GC. If you are seeing a lot of time spent in GC it could be that you are doing Gen2 GC’s too often. So look at collection counters:
“# Gen 0 Collections”
“# Gen 1 Collections”
“# Gen 2 Collections”
They show the number of collections for the respective generation since the process started. Note that a Gen1 collection collects both Gen0 and Gen1 in one pass – we don’t do a Gen0 GC, and then determine that no space is available in Gen0, then we go do a Gen1 GC. Which generation to collect is decided at the beginning of each GC.
If you are seeing a lot of Gen2 GCs, it means you have many objects that live for too long but not long enough for them to always stay in Gen2. When you are spending a lot of time in GC but the allocation rate is not very high, it might very well be the case that many of the objects you allocated survived garbage collection, ie, they get promoted to the next generation. Looking at the promotion counters should give you some idea, especially the Gen1 counter:
“Promoted Memory from Gen 0”
“Promoted Memory from Gen 1”
Note that these values don’t include the objects promoted due to finalization for which we have this counter:
“Promoted Finalization-Memory from Gen 0”
It gets updated at the end of each GC. Note that in Everett there's also the "Promoted Finalization-Memory from Gen 1" counter, which was removed in Whidbey. The reason was it was not useful. The "Promoted Finalization-Memory from Gen 0" counter already has the memory promoted from both Gen 0 and Gen 1. We should really rename it to just "Promoted Finalization-Memory".
One of the worst situations is when you have objects that live long enough to be promoted to Gen2 then die quickly in Gen2 (the “midlife crisis” scenario). When you have this situation you will see high values for “Promoted Memory from Gen 1” and lots of Gen2 GCs. You can then use the CLRProfiler to find out which objects they are.
Note that when a finalizable object survives all the objects it refers to also survive. And the values for “Promoted Finalization-Memory from Gen X” include these objects too. Using the CLRProfiler is also a convenient way to find out which objects got finalized.
When you observe high values for these promotion counters, you will most likely observe high values for the Gen1 and Gen2 heap sizes as well. These sizes are indicated by these counters:
“Gen 1 heap size”
“Gen 2 heap size”
We also have a counter for Gen 0 heap size but it means the budget we allocated for the next GC (ie, the number of bytes that would trigger the next GC) and not the actual Gen 0 size which is either 0 or a small number if there’s pinning in Gen 0.
The heap size counters are updated at the end of each GC and indicate values for that GC. Gen0 and Gen1 heap sizes are usually fairly small (from ~256k to a few MBs) but Gen2 can get arbitrarily big.
If you want to get an idea of how much memory allocated total on the GC heap, you can look at these 2 counters:
“# Total committed Bytes”
“# Total reserved Bytes”
They are updated at the end of each GC and indicate the total committed/reserved memory on the GC heap at that time. The value of the total committed bytes is a bit bigger than
“Gen 0 heap size” + “Gen 1 heap size” + “Gen 2 heap size” + “Large Object Heap Size”
Note that this “Gen 0 heap size” is the actual Gen0 size - not the Gen0 budget as shown in the perfmon counter. [Editted on 12/31/2004]
When we allocate a new heap segment, the memory is reserved for that segment and will only be committed when needed at an appropriate granularity. So the total reserved bytes could be quite a bit bigger than the total committed bytes.
If you see a high value for “# Induced GC” it’s usually a bad sign – someone is calling GC.Collect too often. It’s often a bug in the app when this happens.
[Editted on 12/31/2004]
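As a side note, the same counters can be read from code with System.Diagnostics.PerformanceCounter. A minimal sketch, assuming the counter instance name matches the process name (which is the usual case):
using System;
using System.Diagnostics;
using System.Threading;

class GCCounterSample
{
    static void Main()
    {
        string instance = Process.GetCurrentProcess().ProcessName;
        PerformanceCounter timeInGC = new PerformanceCounter(".NET CLR Memory", "% Time in GC", instance);
        PerformanceCounter gen0 = new PerformanceCounter(".NET CLR Memory", "# Gen 0 Collections", instance);
        PerformanceCounter gen2 = new PerformanceCounter(".NET CLR Memory", "# Gen 2 Collections", instance);

        // Rate and percentage counters need two samples before NextValue returns something meaningful
        timeInGC.NextValue();
        Thread.Sleep(1000);

        Console.WriteLine("% Time in GC: {0}", timeInGC.NextValue());
        Console.WriteLine("# Gen 0 Collections: {0}", gen0.NextValue());
        Console.WriteLine("# Gen 2 Collections: {0}", gen2.NextValue());
    }
}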
When trying to create a perfmon counter we get the following error.
InvalidOperationException: Cannot read Instance :MonitorUseResou
namespace Apress.ExpertDotNet.MonitorUseResources
{
//This is the beginning
//below is the code where the problem starts
InitializeComponent();
this.pcGen0Collections = new PerformanceCounter(".NET CLR Memory",
"# Gen 0 Collections", "MonitorUseResou");
this.pcGen0Collections.BeginInit();
this.pcGen1Collections = new PerformanceCounter(".NET CLR Memory",
"# Gen 1 Collections", "MonitorUseResou");
this.pcGen1Collections.BeginInit();
//Here is what we found, when
//we rename "MonitorUseResou" "MonitorUseReso"
//with exactly 14 characters it works. Can you
//explain this?
//Thank You
What performance counter should be used to debug the OutOfMemoryException?
Rajesh, this is answered in
The multi-threaded server app that we ported from VJ++ to J# is showing a high "# induced GC". We are not calling GC directly from our code. What is the best way to find who is calling the GC?
Thanks!
-Ashok
Ashok, you can set a breakpoint on the GC.Collect method and see who's calling it.
Maoni,
Below is the stack trace I obtained from VS'05 Profiler which captures the bottleneck call sequence -that accounts for over 50% of the sampled time!
Please see my question at the bottom of this post.
WKS::GCHeap::FinalizerThreadCreate(void)
ManagedThreadBase::FinalizerBase(void (*)(void *))
ManagedThreadBase_NoADTransition(void (*)(void *),...)
Thread::SetStackLimits(enum Thread::SetStackLimitScope)
Thread::HasStarted(void)
Thread::DoADCallBack(struct ADID,void (*)...)
CustomAttributeParser::UnpackValue(unsigned char...)
Object::GetAppDomain(void)
WKS::CreateGCHeap(void)
HostExecutionContextManager::SetHostRestrictedContext
WKS::CallFinalizer(class Object *)
MethodTable::CallFinalizer(class Object *)
com.ms.vjsharp.lang.ThreadEnd.Finalize()
FastAllocatePrimitiveArray(class MethodTable *,...)
GCInterface::CollectGeneration(int)
WKS::gc_heap::mark_phase(int,int)
WKS::GCHeap::GarbageCollectGeneration(unsigned int...)
WKS::gc_heap::garbage_collect(int,int)
Please note that the last line above represents top of the stack. It appears that ThreadEnd.Finalize is inducing the garbage collection? I am stuck here. What do you think could be going on?
Ashok, the calls you showed - they can't be from the same callstack.
Maoni, this is as reported by VS'05 Team Profiler while sampling our server exe over a half hour period. I pulled this out of the call stack view. The execution thread in question is calling Finalize on various objects all of happen very fast EXCEPT for ThreadEnd.Finalize. The profiler clearly shows:
1. FastAllocatePrimitiveArray to be a descendent of ThreadEnd.Finalize
2. GCInterface.CollectGeneration to be the descendent of FastAllocatePrimitiveArray().
I have run the profiler several times now. The results are consistent - always the same.
What am I missing?
Is there a way I can send you the .VPS (Profiler report) file?
Ashok, I dunno how to intrepret what VS profiler shows 'cause I don't use it. But I can tell you for a fact that FastAllocatePrimitiveArray doesn't call GCInterface.CollectGeneration.
The advice I can give you is to set a bp on the GC.Collect call (you can set it on GCInterface.CollectGeneration if you want...the former calls the latter).
ThreadEnd.Finalize definitely calls GC.Collect()
Hi,
I have been having trouble understanding very large values in % Time in GC? Aren't the values supposed to be in the reange 0-100%? What would a value like 2018.185129 mean? Any help would be greatly appreciated!
Thank you!
Regards,
Aishwariya
Silverlight - Layout, Controls and Other Stuff
In a previous post, I introduced the Snapper element, which is a UserControl subclass that snaps its Content to an integer pixel. Now I'll show how to implement snapping as an attached behavior using a custom attached DependencyProperty.
To use the Snapper element, you put it into your tree, and "wrap" the element that you want to snap, like this:
<local:Snapper> <Rectangle Height="40" Width="40" Stroke="Black" Margin="2"/></local:Snapper>
That's not bad, but there's a cooler way to do it. We can add pixel snapping to any element by using an attached property to attach a behavior, like this:
<Rectangle Height="40" Width="40" Stroke="Black" Margin="2" local:PixelSnapBehavior.PixelSnap="Closest"/>
Attached Properties
Using XAML's attached properties, it is possible to put a property on an element that doesn't know about the property at compile time. In other words, "normal" properties have to be implemented on an element's class or base classes. Attached properties do not have this restriction. Some examples of attached properties include Canvas.Left, Canvas.Top, Canvas.ZIndex, Grid.Row, Grid.Column, etc. (If you have to put a "dot" in the property name, it is an attached property.) You can also create your own attached properties, and get and set their values on other objects. Before describing how to create your own attached properties, let's review regular DependencyProperties.
Dependency Properties
A "regular" DependencyProperty is a property that you define for a given class, and it is only valid on that class, like a CLR property. You register the DependencyProperty, and define a CLR property for it. A change notification callback is optional--you can pass null instead of the PropertyMetadata. Here are the elements needed to define a DependencyProperty. Note that the property and the change notification are static. Also note that the class you define the property on must descend from DependencyObject. In this case, I'm using a UserControl.
public class DPExample : UserControl
{
    static DependencyProperty DistanceProperty = DependencyProperty.Register("Distance", typeof(double), typeof(DPExample), new PropertyMetadata(OnDistanceChanged));

    static private void OnDistanceChanged(DependencyObject obj, DependencyPropertyChangedEventArgs e)
    {
        Debug.WriteLine("Distance property changed from {0} to {1}", e.OldValue, e.NewValue);
    }

    public double Distance
    {
        get { return (double)GetValue(DistanceProperty); }
        set { SetValue(DistanceProperty, value); }
    }
}
Attached DependencyProperties
An attached DependencyProperty is defined on one class, but made to be used (mostly) on instances of other classes. It is not necessary for the other classes to know about this property at compile time, so you can set an attached DependencyProperty on any object that descends from DependencyObject--even objects provided in the Silverlight framework. The property is now registered with RegisterAttached (the parameters are the same.) The property getter has been replaced by a public static <type> Get<propertyName> method, and the property setter has been replaced by a public static void Set<propertyName> method.
static DependencyProperty MassProperty = DependencyProperty.RegisterAttached("Mass", typeof(double), typeof(DPExample), new PropertyMetadata(OnMassChanged));

static private void OnMassChanged(DependencyObject obj, DependencyPropertyChangedEventArgs e)
{
    Debug.WriteLine("Mass property changed from {0} to {1}", e.OldValue, e.NewValue);
}

public static void SetMass(DependencyObject obj, double value)
{
    obj.SetValue(MassProperty, value);
}

public static double GetMass(DependencyObject obj)
{
    object result = obj.GetValue(MassProperty);
    return result != null ? (double)result : DefaultMass;
}

public const double DefaultMass = 1;
You will notice that when the GetMass method calls GetValue, it does not immediately cast to a double. This is because if the property has not been set on the instance passed in by the obj parameter, GetValue will return null. In this case, it is typical to return a default value (as above) or some value that signals "not set". This attached DependencyProperty can be set in code by calling the SetMass method, set in XAML, and animated.
Attached Behaviors
If we define a behavior as "doing something" then when we attach a behavior to an element, we are getting it to do something that it could not do before. This is typically done by attaching an event handler or handlers to an elements events. The behavior is in the event handler, which is defined elsewhere. This is obviously easy enough to do in code, but it can also be done in XAML (and code) by leveraging attached DependencyProperties. When an attached DependencyProperty is set on an instance, the property changed notification method is called. This is where you can hook into the element's events.
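As a quick illustration before the implementation, the same attached property can also be set from code-behind; myRectangle here is just a placeholder for whatever element you want to snap:
// Turn snapping on for an element...
PixelSnapBehavior.SetPixelSnap(myRectangle, PixelSnapType.Closest);
// ...and turn it back off later
PixelSnapBehavior.SetPixelSnap(myRectangle, PixelSnapType.None);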
The Code
Here is a class that implements pixel snapping as an attached behavior.
using System;
using System.Collections.Generic;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Diagnostics;

namespace CustomAttachedDP
{
    public class PixelSnapBehavior
    {
        // Define the attached DependencyProperty
        public static DependencyProperty PixelSnapProperty = DependencyProperty.RegisterAttached(
            "PixelSnap", typeof(PixelSnapType), typeof(PixelSnapBehavior), new PropertyMetadata(SnapPropertyChanged));

        // In the property changed notification method, we will add the element to a list of objects that will
        // be snapped when we get a LayoutUpdated event. We have to do a bunch of fancy stuff with weak references,
        // our own list, etc. because there is no Unloaded event.
        public static void SnapPropertyChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            PixelSnapType newSnap = (PixelSnapType)e.NewValue;

            // Look for the element in the list of tracked objects
            int index = 0;
            while (index < _objects.Count)
            {
                if (_objects[index].Target == d)
                    break;
                ++index;
            }

            if (index < _objects.Count)
            {
                // Already tracked; if snapping was turned off, clean up and stop tracking it
                if (newSnap == PixelSnapType.None)
                {
                    if (_objects[index].IsAlive)
                    {
                        Debug.WriteLine("Removing RenderTransform");
                        ((FrameworkElement)_objects[index].Target).RenderTransform = null;
                    }
                    _objects.RemoveAt(index);
                }
            }
            else if (newSnap != PixelSnapType.None)
            {
                _objects.Add(new WeakReference(d));
            }

            // Attach or detach the LayoutUpdated handler depending on whether anything is being tracked
            FrameworkElement element = d as FrameworkElement;
            if (element != null)
            {
                if (!_attached && _objects.Count > 0)
                {
                    element.LayoutUpdated += new EventHandler(LayoutUpdated);
                    _attached = true;
                }
                else if (_attached && _objects.Count == 0)
                {
                    element.LayoutUpdated -= new EventHandler(LayoutUpdated);
                    _attached = false;
                }
            }
        }

        // The attached DependencyProperty setter
        public static void SetPixelSnap(DependencyObject obj, PixelSnapType value)
        {
            obj.SetValue(PixelSnapProperty, value);
        }

        // The attached DependencyProperty getter
        public static PixelSnapType GetPixelSnap(DependencyObject obj)
        {
            object result = obj.GetValue(PixelSnapProperty);
            return result == null ? PixelSnapType.None : (PixelSnapType)result;
        }

        // A utility method to remove all snapped objects from the list.
        public static void RemoveAll()
        {
            while (_objects.Count > 0)
            {
                if (_objects[0].IsAlive)
                {
                    // Setting the property to None removes the entry from the list
                    SetPixelSnap((DependencyObject)_objects[0].Target, PixelSnapType.None);
                }
                else
                {
                    _objects.RemoveAt(0);
                }
            }
        }

        // The event handler for the LayoutUpdated event. It will snap everything that it thinks is
        // still alive. This is not 100% bulletproof but it should work in most scenarios.
        private static void LayoutUpdated(object sender, EventArgs e)
        {
            int index = 0;
            while (index < _objects.Count)
            {
                if (_objects[index].IsAlive == false)
                {
                    // The element has been collected; drop the stale reference
                    _objects.RemoveAt(index);
                    continue;
                }
                Snap(_objects[index].Target as FrameworkElement);
                ++index;
            }
        }

        // Try to align an element on an integer pixel
        private static void Snap(FrameworkElement target)
        {
            if (target == null)
                return;
            PixelSnapType snap = PixelSnapBehavior.GetPixelSnap(target);

            // Remove existing transform
            TranslateTransform savedTransform = target.RenderTransform as TranslateTransform;
            if (savedTransform != null)
            {
                target.RenderTransform = null;
            }

            // Calculate actual location
            MatrixTransform globalTransform = target.TransformToVisual(Application.Current.RootVisual) as MatrixTransform;
            Point p = globalTransform.Matrix.Transform(_zero);
            double deltaX = snap == PixelSnapType.Closest ? Math.Round(p.X) - p.X : (int)p.X - p.X;
            double deltaY = snap == PixelSnapType.Closest ? Math.Round(p.Y) - p.Y : (int)p.Y - p.Y;

            // Set new transform
            if (deltaX != 0 || deltaY != 0)
            {
                if (savedTransform == null)
                    savedTransform = new TranslateTransform();
                target.RenderTransform = savedTransform;
                savedTransform.X = deltaX;
                savedTransform.Y = deltaY;
            }
        }

        private static readonly Point _zero = new Point(0, 0);
        private static List<WeakReference> _objects = new List<WeakReference>();
        private static bool _attached = false;
    }

    public enum PixelSnapType
    {
        None,
        Closest,
        TopLeft
    }
}
You've refactored the dependecy property's explaining code, but the description after was not refactored: DPExample.Count was written instead of DPExample.Distance
Hey, thanks, TWiStErRob. Corrected.
Nice sample, but don't use IsLive property on the WeakReference it's evil ;)
Between your call to .IsLive and .Target the GC could have collected the object and exception will be raised
Hi, good to see that attached behaviors are picking up... I find them very useful, and this is a very nice sample. I wanted to share an example of using attached behaviors in Silverlight to emulate the ICommand interface behavior present in WPF.
You may find it useful, as the concepts are exactly the same,
Julian | http://blogs.msdn.com/devdave/archive/2008/06/22/Using-an-Attached-DependencyProperty-to-Implement-Pixel-Snapping-as-an-Attached-Behavior.aspx | crawl-002 | en | refinedweb |
Introduction
Integrating external applications with SharePoint data and functionality is pretty easy, but the documentation is scattered, so I thought it might be helpful to provide a complete solution that covers a few main scenarios.
I’ve used the SharePoint web services to provide a custom search interface and to populate lookup tables. I’ve also seen it used as a workaround in a few scenarios where the SharePoint Object Model was behaving erratically.
This post will focus on how to quickly get started with the web services interface in WSS, and particularly how to access SharePoint data and search results quickly, in a new project. It’s not very polished, but it gets the job done.
Does this apply to me?
First, when communicating with WSS and MOSS, you have to decide which remote access tool best fits your needs.
As Alvin Bruney points out in his book Programming Excel Services (Safari), the web services interface is also “a way to access the resources of the server with reduced risk of instability to the server resources.”
Secondly, figure out if you need to do this from scratch at all.
Check into LINQ to SharePoint, which can use either the SharePoint Object model or the web services interface. It looks like a really slick and robust way to interface with your existing SharePoint data. On the down side, it appears to not have been updated since November 2007 and Bart De Smet, the project’s author (blog), notes that it is an alpha release and not ready for production. I’ve steered clear of it for that reason and due to client restrictions, but you might save yourself a lot of time if you can use it!
Check out this video, from the project’s CodePlex page, for a good quick start:
LINQ to SharePoint quick start video (5:37, 3.18MB, WMV)
If you decide against LINQ to SharePoint, you can still easily add web references and consume the SharePoint web services in your custom code.
Walkthrough: Accessing List data
As it turns out, the web host for Hands on DC (a volunteer organization with which I work) does some hokey things with NTLM authentication that aren’t supported by WCF. Namely, it seems like they’re doing Transport level authentication over plaintext HTTP instead of HTTP over SSL. I can see why this isn’t supported, for security reasons. Despite Dan Rigby’s excellent post about impersonation in WCF, I couldn’t get anything to work.
But alas, with no sleight of hand whatsoever, the scenario works in VS2005 and a regular old-school ASMX-based web service proxy (web reference). Here's a quick (and quite dirty) example, initially based on Ishai Sagi's (blog) response on MSDN:
Video hint: Click the full screen icon (looks like a TV) Download source code (39KB, ZIP file)
The query that we send to the list service is written in CAML. When we want to get more complex in our query, such as only pulling back certain columns, check out the U2U CAML Query Builder. It provides an awesome interface into creating your CAML queries.
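For reference, a stripped-down version of that call looks something like the following; ListsService is just whatever name you gave the web reference, and the list and field names are placeholders (System.Xml is assumed to be imported):
ListsService.Lists lists = new ListsService.Lists();
lists.Credentials = System.Net.CredentialCache.DefaultCredentials;
lists.Url = "http://server/site/_vti_bin/Lists.asmx";
// Build the CAML query
XmlDocument doc = new XmlDocument();
XmlNode query = doc.CreateElement("Query");
query.InnerXml = "<Where><Eq><FieldRef Name='Status'/><Value Type='Text'>Open</Value></Eq></Where>";
// Each row in the result comes back as a <z:row> element with ows_-prefixed attributes
XmlNode results = lists.GetListItems("Tasks", null, query, null, null, null, null);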
In the WCF world, Kirk Evans has an awesome walkthrough, Calling SharePoint Lists Web Service using WCF, which includes how to streamline the XML access by using an XPathNavigator, and later using a DataTable that reads from an XmlNodeReader. Better than querying the fields directly for sure!
Walkthrough: Querying WSS Search Results
Search is a little more challenging. As is the case with every other web service, we send the query service XML, and it returns XML. We can take a shortcut and generate the request XML with a tool such as the Search Query Web Service Test Tool for MOSS (or we could also import the schema and generate a C# class from it).
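For the curious, the request is just a QueryPacket document. A bare-bones keyword query, with a placeholder URL and QueryService as the name of the web reference, looks roughly like this (WSS exposes the service at spsearch.asmx, while MOSS uses search.asmx):
QueryService.QueryService search = new QueryService.QueryService();
search.Credentials = System.Net.CredentialCache.DefaultCredentials;
search.Url = "http://server/_vti_bin/spsearch.asmx";
string queryPacket =
    "<QueryPacket xmlns='urn:Microsoft.Search.Query'>" +
    "  <Query><Context>" +
    "    <QueryText type='STRING' language='en-US'>volunteer</QueryText>" +
    "  </Context></Query>" +
    "</QueryPacket>";
// Query returns raw XML; QueryEx returns a DataSet, which can be easier to work with
string resultXml = search.Query(queryPacket);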
Since I’m a big fan of using XSD.EXE to generate a C# class, I chose to do so with the search result support classes. It was mostly productive, although the Results node is free-form (can take any node type) and the generator doesn’t seem to support that. In the end, we can use the generated classes to get statistics about the data set, and can navigate the Documents using regular XML methods.
Here is a complete walkthrough of adding very basic search results, including a total result count and the first page of results, to our application:
Video hint: Click the full screen icon (looks like a TV) Download source code (56KB, ZIP file)
Conclusion
Use SharePoint web services when you have the need to separate concerns and reduce risk inherent with new code deployments to the SharePoint farm, you want remote access to SharePoint from a non-SharePoint server, or you need to access SharePoint from something other than .NET (such as JavaScript).
This post and associated videos walked you through creating a Windows Forms application, from scratch, that pulls data in from SharePoint, including List-based data and search results. The process is similar to any other web service, but there are a few gotchas and pain points that I hope have been cleared up in this resource.
I collected some additional links in the process of putting this post together, and added them on delicious with the tag "sharepoint-webservices".
For the past few months (and 100+ volunteer hours!) I’ve been creating a web application for Hands on DC that calculates volunteers, gallons of paint, and materials for work projects for their annual Work-a-Thon event. After the encouragement of a few coworkers who did some initial work on the project, I committed to using ASP.NET MVC, technology which has been out over a year but just reached a production 1.0 release at Mix 09 this year.
Getting up and running with MVC wasn’t an easy task. The project was also my first foray into LINQ to SQL, and really .NET 3.5 in general, so it was a little intimidating at first! There’s not much documentation and it’s split across the many release versions of MVC. The main site will get you up doing very basic things (but is seriously lacking content), though Phil Haack’s webcast and Scott Hanselman, et. al.’s free e-Book are helpful.
In the process, I discovered some important companion pieces in MvcContrib and jQuery, including the validation plugin and the datatable plugin. I want to highlight work that I did to combine the MvcContrib data grid with the datatable for sorting, paging and filtering. This was something I struggled with for several hours, so I’m hoping there is some value in posting the full example.
Figure 1. Example of using MvcContrib with jQuery datatable plugin.
Walkthrough
Here is a complete from-scratch example.
Figure 2. Solution after copying the datatable media folder.
<%@ Import Namespace="MvcContrib.UI.Grid" %>
<%@ Import Namespace="MvcContrib.UI.Grid.ActionSyntax" %>
<%@="../../media/css/demos.css" rel="stylesheet" type="text/css" />
<script src="../../media/js/jquery.js" type="text/javascript"></script>
<script src="../../media/js/jquery.dataTables.js" type="text/javascript"></script>
</head>
public enum Medal
{
Gold,
Silver,
Bronze
}
public class MedalWinner
{
public string Location { get; set; }
public string Year { get; set; }
public string Sport { get; set; }
public Medal Medal { get; set; }
public string Country { get; set; }
public string Name { get; set; }
public MedalWinner(string l, string y, string s, Medal m, string c, string n)
{
Location = l;
Year = y;
Sport = s;
Medal = m;
Country = c;
Name = n;
}
}
public ActionResult Index()
{
ViewData["Message"] = "Welcome to ASP.NET MVC!";
var medalWinners = new List<MedalWinner>();
medalWinners.Add(
new MedalWinner("Athens", "2004", "Handball",
Medal.Gold, "Croatia", "LOSERT, Veni"));
medalWinners.Add(
new MedalWinner("Athens", "2004", "Handball",
Medal.Gold, "Croatia", "BALIC, Ivano"));
medalWinners.Add(
new MedalWinner("Athens", "2004", "Handball",
Medal.Gold, "Croatia", "ZRNIC, Vedran"));
medalWinners.Add(
new MedalWinner("Athens", "2004", "Handball",
Medal.Silver, "Germany", "JANSEN, Torsten"));
medalWinners.Add(
new MedalWinner("Athens", "2004", "Handball",
Medal.Silver, "Germany", "KRETZSCHMAR, Stefan"));
medalWinners.Add(
new MedalWinner("Athens", "2004", "Handball",
Medal.Silver, "Germany", "VON BEHREN, Frank "));
ViewData["MedalWinners"] = medalWinners;
return View();
}
<ol>
<% foreach (HomeController.MedalWinner winner
in (List<HomeController.MedalWinner>)ViewData["MedalWinners"] )
{ %>
<li><%= winner.Name %>, <%= winner.Country %></li>
<% } %>
</ol>
<% Html.Grid((List<HomeController.MedalWinner>)ViewData["MedalWinners"])
.Columns(column =>
{
column.For(c => c.Year);
column.For(c => c.Location);
column.For(c => c.Name);
column.For(c => c.Country);
column.For(c => c.Medal.ToString());
column.For(c => c.Sport);
}).Render();
%>
<script type="text/javascript" charset="utf-8">
$(document).ready(function() {
$('#example').dataTable();
});
</script>
<style>
#example { width: 100%; }
#container { width: 600px; }
</style>
<div id="container">
<%);
}).Attributes(id => "example").Render();
%>
</div>
$(document).ready(function() {
$('#example').dataTable({
"iDisplayLength": 25,
"aaSorting": [[2, "asc"]],
"aoColumns": [{ "bSortable": false }, null,
null, null, null, { "bSortable": false}]
});
});
Download complete source code (394KB, ZIP file)
A few years ago I created an article around Reporting Services and dates. It could have been written more generically, because I reference this quite a bit to get common dates like "the beginning of this week", "midnight last night", etc, in my SQL queries. It's a fairly comprehensive list of relative dates that one might want to get in T-SQL for reporting, scheduling, etc.
It can get pretty complex, such as this function for getting the end of the current week:
CREATE FUNCTION get_week_end (@date datetime)
RETURNS datetime
AS
BEGIN
    return dateadd(yyyy, datepart(yyyy, dateadd(weekday, 7 - datepart(weekday, @date), @date)) - 1900, 0)
         + dateadd(ms, -3, dateadd(dy, datepart(dy, dateadd(weekday, 7 - datepart(weekday, @date), @date)), 0))
END
If you don't find what you need, you can typically use the dateadd function to tweak one of these. The full list of relative dates is outlined in the article.
I've been working on various forms of displaying status messages from enums, and here's the latest preferred iteration of how to do this. Regurgitated and tweaked from WayneHartman.com.
public enum XmlValidationResult
{
[Description("Success.")]
Success,
[Description("Could not load file.")]
FileLoadError,
[Description("Could not load schema.")]
SchemaLoadError,
[Description("Form XML did not pass schema validation.")]
SchemaError
}
// requires: using System.ComponentModel; using System.Reflection;
private string GetEnumDescription(Enum value)
{
// Get the Description attribute value for the enum value
FieldInfo fi = value.GetType().GetField(value.ToString());
DescriptionAttribute[] attributes =
(DescriptionAttribute[])fi.GetCustomAttributes(
typeof(DescriptionAttribute), false);
if (attributes.Length > 0)
{
return attributes[0].Description;
}
else
{
return value.ToString();
}
}
It's possible to do something even cooler, like caching the values or adding a ToDescription() extension method (in C# 3.0), but I just wanted a simple, repeatable way to do this.
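In case it's useful, here's a minimal sketch of that idea as a cached ToDescription() extension method; the EnumExtensions class name and the dictionary cache are my own additions, not something from the original post.

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Reflection;

public static class EnumExtensions
{
    // Cache descriptions so the reflection only happens once per enum value.
    private static readonly Dictionary<Enum, string> cache = new Dictionary<Enum, string>();

    public static string ToDescription(this Enum value)
    {
        string description;
        if (cache.TryGetValue(value, out description))
            return description;

        FieldInfo fi = value.GetType().GetField(value.ToString());
        DescriptionAttribute[] attributes =
            (DescriptionAttribute[])fi.GetCustomAttributes(typeof(DescriptionAttribute), false);

        description = attributes.Length > 0 ? attributes[0].Description : value.ToString();
        cache[value] = description;
        return description;
    }
}

// Usage: XmlValidationResult.FileLoadError.ToDescription() returns "Could not load file."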
Aaron Skonnard, Pluralsight (October 2008)
Figure 1: Some common HTTP methods

Method    Description                                                     Safe?
GET       Requests a specific representation of a resource                Yes
PUT       Create or update a resource with the supplied representation    No
DELETE    Deletes the specified resource                                  No
POST      Submits data to be processed by the identified resource         No
HEAD      Similar to GET but only retrieves headers and not the body      Yes
OPTIONS   Returns the methods supported by the identified resource        Yes
Although HTTP fully supports CRUD, HTML 4 only supports
issuing GET and POST requests through its various elements. This limitation has
held Web applications back from making full use of HTTP, and to work around it,
most applications overload POST to take care of everything but resource
retrieval. HTML 5, which is currently under development, plans to fix this by
adding new support for PUT and DELETE.
GET, HEAD, and OPTIONS are all examples of safe methods that
aren’t intended to have side effects. All safe methods are also idempotent, as
are PUT and DELETE, so you should be able to repeat them multiple times without
harm. The POST method is something of a special case. According to the HTTP
specification, POST should be used to provide a representation that can be
treated as a subordinate of the target resource. For example, you could POST a
new blog entry to the URI representing the blog feed, causing a new blog entry
to be added to the feed. POST can also be used to process a block of data such
as the data transmitted by an HTML form. The actual function performed by the
POST method is defined by the server. Therefore, POST cannot be considered safe
or idempotent by clients.
HTTP also defines a suite of standard status codes that
specify the result of processing the request. Status codes are organized into
ranges that mean different things. For example, status codes in the 200 range
mean “successful” while status codes in the 400 range mean the client issued a
bad request. Figure 2 describes each status code range and provides a few
examples of common status codes.
Figure 2: Some common HTTP status codes

Range   Meaning          Examples
100     Informational    100 Continue
200     Successful       200 OK, 201 Created, 202 Accepted
300     Redirection      301 Moved Permanently, 304 Not Modified
400     Client error     401 Unauthorized, 402 Payment Required, 404 Not Found, 405 Method Not Allowed
500     Server error     500 Internal Server Error, 501 Not Implemented
The HTTP specification also defines a suite of headers that
can be used to negotiate behavior between HTTP clients and servers. These
headers provide built-in solutions for important communication concepts like
redirection, content negotiation, security (authentication and authorization),
caching, and compression. When you build something using HTTP, you get these
solutions for free and don’t have to invest time in reinventing similar
solutions in your own code. And when you’re not using HTTP, it’s likely that
you’ll end up developing similar solutions for these things when your system
grows in scale.
The Web platform has been around for years and countless
organizations have deployed successful large-scale distributed applications
using the concepts I just described. The Web’s general architectural style has
been concretely described in Roy Fielding’s PhD dissertation as what’s known as “REST” (Representational State Transfer).
However, the key design constraint that sets REST apart from
other distributed architectural styles is its emphasis on a uniform interface
between components. The theory is that generalizing and standardizing the
component interface will ultimately simplify the overall system architecture
and provide more visibility into the various interactions. REST further defines
how to use the uniform interface through additional constraints around how to
identify resources, how to manipulate resources through representations, and
how to include metadata that make messages self-describing.
When something conforms to these REST design constraints, we
commonly refer to it as “RESTful” – a term that I casually use throughout this
whitepaper. The Web is indeed RESTful. The Web was built on HTTP’s uniform
interface (the methods described in Figure 1) and the focus is on interacting
with resources and their representations. Although in theory REST isn’t tied to
any specific platform or technology, the Web is the only major platform that
fully embodies REST today. So, in practical terms, if you’re going to build
something that’s RESTful today, you’ll probably do it on the Web using HTTP.
Some argue that generalizing the component interface limits
the capabilities of the system, but this is simply not true. There is great
power in the simplicity of a uniform interface because of the value it adds at
larger scales. The REST model is Turing-complete and can be used to implement
complex systems.
A comparison might help explain how this is possible – let’s
consider the popular LEGO® building
blocks as an example.
HTTP defines a similar model for the Web. The various
methods defined by the HTTP specification (see Figure 1) provide a uniform
interface for interacting with resources on the Web. All Web browsers, servers,
intermediaries, and custom applications understand this uniform interface and
the semantics of each operation. This allows them to connect to one another and
exchange information without issues, despite platform and technology
differences. And new Web components can be added at any time without causing
disruption or requiring changes to the other components already in existence.
The idea of a uniform interface is often hard to accept for
many developers, especially those who are used to working with RPC-based
component technologies. With RPC, every new component introduces a new
interface – a new set of methods – for accessing the component’s functionality.
Hence, the component developer is focused on designing and implementing
methods, and therefore, new application protocols. It’s been this way for
years, and few technologies move away from this trend.
Before clients can take advantage of a new component, they
must learn the intricacies of the new interface (application protocol) and the
semantics of each operation. Ultimately, as the number of interfaces increases,
so does the overall complexity of the system. This complexity can become
unwieldy to manage over time and often leads to brittle systems that can’t cope
with versioning and change.
A system built around a uniform interface for communication
provides stability because it rarely changes and there are only a few methods for
everyone to learn. Applications using a uniform interface are free to change at
any time while the communication methods connecting them remain stable over
time. This is how the Web has always worked, and one of the primary reasons it
has worked so well.
The move towards RESTful services is ultimately about moving
towards a programmable Web, one where we can replace humans with application
code. It’s essentially about applying the principles of REST to the domain of
Web services. Ultimately, designing RESTful services is no different than
designing RESTful Web applications except we need to facilitate removing humans
from the equation.
When you design a RESTful service, you have to think about
things differently. You no longer focus on designing methods. Instead, you
focus on the resources that make up your system, their URIs, and their
representations. RESTful services conform to the HTTP uniform interface – you
simply need to decide which of those methods you’ll support for each resource.
In order to remove humans from the equation, you’ll need to use resource
representations that are easy to programmatically consume.
Unlike traditional RPC-style frameworks that attempt to hide
communication details, RESTful services actually embrace HTTP and its features
and fully take advantage of them as much as possible. As a result, RESTful
services automatically receive the valuable benefits inherent in the Web
platform including the built-in security features, caching controls,
compression, and ultimately improved performance and scalability. And best of
all, you don’t have to wait for it – the Web platform is ready today – complete
with products, infrastructure, and helpful resources available for immediate
use.
In this section, we’ll start from a traditional RPC-based
service and redesign it to become a RESTful service. To accomplish this, first we’ll extract the
resources that make up the existing service. Then we’ll design a URI scheme for
identifying the resources and decide which HTTP methods they’ll support. And
finally, we’ll design the resource representations that will be supported by
each method.
Let’s suppose your company provides an online bookmarking
service similar to what’s provided by Windows Live Favorites, Google Bookmarks,
or delicious. We’ll assume it was originally implemented using SOAP with an
RPC-based design. The service supports the list of operations described in
Figure 3.
Figure 3: An RPC-based bookmarking service

Operation                Description
createUserAccount        Creates a new user account
getUserAccount           Retrieves user account details for the authenticated user
updateUserAccount        Updates user account details for the authenticated user
deleteUserAccount        Deletes the authenticated user’s account
getUserProfile           Retrieves a specific user’s public profile information
createBookmark           Creates a new bookmark for the authenticated user
updateBookmark           Updates an existing bookmark for the authenticated user
deleteBookmark           Deletes one of the authenticated user’s bookmarks
getBookmark              Retrieves a specific bookmark (anyone can retrieve a public bookmark; only authenticated users can retrieve a private bookmark)
getUserBookmarks         Retrieves the user’s private bookmarks, allows filtering by tags
getUserPublicBookmarks   Retrieves the user’s public bookmarks, allows filtering by tags
getPublicBookmarks       Retrieves all public bookmarks, allows filtering by tags
Several of these operations are publicly accessible
including createUserAccount, getUserProfile, getUserPublicBookmarks,
getPublicBookmarks, and getBookmark (assuming the bookmark is public). Anyone
can use these operations without authentication. The remaining operations,
however, require the user to provide valid credentials and can be used only by
valid, authenticated users. For example, only an authenticated user can modify
or delete his account or create, update, and delete bookmarks.
Also, bookmarks can be marked as “public” or “private” and
they can be labeled with arbitrary textual “tags.” Anyone can retrieve public
bookmarks, but only authenticated users can access private bookmarks. Plus, all
of the operations that return bookmark collections can be filtered by “tags.”
The first step in designing a RESTful service is to identify
the resources the service will expose. From inspecting the operations in Figure
3, it looks like there are just a few resources in play: user accounts, user profiles, and bookmarks.
However, we need to get a little more specific than this since
we’ll need the ability to operate on individual bookmarks as well as different
collections of bookmarks. After analyzing the current functionality, it’s
apparent we’ll need the ability to address the following types of resources: the collection of all public bookmarks, a user’s collection of public bookmarks, individual user accounts, a user’s public profile, a user’s complete collection of bookmarks, and individual bookmarks.
You can think of these things as the “nouns” that make up
the service. The original service design outlined in Figure 3 focused on
“verbs” and not the underlying nouns. This is where RESTful design takes a
radical turn. With REST, you focus first on the nouns first (e.g., the
resources) because you’ll rely on a standard set of verbs (the uniform
interface) to operate on them within the service.
Now that we’ve identified the fundamental resources that
make up our service, our next task is to define identifiers for them. Since we
plan to host this service on the “Web,” we’ll rely on the Web’s URI syntax for
identifying these resources.
To keep things simple for consumers, we’ll use the service’s
base address to identify the list of all public bookmarks. So wherever our service
is hosted, we’d browse to its base address
to retrieve the list of all public bookmarks. Since the list of all public
bookmarks can get quite large, we should probably also provide a way to filter
the collection of bookmarks somehow. We can accomplish this by building
additional scoping information into the URI design. For example, we can use the
following query string to identify all public bookmarks marked with a
particular tag:
?tag={tag}
In this case, the “tag” is providing additional scoping information
that consumers can use to reduce the identified collection’s size. The syntax
I’m using here is referred to as URI template syntax.
Anything within curly braces represents a variable, like tag
in this case. Everything else in the URI (not enclosed within curly braces) is
considered a static part of the URI. Later, when we implement the service,
you’ll see how to map these URI variables to method parameters in our WCF code.
The URI template in this case is relative to the service’s
base URI. So you can identify all public bookmarks marked with the “rest” tag
using the query string ?tag=rest against the base URI.
We can further filter the list of public bookmarks by
username. In this case, we’ll use the username as part of the path to filter
the collection by user before applying the tag scoping information:
{username}?tag={tag}
For example, you can identify all of skonnard’s bookmarks
marked with “wcf” using skonnard?tag=wcf.
And you can access all of onion’s bookmarks marked with “silverlight” using onion?tag=silverlight.
Next, let’s think about how to identify a particular user.
Since we’ve already used a variable in the first path segment (for identifying
a user’s public bookmarks), we’ll need to specify a literal string in that
first segment to change the meaning of what comes next. For example, we can say
that all URIs starting with “users” will identify a specific user. We’ll use
the following templates to identify the user resources:
/users/{username}
/users/{username}/profile
And we can identify a user’s complete list of bookmarks by
adding “bookmarks” instead:
/users/{username}/bookmarks
Plus, like before, we can filter bookmarks by using “tag”
scoping information:
/users/{username}/bookmarks?tag={tag}
When it comes to identifying individual bookmarks, we have
to make a decision about how to do that. If we assign each bookmark a unique
Id, we could potentially use a simpler URI template for identifying individual
bookmarks based on the Id. However, since bookmarks really belong to a specific
user, it might make sense to make individual bookmark identifiers relative to a
particular user as shown here:
/users/{username}/bookmarks/{id}
Figure 4 summarizes the URI design for our RESTful bookmark
service. Now that we know what resources we’re dealing with and how to identify
them, we can turn our attention to thinking about which methods on the uniform
interface we’ll support for each of these resources.
Figure 4: The BookmarkService URI design

Resource                                   URI template
A collection of all public bookmarks       ?tag={tag}
A user’s collection of public bookmarks    {username}?tag={tag}
An individual user account                 users/{username}
A specific user’s public profile           users/{username}/profile
A user’s collection of bookmarks           users/{username}/bookmarks?tag={tag}
An individual bookmark                     users/{username}/bookmarks/{id}
For the publicly accessible bookmark collections and the
public user profile resource, we’ll only support GET requests. You can think of
these as read-only resources. We’ll return a 200 (“OK”) when the requests are
successful. If the URI doesn’t identify a known user, we’ll return 404 (“Not
Found”).
The remaining resources will allow creating and modifying
resources so you can think of them as read/write resources. For example, we’ll
support GET, PUT, and DELETE on user account resources to replace the
equivalent operations in the RPC version. We’ll use PUT to create new user
accounts, GET to retrieve them, PUT (again) to update them, and DELETE to
delete them.
Once created, a user can retrieve her account resource by
issuing a GET request to her account URI.
She can issue a PUT request to update her account resource by supplying
an updated user account representation. She can also issue a DELETE request
(with no representation) to delete her account. When these operations are
successful, the service returns a 200 (“OK”) response. If the client attempts
to update or delete a non-existent account, the service will return a 404 (“Not
Found”) response. The bookmark resources follow a similar pattern. If
successful, the service will return a 200 (“OK”). If the client uses a URI that
doesn’t exist, the service will return a 404 (“Not Found”) response.
Figure 5 summarizes the final design of our RESTful
interface for bookmark resources, showing which HTTP methods we’ll support with
each resource. We’ve been able to completely replace the functionality found in
the original RPC service through HTTP’s uniform interface.
Figure 5: RESTful interface for user accounts and bookmarks

Resource (URI template)                   Supported methods
users/{username}                          GET, PUT, DELETE
users/{username}/profile                  GET
users/{username}/bookmarks?tag={tag}      GET, POST
users/{username}/bookmarks/{id}           GET, PUT, DELETE
Our service requires the ability to authenticate users so we
can authorize the resources and methods they’re allowed to access. For example,
only authenticated users can access their own user account resources and
operate on them. And only authenticated users can create new bookmarks and
operate on them. If an unauthenticated user attempts to do so – or a user
attempts to operate on another user’s resources – the service needs to return a
401 (“Unauthorized”) response and deny access.
So we need to figure out how we’ll identify users in order
to authenticate them. HTTP comes with some built-in authentication mechanisms,
the most popular of which is basic access authentication. This is one of the
most popular authentication schemes used on the Web today because it’s so easy
and widely supported, but it’s also one of the least secure, because passwords
are sent across the wire in a simple-to-decode plain text format. One way around this is to require SSL (HTTPS)
for all HTTP traffic that will be using basic authentication, thereby
encrypting the pipe carrying the passwords.
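As a rough illustration (the address and credentials here are made up, and this isn't part of the bookmark service design itself), a .NET client could attach a basic authentication header to an HttpWebRequest like this, assuming the call goes over HTTPS:

using System;
using System.Net;
using System.Text;

class BasicAuthClient
{
    static void Main()
    {
        // Hypothetical HTTPS address for a protected user-account resource.
        var request = (HttpWebRequest)WebRequest.Create(
            "https://example.com/bookmarkservice/users/skonnard");

        // Basic authentication sends "username:password" base64-encoded,
        // which is only acceptable when the connection itself is encrypted.
        string credentials = Convert.ToBase64String(
            Encoding.UTF8.GetBytes("skonnard:secret"));
        request.Headers[HttpRequestHeader.Authorization] = "Basic " + credentials;

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine((int)response.StatusCode);
        }
    }
}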
Another approach is to use digest authentication, another
authentication scheme built into HTTP. Digest authentication prevents
eavesdropping by never sending the password across the wire. Instead, the
authentication algorithm relies on sending hash values computed from the
password and other values known only by the client and server. This makes it
possible for the server to recompute the hash found in an incoming message to validate
that the client has possession of the password.
Here’s how it works. When a client attempts to access a
protected resource, the server returns a 401 (“Unauthorized”) response to the
client along with a “WWW-Authenticate” header indicating that it requires
digest authentication along with some supporting data. Once the client receives
this, it can generate an “Authorization” header containing the computed hash
value and send an identical request back to the server including the new
header. Assuming the client generates a valid “Authorization” header, the
server will allow access to the resource. Digest authentication is better than
basic but it’s still subject to offline dictionary and brute force attacks
(unless you enforce a really strong password policy), and it’s not as widely
supported by Web browsers and servers.
Another approach is to avoid both basic and digest
authentication and implement a custom authentication scheme around the
“Authorization” header. Many of these schemes use a custom Hash Message
Authentication Code (HMAC) approach, where the server provides the client with
a user id and a secret key through some out-of-band technique (e.g., the
service sends the client an e-mail containing the user id and secret key). The
client will use the supplied secret key to sign all requests.
For this approach to work, the service must define an
algorithm for the client to follow when signing the requests. For example, it
must outline how to canonicalize the message and which parts should be included
in the HMAC signature along with the secret key. This is important because the
client and service must follow the same algorithm for this to work. Once the
client has generated the HMAC hash, it can include it in the “Authorization”
header along with the user id:
Authorization: skonnard:uCMfSzkjue+HSDygYB5aEg==
When the service receives this request, it will read the
“Authorization” header and split out the user id and hash value. It can find
the secret for the supplied user id and perform the same HMAC algorithm on the
message. If the computed hash matches the one in the message, we know the
client has possession of the shared secret and is a valid user. We also know
that no one has tampered with whatever parts of the message were used to
compute the HMAC hash (and that could be the entire message). In order to
mitigate replay attacks, we can include a timestamp in the message and include
it in the hash algorithm. Then the service can reject out-of-date messages or
recently seen timestamp values.
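To make that flow a little more concrete, here's a minimal C# sketch of the signing and verification steps. The canonical string (method, URI, and timestamp) and the use of HMACSHA256 are assumptions for illustration only; a real service would publish its own canonicalization rules and key format.

using System;
using System.Security.Cryptography;
using System.Text;

static class HmacAuth
{
    // Both sides must agree on exactly what gets signed; here we assume
    // "METHOD\nURI\nTIMESTAMP" purely for illustration.
    static string ComputeSignature(string secretKey, string method, string uri, DateTime timestampUtc)
    {
        string canonical = method + "\n" + uri + "\n" + timestampUtc.ToString("o");
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secretKey)))
        {
            byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(canonical));
            return Convert.ToBase64String(hash);
        }
    }

    // Client side: build the Authorization header value ("userid:signature").
    public static string BuildAuthorizationHeader(string userId, string secretKey,
        string method, string uri, DateTime timestampUtc)
    {
        return userId + ":" + ComputeSignature(secretKey, method, uri, timestampUtc);
    }

    // Service side: look up the secret for the user id, recompute the
    // signature, and compare it to the one presented in the header.
    public static bool Verify(string authorizationHeader, string secretKey,
        string method, string uri, DateTime timestampUtc)
    {
        int sep = authorizationHeader.IndexOf(':');
        string presented = authorizationHeader.Substring(sep + 1);
        string expected = ComputeSignature(secretKey, method, uri, timestampUtc);
        return presented == expected; // also reject stale timestamps to mitigate replays
    }
}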
The HMAC approach is superior to both basic and digest
authentication, especially if the generated secrets are sufficiently long and
random, because it doesn’t subject the password to dictionary or brute force
attacks. As a result, this technique is quite common in today’s public-facing
RESTful services.
For the service we’re designing, we could pick any of these
techniques to authenticate users. We’ll assume an HMAC approach for our
service, and that each user will be assigned a secret key through an
out-of-band e-mail after creating a new user account. And for all of the
non-public resources, we’ll look for a valid HMAC hash in the “Authorization”
header and return a 401 (“Unauthorized”) when necessary.
Now that we have a way for users to prove who they are,
we’ll need logic to authorize their requests, in other words, to decide what
they are allowed to do. For example, any authenticated or anonymous user may
retrieve any public bookmark, while private bookmarks may only be retrieved by
the authenticated user who owns them. We’ll see how to implement this
authorization logic later in the paper.
Now we need to decide how we’re going to represent the
resources exposed by our service. There are many different data formats commonly
used to represent resources on the Web including plain text, form-encoding,
HTML, XML, and JSON, not to mention the variety of different media formats used
to represent images, videos, and the like. XML is probably the most popular
choice for RESTful services, although JSON has been growing in popularity
thanks to the Web 2.0/Ajax movement.
XML is easier to consume in most programming languages
(e.g., .NET) so it’s often the default format. However, for browser-based
scenarios (which abound), the JavaScript Object Notation (JSON) is actually
easier to consume because it’s a JavaScript native format. For the service
we’re designing, we’ll support both XML and JSON to accommodate both client
scenarios equally well.
An important thing to think about while designing resource
representations is how to define the relationships between different resources.
Doing so will allow consumers to navigate the “Web” of resources exposed by
your service, and even discover how to navigate the service by actually using
it.
Let’s begin by designing the XML representation for a user
account. When creating a new user account, we need the user to supply only a
name and e-mail address (remember, the username is represented in the URI). The
following is the XML format we’ll use for creating new user accounts:
<User>
<Name>Aaron Skonnard</Name>
<Email>[email protected]</Email>
</User>
However, when a user retrieves his user account resource,
the service will supply a different representation containing a little more
information, in this case an Id and a link. We’ll provide the Id that links
back to this particular user resource and a link to this user’s list of public
bookmarks:
<User>
<Id><!-- link back to this user resource --></Id>
<Name>Aaron Skonnard</Name>
<Email>[email protected]</Email>
<Bookmarks><!-- link to this user’s public bookmarks --></Bookmarks>
</User>
There may be other pieces of information that make sense
only in either the request or response representations. A valid user can update
this representation with a new e-mail address or a different name and PUT it
back to the same URI to perform an update.
A user’s public profile will provide yet another
representation because we probably don’t want to share one user’s e-mail
address with another. Here’s what we’ll use for the user profile resource:
<UserProfile>
<Name>Aaron Skonnard</Name>
<Bookmarks><!-- link to this user’s public bookmarks --></Bookmarks>
</UserProfile>
Now let’s turn our attention to bookmark resources. For our
example, a bookmark is a pretty simple data set. When a user creates a
bookmark, it must provide a title, a URL, some optional tags, and a
public/private flag. We’ll support the following representation for creating
new bookmarks:
<Bookmark>
<Public>true</Public>
<Tags>REST,WCF</Tags>
<Title>Aaron’s Blog</Title>
<Url></Url>
</Bookmark>
And we’ll use a slightly enhanced representation when
returning bookmark resources that includes an Id, additional information about
the user who created it, and a last-modified time:
<Bookmark>
<Id><!-- link to this bookmark resource --></Id>
<LastModified>2008-03-12T00:00:00</LastModified>
<Public>true</Public>
<Tags>REST,WCF</Tags>
<Title>Aaron's Blog</Title>
<Url><!-- the bookmarked URL --></Url>
<User>skonnard</User>
<UserProfile><!-- link to the user’s public profile --></UserProfile>
</Bookmark>
A list of bookmarks will simply be represented by a
<Bookmarks> element, containing a list of child <Bookmark> elements
as shown here:
<Bookmarks>
<Bookmark>
<Id></Id>
<LastModified>2008-03-12T00:00:00</LastModified>
<Public>true</Public>
<Tags>REST,WCF</Tags>
<Title>Aaron's Blog</Title>
<Url></Url>
<User>skonnard</User>
<UserProfile>
</UserProfile>
</Bookmark>
<Bookmark>...</Bookmark>
</Bookmarks>
These representations make it possible to navigate between
different types of resources, and they are simple to consume in any programming
framework that includes a simple XML API.
We can expose our resources in several representations to
accommodate different client scenarios. For example, some clients (like Web
browsers) will have an easier time dealing with JSON representations than XML.
But if we decide to support multiple formats, we’ll need the client to specify
which format it wants somehow. There are a few ways to handle this, either by
using HTTP’s content negotiation headers (e.g., Accept) or by encoding the
desired resource format into the URI.
Both approaches are completely RESTful, but I prefer the
latter because it allows the URI to contain all of the necessary information.
We’ll assume XML is the default representation for our URIs, and we’ll extend
them to support JSON by adding “?format=json” to the end of each URI template:
?tag={tag}&format=json
{username}?tag={tag}&format=json
users/{username}?format=json
users/{username}/profile?format=json
...
This is an example of what the new user account resource
would look like in JSON:
{User:{Email:'[email protected]',
Name:'Aaron Skonnard'}}
This is just another representation of the same resource.
Again, the reason for supporting multiple formats is to make things easier for
certain types of clients. It wouldn’t be hard to also support form-encoding to
simplify things for Web browser forms or other text-based formats (e.g., CSV)
in order to accommodate even more client application scenarios (e.g., Microsoft
Excel).
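If you want to see what that JSON mapping looks like outside of WCF's plumbing, the DataContractJsonSerializer that ships with .NET 3.5 can produce it directly. This is just a sketch with a trivial User class (similar in spirit to the one defined later in the paper), not the service's actual code:

using System;
using System.IO;
using System.Runtime.Serialization.Json;
using System.Text;

public class User
{
    public string Name { get; set; }
    public string Email { get; set; }
}

class JsonDemo
{
    static void Main()
    {
        var serializer = new DataContractJsonSerializer(typeof(User));
        var user = new User { Name = "Aaron Skonnard", Email = "[email protected]" };

        using (var stream = new MemoryStream())
        {
            // Writes the object as JSON text, e.g. {"Email":"...","Name":"Aaron Skonnard"}
            serializer.WriteObject(stream, user);
            Console.WriteLine(Encoding.UTF8.GetString(stream.ToArray()));
        }
    }
}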
The problem with using a custom XML vocabulary is you’ll
have to provide metadata (like an XML Schema definition) and documentation for
clients consuming your resources. If you can use a standard format, on the
other hand, you will immediately have an audience that knows how to consume
your service. There are two standard formats that are quite popular today:
XHTML and Atom/AtomPub.
One of the benefits of using XHTML as the representation
format is that it can be rendered in a Web browser for human viewing during
development. With XHTML, you can represent lists of items and you can use forms
to encode additional metadata that describes how to interact with other linked
resources.
AtomPub is another popular choice because it was
specifically designed to represent and manipulate collections of resources.
There are many feed-aware clients (including modern Web browsers) that know how
to render Atom feeds, providing a human view that can prove helpful during
development.
The main downside to using either of these formats is that
they are somewhat constrained in terms of the data set the XML vocabulary was
designed to model (e.g., there’s not a natural way to map a purchase order into
an Atom entry). Both of these formats do provide extensibility elements for
injecting custom XML fragments, also commonly referred to as micro-formats.
However, introducing micro-formats begins to counteract the benefit of using a
standard representation format.
For our bookmark service, in addition to the XML format
we’ve defined, we’ll also expose the public bookmarks as an Atom feed. If we’re
going to support Atom feeds, we should probably also support RSS feeds since
they’re very similar formats and it might open the door for more feed readers.
Hence, we’ll support both by adding “feed” to the URI along with a “format”
parameter to indicate “atom” or “rss”:
feed?tag={tag}&format={format}
Now let’s look at how we can represent bookmarks using Atom.
Atom defines a standard format for representing feeds, which are essentially
just lists of time-stamped entries that can really represent anything. For
example, we can represent a list of bookmarks using this type of Atom feed:
<feed xmlns="http://www.w3.org/2005/Atom">
<title>Public Bookmarks</title>
<updated>2008-09-13T18:30:02Z</updated>
<id></id>
<entry>
<author>
<name>Aaron Skonnard</name>
</author>
<title>Aaron’s Blog</title>
<link
href=""/>
<id></id>
<updated>2008-09-13T18:30:02Z</updated>
<category term="REST,WCF"/>
</entry>
<entry>...</entry>
</feed>
We’ve simply defined a mapping between our bookmark fields
and the elements that make up an Atom <entry>. Once you have your data in
Atom/RSS format, it can be easily consumed by any Atom/RSS compatible client.
Figure 6 shows an Atom bookmark feed rendered in Internet Explorer and notice
how we’re able to search, sort and filter this feed within the browser using
the right-hand control pane.
Figure 6: An Atom
feed rendered in Internet Explorer
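Looking ahead to the implementation, the .NET 3.5 syndication API (System.ServiceModel.Syndication) can generate this kind of Atom feed for you. The following is only a rough sketch of how bookmark data might be mapped onto SyndicationFeed and SyndicationItem; the addresses are placeholders, not the service's real URIs:

using System;
using System.ServiceModel.Syndication;
using System.Xml;

class AtomFeedSketch
{
    static void Main()
    {
        var feed = new SyndicationFeed(
            "Public Bookmarks",                                      // feed title
            "All public bookmarks",                                  // feed description
            new Uri("http://example.com/bookmarkservice/feed"));     // placeholder feed link

        var item = new SyndicationItem(
            "Aaron’s Blog",                                          // entry title
            string.Empty,                                            // entry content
            new Uri("http://example.com/aaron"));                    // placeholder bookmarked link
        item.Authors.Add(new SyndicationPerson(null, "Aaron Skonnard", null));
        item.Categories.Add(new SyndicationCategory("REST,WCF"));
        item.LastUpdatedTime = DateTimeOffset.UtcNow;

        feed.Items = new[] { item };

        // Write the feed with the Atom 1.0 formatter; an Rss20FeedFormatter
        // would give you the RSS flavor instead.
        using (var writer = XmlWriter.Create(Console.Out))
        {
            new Atom10FeedFormatter(feed).WriteTo(writer);
        }
    }
}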
AtomPub defines a standard way to represent a service
document, which is a high-level description of the collections supported by a
service. AtomPub also defines a standard API for manipulating entries using the
standard Atom feed format along with GET, POST, PUT, and DELETE requests. The
following shows an example AtomPub service document describing our service’s
bookmarks collections:
<service xmlns="http://www.w3.org/2007/app"
         xmlns:atom="http://www.w3.org/2005/Atom">
<workspace>
<atom:title>Contoso Bookmark Service</atom:title>
<collection
href="" >
<atom:title>Public
Bookmarks</atom:title>
</collection>
<collection
href="" >
<atom:title>Aaron Skonnard's Public
Bookmarks</atom:title>
</collection>
<collection>...</collection>
</workspace>
</service>
Figuring out the right representation to use for your
RESTful service is primarily about figuring out what types of clients you want
to accommodate and what scenarios you want to facilitate.
The things you have to discover when using a RESTful service
include the URI templates, the HTTP methods supported by each resource, and the
representations supported by each resource. Today, most developers discover
these things through either human-readable documentation or by actually
interacting with the service. For example, once you know the URI templates for
the service resources, you can browse to the various retrievable resources to
inspect their representations, and you can use HEAD and OPTIONS requests to
figure out what methods and headers a resource supports.
A HEAD request works just like a GET request but it returns
only the response headers and no entity body, allowing the client to determine
what the service is capable of returning. An OPTIONS request allows you to query
a resource to figure out what HTTP methods it supports. The service can return
the comma-separated list of supported HTTP methods to the client in the “Allow”
header. The following example shows how to issue an OPTIONS request for a
user’s bookmark collection:
OPTIONS /users/skonnard/bookmarks HTTP/1.1
Assuming the client is authenticated as “skonnard”, the
service will return the following response indicating that the resource
supports GET and POST requests:
HTTP/1.1 200 OK
Allow: GET, POST
However, if someone other than “skonnard” issues the same
OPTIONS request, the service will return the following response indicating that
only GET requests are supported:
HTTP/1.1 200 OK
Allow: GET
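From the client side, issuing these discovery requests is straightforward with HttpWebRequest. Here's a small sketch; the address is hypothetical and the Authorization header is omitted for brevity:

using System;
using System.Net;

class DiscoveryClient
{
    static void Main()
    {
        // Hypothetical bookmark-collection URI.
        string uri = "http://example.com/bookmarkservice/users/skonnard/bookmarks";

        // OPTIONS: ask the resource which HTTP methods it supports.
        var options = (HttpWebRequest)WebRequest.Create(uri);
        options.Method = "OPTIONS";
        using (var response = (HttpWebResponse)options.GetResponse())
        {
            Console.WriteLine("Allow: " + response.Headers["Allow"]);
        }

        // HEAD: same as GET, but the response carries headers only, no body.
        var head = (HttpWebRequest)WebRequest.Create(uri);
        head.Method = "HEAD";
        using (var response = (HttpWebResponse)head.GetResponse())
        {
            Console.WriteLine("Content-Type: " + response.ContentType);
        }
    }
}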
HTTP also comes with a sophisticated built-in content
negotiation mechanism. Clients can provide the “User-Agent” and the various
“Accept” headers to indicate what media types (or representations) are
acceptable for the response. The server can then pick a representation best
suited for that particular client. When multiple acceptable representations
might exist, the server can return a 300 (“Multiple Choices”) response
including the URIs for each supported resource representation. The combination
of HEAD, OPTIONS, and the content negotiation headers provides a foundation for
runtime discovery.
If you want to make it possible for clients to discover the
exact representation formats, you can provide clients with schema definitions
that can be used to generate the client-side processing logic. Or you can
choose to use a standard format like XHTML or Atom that removes the need for
this altogether.
In addition to all of this, there are a few service description
languages that can be used to fully describe RESTful services. WSDL 2.0 is one
such language and the Web Application Description Language (WADL) is another,
but not many toolkits provide support for either today. Although having
WSDL-based code-generation would be a huge win for some consumers, not having
it hasn’t been a huge show-stopper thus far. After all, there are many
large-scale RESTful services around getting by just fine without it.
Nevertheless, my hope is that we’ll see additional innovation in this area in
the years ahead.
During URI design, beware of letting RPC tendencies slip
into your URI templates. It’s often tempting to include a verb in the URI
(e.g., /users?method=createUserAccount or even /users/create). Although this
may seem obvious at first, there are several popular services on the Web today
that break this rule. A service designed like this isn’t fully RESTful – it’s
more of a hybrid REST/RPC service.
This type of design misuses HTTP’s uniform interface and
violates the semantics for GET, which can cause big problems down the road when
dealing with retries and caching. Like we learned earlier, GET is intended to
be a safe operation (no side effects) but these operations do cause side
effects. This can lead to problems since other components will make incorrect
assumptions about these resources.
The primary reason some services do things this way is
because many of today’s Web browsers and firewalls only allow GET and POST
requests. Due to this limitation, many sites overload GET or POST to fake PUT
and DELETE requests. This can be accomplished by specifying the real HTTP
method in a custom HTTP header. A common HTTP header used for this purpose is
the X-HTTP-Method-Override header, which you can use to overload POST with a
DELETE operation as shown in this example:
POST /bookmarkservice/skonnard/bookmarks/123 HTTP/1.1
X-HTTP-Method-Override: DELETE
Using this technique is widely considered an acceptable
practice for working around the limitations of today’s Web infrastructure
because it allows you to keep your URI design free of RPCisms.
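For example, a browser-constrained client could tunnel the DELETE shown above through POST roughly like this (the host name is a placeholder, and whether the override header is honored is entirely up to the service):

using System;
using System.Net;

class MethodOverrideClient
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create(
            "http://example.com/bookmarkservice/skonnard/bookmarks/123");

        // The real intent is DELETE, but we send POST for compatibility and
        // declare the intended method in the override header.
        request.Method = "POST";
        request.Headers["X-HTTP-Method-Override"] = "DELETE";
        request.ContentLength = 0;

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine((int)response.StatusCode);
        }
    }
}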
Designing RESTful services properly is probably more
challenging than actually implementing them once you know exactly what you’re
trying to accomplish. However, the key to a successful and smooth
implementation is choosing a programming framework designed to simplify working
with HTTP.
Today, Microsoft offers exceptional support for HTTP across
a variety of programming frameworks. First, .NET comes with the System.Web and
System.Net assemblies, which contain the foundational classes for building HTTP
clients and servers. ASP.NET builds on this foundation and provides a
higher-level HTTP framework that simplifies the process of building Web
applications for human consumption.
Although ASP.NET could be used to build RESTful services,
the framework wasn’t designed with that goal in mind. Instead, Microsoft’s
service-oriented investments have gone into WCF, the unified programming model
for connecting applications on the .NET platform. Although WCF began as a SOAP
framework, it has quickly evolved into a first-class framework for both SOAP
and REST-based services. Now, WCF is the default choice for building services
regardless of which approach you wish to use.
Using WCF 3.5 to build RESTful services offers communication
and hosting flexibility, a simple model for mapping URI templates to methods,
and simplified support for numerous representations including XML, JSON, RSS
and Atom. In addition to this core support, Microsoft is now shipping the WCF
REST Starter Kit, which provides additional APIs, extension methods, and
various Visual Studio project templates to simplify REST development. The WCF
REST Starter Kit is expected to evolve through CodePlex, and some of its
features may make their way into future versions of the .NET framework.
The ADO.NET team was able to leverage the WCF REST support
when they built ADO.NET Data Services, a higher-level REST framework that
almost fully automates the process of exposing RESTful services around
underlying data/object entities using AtomPub. ADO.NET Data Services is a great
example of what’s possible when using WCF as your underlying REST communication
framework.
Throughout this section, we’ll take a closer look at the
built-in REST support found in WCF 3.5, the REST Starter Kit, and ADO.NET Data
Services. But first, let’s look at how you’d have to do it without WCF.
If you were going to implement our RESTful bookmark service
using an IHttpHandler-derived class, there are several things that you’d have
to manage yourself. IHttpHandler provides only a single entry point –
ProcessRequest – for processing all incoming HTTP requests. In order to
implement the RESTful interface we’ve designed, your implementation of
ProcessRequest will have to perform the following tasks:
Check out Figure 7 to get a feel for what this code might
look like. I’ve provided a few methods that abstract away a lot of details like
Matches and ExtractVariables but there is still a lot of tedious work going on
around the actual service logic (e.g., dealing with user accounts and
bookmarks).
Figure 7: A sample
IHttpHandler implementation
public class BookmarkService : IHttpHandler
{
public bool IsReusable { get { return true; } }
public void ProcessRequest(HttpContext context)
{
Uri uri = context.Request.Url;
// compare URI to resource templates and find match
if (Matches(uri, "{username}?tag={tag}"))
{
// extract variables from URI
Dictionary<string, string> vars =
ExtractVariables(uri, "{username}?tag={tag}");
string username = vars["username"];
string tag = vars["tag"];
// figure out which HTTP method is being used
switch (context.Request.HttpMethod)
{
// dispatch to internal methods based on URI and HTTP method
// and write the correct response status & entity body
case "GET":
List<Bookmark> bookmarks = GetBookmarks(username, tag);
WriteBookmarksToResponse(context.Response, bookmarks);
SetResponseStatus(context.Response, "200", "OK");
break;
case "POST":
Bookmark newBookmark = ReadBookmarkFromRequest(context.Request);
string id = CreateNewBookmark(username, newBookmark);
WriteLocationHeader(id);
SetResponseStatus(context.Response, "201", "Created");
break;
default:
SetResponseStatus(context.Response, "405", "Method Not Allowed");
}
}
if (Matches(uri, "users/{username}/bookmarks/{id}"))
{
// dispatch to internal methods based on URI and HTTP method
// and write the correct response status & entity body
...
}
... // match addition URI templates here
}
}
WCF 3.5 provides a programming model that shields you from
the tedious aspects of this code – it shields you from most HTTP protocol
details, URIs, and the resource representations transmitted on the wire. It accomplishes this by providing a built-in
URI template programming model that makes it easy to match URIs and extract
variables. It also provides a new set of attributes for mapping HTTP method +
URI template combinations to method signatures, and some serialization
improvements for supporting different types of resource representations. And,
of course, it provides the underlying runtime components that know how to bring
these new RESTful programming constructs to life.
WCF 3.5 shipped with a new assembly called
System.ServiceModel.Web.dll, which contains a variety of new classes that
provide an easy-to-use “Web-based” programming framework for building RESTful
services. To begin using this new “Web” programming model, simply add a
reference to System.ServiceModel.Web.dll, a using statement to
System.ServiceModel.Web, and you’re ready to go.
The first thing to realize is that the WCF “Web” model is
still based on mapping a service interface to a set of methods. The only
difference for a RESTful service is what the interface looks like. Instead of
exposing a set of RPC-based operation names to the world, we’re going to define
the service interface in terms of HTTP’s uniform interface and a set of URI
templates. We’ll accomplish this by first defining a set of logical operations
for performing the resource logic, and then we can apply the new “Web”
attributes to define the mapping between the HTTP methods, our URI design, and
the corresponding methods.
WCF supports a variety of different mechanisms for working
with the resource representations that will be transmitted in the HTTP request/response
messages. You can always work directly with the raw request/response messages,
if you want, by defining your method signatures in terms of
System.ServiceModel.Channels.Message. If you take this route, you’re free to
use your favorite XML or JSON API to process the messages; however, most
developers prefer using a serialization engine that automatically moves between
messages and .NET objects that are easier to consume.
WCF supports several different serializers out-of-the-box
including the DataContractSerializer (the default), the
DataContractJsonSerializer, and even the XmlSerializer from ASP.NET Web
services. These serializers all perform essentially the same task, but they
each do it a little bit differently, and each comes with its pros and cons. For
example, the DataContractSerializer is very efficient and streamlined but
supports only a small subset of XML Schema. XmlSerializer, on the other hand,
allows you to build more advanced structures not supported by
DataContractSerializer. WCF allows you to choose the serializer you want to use
on a per-method basis when defining your service contracts.
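For instance, an individual operation can opt into XmlSerializer simply by decorating the method. This little illustration is not part of the bookmark service; the ILegacyFeed contract and LegacyDocument type are hypothetical:

using System.ServiceModel;
using System.ServiceModel.Web;
using System.Xml.Serialization;

// Hypothetical document type shaped for XmlSerializer (uses an XML attribute).
public class LegacyDocument
{
    [XmlAttribute]
    public string Id;
    public string Body;
}

[ServiceContract]
public interface ILegacyFeed
{
    // [XmlSerializerFormat] switches this operation to XmlSerializer instead of
    // the default DataContractSerializer, which doesn't support XML attributes.
    [OperationContract]
    [XmlSerializerFormat]
    [WebGet(UriTemplate = "legacy/{id}")]
    LegacyDocument GetLegacyDocument(string id);
}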
For our bookmarking service, the DataContractSerializer
should be sufficient for our needs. So we’ll define a few classes that will
work with DataContractSerializer to represent our resources (see Figure 8).
Figure 8: User
Account and Bookmark Resource Classes
public class User
{
public Uri Id { get; set; }
public string Username { get; set; }
public string Name { get; set; }
public string Email { get; set; }
public Uri Bookmarks { get; set; }
}
public class UserProfile
{
public Uri Id { get; set; }
public string Name { get; set; }
public Uri Bookmarks { get; set; }
}
public class Bookmark
{
public Uri Id { get; set; }
public string Title { get; set; }
public Uri Url { get; set; }
public string User { get; set; }
public Uri UserLink { get; set; }
public string Tags { get; set; }
public bool Public { get; set; }
public DateTime LastModified { get; set; }
}
[CollectionDataContract]
public class Bookmarks : List<Bookmark>
{
public Bookmarks() { }
public Bookmarks(List<Bookmark> bookmarks) : base(bookmarks) {}
}
As of .NET Framework 3.5 SP1, DataContractSerializer now supports
serializing plain-old CLR objects (POCO, for short) without any serializer
attributes as shown in the example above. Before SP1, we would have had to
annotate the User and Bookmark classes with [DataContract] and [DataMember];
now you don’t have to. If you want more control over naming, default values,
and ordering, you can always add these attributes back to the class definition
but for this example, we’ll just accept the default mapping. In this case I
still had to annotate the collection class with [CollectionDataContract] in
order to make the name of the root element <Bookmarks> instead of the
default <ArrayOfBookmark>.
DataContractSerializer treats all of the fields found on
User and Bookmark as optional by default, so these classes can handle the
input/output representations for each resource. We’ve also defined a couple of
custom collection types for modeling lists of users and bookmarks. These
classes all conform to the resource representations we defined earlier in the
previous section.
The next step is to model the logical HTTP methods we need
to support (as outlined in Figure 4 and Figure 5) with method signatures
that use the resource classes we just defined. Figure 9 shows the definition of
a BookmarkService class that contains a method for each resource operation.
Figure 9: Modeling
the logical HTTP methods
public class BookmarkService
{
Bookmarks GetPublicBookmarks(string tag) {...}
Bookmarks GetUserPublicBookmarks(string username, string tag) {...}
Bookmarks GetUserBookmarks(string username) {...}
UserProfile GetUserProfile(string username) {...}
User GetUser(string username) {...}
void PutUser(string username, User user) {...}
void DeleteUser(string username) {...}
Bookmark GetBookmark(string username, string id) {...}
void PostBookmark(string username, Bookmark newValue) {...}
void PutBookmark(string username, string id, Bookmark bm) {...}
void DeleteBookmark(string username, string id) {...}
}
You’ll notice that most of these methods operate on User and
Bookmark objects or their respective collection classes. Some of them also
require extra parameters like username, id, and tag, which we’ll harvest from
the incoming URI according to the URI template variables.
The next thing we need to figure out is how to model the
various URI templates we defined in Figure 4 and Figure 5 so we can use them in
conjunction with the methods we just defined. The .NET Framework comes with the
System.Uri class for modeling URIs, but it doesn’t contain the variable or
matching logic. Hence, WCF provides a few additional classes for specifically
dealing with URI templates, variables, and matching. These classes include
UriTemplate, UriTemplateMatch, and UriTemplateTable.
When you construct a UriTemplate object, you supply a URI
template string like the ones we used in Figure 4 and Figure 5. These templates
may contain variables within curly braces (“{username}?tag={tag}”) and even an
asterisk (“*”), which acts as a wildcard, when you want to match anything from
that point on in the path. You can also specify default variable values within
the template, making it possible to omit that part of the path. For example, a
template of “{username}/{tag=all}” means that the variable “tag” will have the
default value of “all” when that path segment is omitted.
Once you have a UriTemplate object, you can call the Match
method, passing in a candidate Uri to see if it matches the template. If it
does, it returns a UriTemplateMatch object containing the bound variables;
otherwise, it simply returns null. You can also go the other direction – you
can call BindByPosition or BindByName to generate a new URI from the template,
supplying the required variable values. The following example illustrates how
to use Match and BindByPosition to move in both directions:
Uri baseUri = new Uri("http://example.com/bookmarkservice"); // placeholder base address
UriTemplate uriTemplate = new UriTemplate(
"users/{username}/bookmarks/{id}");
// generate a new bookmark URI
Uri newBookmarkUri = uriTemplate.BindByPosition(baseUri, "skonnard", "123");
// match an existing bookmark URI
UriTemplateMatch match = uriTemplate.Match(baseUri, newBookmarkUri);
System.Diagnostics.Debug.Assert(match != null);
Console.WriteLine(match.BoundVariables["username"]);
Console.WriteLine(match.BoundVariables["id"]);
The UriTemplateTable class provides a mechanism for managing
a collection of UriTemplate objects. This makes it easy to call Match on the
table to find all templates that match the supplied Uri. Alternatively, you can
call MatchSingle to ensure it matches only a single UriTemplate in the table.
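Here's a rough sketch of UriTemplateTable in action, using a couple of the bookmark templates; the base address is a placeholder and the string values associated with each template are just stand-ins for whatever dispatch data you'd normally store:

using System;
using System.Collections.Generic;

class UriTemplateTableDemo
{
    static void Main()
    {
        Uri baseUri = new Uri("http://example.com/bookmarkservice"); // placeholder base

        var table = new UriTemplateTable(baseUri);
        table.KeyValuePairs.Add(new KeyValuePair<UriTemplate, object>(
            new UriTemplate("users/{username}"), "user"));
        table.KeyValuePairs.Add(new KeyValuePair<UriTemplate, object>(
            new UriTemplate("users/{username}/bookmarks/{id}"), "bookmark"));
        table.MakeReadOnly(false);

        // Find the single template that matches this candidate URI.
        UriTemplateMatch match = table.MatchSingle(
            new Uri("http://example.com/bookmarkservice/users/skonnard/bookmarks/123"));

        Console.WriteLine(match.Data);                       // "bookmark"
        Console.WriteLine(match.BoundVariables["username"]); // skonnard
        Console.WriteLine(match.BoundVariables["id"]);       // 123
    }
}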
The WCF “Web” programming model makes it easy to map
UriTemplate objects to your method signatures through the new [WebGet] and
[WebInvoke] attributes. Once you have these attributes enabled, WCF will
perform its internal method dispatching based on UriTemplate matching logic.
Now that we have an understanding of UriTemplate, we can use
a few different WCF attributes to define the HTTP interface that our service
will support. First, it’s important to know that all WCF service contracts must
be annotated with [ServiceContract] and [OperationContract] regardless of
whether you’re planning to use SOAP or REST. These attributes control what
operations are ultimately exposed through the service. So we’ll first need to
add these attributes to our class.
Once we have those attributes in place, we can add the new
[WebGet] and [WebInvoke] attributes to our method signatures to define the
specific mapping to the HTTP uniform interface. The reason they provided two
attributes is because GET requests are fundamentally different from all the
others, in that they are safe, idempotent, and highly cacheable. If you want to
map an HTTP GET request to one of your service methods, you use [WebGet], and
for all other HTTP methods, you use [WebInvoke].
The main thing you specify when using [WebGet] is the
UriTemplate that the method is designed to handle. You can map the various
UriTemplate variables to the method parameters by simply using the same name in
both places. Figure 10 shows how we can map the various GET requests from our
URI design (in Figure 4 and Figure 5) to our new BookmarkService class.
Figure 10: Applying
[WebGet] to BookmarkService
[ServiceContract]
public partial class BookmarkService
{
[WebGet(UriTemplate = "?tag={tag}")]
[OperationContract]
Bookmarks GetPublicBookmarks(string tag) {...}
[WebGet(UriTemplate = "{username}?tag={tag}")]
[OperationContract]
Bookmarks GetUserPublicBookmarks(string username, string tag) {...}
[WebGet(UriTemplate = "users/{username}/bookmarks?tag={tag}")]
[OperationContract]
Bookmarks GetUserBookmarks(string username, string tag) {...}
[WebGet(UriTemplate = "users/{username}/profile")]
[OperationContract]
UserProfile GetUserProfile(string username) {...}
[WebGet(UriTemplate = "users/{username}")]
[OperationContract]
User GetUser(string username) {...}
[WebGet(UriTemplate = "users/{username}/bookmarks/{bookmark_id}")]
[OperationContract]
Bookmark GetBookmark(string username, string bookmark_id) {...}
...
}
We’ll handle the remaining HTTP methods with the [WebInvoke]
attribute. It works a lot like [WebGet], but with a couple of key differences.
Since it can be used with any HTTP method, you must specify which HTTP method
it will handle via the Method property. And for PUT and POST methods, where the
client will be supplying an entity body, you’ll need to add one more parameter
to the method signature capable of holding the deserialized entity body. It
should come after all of the UriTemplate parameters. Figure 11 shows how to
complete our HTTP mapping for the bookmark service using [WebInvoke].
Figure 11: Applying
[WebInvoke] to BookmarkService
[ServiceContract]
public partial class BookmarkService
{
[WebInvoke(Method = "PUT", UriTemplate = "users/{username}")]
[OperationContract]
void PutUserAccount(string username, User user) {...}
[WebInvoke(Method = "DELETE", UriTemplate = "users/{username}")]
[OperationContract]
void DeleteUserAccount(string username) {...}
[WebInvoke(Method = "POST", UriTemplate = "users/{username}/bookmarks")]
[OperationContract]
void PostBookmark(string username, Bookmark newValue) {...}
[WebInvoke(Method = "PUT", UriTemplate = "users/{username}/bookmarks/{id")]
[OperationContract]
void PutBookmark(string username, string id, Bookmark bm) {...}
[WebInvoke(Method = "DELETE", UriTemplate = "users/{username}/bookmarks/{id}")]
[OperationContract]
void DeleteBookmark(string username, string id) {...}
...
}
Notice how each of these methods takes either a User or
Bookmark object as the final parameter in the method signature – WCF will
deserialize the request body and pass it to us through this parameter.
If you compare this implementation to the one I showed back
in Figure 7, you should appreciate how much simpler WCF makes things on the
HTTP front. Now, when we implement the methods, we can focus primarily on the
business logic around managing the user account and bookmark resources. The WCF
programming model has effectively shielded us from most HTTP programming
details.
However, there are still some aspects of the HTTP
programming model that we do need to manage from within our method
implementations. For example, in most of our methods, we’ll need to inject
response headers, set the response status code and description, and generate
outbound links. So how do we get access to the underlying HTTP methods within
our WCF method implementations?
This is where the WebOperationContext class comes into play.
WebOperationContext provides properties for accessing the IncomingRequest and
OutgoingResponse messages, which you can use to inspect the HTTP request or to
manipulate the HTTP response before it’s sent. For example, you can simply call
WebOperationContext.Current.OutgoingResponse.SetStatusAsNotFound() to return a
404 (“Not Found”) response to the client. WebOperationContext is your primary
interface to HTTP.
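To make that concrete, here is a small sketch (not from the article’s listings) of the kind of response manipulation you might perform inside an operation body; the URI passed to SetStatusAsCreated is a made-up placeholder.
// typical response manipulation inside an operation body
OutgoingWebResponseContext response = WebOperationContext.Current.OutgoingResponse;
// return 404 when a resource can't be found
response.SetStatusAsNotFound("No such bookmark");
// or signal creation and point the client at the new resource (placeholder URI)
response.SetStatusAsCreated(new Uri("http://example.org/users/skonnard/bookmarks/1"));
// status code, description, and headers can also be set directly
response.StatusCode = HttpStatusCode.Unauthorized;
response.StatusDescription = "Authentication required";
response.Headers[HttpResponseHeader.CacheControl] = "no-cache";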
You simply need to implement the logical HTTP operations for
each of the resources exposed by your service. To help you get a feel for the
type of code you’ll be writing within each method body, I’ve provided a few
complete method implementations for you to inspect in Figure 12. Notice how they
set different HTTP response codes and headers, and create outbound links through
helper methods. The GetUserLink helper generates an outbound link based on the
same UriTemplates used by the service.
Figure 12: Sample
method implementations for user account resources
[ServiceContract]
public partial class BookmarkService
{
// in-memory resource collections
Dictionary<string, User> users = new Dictionary<string, User>();
Dictionary<string, Bookmark> bookmarks = new Dictionary<string, Bookmark>();
[WebGet(UriTemplate = "users/{username}")]
[OperationContract]
User GetUserAccount(string username)
{
if (!IsUserAuthorized(username))
{
WebOperationContext.Current.OutgoingResponse.StatusCode =
HttpStatusCode.Unauthorized;
return null;
}
User user = FindUser(username);
if (user == null)
{
WebOperationContext.Current.OutgoingResponse.SetStatusAsNotFound();
return null;
}
return user;
}
[WebInvoke(Method = "PUT", UriTemplate = "users/{username}")]
[OperationContract]
void PutUserAccount(string username, User newValue)
{
User user = FindUser(username);
if (user == null)
{
// set status to created and include new URI in Location header
WebOperationContext.Current.OutgoingResponse.SetStatusAsCreated(
GetUserLink(username));
... // process new user backend logic
}
else if (!IsUserAuthorized(username))
{
WebOperationContext.Current.OutgoingResponse.StatusCode =
HttpStatusCode.Unauthorized;
return;
}
// create or update new user, but don't let user set Id/Bookmarks
newValue.Id = GetUserLink(username);
newValue.Bookmarks = GetUserBookmarksLink(username);
users[username] = newValue;
}
... // remaining methods omitted
}
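The GetUserLink and GetUserBookmarksLink helpers used in Figure 12 aren’t shown in the article. A minimal sketch of how such a helper might be written, reusing the same UriTemplate string that drives dispatching and the base address of the incoming request, could look like this:
// hypothetical helper: builds an absolute link from the same template used on the attributes
Uri GetUserLink(string username)
{
    Uri baseUri = WebOperationContext.Current.IncomingRequest.UriTemplateMatch.BaseUri;
    UriTemplate template = new UriTemplate("users/{username}");
    return template.BindByPosition(baseUri, username);
}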
WCF provides a great deal of flexibility around hosting
services. Thanks to this flexibility, you can host your RESTful services over
HTTP in any application of your choosing, or you can choose to host them within
IIS/ASP.NET. The latter is probably the best choice if you’re building
large-scale “Web” services.
When hosting your RESTful WCF services, there are two key
components that you need to configure in order to enable the new “Web” behavior
within the runtime. First, you need to expose an endpoint that uses the new
binding for RESTful services – WebHttpBinding. Then, you need to configure the
“Web” endpoint with the WebHttpBehavior. The new binding instructs WCF to not
use SOAP anymore, but rather plain XML messages while the new behavior injects
custom dispatching logic based on the [WebGet] and [WebInvoke] attributes and
their corresponding UriTemplates.
Figure 13 illustrates how to accomplish this in your
application configuration file.
Figure 13:
Configuring WCF "Web" Services
<configuration>
<system.serviceModel>
<services>
<service name="BookmarkService">
<endpoint binding="webHttpBinding" contract="BookmarkService"
behaviorConfiguration="webHttp"/>
</service>
</services>
<behaviors>
<endpointBehaviors>
<behavior name="webHttp">
<webHttp/>
</behavior>
</endpointBehaviors>
</behaviors>
</system.serviceModel>
</configuration>
With this configuration in place, you can simply create a
ServiceHost instance based on BookmarkService and open it to get your RESTful
service up and running:
ServiceHost host = new ServiceHost(typeof(BookmarkService), new Uri(""));
host.Open();
... // keep the host open until you want to shut it down
In this example, we specified the base URI for the service
when constructing the ServiceHost, and that same address will be used for the
endpoint, since we didn’t specify an address on the endpoint itself.
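If you’d rather not use a configuration file, the same wiring can be done imperatively. This is a sketch equivalent to Figure 13 (the empty base address simply mirrors the example above):
// programmatic equivalent of the configuration in Figure 13
ServiceHost host = new ServiceHost(typeof(BookmarkService), new Uri(""));
ServiceEndpoint endpoint = host.AddServiceEndpoint(
    typeof(BookmarkService), new WebHttpBinding(), "");
endpoint.Behaviors.Add(new WebHttpBehavior());
host.Open();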
If the hosting experience I just described feels a little
tedious to you, worry not, WCF has made things even easier through a custom
ServiceHost-derived class called WebServiceHost. When you use WebServiceHost
instead of ServiceHost, it will automatically create a Web endpoint for you
using the base HTTP address and configure the injected endpoint with the
WebHttpBehavior. So this example is equivalent to the prior example, only this
time we don’t need a configuration file at all:
WebServiceHost host = new WebServiceHost(typeof(BookmarkService), new Uri(""));
This little gem also greatly simplifies the process of
hosting WCF “Web” services within IIS/ASP.NET through .svc files. By specifying
the mapping from the .svc file to the service class name, we can take advantage
of the Factory attribute to specify the WebServiceHostFactory as shown here:
<%@ ServiceHost Service="BookmarkService"
    Factory="System.ServiceModel.Activation.WebServiceHostFactory" %>
This custom ServiceHostFactory intercepts the ServiceHost
creation process at run time and generates WebServiceHost instances instead. In
this case, the base URI of the service will simply be the URI of the .svc file
and no further configuration is necessary, unless you need to configure
additional behaviors.
If you’re hosting your WCF “Web” services in IIS/ASP.NET,
it’s also a good idea to enable the ASP.NET compatibility mode within the WCF
runtime. Doing so makes it possible for you to access the HttpContext object
managed by ASP.NET from within your WCF methods. You enable ASP.NET
compatibility by adding the following global flag to your Web.config file:
<configuration>
<system.serviceModel>
<serviceHostingEnvironment
aspNetCompatibilityEnabled="true"/>
</system.serviceModel>
</configuration>
Then, you’ll also want to declare on your service class
whether you allow, require, or don’t allow ASP.NET compatibility mode as
illustrated here:
[AspNetCompatibilityRequirements(
    RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
[ServiceContract]
public partial class BookmarkService
{
    ...
You will most likely end up hosting your WCF “Web” services
in IIS/ASP.NET using .svc files like I’ve just shown you. One issue with using
this technique is what the final URIs will look like. If the .svc file is
located at, the final service URIs end up
like this:
The .svc portion of the URI is really a .NET implementation
detail and not something that you probably want to have in the URI. The easiest
way to work around this issue today is to leverage the Microsoft URL Rewrite
Module for IIS 7.0, which makes it possible to remove the “.svc” from the path
segment. For a complete example on how to do this, see Rob Bagby’s blog post on
Controlling the URI. Once you’ve applied the URL Rewrite Module to remove
“.svc”, your URIs will look like this:
Once you have your WCF service implemented, configured, and
hosted, you can begin testing your RESTful service by simply browsing to it
(see
Figure 14). You can also use any HTTP client utility to test the non-GET operations for creating,
updating, and deleting resources at this point.
Figure 14: Browsing
to your WCF RESTful service
Now that we’ve seen how to develop, host, and configure WCF
“Web” services, we’re ready to explore supporting additional representation
formats besides our custom XML vocabulary. We decided during our design phase
that we wanted to support JSON message formats to simplify things for
Ajax-based client applications. WCF makes this trivial to accomplish.
Both [WebGet] and [WebInvoke] expose two properties called
RequestFormat and ResponseFormat. These properties are of type WebMessageFormat,
an enum that contains two values, Xml and Json. The default value for these
properties is WebMessageFormat.Xml, which is why our service currently returns
XML. If you want a specific operation to support JSON, you simply need to set
RequestFormat or ResponseFormat to WebMessageFormat.Json.
Since we want to support both message formats on our
service, we’ll need to add another complete set of method signatures and new
UriTemplates that are JSON-specific.
Both method versions (XML and JSON) can call the same internal method to
perform the operation logic, so we won’t be duplicating any code. WCF will
simply perform the message-to-object translation differently based on the
target URI.
Figure 15 shows how to begin defining the JSON-specific
interface to our bookmarking service. Notice how we’ve added “?format=json” to
the end of the UriTemplate strings we’ve been using for the equivalent XML
operations (whose attributes have remained unchanged, by the way). We’ve also
added the RequestFormat/ResponseFormat properties, specifying
WebMessageFormat.Json, where applicable.
Figure 15: Defining a
JSON-specific interface
[ServiceContract]
public partial class BookmarkService
{
...
[WebInvoke(Method = "POST", RequestFormat=WebMessageFormat.Json,
UriTemplate = "users/{username}/bookmarks?format=json")]
[OperationContract]
void PostBookmarkAsJson(string username, Bookmark newValue)
{
HandlePostBookmark(username, newValue);
}
[WebGet(ResponseFormat= WebMessageFormat.Json,
UriTemplate = "users/{username}/bookmarks/{id}?format=json")]
[OperationContract]
Bookmark GetBookmarkAsJson(string username, string id)
{
return HandleGetBookmark(username, id);
}
...
}
Notice how these methods are also dispatching to internal
methods called HandlePostBookmark and HandleGetBookmark, which are format
independent. Both the XML and JSON methods can call these same internal methods
to perform the business logic, reducing code duplication.
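The Handle* helpers themselves aren’t listed in the article. As a rough sketch, HandleGetBookmark might simply look up the resource in the in-memory dictionary from Figure 12 and set the 404 status when it’s missing:
// hypothetical format-independent helper shared by the XML and JSON operations
Bookmark HandleGetBookmark(string username, string id)
{
    Bookmark bookmark;
    if (!bookmarks.TryGetValue(id, out bookmark) || bookmark.User != username)
    {
        WebOperationContext.Current.OutgoingResponse.SetStatusAsNotFound();
        return null;
    }
    return bookmark;
}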
If you navigate to one of the JSON URIs in a browser, you’ll
be prompted to save the file to disk. Go ahead and save the file as a .txt file
and you can inspect the JSON response. With JSON enabled, Ajax clients can more
easily communicate with our RESTful service and consume the data we send back.
WCF even goes one step further on the Ajax front by
providing a special behavior that can automatically expose a WCF service as an
Ajax-friendly JSON service. This is made possible by the WebScriptEnablingBehavior,
which is wired up when you use the new WebScriptServiceHostFactory to host your
services. This behavior not only exposes your WCF service as a JSON endpoint, it
also automatically generates JavaScript proxies for Web browsers upon
request. If you browse to your service’s base address and add “/js” to the end,
you’ll get the auto-generated JavaScript proxy. This makes it really simple for
JavaScript code running in a browser to call your service.
This technique is only useful for simple Ajax-based services
that are primarily serving up data to Web pages. You cannot use UriTemplates in
conjunction with this type of Ajax-based service because the auto-generated
JavaScript proxies aren’t capable of dealing with them properly today.
In addition to the JSON support, WCF 3.5 also comes with
built-in support for feeds, including both RSS 2.0 and Atom feed formats.
They’ve provided a common API for building logical feeds regardless of the
format you intend to use for the feed. The API consists of various classes
found in System.ServiceModel.Syndication, including SyndicationFeed,
SyndicationItem, SyndicationContent, etc.
Once you have a SyndicationFeed object, you can format it
using either the Atom10FeedFormatter or the Rss20FeedFormatter class. This gives
you the ability to support multiple feed formats when you need to accommodate
different client scenarios and make your feeds as widely consumable as
possible. Figure 16 illustrates how to build a SyndicationFeed instance containing
all of the public bookmarks. It allows the client to specify what type of feed
to retrieve in the URI.
Figure 16: Building a
SyndicationFeed containing the public bookmarks
[WebGet(UriTemplate ="feed?tag={tag}&format=atom")]
[ServiceKnownType(typeof(Rss20FeedFormatter))]
[ServiceKnownType(typeof(Atom10FeedFormatter))]
[OperationContract]
SyndicationFeedFormatter GetPublicBookmarksFeed(string tag, string format)
{
Bookmarks publicBookmarks = HandleGetPublicBookmarks(tag);
WebOperationContext ctx = WebOperationContext.Current;
List<SyndicationItem> items = new List<SyndicationItem>();
foreach (Bookmark bm in publicBookmarks)
{
SyndicationItem item = new SyndicationItem(bm.Title, "", bm.Url,
bm.Id.ToString(), new DateTimeOffset(bm.LastModified));
foreach (string c in bm.Tags.Split(','))
item.Categories.Add(new SyndicationCategory(c));
item.Authors.Add(new SyndicationPerson("", bm.User, ""));
items.Add(item);
}
SyndicationFeed feed = new SyndicationFeed()
{
Title = new TextSyndicationContent("Public Bookmarks"),
Id = ctx.IncomingRequest.UriTemplateMatch.RequestUri.ToString(),
LastUpdatedTime = DateTime.Now,
Items = items
};
if (format.Equals("atom"))
return new Atom10FeedFormatter(feed);
else
return new Rss20FeedFormatter(feed);
}
Notice how in Figure 16 the method is defined to return a
SyndicationFeedFormatter type but the implementation actually returns an
instance of either Rss20FeedFormatter or Atom10FeedFormatter. To make this work,
we had to annotate the method with two [ServiceKnownType] attributes, one
specifying each of the derived SyndicationFeedFormatter types we needed to
return.
This type of feed can be consumed and viewed by any RSS/Atom
reader, which includes most of today’s modern Web browsers (see Figure 6), and
it can be easily syndicated by other feed-savvy sites. There is a great deal of
RSS/Atom infrastructure available on the Web today, so exposing your data this
way opens up numerous possibilities around how the data can be harvested.
WCF also provides built-in classes for AtomPub, a standard
API for interacting with Atom-based collections using a RESTful interface. You
can think of AtomPub as a standard application of the HTTP uniform interface
but applied specifically to Atom feeds, which are modeled as collections of
entries.
The WCF AtomPub support is found across several classes
including ServiceDocument, ServiceDocumentFormatter, AtomPub10ServiceDocumentFormatter,
and Workspace all found in the System.ServiceModel.Syndication namespace.
Figure 17 illustrates how to use these classes to generate an AtomPub service
document describing the feeds provided by our service.
Figure 17: Generating
an AtomPub service document describing our feeds
ServiceDocument HandleGetServiceDocument()
{
List<ResourceCollectionInfo> collections = new List<ResourceCollectionInfo>();
collections.Add(new ResourceCollectionInfo(
new TextSyndicationContent("Public Bookmarks"),
GetPublicBookmarksFeedLink()));
foreach (string user in users.Keys)
collections.Add(
new ResourceCollectionInfo(
new TextSyndicationContent(
string.Format("Public Bookmarks by {0}", user)),
GetUserBookmarksFeedLink(user)));
List<Workspace> workspaces = new List<Workspace>();
workspaces.Add(new Workspace(
new TextSyndicationContent("Contoso Bookmark Service"), collections));
return new ServiceDocument(workspaces);
}
To fully support the AtomPub protocol, we’d need to redesign
our service quite a bit to do everything in terms of Atom feeds and entries. In
other words, we’d no longer use our current representations for user accounts
and bookmarks. Instead, we’d figure out a way to represent user accounts and
bookmarks within the AtomPub format and implement the appropriate HTTP methods
(according to the AtomPub specification) using the [WebGet] and [WebInvoke]
attributes.
Now we need to implement the HMAC authentication scheme I
described earlier. The easiest way to accomplish this is to simply add a method
called AuthenticateUser to your service implementation and call it from each
service operation that we need to restrict access to. Figure 18 shows how to
implement HMAC authentication by simply signing the URL with the secret key.
Figure 18:
Implementing HMAC authentication
private bool AuthenticateUser(string user)
{
WebOperationContext ctx = WebOperationContext.Current;
string requestUri = ctx.IncomingRequest.UriTemplateMatch.RequestUri.ToString();
string authHeader = ctx.IncomingRequest.Headers[HttpRequestHeader.Authorization];
// if supplied hash is valid, user is authenticated
if (IsValidUserKey(authHeader, requestUri))
return true;
return false;
}
public bool IsValidUserKey(string key, string uri)
{
string[] authParts = key.Split(':');
if (authParts.Length == 2)
{
string userid = authParts[0];
string hash = authParts[1];
if (ValidateHash(userid, uri, hash))
return true;
}
return false;
}
bool ValidateHash(string userid, string uri, string hash)
{
if (!UserKeys.ContainsKey(userid))
return false;
string userkey = UserKeys[userid];
byte[] secretBytes = ASCIIEncoding.ASCII.GetBytes(userkey);
HMACMD5 hmac = new HMACMD5(secretBytes);
byte[] dataBytes = ASCIIEncoding.ASCII.GetBytes(uri);
byte[] computedHash = hmac.ComputeHash(dataBytes);
string computedHashString = Convert.ToBase64String(computedHash);
return computedHashString.Equals(hash);
}
You can simply call this method from each [WebGet] and
[WebInvoke] operation that requires authentication as illustrated here:
if (!AuthenticateUser(username))
{
    WebOperationContext.Current.OutgoingResponse.StatusCode =
        HttpStatusCode.Unauthorized;
    return;
}
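For completeness, here is a sketch of the client side of this scheme. It assumes the same “userid:hash” Authorization header format and the same HMAC-MD5-over-the-URI signing that Figure 18 validates; the method name is hypothetical.
// hypothetical client-side signing: hash the request URI with the shared secret
static HttpWebRequest CreateSignedRequest(string uri, string userid, string secretKey)
{
    byte[] secretBytes = Encoding.ASCII.GetBytes(secretKey);
    byte[] dataBytes = Encoding.ASCII.GetBytes(uri);
    HMACMD5 hmac = new HMACMD5(secretBytes);
    string hash = Convert.ToBase64String(hmac.ComputeHash(dataBytes));
    HttpWebRequest req = (HttpWebRequest)WebRequest.Create(uri);
    req.Headers[HttpRequestHeader.Authorization] = userid + ":" + hash;
    return req;
}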
Once we’ve authenticated a user, we can save the user’s
identity somewhere (e.g., as a user principal). In this example, I’m storing it in
the message properties so that we’ll be able to authorize the user within
operations that need to further control access to certain resources. You could
make this implementation more elegant by integrating with [PrincipalPermission]
or by providing your own security attributes.
We could have also implemented this logic as a custom WCF
channel, thereby removing the need to make calls to AuthenticateUser altogether
and simplifying reuse, but building a custom channel is not an easy
proposition. The WCF REST Starter Kit simplifies this by introducing a new
request interception model specifically for situations like this when using the
WebHttpBinding.
Up to this point, we’ve walked through all of the WCF 3.5
framework support for building RESTful services. WCF 3.5 makes building RESTful
services quite easy today. Nevertheless, the WCF team wants to make building
RESTful services even easier through a suite of new helper classes and Visual
Studio project templates that they’ve packaged up and called the WCF REST
Starter Kit.
The WCF REST Starter Kit has been commissioned as a CodePlex
project, which is where you can download the code today. Microsoft plans to continue investing in the
WCF REST Starter Kit through iterative releases that continue to introduce new
features and capabilities to further ease the REST implementation challenges
you might face with today’s support. The WCF team will eventually take some of
these features and roll them into the next version of the .NET framework when
appropriate. If you’re serious about building RESTful services with WCF, keep
your eyes on this CodePlex project.
Once you start building RESTful services, you’ll begin
running into some common pain points that feel tedious or cumbersome, even when
using a modern framework like WCF (for example, writing the various HTTP status
codes and descriptions to the response message). Hence, the WCF REST Starter
Kit provides a new suite of APIs designed to address some of those common pain
points in an effort to make building RESTful services even easier with WCF
moving forward. These new classes and extension methods are found in the new
Microsoft.ServiceModel.Web assembly and namespace.
As an example, the WCF REST Starter Kit introduces a new
WebProtocolException class that you can throw in your service logic to specify
an HTTP status code, which feels much more natural to a typical .NET developer.
In addition to this, the WCF REST Starter Kit also comes with a new
WebServiceHost2 class and the corresponding WebServiceHost2Factory class, which
provides a zero-config experience tailored for a RESTful service. This custom
service host introduces some new extensions and behaviors including support for
an automatic help page that describes your RESTful service.
This new help page is a huge step forward for RESTful
service development. By default, you navigate to it at “/help”. And you can
annotate your “Web” operations with [WebHelp] to provide human-readable
descriptions for each operation found on your service. This help page makes it
easy for consumers to figure out the service’s URI design, retrieve resource
schemas, and view examples of them. The obvious next step is to provide a
REST-savvy client experience on top of this complete with code generation.
The WCF REST Starter Kit also provides a simpler model for
controlling caching through the [WebCache] attribute that you can declaratively
apply to your service operations. And it adds numerous extension methods to the
WebOperationContext class that address some common REST-oriented tasks.
And finally, the WCF REST Starter Kit introduces a new
request interception model (based on RequestInterceptor) that you can use for
different types of HTTP interception tasks such as authentication. This model
makes it much easier to introduce your own request interception logic without
having to deal with writing a custom WCF message inspector and an associated
behavior. (see Figure 19)
Figure 19: Automatic
help page for RESTFul services
There are certain types of RESTful services that require a
lot of boilerplate code (for example, an Atom feed service or an AtomPub service)
that you shouldn’t necessarily have to write by hand. Hence, the WCF REST
Starter Kit comes with some valuable project templates that provide the
necessary boilerplate code for certain types of services, which should greatly
reduce the code you have to write. Once you’ve installed the WCF REST Starter
Kit, you’ll see a suite of new project templates in the Visual Studio New
Project dialog (see Figure 20). You simply need to choose one, enter the
remaining project details, and press OK. Then you’ll end up with a skeleton
REST project to start building on.
The last few Atom-related project templates add value when
working with feeds. The Atom Feed Service template provides a sample
implementation that shows how to programmatically generate and return a
SyndicationFeed instance. You simply need to change the implementation to fill
in your business data as appropriate. And finally, the Atom Publishing Protocol
Service template provides a complete skeleton implementation for a fully
compliant AtomPub service, greatly reducing the amount of code you have to
write for this scenario, and allowing you to focus primarily on how to map your
resources to Atom.
See Figure 21 for a description of each project template. When you
you use one of these project templates, your remaining tasks include modifying
the resource class definitions, propagating those changes throughout the
implementation (you can use Visual Studio refactoring here), refining the URI
design when needed, and implementing the method stubs for each HTTP operation.
Figure 20: The WCF
REST Starter Kit project templates.
Figure 21: WCF REST
Starter Kit project template descriptions
HTTP Plain XML Service – Produces a service with simple GET and POST methods that you can
build on for plain-old XML (POX) services that don’t fully conform to RESTful
design principles, but instead rely only on GET and POST operations.
REST Singleton Service – Produces a service that defines a sample singleton resource
(SampleItem) and the full HTTP interface for interacting with the singleton
(GET, POST, PUT, and DELETE) with support for both XML and JSON
representations.
REST Collection Service – Similar to the REST Singleton Service, only it also provides support
for managing a collection of SampleItem resources.
Atom Feed Service – Produces a service that returns a sample Atom feed with dummy data.
Atom Publishing Protocol Service – Produces a full-fledged AtomPub service capable of managing
collections of resources as well as media entries.
Let’s walk through a few examples using the WCF REST Starter
Kit to help illustrate how it can simplify the process of building these types
of RESTful services.
Let’s start by creating a bookmark collection service
similar to what we built manually earlier in this whitepaper. First, we’ll create a new project by
selecting the REST Collection Service template. Then, we’d end up with a WCF
project containing a resource class named SampleItem along with a service class
that implements a RESTful HTTP interface. From this point, we need to make only
a few changes.
The first thing we need to change is the name of the
resource class – we’ll change it from “SampleItem” to “Bookmark” and take
advantage of Visual Studio refactoring to propagate the change throughout the
project. Now we can fill in the Bookmark class with the fields we need for
representing a bookmark resource. We’ll make this Bookmark class look like the
one we’ve been using thus far:
public class Bookmark
{
    public Uri Url { get; set; }
    public string User { get; set; }
    public string Title { get; set; }
    public string Tags { get; set; }
    public bool Public { get; set; }
    public DateTime LastModified { get; set; }
}
The template also provides a class called SampleItemInfo,
which contains an individual Bookmark and the URL that links to it directly.
I’ll change the name to BookmarkItemInfo and use refactoring to propagate the
change. I’m also going to change the name of the “Item” field to “Bookmark”.
Here’s what the BookmarkItemInfo class looks like for this example:
public class BookmarkItemInfo
{
    public Bookmark Bookmark { get; set; }
    public Uri Link { get; set; }
}
With these changes in place, my new bookmark collection
service is ready to test. Simply press F5 in Visual Studio and it will load the
service into the ASP.NET Development Server, and once it does, you should be
able to access the service. When the browser first comes up to test the
service, you’ll see an empty <ArrayOfBookmarkItemInfo> element because
the collection is currently empty.
At this point you can either use an HTTP client utility to
add some bookmarks to the collection (via POST) or you can pre-populate the
Bookmark items collection in the service constructor. Once you’ve populated the
Bookmark collection, you’ll get some <BookmarkItemInfo> elements back
when you browse to the service’s root address (e.g.,
“”), which essentially requests the entire
collection of bookmark resources. See Figure 22 for some sample results.
You can then browse to an individual bookmark resource by
following one of the <Link> elements returned in the
<BookmarkItemInfo> element. For example, you can retrieve the Bookmark
with an Id of “3” by browsing to “” in this
particular example (see Figure 23).
At this point you can also use POST, PUT, and DELETE
operations without any additional coding – the generated project template
already contains a default implementation for each one. You can POST new
<Bookmark> elements to the service’s root address and it will generate a
new Bookmark resource and assign it a new Id, and return a 201 Created response
with the corresponding Location header. You can also PUT <Bookmark>
elements to individual bookmark URIs to perform updates. And you can send
DELETE requests to individual bookmark resources to remove them from the
collection.
As you can see, for this particular type of RESTful service
(a collection-oriented service), the WCF REST Starter Kit made it possible to
get our implementation up and running with very little coding on our part. And
it provides mostly the same functionality that I built myself in our
BookmarkService example. It wouldn’t be hard to build on this starting point to
add user accounts and the remaining functionality.
Figure 22: Browsing
to the Bookmark collection service
Figure 23: Browsing
to an individual bookmark resource
When you need to generate a simple Atom feed service, create
a new project of type “Atom Feed Service” from the WCF REST Starter Kit. It
will generate a WCF service with a sample feed implementation like the one
shown in Figure 24. This service will run as-is and produce a sample Atom feed.
You simply need to modify the code to insert your data into the SyndicationFeed
instance.
Figure 24: Default
implementation for an Atom Feed Service
[ServiceBehavior(IncludeExceptionDetailInFaults = true,
InstanceContextMode = InstanceContextMode.Single,
ConcurrencyMode = ConcurrencyMode.Single)]
[AspNetCompatibilityRequirements(
RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
[ServiceContract]
public partial class FeedService
{
[WebGet(UriTemplate = "?numItems={i}")]
[OperationContract]
public Atom10FeedFormatter GetFeed(int i)
{
SyndicationFeed feed;
// TODO: Change the sample content feed creation logic here
if (i == 0) i = 1;
// Create the list of syndication items. These correspond to Atom entries
List<SyndicationItem> items = new List<SyndicationItem>();
for (int j = 1; j <= i; ++j)
{
items.Add(new SyndicationItem()
{
// Every entry must have a stable unique URI id
Id = String.Format(
CultureInfo.InvariantCulture, "{0}", j),
Title = new TextSyndicationContent(
String.Format("Sample item '{0}'", j)),
// Every entry should include the last time it was updated
LastUpdatedTime = new DateTime(
2008, 7, 1, 0, 0, 0, DateTimeKind.Utc),
// The Atom spec requires an author for every entry.
Authors =
{
new SyndicationPerson()
{
Name = "Sample Author"
}
},
// The content of an Atom entry can be text, xml, a link or
// arbitrary content. In this sample text content is used.
Content = new TextSyndicationContent("Sample content"),
});
}
// create the feed containing the syndication items.
feed = new SyndicationFeed()
{
// The feed must have a unique stable URI id
Id = "",
Title = new TextSyndicationContent("Sample feed"),
Items = items,
Links =
{
// A feed should have a link to the service that produced the feed.
GetSelfLink(),
}
};
WebOperationContext.Current.OutgoingResponse.ContentType = AtomContentType;
return feed.GetAtom10Formatter();
}
...
}
When you want to implement a service that conforms to the
Atom Publishing Protocol, you should use the “Atom Publishing Protocol Service”
project template that comes with the WCF REST Starter Kit. This template
generates a complete AtomPub service that exposes a single sample collection.
You can test the service immediately by browsing to the
service.svc file and the service will return an AtomPub service document
describing the collections it supports (see Figure 25). As you can see from
Figure 25, this service exposes a collection called “Sample Collection” that
you can access by adding “collection1” to the end of the service’s root URL.
When you access the collection, the service returns an Atom feed representing
the sample collection (see Figure 26). The service also supports adding,
updating, and deleting Atom entries through the standard AtomPub HTTP
interface.
Figure 25: Browsing
to the AtomPub service
Figure 26: Browsing
to the sample collection exposed by the AtomPub service
When you build AtomPub services using the WCF REST Starter
Kit, your job is to focus on the logical collections you want to expose. You’ll
need to define a mapping between your business entity collections and AtomPub
collections exposed by the service, which essentially boils down to defining a
mapping between your custom business entity classes and the WCF
SyndicationFeed/Item classes.
When building data-oriented services (that focus primarily
on CRUD operations), there’s an even easier solution than the WCF REST Starter
Kit. .NET Framework 3.5 SP1 introduced a new technology called ADO.NET Data
Services, which almost automates the process of exposing data entities as fully
functional RESTful services using the Atom Publishing Protocol. Interestingly, ADO.NET
Data Services builds on the WCF REST infrastructure we just discussed. With
ADO.NET Data Services, you define the entities you want to expose and the
infrastructure takes care of everything else. Let’s walk through a few
examples.
If you’re building your service on top of a database
resource, the easiest way to build an ADO.NET Data Service is to first add an
ADO.NET Entity Data Model (EDM) to your project. This launches a wizard that
walks you through the process of connecting to a database and defining a new
entity mapping. Once you’re done, the wizard generates an EDM definition within
your project. The class that sits behind the EDM definition derives from
ObjectContext, and as a result, it’s usable with ADO.NET Data Services.
Assume that I’ve walked through the wizard and called the
new model “BookmarksEntities”. Next, you add an ADO.NET Data Service item to
your site as illustrated in Figure 27.
Figure 27: Creating a
new ADO.NET Data Service
Doing this adds a new WCF BookmarksEDM.svc endpoint to your
site and a code-behind file containing a single class deriving from
DataService<T>, which looks like this:
public class BookmarksEDM : DataService< /* TODO: your data source class name */ >
{
    // This method is called only once to initialize service-wide policies.
    public static void InitializeService(IDataServiceConfiguration config)
    {
        // TODO: set rules to indicate which entity sets are visible, updatable, etc.
        // Examples:
        // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead);
        // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All);
    }
}
Since we already have a data source class (the one
generated by the EDM wizard), we can simply modify this to use the
BookmarksEntities class. We’ll also need to uncomment the call to
SetEntitySetAccessRule to allow access to the “Bookmarks” entity. Here’s what
the final class looks like:
public class BookmarksEDM : DataService<BookmarksModel.BookmarksEntities>
{
    public static void InitializeService(IDataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("Bookmarks", EntitySetRights.All);
    }
}
We now have a fully functional RESTful service built around
the Bookmarks entity model. If you navigate to the BookmarksEDM.svc file at
this point, you should get an AtomPub service document back that looks like the
one shown in Figure 28, listing the collections exposed by the service.
Figure 28: The
AtomPub service document returned by ADO.NET Data Services
Now, if you browse to the Bookmarks collection by simply
adding “Bookmarks” to the end of the URL, you should get an Atom feed
containing the list of Bookmarks found in the database (see Figure 29).
Notice that the Bookmark class properties have been
serialized within the Atom <content> element. You can further filter your
GET queries using the standard URI syntax defined by ADO.NET Data Services,
which includes a variety of operators and functions for performing logical data
comparisons. For example, the following URL retrieves the bookmarks only for
the User named “skonnard”:
The service is also capable of handling POST, PUT, and
DELETE requests according to the AtomPub protocol. We now have a fully
functional AtomPub service and we hardly wrote any code. The ADO.NET Data
Services Framework was able to hide nearly all of the WCF/REST implementation
details.
Figure 29: The Atom
feed returned for the Bookmarks collection
You can also use ADO.NET Data Services to expose collections
of in-memory objects. The process is similar to what I just described, only
you’re not going to use an EDM definition this time. Instead, you’ll have to
write an in-memory data source class to replace the EDM definition. Figure 30
provides a complete implementation showing how to expose an in-memory
collection of Bookmark objects.
Figure 30:
Implementing an ADO.NET Data Service over an in-memory collection
[DataServiceKey("Id")]
public class Bookmark
{
public string Id { get; set; }
public string Url { get; set; }
public string User { get; set; }
public string Title { get; set; }
public string Tags { get; set; }
public bool Public { get; set; }
public DateTime LastModified { get; set; }
}
// this class replaces the EDM definition - it provides an in-memory data source
public class BookmarkService
{
static List<Bookmark> bookmarks = new List<Bookmark>();
static BookmarkService()
{
... // initialize 'bookmarks' collection with a bunch of objects
}
public IQueryable<Bookmark> Bookmarks
{
get { return bookmarks.AsQueryable<Bookmark>(); }
}
}
public class BookmarkDataService : DataService<BookmarkService>
{
public static void InitializeService(IDataServiceConfiguration config)
{
config.SetEntitySetAccessRule("Bookmarks", EntitySetRights.All);
}
}
At this point you end up with an AtomPub service capable of
responding to GET requests for your in-memory resources. If you want the
service to support POST, PUT, and DELETE requests, you must also implement
IUpdatable (which was handled for us when using the EDM approach).
ADO.NET Data Services is a great example of an automated
REST framework built on the WCF REST framework. For more information, see the
ADO.NET Data Services learning center on MSDN.
The easiest way to consume a RESTful service is with a Web
browser – just browse to the URIs supported by the service and view the results
– but this obviously works only for GET operations. You’ll need to use an HTTP
client utility/library to test the non-GET methods. In general, the process for
programmatically consuming RESTful services is a bit different than the process
for consuming SOAP services, because RESTful services typically don’t come with
a WSDL definition. However, since all RESTful services implement the same
uniform interface, consumers inherently know the basic mechanics for
interacting with the service through a traditional HTTP library. On the
Microsoft platform, you can use MSXML2.XMLHTTP, System.Net, or WCF. Let’s look
at a few client-side examples.
Figure 31 provides the complete code for a simple HTTP
command-line client utility written in JavaScript. It uses the MSXML2.XMLHTTP
component that you’d use from script running within a Web browser (Ajax)
application. The nice thing about this particular utility is that you can use
it from a command prompt to quickly test your non-GET operations. Simply copy
and paste this text into a text file and name it httputility.js. Then you can
run it from a command window to issue HTTP requests.
Figure 31: A Simple
JavaScript HTTP Client
if (WScript.Arguments.length < 2)
{
    WScript.echo("Client HTTP Request Utility\n");
    WScript.echo("usage: httprequest method uri [options]");
    WScript.Quit();
}
var method = WScript.Arguments(0);
var uri = WScript.Arguments(1);
var req = new ActiveXObject("MSXML2.XMLHTTP");
try
{
    // issue the request synchronously and print the response
    req.open(method, uri, false);
    req.send();
    printResponse(req);
}
catch (e)
{
    WScript.echo();
    WScript.echo("******* Response ********* ");
    WScript.echo(e.message);
}
function printResponse(req)
{
    WScript.echo();
    WScript.echo("******* Response ********* ");
    WScript.echo("HTTP/1.1" + " " + req.status + " " + req.statusText);
    var headers = req.getAllResponseHeaders();
    WScript.echo(headers);
    WScript.echo(req.responseText);
}
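For example, you might exercise a DELETE against one of the bookmark URIs from a command prompt like this (the host name and path here are placeholders):
cscript //nologo httputility.js DELETE http://localhost/bookmarkservice/users/skonnard/bookmarks/1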
Consuming RESTful Services with System.Net
If you’re writing .NET code, you can take advantage of the
System.Net classes to programmatically issue HTTP requests and process the
responses. The following code illustrates how easy this can be by using the
HttpWebRequest and HttpWebResponse classes:
static void GetPublicBookmarks()
{
    string uri = "";
    HttpWebRequest req = WebRequest.Create(uri) as HttpWebRequest;
    HttpWebResponse resp = req.GetResponse() as HttpWebResponse;
    Bookmarks bookmarks = DeserializeBookmarks(resp.GetResponseStream());
    foreach (Bookmark bm in bookmarks)
        Console.WriteLine("{0}\r\n{1}\r\n", bm.Title, bm.Url);
}
It’s not hard to imagine how you could build a REST client
library around your services (that uses code like this) to make them even
easier for .NET clients to consume. It’s also not hard to imagine how you
could write a generic REST client on top of these classes that can be used with
all your services.
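Non-GET requests follow the same pattern. This sketch POSTs a new bookmark with System.Net; SerializeBookmark is a hypothetical counterpart to the DeserializeBookmarks helper above, and the Location header echo simply shows where the new resource was created:
static void PostBookmark(string uri, Bookmark newBookmark)
{
    HttpWebRequest req = WebRequest.Create(uri) as HttpWebRequest;
    req.Method = "POST";
    req.ContentType = "application/xml";
    using (Stream body = req.GetRequestStream())
        SerializeBookmark(body, newBookmark); // hypothetical serialization helper
    using (HttpWebResponse resp = req.GetResponse() as HttpWebResponse)
        Console.WriteLine("{0}: {1}", resp.StatusCode, resp.Headers[HttpResponseHeader.Location]);
}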
WCF 3.5 provides an abstraction over HTTP for consuming
RESTful services the “WCF” way. To use this technique, you need a client-side
interface definition containing [WebGet] & [WebInvoke] signatures for each
operation you want to consume. You can use the same UriTemplate variable
techniques in the client-side method definition. Here’s an example of how to
define the client-side contract:
[ServiceContract]
public interface IBookmarkService
{
    [WebGet(UriTemplate = "?tag={tag}")]
    [OperationContract]
    Bookmarks GetPublicBookmarks(string tag);
    ...
}
Then you can create a WCF channel based on this interface
definition using the new WebChannelFactory class that ships with WCF 3.5. This
factory knows how to create channels that are aware of the [WebGet] &
[WebInvoke] annotations and know how to map the method calls to the uniform
HTTP interface. Here’s an example of how you can retrieve the public bookmarks
tagged with “wcf”:
WebChannelFactory<IBookmarkService> cf = new WebChannelFactory<IBookmarkService>(
new Uri(""));
IBookmarkService channel = cf.CreateChannel();
Bookmarks bms = channel.GetPublicBookmarks("WCF");
foreach (Bookmark bm in bms)
Console.WriteLine("{0}\r\n{1}", bm.Title, bm.Url);
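The same channel can drive the non-GET operations as well, provided the client-side contract declares the matching [WebInvoke] signatures. For example, reusing the PUT signature from Figure 11 (a sketch; updatedBookmark is just a locally constructed Bookmark):
// added to the client-side IBookmarkService definition (mirrors Figure 11)
[WebInvoke(Method = "PUT", UriTemplate = "users/{username}/bookmarks/{id}")]
[OperationContract]
void PutBookmark(string username, string id, Bookmark bm);

// then invoked through the same channel
channel.PutBookmark("skonnard", "1", updatedBookmark);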
If you happened to define your server-side service contract
using an interface definition, you can simply share that same interface
definition with your WCF client applications. But unfortunately, there’s no way
to automatically generate the client side RESTful contract from a service
description.
Because ADO.NET Data Services uses a standard AtomPub
representation, it was also able to provide a simple service description
language known as the Conceptual Schema Definition Language (CSDL), originally
defined for the ADO.NET Entity Data Model. You can browse to the metadata for a
particular ADO.NET Data Service by adding “$metadata” to the service’s base URL
(see Figure 32).
This description contains enough information to generate
client-side proxies. ADO.NET Data Services comes with a client utility called
DataSvcUtil.exe for automatically generating the client-side classes. You
can also perform an “Add Service Reference” command within Visual Studio to do
the same thing. Once you’ve generated the client-side classes, you can write
code like this to query your service:
BookmarkService bookmarkService = new BookmarkService(
new Uri(""));
// this generates the following URL:
//('WCF',Tags)&
// $orderby=LastModified
var bookmarks = from b in bookmarkService.Bookmarks
where b.Tags.Contains("WCF")
orderby b.LastModified
select b;
foreach (Bookmark bm in bookmarks)
Console.WriteLine("{0}\r\n{1}", bm.Title, bm.Url);
This illustrates the potential of code generation combined
with a truly RESTful design. The client is interacting with a resource-centric
model, yet all of the HTTP details have been abstracted away.
Figure 32: Browsing
to ADO.NET Data Service metadata
The following is a collection of common questions that come
up when developers first begin looking at REST and comparing it with the SOAP
and WS-* feature set, which has become quite popular today.
SOAP defines a simple XML-based protocol for exchanging and
processing XML payloads that can be annotated with XML headers. The XML headers
can influence behavior around routing, security, reliability, and transactions.
Over the past several years, the W3C and OASIS have been busy codifying a suite
of SOAP header specifications (e.g., WS-Security, WS-ReliableMessaging,
WS-AtomicTransactions, etc) that we collectively refer to as “WS-*”. SOAP +
WS-* can be used to implement distributed systems that have complex security
and reliability requirements without sacrificing interoperability or requiring
you to use a specific transport protocol. This is one key area where SOAP and
REST differ – while SOAP is transport-neutral, REST is very transport-centric
in that it embraces HTTP for the features it needs.
One of the reasons SOAP has gained so much traction
throughout the industry is because of its support for client-side code
generation. Thanks to the Web Services Description Language (WSDL) and a
variety of supporting code generation tools, SOAP developers have been able to
generate client-side proxy classes that make it easy to begin consuming a
service in a variety of different programming languages, while hiding the nasty
details of the various WS-* protocols being used on the wire. This fact alone
is why many major corporations have standardized on SOAP for their services.
This is another area where SOAP and REST differ – most REST services don’t come
with this type of code-generation support.
Although WSDL makes SOAP easy from an integration
perspective, it doesn’t necessarily simplify the overall system design. In
fact, despite SOAP’s attempt to move away from RPC, most SOAP frameworks still
push developers towards an RPC model. This can become problematic because it’s
widely believed that RPC doesn’t work as well at larger scales, where a more
loosely-coupled, document-oriented approach usually works better. At that point, many
will realize that most SOAP services are no different than COM+ components, at
least from a versioning perspective. REST, on the other hand, encourages a
document-oriented model through a uniform interface (although noun versioning
remains an issue).
Once you understand these realities, it raises some tough
questions around which of these models – REST or SOAP – makes the most sense
for a particular system. Anyone who tells you that one of them is always the
right choice (regardless of which one), is probably trying to sell something or
push an agenda. The reality is: both models provide design philosophies and
tools for solving problems, each with their pros and cons. It’s your job as an
architect/developer to analyze the differences and make a choice.
Most of the examples we’ve seen thus far have simply modeled
data entities and the basic CRUD operations. More complex concepts like
business processes and transactions can also be handled using the uniform
interface and RESTful principles. It requires you to think about those concepts
as resources themselves, and then you can deal with them like any other
resource.
For example, if you’re building a banking service, it’s easy
to see how account information could be modeled using REST principles, but
harder to see how you’d deal with something like the “transfer funds between
accounts” process. However, once you start treating “transfers” as a resource,
it begins to make more sense. You perform transfers by issuing a POST request to a
“transfers” resource. Any business process can potentially be modeled as a
resource this way, although it may not feel as natural or as simple as defining
a new verb/operation like you normally would in an RPC model.
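In the WCF terms used earlier in this whitepaper, such a “transfers” resource might be modeled with an operation like the following sketch (the Transfer type and the operation name are hypothetical, not part of any real banking API):
// POSTing a Transfer representation creates a new "transfer" resource
[WebInvoke(Method = "POST", UriTemplate = "transfers")]
[OperationContract]
Transfer PostTransfer(Transfer newTransfer) {...}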
Some argue that RESTful services are limited on the security
front since they can’t use the WS-Security specifications. While it’s true they
can’t use WS-Security, in my experience this hasn’t been a major show-stopper
in the real world. The fact is: WS-Security provides a transport-neutral
security framework that essentially provides the same features that SSL does
for HTTP. In other words, SSL already provides most of the security features
that WS-* enables for SOAP-based services.
So what does WS-Security (and the related specifications)
enable over the built-in SSL features? It provides a few key features: 1) a
single security framework capable of dealing with multiple intermediaries and
potentially multiple transports, and 2) support for federated security. The
idea of multiple intermediaries is interesting in theory, but doesn’t seem to
be a common requirement out in the real world. However, support for federated
security is becoming more common in large distributed systems so it’s a
requirement that often pushes people towards SOAP and WS-* today.
Although you can implement federated security without SOAP
and WS-*, there isn’t much .NET support for the alternate protocols. There are
a variety of specifications that define HTTP-centric federated security solutions,
including the SAML 2.0 HTTP bindings, OpenID, and OAuth to name a few. However,
since WCF doesn’t provide built-in support for any of these specifications
today, you either have to find a third-party library or go after it yourself,
which isn’t a trivial proposition. This is one area where today’s REST
frameworks need to play catch-up. It’s likely that the WCF REST Starter Kit
will provide additional features and guidance in this area in a future release.
WS-ReliableMessaging adds a TCP-like reliability layer to
SOAP, which makes it possible to implement HTTP-based services that have
exactly-once ordered delivery guarantees. While this can be very attractive in
some distributed scenarios, it’s an area of WS-* that isn’t widely used in
my experience. You can ensure exactly-once delivery over native HTTP by using
idempotent operations (PUT/DELETE) and relying on consumers to retry when they
don’t receive a confirmation of success. Idempotency is highly underutilized in
today’s Web designs – it’s the key to reliability in your RESTful services.
HTTP doesn’t come with a built-in transaction protocol
equivalent to WS-AtomicTransactions or the related specifications. However,
it’s interesting to note that you can implement the notion of logical
transactions in a RESTful service, again by treating the “transaction” as a
resource. You can create a new “transaction” resource through a POST operation.
Then you can PUT/DELETE other resources to the transaction resource (reliably via
idempotent retries). Along the way, you can retrieve the transaction itself by
issuing a GET request for the “transaction” resource. And you can commit or
roll back the transaction by issuing either a PUT or DELETE request to the
“transaction” resource. This is another area where something is possible in
REST, but probably not the simplest approach to solving the problem.
Designing Web services that embrace RESTful principles
allows your systems to enjoy the benefits inherent in the Web platform
immediately. RESTful design moves you away from traditional RPC and instead
requires you to focus on resources, how your resources are identified, and how
you interact with your resources through a uniform interface. In my experience,
making this mental shift is the most challenging aspect of moving towards the
REST architectural style; the remaining details are easy.
Microsoft is strongly committed to REST moving forward. As
you’ve seen, WCF 3.5 makes it easy to support either REST or SOAP, so you can
pick what best suits your problem space. The new “Web” programming model
introduced in 3.5 by [WebGet], [WebInvoke], UriTemplate, and SyndicationFeed
lowers the bar for authoring RESTful services. And with the introduction of the
new WCF REST Starter Kit, they’ve lowered the bar even further. The project
templates make it possible to get RESTful services up and running in minutes.
And for exposing data in your database in traditional CRUD scenarios, it
couldn’t be easier with ADO.NET Data Services, which practically automates the
process of building RESTful data-oriented services.
Aaron Skonnard is a cofounder of Pluralsight, a premier
Microsoft .NET training provider offering both instructor-led and online
training courses. Aaron is the author of numerous books, articles, and
whitepapers, as well as Pluralsight’s REST, Windows Communication Foundation,
and BizTalk courses. Aaron has spent years developing courses, speaking at
conferences, and teaching developers throughout the world. You can reach him at.
For exception families (ex: System.Net namespace), the steps are very similar.
One thing to be aware of: changing the options for an exception family (ex: System.IO) will change the setting for all nodes beneath it (ex: System.IO.FileNotFoundException) to match. Note: If you are debugging Smart Device (NetCF) projects using the beta 2 release of Visual Studio 2005, you will need to turn off the debugger's Just-My-Code (JMC) feature to be able to stop on first chance exceptions. Take care! -- DK
Disclaimer(s): This posting is provided "AS IS" with no warranties, and confers no rights. Some of the information contained within this post may be in relation to beta software. Any and all details are subject to change.
Monopoly
From Uncyclopedia, the content-free encyclopedia.
- This article is about the popular Parka Brothas board game about trading real estate. For the Wilde Brothas board game about trading Uncyclopedia namespaces, see Uncyclopoly.
A popular board game, Monopoly was allegedly invented by Kim Jong Il, but more likely by virtually anyone else, especially Sega & Amiga. Although a historian said that Monopoly was the work of J.R.R Tolkien to commemorate the Massacre on the fjords so let's all believe him and go have a cup of tea.
While we believe him, we should also note that Monopoly is the longest running game show that never appeared on television. The main player in Monopoly is Karma, which will always win no matter what. so don't think that just because you scored $1000 off your bud, you won't get raped next turn by the freakin' boardwalk.
It is known who invented the game, but you don't need to know. Until more evidence is collected, these are officially considered to have been the work of witches.
Uncle Andross Moneybags
The guy on the box is known as Uncle
Andross Moneybags. He is the game's host, and is best friends with Bill Gates, Donald Trump, and Scrooge Mc Duck. His enemy is the Star Fox team. He has a real estate empire, just like Oprah has a media empire. Being rude to him will surely get you in Jail, without the chance to pass Go and collect 200 dollars. There is a large chance you may lose this money right away because you are a jackass overconfident, and probably taunted the other players by saying "Now, I'm the one who has the brains money to rule buy Lylat!" which will result in Karma kicking your ass.
Variations
- Bondopoly: Players could blow up any enemy's lair like the Crab Key in Dr. No, Goldeneye, Venice floating bank, Igloo in Iceland, the Zorin's Silicon Valley hideout or Ernst Stavro Blofeld's Volanic lair.
- Commiopoly: The players are prohibited from owning any of the properties individually.
- Mircosopoly: players are not allowed to own property.
- Anarcopoly: No rules.
- Somaliopoly: No rules. Everyone loses.
- Outbackopoly: No rules. Just right.
- Antimonopoly: Any player holding all properties of a set must give at least one to another player; or he may split his wealth equally and take 2 turns per round. Most go for the second option.
- Enronopoly: Players go around the board doubling their money every time they pass go. Players cannot go bankrupt. If you go to jail, you and everyone seated next to you loses.
- Oligopoly: Players can build on a property if they own one of a set.
- Hollywoodopoly: Players can buy or stop buy nightclubs, hotels, movie studios, theme parks, and glamour stores or even "Gossip Colum". In "Chance" or "Community Chest" might say Go to Rehab, Go Direcly to Rehab, Do Not Pass Go, Do Not Collect $2,000+, Time To Act, Pay $100,000 or Paparazzis, Tabloid Hounds, Move Back 3 Spaces.
- Sodomopoly: GLBT players automatically win.
- Naziopoly: Jews are not allowed to own property. Stations lead to concentration camps. Go to Jail is replaced by Go to Auschwitz.
- Mafiopoly: Players are allowed to have properties and improvements, provided that they give the godfather a cut.
- Africopoly: On average these games don't last as long. The exact rule differences no one will agree on but you start with only $2.50 monopoly money. Also, Go To Jail is replaced with landmines, and AIDS has its own square, which can be obtained by landing on it whilst it is in another player's possession, and replaces all subsequent properties landed on by said player.
- Galipoly: Same rules as the original, but is played on a battlefield, usually during a botched attempt by British Generals to help out an allied nation.
- hip hopoloy : it involves dancing and real estate investment usually played with some phat beats
- MSNopoly: lol
- Battleship: Players use their imagination.
- Thermopylae:.
- Emopoly: How fast can you make your life a miserable thing?
- Startrekopoly: How many planets can you annex before the Romulans, Klingons, Borg, Dominion, Cardassians, and Ori take the rest. The goal is to annex planets to build fleets of Starships with.
- Ghettopoly
- Iraqopoly
- Sadism and Monopolism: A rare and valuable variation on the game. Win or lose, it's your choice.
- trambopoly: a newer version of monopoly that uses trampolines and has krusty the clown on the box
- edna krabopoly dats just wierd
- krip-popoly smoke some weed shoot some bloods roll a 6 and go to jail,dont drop the soap BITCH
the little car is 'pimped' with hydraulics
- Blackopoly Go directly to jail. Every time.
- Monopolyopoly Players compete to purchase monopoly boards.
- Zimbabopoly: Same as regular Monopoly, but with 100 Trillion dollar bills that won't be worth much in American Monopoly. Players don't go to jail, they simply disappear if they say something bad about Robert Mugabe. Hotels replaced by cattle. Chance cards replaced by Cholera pills.
Variant Edition
Monopoly players have traditionally played by 'house rules.' These variations (and the reverence with which they are followed) have caused great consternation and offense to sticklers, especially the game's publisher. After years of officious rule-mongering, Parker Brothers has finally caved. In a press release in 2004, Mr. Pinibags, spokesperson for Parker Brothers, admitted, "You were right all along." A new 'Variant Edition' of Monopoly hit store gamecases shortly afterward.
The new game features a reprinting of the game board, rules, and ticker-tape strips to match a more liberal philosophy. The changes include:
- Removal of the restriction on agreements between players. If two players bankrupt a third by pooling their assets and end up in stalemate, the rules advise that the winner be decided by a knife fight.
- Any player may lend money to any other player. If no one can remember how much money who owes whom, a tribunal is set up, which inevitably sides with the most socially popular player.
- Issue of million dollar bills, for players who like money. The resulting game lengthening is counteracted with a host of illogical taxes.
- Concession that Free Parking does in fact pay a jackpot.
- Approval of travelling railroads. If a player lands on a railroad that is his, he can move his token to any square of his choice, quickly unbalancing the game in his favour.
- If you land on a property you own, you have to "pay yourself money."
- Anyone whose token is sent to jail automatically loses. The loser's property is placed in the Free Parking jackpot, for an unfair windfall that still imparts bragging rights.
- "Lenience cards" can be bought for "protection"
- The loser will be ceremonially stuffed and mounted before replacing the thimble as a playing piece.
Monopoly 3D
Monopoly 3D (a.k.a. Monopoly^1.5) is an agonizing variant, played on two Monopoly boards stacked above and below 4 CD racks. A third Monopoly board is cut up into squares which are slotted into the 4 CD racks. Thus there are 104 squares in the setup instead of the usual 36. Correspondingly, there are 6 facilities (3 Oil Derricks and 3 Phone Companies) and 12 railroads. A felt tipped marker must be used to add patterns to two of the three duplicates of each property, to differentiate them as being in different cities: plain, striped, and dotted are usual. Two of the three sets of property cards that must be procured must be marked up in a similar fashion. Four of the eight corner squares may then be customized to read things like "Kitty," "Workers' union," "Frontier claims office," or "Go to the nexus."
Players' tokens begin on the original Begin square. Players have a choice of two directions when passing each corner square. Otherwise the rules are the same. However, the game only makes sense when played by large numbers of people in a partying context. The game also takes an astronomical period of time to complete, so all professionally crafted sets should carry "Aging" and "Axing" warnings.
Monopoly, Kabbalah, and divination
Like the Tarot, the Monopoly boardgame has been used for fortune telling purposes. It has been discovered that the 22 properties of the gameboard correspond to the Major Arcana of Tarot and to the 22 letters of the Hebrew alphabet. Wao that is interesting...
Different Variations Of Monopoly
Many different types of Monopoly exist, these include then and now, bristol, chav and nazi. The then and now includes the feature of paying with a credit card whereas the chav edition features squares including Lidl, Burger King and Aldi. The Nazi game allows you to buy countrys instead of houses! (Makes you feel like a real Nazi) The chav edition was released due to increasing pressure by the fact that everyone these days has been fucked or robbed by one. Hitler of course sent a messenger in the shape of George Bush to force the nazi edition, as everyone knows.
See also | http://uncyclopedia.wikia.com/wiki/Monopoly | crawl-002 | en | refinedweb
David Ebbo's blog - The Ebb and Flow of ASP.NET
To get the latest build of T4MVC, download it from the ASP.NET MVC v1.0 Source page on CodePlex.
This is similar to the RedirectToAction/ActionLink support, but applied to route creation. The original Nerd Dinner routes look like this:
routes.MapRoute(
"UpcomingDinners",
"Dinners/Page/{page}",
new { controller = "Dinners", action = "Index" }
);
routes.MapRoute(
"Default", // Route name
"{controller}/{action}/{id}", // URL with parameters
new { controller = "Home", action = "Index", id = "" } // Parameter defaults
With T4MVC, the default route can instead be registered with a strongly typed reference to the default action:
routes.MapRoute(
"Default", // Route name
"{controller}/{action}/{id}", // URL with parameters
MVC.Home.Index(), // Default action
new { id = "" } // Parameter defaults
);
No more magic strings in the route registration! :)
Short version: the MVC T4 template (now named T4MVC) is now available on CodePlex, as one of the downloads in the ASP.NET MVC v1.0 Source page.
Yesterday, I posted asking how people felt about having the template modify their code in small ways. Thanks to all those who commented! The fact that Scott Hanselman blogged it certainly helped get traffic there :)
The majority of people thought that it was fine, as long as the changes are limited to small, harmless tweaks like making a class partial or a method virtual.
The template on CodePlex (version 2.0.01 at the top of the file) supports what I described in my previous post, plus some new goodies, such as the static helpers for content and script files described below.
One caveat is that you have to initiate the cycle by opening and saving T4MVC.tt once. After you do that, you don’t need to worry about it.
Credit for this idea goes to Jaco Pretorius, who blogged something similar.
The template generates static helpers for your content files and script files. So instead of writing:
<img src="/Content/nerd.jpg" />
You can now write:
<img src="<%= Links.Content.nerd_jpg %>" />
Likewise, instead of
<script src="/Scripts/Map.js" type="text/javascript"></script>
You can write:
<script src="<%= Links.Scripts.Map_js %>" type="text/javascript"></script>
I also fixed a number of bugs that people reported and that I ran into myself, e.g.
I’m sure there are still quite a few little bugs, and we’ll work through them as we encounter them
Update: Please see this post for what came out of this ‘poll’, and for a pointer to the newest T4 template on CodePlex.
When working on my MVC T4 template, I was not able to use reflection to discover the Controllers and Actions, because the code that the template generates is itself in the same assembly as the controllers. So that causes a bit of a chicken and egg problem.
Instead, I had to get out of my element and learn something I was not familiar with: the Visual Studio File Code Model API. It’s very different from using reflection, because instead of working at the assembly level, you work at the source file level.
You have to first locate the source file you want to look into. You can then ask for the namespaces that it contains, and the classes that they contain, and finally the various members in those classes. To be honest, I find this API quite ugly. It’s a COM interop thing with a horrible object model that looks like it grew organically from version to version rather than having been designed with usability in mind. So all in all, I used it because I had to, but the whole time I was wishing I could use reflection instead.
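To give a feel for the shape of that API, here is a rough sketch (my own illustration, not code taken from the template) of how you might walk a file's code model to find the classes and methods it contains, assuming you already have the ProjectItem in hand:
using EnvDTE;
using EnvDTE80;
static void InspectFile(ProjectItem item)
{
    FileCodeModel codeModel = item.FileCodeModel;
    foreach (CodeElement element in codeModel.CodeElements)
    {
        if (element.Kind != vsCMElement.vsCMElementNamespace) continue;
        CodeNamespace ns = (CodeNamespace)element;
        foreach (CodeElement member in ns.Members)
        {
            if (member.Kind != vsCMElement.vsCMElementClass) continue;
            // CodeClass2 (EnvDTE80) exposes ClassKind, which reports whether the class is partial
            CodeClass2 type = (CodeClass2)member;
            bool isPartial = (type.ClassKind == vsCMClassKind.vsCMClassKindPartialClass);
            foreach (CodeElement child in type.Members)
            {
                if (child.Kind != vsCMElement.vsCMElementFunction) continue;
                // CodeFunction.CanOverride reports (and can change) whether a method is virtual
                CodeFunction method = (CodeFunction)child;
                // ... the template's generation logic would use isPartial and method here
            }
        }
    }
}
Everything is COM interop underneath, so expect lots of casts and none of the conveniences of reflection.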
But then I made an important realization. Ugly as it is, this object model supports something that would never be possible with reflection: it lets me modify the source code!
If you look at my previous post, I wrote “But to make things even more useful in the controller, you can let the T4 template generate new members directly into your controller class. To allow this, you just need to make you controller partial”. And I have logic in the template that tests this, and does extra generation if it is partial, e.g.
if (type.ClassKind == vsCMClassKind.vsCMClassKindPartialClass) { ... }
But instead, I have now realized that I can turn this check into an assignment, and change the class to partial if it isn’t already!
type.ClassKind = vsCMClassKind.vsCMClassKindPartialClass;
Likewise, I have scenarios where I can do cool things if the Controller actions are virtual, and I can just change them to be with a simple line:
method.CanOverride = true;
To be clear, those statements actually modify the original source file, the one that you wrote. While this is certainly powerful and opens up some new doors, it also raises a big question which is the main purpose of this post:
We’re only talking about pretty harmless things (making classes partial and methods virtual), but I know developers can get nervous if even small changes magically happen in their source files.
So please tell me how you feel about this, e.g. is it more:
Tell me where you stand, and please don't sue me.
So I have had this blog since October 2005, and the entire time it has been named “David Ebbo’s blog”. There were a number of solid reasons that had led me to choose this catchy name:.
Update: Please read this post for the newest and greatest..
Before we go and re-invent the wheel, let’s discuss what the issues with the runtime T4 approach were, and how this is solved by this new approach.
Complex configuration: to enable the runtime template, you had to add a DLL to your bin, modify two web.config files, and drop two T4 files in different places. Not super hard, but also not completely trivial. By contrast, with this new approach you just drop one .tt file at the root of your app, and that’s basically it.
No partial trust support: because it was processing T4 files at runtime, it needed full trust to run. Not to mention the fact that using T4 at runtime is not really supported! But now, by doing it at design time, this becomes a non-issue.
Only works for Views: because only the Views are compiled at runtime, the helpers were only usable there, and the controllers were left out (since they’re built at design time). With this new approach, Controllers get some love too, because the code generated by the template lives in the same assembly as the controllers!
Let's jump right in and see this new template in action! We'll be using the Nerd Dinner app as a test app to try it on. So to get started, go to the Nerd Dinner site on CodePlex, download the app and open it in Visual Studio 2008 SP1.
Then, simply drag the T4 template (the latest one is on CodePlex) into the root of the NerdDinner project in VS. And that’s it, you’re ready to go and use the generated helpers!
Once you’ve dragged the template, you should see this in your solution explorer:
Note how a .cs file was instantly generated from it. It contains all the cool helpers we’ll be using! Now let’s take a look at what those helpers let us do.
Open the file Views\Dinners\Edit.aspx. It contains:
<% Html.RenderPartial("DinnerForm"); %>
This ugly “DinnerForm” literal string needs to go! Instead, you can now write:
<% Html.RenderPartial(MVC.Dinners.Views.DinnerForm); %>
Now open Views\Dinners\EditAndDeleteLinks.ascx, where you’ll see:
<%= Html.ActionLink("Delete Dinner", "Delete", new { id = Model.DinnerID })%>
Here we not only have a hard coded Action Name (“Delete”), but we also have the parameter name ‘id’. Even though it doesn’t look like a literal string, it very much is one in disguise. Don’t let those anonymous objects fool you!
But with our cool T4 helpers, you can now change it to:
<%= Html.ActionLink("Delete Dinner", MVC.Dinners.Delete(Model.DinnerID))%>
Basically, we got rid of the two unwanted literal strings (“Delete” and “Id”), and replaced them by a very natural looking method call to the controller action. Of course, this is not really calling the controller action, which would be very wrong here. But it’s capturing the essence of method call, and turning it into the right route values. And again, you get full intellisense:
By the way, feel free to press F12 on this Delete() method call, and you’ll see exactly how it is defined in the generated .cs file. The T4 template doesn’t keep any secrets from you!
Likewise, the same thing works for Ajax.ActionLink. In Views\Dinners\RSVPStatus.ascx, change:
<%= Ajax.ActionLink( "RSVP for this event",
"Register", "RSVP",
new { id=Model.DinnerID },
new AjaxOptions { UpdateTargetId="rsvpmsg", OnSuccess="AnimateRSVPMessage" }) %>
to just:
<%= Ajax.ActionLink( "RSVP for this event",
MVC.RSVP.Register(Model.DinnerID),
new AjaxOptions { UpdateTargetId="rsvpmsg", OnSuccess="AnimateRSVPMessage" }) %>
You can also do the same thing for Url.Action().
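For instance (this particular pair of lines is my own illustration following the same pattern, not one taken from Nerd Dinner), a URL that would otherwise be built as:
<%= Url.Action("Details", "Dinners", new { id = Model.DinnerID }) %>
can be written as:
<%= Url.Action(MVC.Dinners.Details(Model.DinnerID)) %>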
As mentioned earlier, Controllers are no longer left out with this approach.
e.g. in Controllers\DinnersController.cs, you can replace
return View("InvalidOwner");
by
return View(MVC.Dinners.Views.InvalidOwner);
But to make things even more useful in the controller, you can let the T4 template generate new members directly into your controller class. To allow this, you just need to make you controller partial, e.g.
public partial class DinnersController : Controller {
Note: you now need to tell the T4 template to regenerate its code, by simply opening the .tt file and saving it. I know, it would ideally be automatic, but I haven’t found a great way to do this yet.
After you do this, you can replace the above statement by the more concise:
You also get to do some cool things like we did in the Views. e.g. you can replace:
return RedirectToAction("Details", new { id = dinner.DinnerID });
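with the strongly typed form (this completion is mine, following the exact pattern used above):
return RedirectToAction(MVC.Dinners.Details(dinner.DinnerID));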
The previous runtime-based T4 template was using reflection to learn about your controllers and actions. But now that it runs at design time, it can’t rely on the assembly already being built, because the code it generates is part of that very assembly (yes, a chicken and egg problem of sort).
So I had to find an alternative. Unfortunately, I was totally out of my element, because my expertise is in the runtime ASP.NET compilation system, while I couldn’t make use of any of it here!
Luckily, I connected with a few knowledgeable folks who gave me some good pointers. I ended up using the VS File Code Model API. It’s an absolutely horrible API (it’s COM interop based), but I had to make the best of it.
The hard part is that it doesn’t let you do simple things that are easy using reflection. e.g. you can’t easily find all the controllers in your project assembly. Instead, you have to ask it to give you the code model for a given source file, and in there you can discover the namespaces, types and methods.
So in order to make this work without having to look at all the files in the projects (which would be quite slow, since it’s a slow API), I made an assumption that the Controller source files would be in the Controllers folder, which is where they normally are.
As for the view, I had to write logic that enumerates the files in the Views folder to discover the available views.
All in all, it’s fairly complex and messy code, which hopefully others won’t have to rewrite from scratch. Just open the .tt file to look at it, it’s all in there!
In addition to looking at the .tt file, I encourage you to look at the generated .cs file, which will show you all the helpers for your particular project.
This was briefly mentioned above. The T4 generation is done by VS because there is a custom tool associated with it (the tool is called TextTemplatingFileGenerator – you can see it in the properties). But VS only runs the file generator when the .tt file changes. So when you make code changes that would affect the generated code (e.g. add a new Controller), you need to explicitly resave the .tt file to update the generated code. As an alternative, you can right click on the .tt file and choose “Run Custom Tool”, though that’s not much easier.
Potentially, we could try doing something that reruns the generation as part of a build action or something like that. I just haven’t had time to play around with this. Let me know if you find a good solution to this.
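One possible approach (a sketch only: the -out flag is real, but the install path varies with the Visual Studio version) is a pre-build event that invokes the TextTransform.exe command line tool that ships with Visual Studio:
"%CommonProgramFiles%\Microsoft Shared\TextTemplating\1.2\TextTransform.exe" -out "$(ProjectDir)T4MVC.cs" "$(ProjectDir)T4MVC.tt"
This keeps the generated .cs file in sync on every build, at the cost of a slightly longer build.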
This was also the case with the previous template, but it is worth pointing out. Because all the code is generated by the T4 template, that code is not directly connected to the code it relates to.
e.g. the MVC.Dinners.Delete() generated method results from the DinnersController.Delete() method, but they are not connected in a way that the refactoring engine can deal with. So if you rename DinnersController.Delete() to DinnersController.Delete2(), MVC.Dinners.Delete() won’t be refactored to MVC.Dinners.Delete2().
Of course, if you resave the .tt file, it will generate a MVC.Dinners.Delete2() method instead of MVC.Dinners.Delete(), but places in your code that call MVC.Dinners.Delete() won’t be renamed to Delete2.
While certainly a limitation, it is still way superior to what it replaces (literal strings), because it gives you both intellisense and compile time check. But it’s just not able to take that last step that allows refactoring to work.
It is worth noting that using Lamda expression based helpers instead of T4 generation does solve this refactoring issue, but it comes with a price: less natural syntax, and performance issues.
It has been pretty interesting for me to explore those various alternative to solve this MVC strongly typed helper issue. Though I started out feeling good about the runtime approach, I’m now pretty sold on this new design time approach being the way to go.
I’d be interested in hearing what others think, and about possible future directions where we can take this.
Update: Please see this newer post for the latest and greatest MVC T4 template
Earlier this week, I wrote a post on using a BuildProvider to create ActionLink helpers. That approach was using CodeDom to generate the code, and there was quite some user interest in it (and Phil blogged it, which helped!).
Then yesterday, I wrote a post on the Pros and Cons of using CodeDom vs T4 templates for source code generation. They are drastically different approaches, and while both have their strengths, T4 has definitely been getting more buzz lately.
The logical follow-up to those two posts is a discussion on using T4 templates to generate MVC strongly typed helpers. The general idea here is to use the existing ASP.NET extensibility points (BuildProvider and ControlBuilder), but rely on T4 templates to produce code instead of CodeDom. Hence, I called the helper library AspNetT4Bridge (I’m really good at naming things!).
As far as I know, this is the first time that T4 templates are executed dynamically inside an ASP.NET application, so let’s view this as an experiment, which has really not been put to the test yet. But it is certainly an exciting approach, so let’s see where it takes us!!
This is similar to the previous section, except it covers the case where you need to generate raw URL’s rather than HTML <a> tags. Instead of writing:
<%= Url.Action("Edit", new { id = item.ID }) %>
you can now write:
<%= Url.UrlToTestEdit(item.ID) %>
This post is supposed to be about using T4 templates, and so far we haven't said a whole lot about them. They are certainly the magic piece that makes all this work. We are actually using two different .tt files, which cover two distinct scenarios.
Look for this logic in AspNetT4BridgeBuildProvider.cs.
So here we are, dynamically executing T4 templates at runtime in an ASP.NET app. One big caveat that I mentioned in my previous post is that you're not really supposed to do that! Copying from there: using T4 at runtime is not really supported.
There are many scenarios where the need to generate source code arises. The MVC helpers I introduced in my last post is one such example. Note that I am focusing on generating source code here, and not on scenarios where you may want to generate IL directly (which certainly do exist as well, but it’s a difference discussion).
To perform the code generation, there are several different approaches that can be used. The most simplistic one is to use a plain StringBuilder and write whatever code you want to it. It’s rather primitive, but for simple scenarios, it just might be good enough.
For more complex scenarios there are two widely different approaches that I will be discussing here: CodeDom and T4 templates. Let’s start by introducing our two competitors.
CodeDom: this has been around since the Framework 2.0 days, and is used heavily by ASP.NET. Its main focus is on language independence. That is, you can create a single CodeDom tree and have it generate source code in C#, VB, or any other language that has a C# provider. The price to pay for this power is that creating a CodeDom tree is not for the faint of heart!
T4 Templates: it’s a feature that’s actually part of Visual Studio rather than the framework. The basic idea here is that you directly write the source code you want to generate, using <#= #> blocks to generate dynamic chunks. Writing a T4 template is very similar to writing an aspx file. It’s much more approachable than CodeDom, but provides no language abstraction. Funny thing about T4 is that it’s been around for a long time, but has only been ‘discovered’ in the last year or so. Now everyone wants to use it!
Let’s say that you’re trying to generate a class that has a method that calls Console.WriteLine("Hello 1") 10 times (with the number incrementing). It’s a bit artificial, since you could just as well generate a loop which makes the call 10 times, but bear with me for the sake of illustration, and assume that we want to generate 10 distinct statements.
First, let’s tackle this with CodeDom. In CodeDom, you don’t actually write code, but you instead build a data structure which later gets translated into code. We could say that you write metacode. Here is what it would look like:
using System;
using System.CodeDom;
using Microsoft.CSharp;
using Microsoft.VisualBasic;
class Program {
static void Main(string[] args) {
var codeCompileUnit = new CodeCompileUnit();
var codeNamespace = new CodeNamespace("Acme");
codeCompileUnit.Namespaces.Add(codeNamespace);
var someType = new CodeTypeDeclaration("SomeType");
someType.Attributes = MemberAttributes.Public;
codeNamespace.Types.Add(someType);
// Create a public method
var method = new CodeMemberMethod() {
Name = "SayHello",
Attributes = MemberAttributes.Public
};
someType.Members.Add(method);
// Add this statement 10 times to the method
for (int i = 1; i <= 10; i++) {
// Create a statement that calls Console.WriteLine("Hello [i]")
var invokeExpr = new CodeMethodInvokeExpression(
new CodeTypeReferenceExpression(typeof(Console)),
"WriteLine",
new CodePrimitiveExpression("Hello " + i));
method.Statements.Add(new CodeExpressionStatement(invokeExpr));
}
// Spit out the code in both C# and VB
(new CSharpCodeProvider()).GenerateCodeFromCompileUnit(codeCompileUnit, Console.Out, null);
(new VBCodeProvider()).GenerateCodeFromCompileUnit(codeCompileUnit, Console.Out, null);
}
}
You will either find this beautiful or atrocious depending on your mind set :) Basically, writing CodeDom code is analogous to describing the code you want. Here, you are saying:
Build me a public class named SomeType in the namespace Acme. In there, create a public method named SayHello. In there, add 10 statements that call Console.Write(…).
It certainly takes a fair amount of work to do something so simple. But note that you’re not doing anything that ties you to C# or VB or any other language. To illustrate this language abstraction power, this test app outputs the code in both C# and VB, with no additional effort. That is one of the strongest points of CodeDom, and should not be discounted.
Now, let’s look at the T4 way of doing the same things. You’d write something like this (just create a test.tt file in VS and paste this in to see the magic happen):
<#@ template language="C#v3.5" #>
namespace Acme {
public class SomeType {
public virtual void SayHello() {
<# for (int i=1; i<=10; i++) { #>
System.Console.WriteLine("Hello <#= i #>");
<# } #>
}
}
}
As you can see, for the most part you’re just writing out the code that you want to generate, in the specific language that you want. So for ‘fixed’ parts of the code, it’s completely trivial. And when you want parts of the generation to be dynamic, you use a mix of <# #> and <#= #> blocks, which work the same way as <% %> and <%= %> blocks in ASP.NET. Even though it’s much simpler than CodeDom, it can get confusing at times because you’re both generating C# and writing C# to generate it. But once you get used to that, it’s not so hard.
And of course, if you want to output VB, you’ll need to write the VB equivalent. To be clear, only the ‘gray’ code would become VB. The code in the <# #> blocks can stay in C#. If you want the <# #> blocks to be in VB, you’d change the language directive at the top. The generator code and the generated code are two very distinct things!
Now that we've looked at samples using both techniques, let's take a look at their Pros and Cons, to help you make an informed decision on which one is best for your scenario.
Hopefully, this gave you good overview of the two technologies. Clearly, T4 is the more popular one lately, and I certainly try to use it over CodeDom when I can. With proper framework support, it would become an even easier choice.
One downside of using Html.ActionLink in your views is that it is late bound. e.g. say you write something like this:
<%= Html.ActionLink("Home", "Index", "Home")%>
The second parameter is the Action name, and the third is the Controller name. Note how they are both specified as plain strings. This means that if you rename either your Controller or Action, you will not catch the issue until you actually run your code and try to click on the link.
Now let’s take the case where you Action takes parameters, e.g.:
public ActionResult Test(int id, string name) {
return View();
}
Now your ActionLink calls looks something like this:
<%= Html.ActionLink("Test Link", "Test", "Home", new { id = 17, name = "David" }, null) %>
So in addition to the Controller and Action names changing, you are vulnerable to the parameter names changing, which again you won’t easily catch until runtime.
One approach to solving this is to rely on Lambda expressions to achieve strong typing (and hence compile time check). The MVC Futures project demonstrates this approach. It certainly has merits, but the syntax of Lambda expressions in not super natural to most.
Here, I’m exploring an alternative approach that uses an ASP.NET BuildProvider to generate friendlier strongly typed helpers. With those helpers, the two calls below become simply:
<%= Html.ActionLinkToHomeIndex("Home")%>
<%= Html.ActionLinkToHomeTest("Test Link", 17, "David")%>
Not only is this more concise, but it doesn’t hard code any of the problematic strings discussed above: the Controller and Action names, and the parameter names.
You can easily integrate these helpers in any ASP.NET MVC app by following three steps:
1. First, add a reference to MvcActionLinkHelper.dll in your app (build the project in the zip file attached to this post to get it)
2. Then, register the build provider in web.config. Add the following lines in the <compilation> section:
<buildProviders>
<add extension=".actions" type="MvcActionLinkHelper.MvcActionLinkBuildProvider" />
</buildProviders>
3. The third step is a little funky, but still easy. You need to create an App_Code folder in your app, and add a file with the .actions extension in it. It doesn’t matter what’s in the file, or what its full name is. e.g. add an empty file named App_Code/generate.actions. This file is used to trigger the BuildProvider.
I included all the sources in the zip, so feel free to look and debug through it to see how it works. In a nutshell:
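In rough terms (this skeleton is my reconstruction of the general shape, not the shipped source), the provider derives from System.Web.Compilation.BuildProvider and overrides GenerateCode() to contribute a CodeDom tree of helper methods to the App_Code assembly:
using System.CodeDom;
using System.Web.Compilation;
public class MvcActionLinkBuildProvider : BuildProvider
{
    public override void GenerateCode(AssemblyBuilder assemblyBuilder)
    {
        // The real implementation reflects over the Controller types to find actions
        // and emits one ActionLinkToXxx extension method per action; here we just
        // hand an (empty) compile unit to the compilation to show the plumbing.
        CodeCompileUnit unit = new CodeCompileUnit();
        assemblyBuilder.AddCodeCompileUnit(this, unit);
    }
}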
At this point, this is just a quick proof of concept. There are certainly other areas of MVC where the same idea can be applied. e.g. currently it only covers Html.ActionLink, but could equally cover Url.Action(), or HTML form helpers (standard and AJAX).
Please send feedback whether you find this direction interesting as an alternative to the Lambda expression approach.
Undeniably, this case is broken, and is the primary reason that we can’t turn on this optimization by default. Luckily, in practice this situation in not extremely common, which is why the optimization is still very usable for users that are aware of the limitations.
ASP.NET uses a per application hash code which includes the state of a number of things, including the bin and App_Code folder, and global.asax. Whenever an ASP.NET app domain starts, it checks if this hash code has changed from what it previously computed. If it has, then the entire codegen folder for the application is wiped out and everything gets recompiled from scratch.
In addition to this preview, you’ll want to also install Microsoft .NET RIA Services in order to get some useful tooling. This too can be confusing, because it makes it sound like it’s tied to RIA and Silverlight in some way, when in fact it is not.
The deal is that there is this new DomainService concept, and it is not actually tied to RIA or Silverlight in any way. For more information about the Silverlight side of things, check out Nikhil's post (and his MIX talk).
Important: after getting the ASP.NET DD preview and the RIA Services mentioned above, you’ll need to do a little extra step to avoid a tricky setup issue. Find System.Web.DynamicData.dll under DefaultDomainServiceProject\bin in the ASP.NET Preview zip file, and copy it over the one in \Program Files\Microsoft SDKs\RIA Services\v1.0\Libraries\Server (which is where the RIA install puts them).
Now you’re actually ready to start playing with DomainService.
First, you'll want to make a copy of the whole DefaultDomainServiceProject folder so you can work with it without touching the 'original'. Then, just open DefaultDomainServiceProject.sln in VS. Normally, this would be a Project Template, but right now we don't have one.
Now you can just Ctrl-F5 and you should get a working Dynamic Data app. Note how it only lets you do things for which you have CRUD methods. e.g. you can edit Products but not Categories.
To make things more interesting, try various things.
Last Friday, I gave a talk at MIX on various things that we’re working on in ASP.NET data land. This includes both some Dynamic Data features and some features usable outside Dynamic Data.
The great thing about MIX is that they make all talks freely available online shortly after, and you can watch mine here. Enjoy!
I’ll try to blog in more detail about some of the features discussed in the talk in the next few days.
There are many ways to customize an ASP.NET Dynamic Data site, which can sometimes be a bit overwhelming to newcomers. Before deciding what customization makes sense for you, it is important to understand the two major buckets that they fall into: generic customization and schema specific customization.
They can both be very useful depending on your scenario, but it is important to understand how they are different in order to make the right choices. The rule of thumb is that you want to stay in the world of generic customization whenever possible, and only use the schema specific customization when you have to. Doing this will increase reusability and decrease redundancy.
We’ll now look at each in more details.
Generic customization includes everything that you can do without any knowledge of the database schema that it will be applied to. That is, it’s the type of things that you can write once and potentially use without changes for any number of projects. It lets you achieve a consistent and uniform behavior across an arbitrarily large schema, without needing to do any additional work every time you add or modify a table.
Here are some key types of generic customization:
Under the ~/DynamicData/PageTemplates folder, you’ll find some default Page Templates like List.aspx and Edit.aspx. If you look at them, you won’t find any references to a specific table/column. Instead, they define the general layout and look-and-feel of your pages in an agnostic way.
e.g. if you look at List.aspx, you’ll see a GridView and a LinqDataSource (or EntityDataSource with Entity Framework), but the data source doesn’t have any ContextTypeName/TableName attributes, and the GridView doesn’t define a column set. Instead, all of that gets set dynamically at runtime based on what table is being accessed.
You can easily make changes to these Page Templates, such as modifying the layout, adding some new controls or adding logic to the code behind. It’s a ‘normal’ aspx page, so you can treat is as such, but you should never add anything to it that is specific to your schema. When you feel the need to do this, you need to instead create a Custom Page (see below).
Under the ~/DynamicData/FieldTemplates folder, you’ll find a whole bunch of field templates, which are used to handle one piece of data of a given type. e.g. DateTime_Edit.ascx handles DateTime columns when they’re being edited.
As is the case for Page Templates, Field Templates should always be schema agnostic. That is, you don’t write a field template that’s only meant to handle a specific column of a specific table. Instead, you write a field template that can handle all columns of a certain type.
For example, in this post I describe a field template that can handle Many To Many relationships. It will work for any such relationship in any schema (though it is Entity Framework specific).
Under Page Template above, I mentioned that the GridView didn’t specify a column set. Instead, the way it works is that there is something called a Field Generator which comes up with the set of columns to use for the current table.
The idea is that the Field Generator can look at the full column set for a table, and use an arbitrary set of rules to decide which one to include, and in what order to include them. e.g. it can choose to look at custom model attributes to decide what to show.
Steve Naughton has a great post on writing a custom field generator, so I encourage you to read that for more info on that topic.
Any time you make a customization that is specific to a certain table or column, you are doing schema specific customization. Here are the main type of things you can do:
When you want to have a very specific look for a certain page (e.g. for the Edit page for your Product table), a generic Page Template will no longer do the job. At that point, you want to create a Custom Page. Those live under ~/DynamicData/CustomPages/[TableName]. e.g. to have a custom page to edit products, you would create a ~/DynamicData/CustomPages/Products/Edit.aspx.
You can start out with the custom page being an identical copy of the page template, but then you can start making all kind of schema specific changes. For instance, you could define a custom <Fields> collection in the DetailsView, at which point you no longer rely on the Field Generator (discussed below). Taking this one step further, you can switch from a DetailsView into a FormView (or ListView), giving you full control over the layout of the fields.
Another very important type of schema specific customization is model annotation. This is what most introductory Dynamic Data demos show, where you add CLR attributes to the partial class of your entity classes.
For instance, you can add a [Range(0, 100)] attribute to an integer field to specify that it should only take values between 0 and 100. Dynamic Data comes with a number of built-in annotation attributes, and you can easily build your own to add news ways to annotate your model.
The general idea here is that it is cleaner to add knowledge at the model level than to do it in your UI layer. Anything that you can add to describe your data in more details is a good fit for model annotations.
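As a concrete sketch (illustrative only; the entity and column names are assumptions), annotations typically live in a "buddy" metadata class so that the generated entity class itself is left untouched:
using System.ComponentModel.DataAnnotations;
[MetadataType(typeof(ProductMetadata))]
public partial class Product { }
public class ProductMetadata
{
    [Range(0, 100)]
    public object UnitsInStock { get; set; }
    [Required]
    [StringLength(40)]
    public object ProductName { get; set; }
}
Dynamic Data picks these attributes up at runtime and uses them for both validation and UI generation.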
All of the different types of customization that I describe above have useful scenarios that call for them. My suggestion is to get a good understanding of what makes some of them Generic while others are Schema Specific, in order to make an informed decision on the best one to use for your scenarios.
For the longest time, I had set my blog subtitle to “Dynamic Data and other ASP.NET topics”, which some might argue would not have won any originality contests. On the bright side, it was a fine match for my equally original blog title: “David Ebbo's blog”.
So I went on a quest for a new subtitle as part of my 2009 New Year resolutions (ok, it’s the only one so far). As for changing the Title itself, I’ll save that for New Year 2010.
My first stop in this memorable journey resulted in the use of the extremely witty word ‘Ebblog’. While undeniably cool, I somehow decided to pass on it. The future will probably not tell whether it was a good move.
Still looking to capitalize on my uniquely catchy last name, I came up with the soon-to-be memorable phrase: “The Ebb and Flow of ASP.NET”.
Some online dictionary defines it as “the continually changing character of something”. I guess that’s not so bad.
And then there is Wikipedia, which defines it as “a form of hydroponics that is known for its simplicity, reliability of operation”. Of course, I don’t have a clue what hydroponics means (and neither do you), but the rest sounds pretty good.
So that’s that. Thanks for letting me waste your time. By now, the momentum is clearly building for renaming the blog Title itself, but you’ll just have to wait another year for this thriller to unfold. | http://blogs.msdn.com/davidebb/ | crawl-002 | en | refinedweb |
Summary: Learn how to create a custom activity for Microsoft Office SharePoint Server 2007 to send an e-mail message that has an attachment.
Applies to: Microsoft Office SharePoint Server 2007, Microsoft Visual Studio 2008
Mike Rand, 3Sharp
June.
public partial class SendMailWithAttachmentActivity : Activity
{ …
Now you must create some Dependency Properties. Add the following fields inside your class definition.
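As a representative sketch (the property names To, Subject, Body and AttachmentPath are assumptions, not necessarily the article's exact fields), a Windows Workflow activity exposes its inputs as DependencyProperty fields with CLR wrappers:
public static DependencyProperty ToProperty =
    DependencyProperty.Register("To", typeof(string), typeof(SendMailWithAttachmentActivity));
public string To
{
    get { return (string)GetValue(ToProperty); }
    set { SetValue(ToProperty, value); }
}
// Subject, Body and AttachmentPath follow the same pattern.
The sending logic also needs the System.Net.Mail namespace, which is why the following using directive is added: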
using System.Net.Mail;
The last thing you must do here is override the Execute method. Add the following method inside your class definition.
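Again as a sketch only (it reuses the assumed property names from above, assumes a local SMTP server, and omits error handling):
protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
{
    // Build the message, attach the file, and send it.
    using (MailMessage message = new MailMessage())
    {
        message.From = new MailAddress("workflow@example.com"); // assumed sender address
        message.To.Add(To);
        message.Subject = Subject;
        message.Body = Body;
        message.Attachments.Add(new Attachment(AttachmentPath));
        SmtpClient client = new SmtpClient("localhost"); // assumed SMTP host
        client.Send(message);
    }
    // Tell the workflow runtime that this activity has finished its work.
    return ActivityExecutionStatus.Closed;
}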
Start Visual Studio 2008.
Adding a Custom Activity to the Workflow
Now, you must add your custom activity to your simple workflow.
Running the Workflow Project from Visual Studio 2008
Press F5 to run your workflow. When you activate this workflow on a document in a document library, an e-mail message is generated and sent with the attachment you specified.
Figure 3. E-mail message with attachment
This Microsoft Office Visual How-To demonstrates how to create a simple custom activity. Custom activities enable you to encapsulate business logic in an efficient, reusable manner. You can update this custom activity to include validation, toolbox designer graphics, and the ability to include more than a single attachment.
Watch the Video
Video Length: 00:09:06
File Size: 8.19 MB WMV
Visual Studio Developer Center
Workflow Resource Center for SharePoint Server
Configuring and Deploying Workflows to SharePoint Server 2007 Using Solution Packages
Global Assembly Cache Tool
SendMailWithAttachmentsTest
Replace SendMailWithAttachmentsTest with SendMailWithAttachmentActivity.
Please give what should i replace with SendMailWithAttachmentsTest.
I have tried with SendMailWithAttachmentActivity.
I was wondering how I would be able to use this great example for a custom activity. There is an example in this link, but it uses Visual Studio 2005; I'm currently using Visual Studio 2008. It also doesn't really have a very good description of what needs to be done.
Does anyone have a solution to this? Or maybe point me in the right direction.
Thanks
Is there anyway that I can reference email groups within the SendmailWithAttachmentActivity.To attribute. I have tested it and no emails seem to get sent to the addresses in my group.
[tfl - 18 04 09] You should post questions like this to the MSDN Forums at or the MSDN Newsgroups at. You are much more likely get aquicker response using the forums than through the Community Content.
Adi Oltean's Weblog - Flashbacks on technology, programming, and other interesting things
Let's assume that all you want is to write some simple code that writes to a text file. A few assumptions:
1) You need to avoid corruptions of any kind.
2) Either all of your writes have to make it to the disk, or none of them.
3) The file is updated serially - no concurrent updates from separate processes are allowed. So only one process writes to the file at a time.
4) No, you cannot use cool new technologies like TxF.
Remember, all you want is just to write to a text file - no fancy code allowed.
What are the possible problems?
Many people mistakenly think that writing to a file is an atomic operation. In other words, this sequence of function calls is not going to cause garbage in your file. Wrong. Can you guess why? (don't peek ahead for the response).
echo This is a string >> TestFile.txt
The problem is that the actual write operation is not atomic. A potential problem is when the machine reboots during the actual write. Let's assume that your file write ultimately causes two disk sectors to be overwritten with data. Let's even assume that each of these sectors is part of a different NTFS cluster, and these two clusters are part of the same TestFile.txt file. The end of the first sector contains the string "This is" and the beginning of the second sector "a string". What if one of the corresponding hardware write commands for these sectors is lost, for example due to a machine reboot? You end up with only one of these sectors overwritten, but not the other. Corruption!
Now, when the machine reboots, there will be no recovery at the file contents level. This is by design with NTFS, FAT, and in fact with most file systems, irrespective to the operating systems. The vast majority of file systems do not support atomicity in data updates. (That said, note that NTFS does have recovery at the metadata level - in other words, updates concerning file system metadata are always atomic. The NTFS metadata will not become corrupted during a sudden reboot)
The black magic of caching
So in conclusion you might end up with the first sector written, but not with the second sector. Even if you are aware of this problem you might still mistakenly think that the first sector is always written before the second one. In other words, assuming that "this is" is always written before "a string" in the code below:
using System;
using System.IO;
class Test {
    public static void Main() {
        using (StreamWriter sw = new StreamWriter("TestFile.txt")) {
            sw.Write("This is");
            sw.Write("a string");
        }
    }
}
This assumption is again wrong. You again can have a rare situation where the machine crashes during your update, and "a string" can end up in the file, but "This is" not saved. Why?
One potential explanation is related with the caching activity. Caching happens at various layers in the storage stack. The .NET Framework performs its own caching in the Write method above. This can interfere with your actual intended order of writes.
So let's ignore .NET and let's present a second example, this time using pure Win32 APIs:
WCHAR wszString1[] = L"This is";
WCHAR wszString2[] = L"a string";
fSuccess = WriteFile(hTempFile, wszString1, sizeof(WCHAR) * wcslen(wszString1), &dwBytesWritten, NULL);
if (!fSuccess) ...
fSuccess = WriteFile(hTempFile, wszString2, sizeof(WCHAR) * wcslen(wszString2), &dwBytesWritten, NULL);
Again, here you can also have caching at the operating system level, in the Cache Manager, where the file contents can be split across several in-memory data blocks. These blocks are not guaranteed to be written in their natural order. For example, the lazy writer thread (a special thread used by Cache Manager that flushes unused pages to disk) can cause an out-of-order flush. There are other considerations that can cause an out-of-order data flush, but in general you need to be aware that any cache layers in your I/O can cause writes to be randomly reordered.
The same reasoning applies to our third example:
echo This is >> TestFile.txt
echo a string >> TestFile.txt
Again, you cannot be sure that the file will not end up corrupted - you can have rare scenarios where the resultant file with contain either the word "This" or the word "string" but not both!
The solution? One idea is to use special write modes like FILE_FLAG_WRITE_THROUGH or FILE_FLAG_NO_BUFFERING, although in these cases you lose the obvious benefit of caching. You have to pass these flags to CreateFile(). Another idea is to manually flush the file contents through the FlushFileBuffers API.
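In managed code, a rough equivalent (my sketch, not from the original post) is to open the stream with write-through semantics, which maps to FILE_FLAG_WRITE_THROUGH under the covers:
using (FileStream fs = new FileStream("TestFile.txt", FileMode.Create, FileAccess.Write,
                                      FileShare.None, 4096, FileOptions.WriteThrough))
using (StreamWriter sw = new StreamWriter(fs))
{
    sw.Write("This is");
    sw.Write("a string");
} // Dispose flushes the writer and closes the handle
Note that write-through only narrows the window; it still does not make the two writes atomic with respect to each other.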
So, how to do atomic writes, then?
From the example above, it looks like it is entirely possible that our writes migth complete partially, even if this case is extremely rare. How we can make sure that these writes are remaining atomic? In other words, my write to this file should either result in the entire write being present in the file, or no write should be present at all. Seems like an impossible problem, but that's not the case.
The solution? Let's remember that metadata changes are atomic. Rename is such a case. So, we can just perform the write to a temporary file, and after we know that the writes are on the disk (completed and flushed) then we can interchange the old file with the new file. Something like the sequence below (I used generic shell commands like copy/ren/del below but in reality you need to call the equivalent Win32 APIs):
Write process (on Foo.txt):
- Step W1: Acquire "write lock" on the existing file. (this is usually part of your app semantics, so you might not need any Win32 APIs here)
- Step W2: Copy the old file in a new temporary file. (copy Foo.txt Foo.Tmp.txt)
- Step W3: Apply the writes to the new file (Foo.Tmp.txt).
- Step W4: Flush all the writes (for example those remaining in the cache manager).
- Step W5: Rename the old file in an Alternate form (ren Foo.txt Foo.Alt.txt)
- Step W6: Rename the new file into the old file (ren Foo.Tmp.txt Foo.txt)
- Step W7: Delete the old Alternate file (del Foo.Alt.txt)
- Step W8: Release "write lock" on the existing file.
This solution has now another drawback - what if the machine reboots, or your application crashes? You end up either with an additional Tmp or Alt file, or with a missing Foo.txt but with one or two temporary files like Foo.Alt.txt or Foo.Tmp.txt). So you need some sort of recovery process that would transparently "revert" the state of this file to the correct point in time. Here is a potential recovery process:
Recovery from a crash during write (on Foo.txt):
- Step R1: If Foo.txt is missing but we have both Foo.Alt.txt and Foo.Tmp.txt present, then we crashed between Step W5 and Step W6. Retry from Step W6.
- Step R2: If Foo.txt is present but Foo.Tmp.txt is also present, then we crashed before Step W5. Delete the Foo.Tmp.txt file.
- Step R3: If Foo.txt is present but Foo.Alt.txt is also present, then we crashed between Step W6 and Step W7. Delete the Foo.Alt.txt file.
More and more problems...
The sequence of operations above looks good, but we are not done yet. Why? Sometimes shell operations like Delete, Rename can fail for various reasons.
For example, it might just happen that an antivirus or content indexing application randomly scans the whole file system once in a while. So, potentially, the file Foo.Tmp.txt will be opened for a short period which will cause either the step W7 or R1..R3 to fail due to the failed delete. And, not only that, but also Rename can fail if the old file already exists, and someone has an open handle on it. So even the steps W2 or W5 can fail too...
The fix would be to always use unique temporary file names. In addition, during the recovery process, we will want to clean up all the "garbage" from previous temporary file leftovers. So, instead of files like Foo.Tmp.txt or Foo.Alt.txt, we should use Foo.TmpNNNN.txt and Foo.AltNNNN.txt, together with a smart algorithm to clean up the remaining "garbage" during recovery. Here is the overall algorithm:
Write process (on Foo.txt):
- Step W1: Acquire "write lock" on the existing file.
- Step W2: Copy the old file in a new unique temporary file. (copy Foo.txt Foo.TmpNNNN.txt)
- Step W3: Apply the writes to the new file (Foo.TmpNNNN.txt).
- Step W4: Flush all the writes (for example those remaining in the cache manager).
- Step W5: Rename the old file in a new unique Alternate form (ren Foo.txt Foo.AltNNNN.txt)
- Step W6: Rename the new file into the old file (ren Foo.TmpNNNN.txt Foo.txt)
- Step W7: Delete the old Alternate file (del Foo.AltNNNN.txt). If this fails, simply ignore. The file will be deleted later during the next recovery.
- Step W8: Release "write lock" on the existing file.
Recovery from a crash during write (on Foo.txt):
- Step R1: If Foo.txt is missing but we have both Foo.AltNNNN.txt and Foo.TmpNNNN.txt present, then we crashed between Step W5 and Step W6. Retry from Step W6.
- Step R2: If Foo.txt is present but Foo.TmpNNNN.txt is also present, then we crashed before Step W5. Try to delete all Foo.TmpNNNN.txt files and ignore failures.
- Step R3: If Foo.txt is present but Foo.AltNNNN.txt is also present, then we crashed between Step W6 and Step W7. Try to delete all Foo.AltNNNN.txt files and ignore failures.
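Here is a compact C# sketch of that write process (my own illustration; the post itself talks in terms of shell commands and Win32 APIs). It assumes a single writer and leaves the recovery pass out:
using System;
using System.IO;
static class AtomicWriter
{
    public static void Write(string path, byte[] newContents)
    {
        string suffix = Guid.NewGuid().ToString("N");
        string tmp = path + ".Tmp" + suffix;   // unique temporary file (Step W2)
        string alt = path + ".Alt" + suffix;   // unique alternate name (Step W5)
        File.Copy(path, tmp);                                          // Step W2
        using (FileStream fs = new FileStream(tmp, FileMode.Open, FileAccess.ReadWrite,
                                              FileShare.None, 4096, FileOptions.WriteThrough))
        {
            // Step W3: apply your changes to the copy (here we simply replace the content)
            fs.Write(newContents, 0, newContents.Length);
            fs.SetLength(newContents.Length);
            fs.Flush();                                                // Step W4
        }
        File.Move(path, alt);       // Step W5 - rename is an atomic metadata change
        File.Move(tmp, path);       // Step W6
        try { File.Delete(alt); }   // Step W7 - ignore failures; recovery cleans up later
        catch (IOException) { }
    }
}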
That's it!
PingBack from | http://blogs.msdn.com/adioltean/archive/2005/12/28/507866.aspx | crawl-002 | en | refinedweb |
Navigate to the solution file (*.sln), and double-click it. By default, the solution files are copied to the following folders:
C:\Program Files\Windows Mobile 6 SDK\Samples\PocketPC\CS\WebCrawler
Microsoft Visual Studio 2005 launches and loads the solution.
Build the solution (Ctrl+Shift+B).
Deploy the solution (F5).
The mobile device and/or emulator has access to the Internet.
This code example application does not make use of the WindowsMobile namespace (or any of its child namespaces).
SDK: Windows Mobile 6 Professional SDK and Windows Mobile 6 Standard SDK
Development Environment: Visual Studio 2005.
ActiveSync: Version 4.5. | http://msdn.microsoft.com/en-us/library/bb158791.aspx | crawl-002 | en | refinedweb |
This column shows you how to secure the .NET Services Bus and also provides some helper classes and utilities to automate many of the details.
Juval Lowy
MSDN Magazine July 2009
This is Part 1 of a multipart article series on Windows 7. This article is about the new user profile storage concept in Windows 7, called Libraries.
Yochay Kiriaty
MSDN Magazine June 2009
The .NET Services Bus is arguably the most accessible, powerful, and useful piece of the new Windows Azure Cloud Computing initiative. See how it manages cloud communications.
Visual Studio 2008 and the .NET Framework 3.5 provide new tools and support that extends Windows Communication Foundation (WCF). Visual Studio 2008 also automates a number of manual WCF tasks for the developer as well.
MSDN Magazine February 2008
Instance management refers to a set of techniques used by Windows Communication Foundation to bind a set of messages to a service instance. This article introduces the concept and shows you why you need instance management.
MSDN Magazine June 2006
Managing state and error recovery using transactions is the topic of this month’s installment of Foundations.
MSDN Magazine January 2009
Kenny Kerr sings the praises of the new Visual C++ 2008 Feature Pack, which brings modern conveniences to Visual C++.
Kenny Kerr
MSDN Magazine May 2008
Paul DiLascia
MSDN Magazine August
using System.Security.Permissions;
public class BankAccount
{
[PrincipalPermission(SecurityAction.Demand,Role="Teller")]
public long OpenAccount(){...}
[PrincipalPermission(SecurityAction.Demand,Role="Customer")]
[PrincipalPermission(SecurityAction.Demand,Role="Teller")]
public long GetBalance(){...}
/* Rest of the implementation */
}
using System.EnterpriseServices;
public class BankAccount : ServicedComponent
{
[SecurityRole("Teller")]
public long OpenAccount(){...}
[SecurityRole("Customer")]
[SecurityRole("Teller")]
public long GetBalance(){...}
/* Rest of the implementation */
}
[assembly: SecurityRole("Teller")]
[assembly: SecurityRole("Customer")]
public interface IPrincipal
{
IIdentity Identity { get; }
bool IsInRole(string role);
}
IPrincipal principal = Thread.CurrentPrincipal;
public interface IIdentity
{
string AuthenticationType { get; }
bool IsAuthenticated { get; }
string Name { get; }
}
static void Main()
{
UnifiedPrincipal.SetModel(SecurityRoleModel.EnterpriseServices);
/* Rest of Main() */
}
[assembly: ApplicationName("MyApp")]
static public void SetModel(SecurityRoleModel model)
{
Assembly callingAssembly = Assembly.GetCallingAssembly();
string appName = GetAppNameFromAssembly(callingAssembly);
SetModel(appName,model);
}
static public void SetModel(string appName,SecurityRoleModel model)
{
UnifiedPrincipal principal = new UnifiedPrincipal(model);
principal.AppName = appName;
}
AppDomain currentDomain = Thread.GetDomain();
currentDomain.SetPrincipalPolicy(PrincipalPolicy.WindowsPrincipal);
Thread.CurrentPrincipal = this;
if(m_DefaultPrincipal is UnifiedPrincipal == false)
{
currentDomain.SetThreadPrincipal(this);
}
protected bool IsInWindowsGroup(string group)
{
return m_DefaultPrincipal.IsInRole(group);
}
From the May 2002 issue of MSDN Magazine | http://msdn.microsoft.com/en-us/magazine/cc301838.aspx | crawl-002 | en | refinedweb |
Creates a new process and its primary thread. The new process runs in the security context of the calling process.
The command line to be executed.
The maximum length of this string is 32,768 characters, including the Unicode terminating null character. The system adds a terminating null character to the command-line string to separate the file name from the arguments. This divides the original string into two strings for internal processing.
Windows XP/2000: The ACLs in the default security descriptor for a process come from the primary or impersonation token of the creator. This behavior changed with Windows XP with SP2 and Windows Server 2003.
Windows XP/2000: The ACLs in the default security descriptor for a thread come from the primary or impersonation token of the creator. This behavior changed with Windows XP with SP2 and Windows Server 2003.
If this parameter is TRUE, each inheritable handle in the calling process is inherited by the new process. If the parameter is FALSE, the handles are not inherited. Note that inherited handles have the same value and access rights as the original handles.
For an example, see Creating Processes.
/* The library I used is from Dev-C++ 4.9.9.2. I haven't try the VC's. The OS is Windows XP, SP2 */
The "Parameters lpApplicationName" section above says "To run a batch file, you must start the command interpreter; set lpApplicationName to cmd.exe and set lpCommandLine to the name of the batch file." But when I did so in a console program with:
CreateProcess("C:\\Windows\\system32\\cmd.exe", "runBat.bat", NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi);
or
CreateProcess(NULL, "C:\\Windows\\system32\\cmd.exe runBat.bat", NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi);
the cmd.exe ignored the "runBat.bat" completely, i.e. the new cmd instance was booted smoothly while my "runBat.bat" wasn't executed. Furthermore, when I specified the cmd.exe without its complete path as:
CreateProcess("cmd.exe", "runBat.bat", NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi);
CreateProcess() returned FALSE, and GetLastError() returned 2.
At last I found the followings work:
CreateProcess(NULL, "runBat.bat file1.txt file2.txt", NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi);
and
CreateProcess("runBat.bat ", "file1.txt file2.txt", NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi);
However, when I wanted to redirect the stdin and stdout of a called program in runBat.bat to the file1.txt and file2.txt respectively, only the former could work as expected. The latter caused such situation: file1.txt was opened with notepad and the cmd window was blocked until I closed the notepad manually. Furthermore, file2.txt had not be written in at all.
THERE SHOULD BE A WAY TO LAUNCH a process with the window as "ALWAYS ON TOP" USING THIS FUNCTION. This is not a robust enough function.
Note: This is not an appropriate forum for editorializing. A process that is not designed to be TOPMOST should not be created as TOPMOST. An application that is designed to be TOPMOST should be capable of making itself TOPMOST.
I found that the lpEnvironment is ignored on Windows Vista for some applications, but I haven't been able to track down when this occurs. One reproducible example: when I name my exe "Photoshop.exe", lpEnvironment is ignored, and when the same EXE is renamed to "p3.exe" it is no longer ignored. I don't know if this is some special case for handling compatibility with Photoshop or if this issue affects a larger population of apps.
For these applications the environment variables are inherited from the parent process rather than using the values specified in lpEnvironment. I have not run into this problem with any older platforms.
how can I fetch the value returned by the called exe?
Answer: Use GetExitCodeProcess() after the process has exited (you can use the wait function to make sure it did).
"The command line to be executed. The maximum length of this string is 32K characters. [...]
The Unicode version of this function,
CreateProcessW
, can modify the contents of this string."
In other words, I have to allocate a buffer of 32K characters if I want to provide CreateProcessW() with the lpCommandLine argument?
Answer: No; read the part that says "maximum". Questions such as that do not belong here.
Answer^2: You misunderstood my intention. I think this part of the documentation is unclear and wanted to facilitate clarification. If a function modifies a string argument and that string argument has a maximum length of 32K characters, for me that means I have to provide 32K of writable memory when I call the function or risk a theoretically possible buffer overflow. This could be clarified by stating "the function can modify the contents of this string, but will not enlarge the string (e.g., append additional characters to its end)".
STARTUPINFO startupInfo = {0};
startupInfo.cb = sizeof(startupInfo);
PROCESS_INFORMATION processInformation;
// Try to start the process
BOOL result = ::CreateProcess(
    L"C:\\Windows\\NOTEPAD.exe", NULL, NULL, NULL, FALSE,
    NORMAL_PRIORITY_CLASS, NULL, NULL, &startupInfo, &processInformation);
if (result == 0)
    throw std::runtime_error("Could not create process");
Answer^2: Wrong. UNICODE is defined. Besides, passing a wchar_t string where a char string is expected results in a compiler error, so there's no chance CreateProcessA() was used. Beyond that, CreateProcess() works fine if I use lpCommandLine to specify the executable (also as a wide char string).
Answer^3: I tried that code right now and it just works; besides I've used CreateProcess that way countless times. Sorry, both the documentation and the implementation are perfect; the problem must be yours. Check GetLastError's return value.
#include <windows.h>
#include <stdio.h>
#include <tchar.h>
#include <conio.h>

void _tmain(int argc, TCHAR *argv[])
{
    STARTUPINFO si;
    PROCESS_INFORMATION pi;
    STARTUPINFO sj;
    PROCESS_INFORMATION pj;

    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    ZeroMemory(&pi, sizeof(pi));

    ZeroMemory(&sj, sizeof(sj));
    sj.cb = sizeof(sj);
    ZeroMemory(&pj, sizeof(pj));

    // Start the child process p1.exe. Make sure p1.exe is in the
    // same folder as the current application. Otherwise write the full path in the first argument.
    if (!CreateProcess(L".\\p1.exe", NULL, NULL, NULL, FALSE, 0, NULL, NULL, &sj, &pj))
    {
        printf("Hello CreateProcess failed (%d)\n", GetLastError());
        getch();
        return;
    }

    // Start the child process p2.exe. Make sure p2.exe is in the
    // same folder as the current application. Otherwise write the full path in the first argument.
    if (!CreateProcess(L".\\p2.exe", NULL, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
    {
        printf("CreateProcess2 failed (%d)\n", GetLastError());
        getch();
        return;
    }

    // Wait until the child processes exit.
    WaitForSingleObject(pi.hProcess, INFINITE);
    WaitForSingleObject(pj.hProcess, INFINITE);

    // Close process and thread handles.
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    CloseHandle(pj.hProcess);
    CloseHandle(pj.hThread);
    getch();
}
For example: ["C:/Program Files/MyApplication/MyApp.exe" Arg1 Arg2 Arg3] ^ lpApplicationName AND lpCommandLine; or can they be different? Would I get a terminating null here: ["C:/Program Files/MyApplication/MyApp.exe"]0x00[Arg1 Arg2 Arg3]? Is the terminating null inserted in all cases, or only if the program name is repeated?
<DllImport("kernel32", CharSet:=CharSet.Auto)> _
Public Shared Function CreateProcess(ByVal lpApplicationName As String, ByVal lpCommandLine As String, ByVal lpProcessAttributes As SECURITY_ATTRIBUTES, ByVal lpThreadAttributes As SECURITY_ATTRIBUTES, <MarshalAs(UnmanagedType.Bool)> ByVal bInheritHandles As Boolean, ByVal dwCreationFlags As Integer, ByVal lpEnvironment As IntPtr, ByVal lpCurrentDirectory As String, ByVal lpStartupInfo As STARTUPINFO, ByVal lpProcessInformation As PROCESS_INFORMATION) As Integer
End Function
[DllImport("kernel32", CharSet=CharSet.Auto)]
internal static extern int CreateProcess(string lpApplicationName, string lpCommandLine, NativeTypes.SECURITY_ATTRIBUTES lpProcessAttributes, NativeTypes.SECURITY_ATTRIBUTES lpThreadAttributes, [MarshalAs(UnmanagedType.Bool)] bool bInheritHandles, int dwCreationFlags, IntPtr lpEnvironment, string lpCurrentDirectory, NativeTypes.STARTUPINFO lpStartupInfo, NativeTypes.PROCESS_INFORMATION lpProcessInformation);
Instead of using the unmanaged CreateProcess API in your managed code, use the System.Diagnostics.Process class and the Start static method. For more information, see the MSDN documentation for the Process class and for the Start method.
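To illustrate that recommendation, here is a minimal C# sketch (the executable name is just an example) that starts a process with System.Diagnostics.Process, waits for it to exit, and reads its exit code; this also answers the earlier question about fetching the value returned by the called exe.

using System;
using System.Diagnostics;

class LaunchDemo
{
    static void Main()
    {
        // Launch the child process without the shell so we get a real process handle.
        ProcessStartInfo startInfo = new ProcessStartInfo("notepad.exe");
        startInfo.UseShellExecute = false;

        using (Process process = Process.Start(startInfo))
        {
            process.WaitForExit();                                  // managed equivalent of WaitForSingleObject
            Console.WriteLine("Exit code: {0}", process.ExitCode);  // managed equivalent of GetExitCodeProcess
        }
    }
}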
This article is an excerpt from Programming Microsoft Office Business Applications by Steve Fox, Rob Barker, Paul Stubbs, Joanna Bichsel, and Erika Ehrli Cabral, from Microsoft Press (ISBN 9780735625365), copyright Microsoft Press.
Contents
Workflow Processes in the Real World
Introduction to Workflow in the 2007 Microsoft Office System
Creating Custom Workflows
Customizing Business Rules
Summary
Further Reading and Resources
Take a moment to think about all the things you do every day in your work. You may come up with a huge list of individual tasks that are directly related to your organizational responsibilities and accountabilities. You will also come to realize that you interact extensively with your peers and direct managers. If you work in a medium- or large-size company, you may also cross-collaborate with departments or groups outside of your own. All this interaction happens in many different ways. It can happen in the many e-mail messages you send and receive every day, in phone calls, during the frequent knock-on-someone-else’s door, in meetings, and in chat messages. Even when we do not realize we’re doing it, we all play roles in different business processes. We all interact and need these frequent small communication exchanges to get things done and to make the necessary decisions that keep our businesses working. However, in many cases, coordinating business processes is complicated, and employees do not collaborate in the most efficient way. Also, some business processes are quite complex in the sense that they require interaction between multiple actors and systems.
There is more to all these challenges and opportunities. While all of this out-of-band interaction takes place, we fail to update back-end systems that exist to help store information related to a business process. Sometimes we do not update systems because of all the overhead and increased work that they bring. We either get the work done or spend our time tracking metadata. Back-end systems represent a huge investment for companies, and sadly, they may be seen as overhead since they do not map to the real way that people work. In many cases, back-end systems require end users to log on to other systems to copy and paste data. The consequence: frustrated employees and slow business productivity.
Fortunately, new workflow technologies provide the option to automate and coordinate business processes. Workflows help bring together the power of human collaboration with the power of software to improve communication and task management. Workflows allow you to route specific tasks to different actors and systems. Therefore, you can streamline business processes while greatly increasing business efficiency. Additionally, workflows provide end users the ability to work productively by not requiring them to log on to all the systems in which the enterprise has invested.
The goal of this chapter is to introduce you to a new set of workflow capabilities offered by the 2007 Microsoft Office system. You will explore different Microsoft products and technologies that you can use to build custom Office Business Application (OBA) workflow solutions. Hopefully, this chapter can help you understand how to leverage your current technology investments and think about the wide variety of custom workflow solutions and architectures that you can use to optimize the efficiency of your business processes.
A workflow can be defined as a set of related tasks or activities that form an executable representation of a business process. Workflows help improve human interaction by automating individual tasks and streamlining processes.
The 2007 Microsoft Office system includes a set of applications, servers, services, and tools designed to work together to build and deploy custom workflow solutions. Additionally, you can use the Microsoft .NET Framework Windows Workflow Foundation (WF) and Microsoft Visual Studio 2008 to build powerful workflow solutions that integrate with the 2007 Microsoft Office system. Depending on your business needs, you can use different combinations of these tools and technologies to create workflow solutions that connect line-of-business (LOB) information with Office client applications. Therefore, you can provide end users with a simplified and well-known set of programs to interact with LOB data and reduce the complexity of business workflows. Figure 7-1 shows a roadmap of key developer tools and technologies you can use to create custom workflow solutions.
As a developer, you face a common question: Which set of tools and technologies should I use to build a custom solution workflow? The following sections of this chapter provide a high-level overview of the different key developer tools and technologies that you can use to build workflow-enabled applications. It also discusses advantages and disadvantages of each one to guide you to a better informed technology decision.
Windows Workflow Foundation (WF) is a platform component of the Microsoft .NET Framework 3.5. WF provides a workflow runtime engine, an extensible programming model, and tools that help build and execute workflow-enabled applications. WF allows asynchronous programming to help define persistent workflow applications. That is, you can define long-running workflows that preserve state and wait indefinitely until a user or application executes the next activity. Additionally, WF allows you to build applications that consist of one or more workflows. The workflow runtime engine executes individual activities that compose workflows one at a time and hosts the execution inside any Windows process, including console applications, Windows forms applications, Windows Services, ASP.NET Web sites, and Web services. For more information about the WF programming model, visit Windows Workflow Foundation Programming Guide.
You can use Microsoft Visual Studio 2008 to create workflow solutions using a graphic workflow designer, or you can create entire workflow solutions programmatically. Since workflows are based on activities, WF provides a Base Activity Library (BAL) that includes a set of general purpose activities that are common to workflow solutions. They include Code, Sequence, While, IFElse, and other activities that help model primitive operations that exist in different programming languages. Additionally, you can define custom activities and create custom activity libraries (CAL).
There are three main namespaces you can use to create workflows programmatically using WF; a minimal code-only sketch follows the list:
System.Workflow.Activities Defines activities that can be added to workflows to create and run an executable representation of a work process.
System.Workflow.ComponentModel Provides the base classes, interfaces, and core modeling constructs used to create activities and workflows.
System.Workflow.Runtime Contains classes and interfaces you can use to control the workflow runtime engine and the execution of a workflow instance.
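As a minimal sketch of how these namespaces fit together (the class and activity names are illustrative, not part of the WF API), the following console program defines a code-only sequential workflow and runs it with the workflow runtime engine:

using System;
using System.Threading;
using System.Workflow.Activities;
using System.Workflow.Runtime;

// A trivial sequential workflow: a single Code activity that writes a message.
public class HelloWorkflow : SequentialWorkflowActivity
{
    public HelloWorkflow()
    {
        CodeActivity sayHello = new CodeActivity("sayHello");
        sayHello.ExecuteCode += delegate { Console.WriteLine("Hello from Windows Workflow Foundation"); };
        this.Activities.Add(sayHello);
    }
}

class Program
{
    static void Main()
    {
        using (WorkflowRuntime runtime = new WorkflowRuntime())
        {
            AutoResetEvent done = new AutoResetEvent(false);
            runtime.WorkflowCompleted += delegate { done.Set(); };
            runtime.WorkflowTerminated += delegate { done.Set(); };

            // Create and start an instance of the workflow, then wait for it to finish.
            WorkflowInstance instance = runtime.CreateWorkflow(typeof(HelloWorkflow));
            instance.Start();
            done.WaitOne();
        }
    }
}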
Figure 7-2 shows the WF components: activities, workflows, custom activity libraries, the WF runtime engine, the WF services, and the WF BAL. They are all hosted in a process that executes a workflow-enabled application.
Workflow Authoring Styles
The WF supports two main authoring styles of workflow programs:
Sequential workflow Executes activities in a predefined pattern and represents a workflow as a procession of steps that must be executed in order until the last activity is completed. This type of workflow can be modeled as a flowchart. Therefore, to design a graphical representation of a workflow, you can use flowchart structures such as start, activity, repetition, loops, and finish. A good example to explain a sequential workflow is a simple notification system. Imagine that you need to build a vacation leave notification system for your company. In this solution, the workflow starts when Valeria, an employee, opens the vacation leave notification form to define the days she plans to be away for the holidays. Once the form is complete, the employee submits the form and the system sends a notification to her manager, Monica, who reviews the summary document, assessing the number of days the employee is planning on being away. If the manager approves the document, the document is moved to an “approved document library” that backs up all time-off summary documents. Next, the system will notify the rest of the team that Valeria plans to be on vacation for the holidays. However, if the manager rejects the document, the system will notify Valeria, and she will have to review the manager’s comments attached in the form. In either case, the workflow reaches an end and terminates the execution.
State machine workflow Responds to external events as they occur and represents a group of states, transitions, and events, which trigger transitions between these states. We will use a document-publishing process to explain state machine workflow. Imagine that you work for an editorial company that publishes technical articles for software developers and you are asked to build an article publishing workflow application. In this solution, the workflow starts when Hubert, a well-known contributor, submits a technical article using a Web-based publishing system. Submitting an article triggers a OnDocumentCreated event. This event calls a Web service that stores the technical article in a Microsoft Office SharePoint 2007 document library and sends an e-mail message to the corresponding subject-matter experts to ask for a technical review. The technical article will remain in a DocumentCreated state until all approvers review and approve the technical article. Once this step is completed, the system will trigger an OnDocumentApproved event. This event changes the status of the technical article to DocumentApproved and sends an e-mail message to the publisher to notify him that this particular technical article is ready for publishing. Carlos, the publisher, fills out a form specifying that this document is ready for publishing and submits the form to the printer. This will trigger an OnDocumentPublished event that changes the status of the article to DocumentPublished, after which an e-mail message is sent to the contributor notifying him that his technical article was sent to the printer. Finally, the printer logs on to the publishing system and marks the technical article as completed. This launches an OnDocumentCompleted event that changes the status of the document to DocumentCompleted and sends an e-mail message notifying all the participants in the workflow that the technical article was printed. The workflow reaches an end and terminates the execution.
Figure 7-3 shows a sequential workflow and a state machine workflow diagram.
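The state machine style described above can also be authored entirely in code. The sketch below is illustrative only; the state names are invented, and a timer-based transition (DelayActivity) stands in for the external events a real document workflow would use:

using System;
using System.Workflow.Activities;

// A two-state machine: WaitingState -> CompletedState.
public class DocumentStateMachine : StateMachineWorkflowActivity
{
    public DocumentStateMachine()
    {
        StateActivity waiting = new StateActivity("WaitingState");
        StateActivity completed = new StateActivity("CompletedState");

        // Event-driven transition inside the waiting state.
        EventDrivenActivity onTimeout = new EventDrivenActivity("OnTimeout");
        DelayActivity delay = new DelayActivity("Delay");
        delay.TimeoutDuration = TimeSpan.FromSeconds(2);
        SetStateActivity goToCompleted = new SetStateActivity("GoToCompleted");
        goToCompleted.TargetStateName = "CompletedState";
        onTimeout.Activities.Add(delay);
        onTimeout.Activities.Add(goToCompleted);
        waiting.Activities.Add(onTimeout);

        this.Activities.Add(waiting);
        this.Activities.Add(completed);
        this.InitialStateName = "WaitingState";
        this.CompletedStateName = "CompletedState";
    }
}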
As we explained earlier, the WF provides a workflow runtime engine, an extensible programming model, and tools that help build and execute workflow-enabled applications. However, if you want to build custom workflow solutions that integrate seamlessly with the 2007 Microsoft Office system, you can consider using the workflow services provided by Windows SharePoint Services 3.0 and Microsoft Office SharePoint Server 2007.
Windows SharePoint Services 3.0 provides enhanced support to create document-oriented features and helps integrate a human dimension to custom workflow solutions. Windows SharePoint Services workflows can assign tasks to users and allow users to see status on any workflow instance. Windows SharePoint Services workflows can be added to documents, list items, or content types and are made available to users at the document library or list level.
We explained earlier that the WF runtime engine executes individual activities that compose workflows one at a time and hosts the execution inside any Windows process. In the same way, Windows SharePoint Services can also act as a host for the WF runtime engine. Windows SharePoint Services supports the WF runtime engine and WF services. However, it provides a different programming model. The Microsoft.SharePoint.Workflow namespace inherits many of the classes from the System.Workflow namespace and provides a new set of classes, interfaces, and enumerations that represent the workflow functionality contained in Windows SharePoint Services. For more information, see Microsoft.SharePoint.Workflow.
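For instance, a minimal server-side sketch using this namespace might start an existing association on a list item programmatically; the site URL, list name, and association name below are placeholders, not values from this chapter:

using System;
using System.Globalization;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Workflow;

class StartWorkflowDemo
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://server/sites/demo"))
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["Expense Reports"];
            SPListItem item = list.Items[0];

            // Look up a workflow association that was previously bound to the list.
            SPWorkflowAssociation association =
                list.WorkflowAssociations.GetAssociationByName(
                    "ExpenseReport", CultureInfo.CurrentCulture);

            // Start a new workflow instance on the item.
            site.WorkflowManager.StartWorkflow(item, association, association.AssociationData);
        }
    }
}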
An important item to consider while designing workflow OBAs is the tools you can use to create them. You can create custom workflow OBA solutions using Microsoft SharePoint Designer 2007 or the workflow designer in Visual Studio 2008. The last section of this chapter explains how to create custom workflow OBAs using these tools.
In Windows SharePoint Services, a workflow that is running on a specific item is a workflow instance, and workflow templates are workflow programs that are available on a site, list, or content type. Association is the process of binding a workflow template to a site, list, or content type. Workflows can be associated to servers or Web forms running Windows SharePoint Services.
Figure 7-4 shows the workflow architecture in Windows SharePoint Services 3.0.
Workflow templates may or may not have workflow input forms. These forms allow end users to interact with workflows and can be of four different kinds.
Association form Allows site administrators to capture general parameter settings for a workflow, such as the name of the workflow and the association of the workflow with a specific list, document library, or content type.
Initiation form Allows end users to specify or override options when a workflow instance starts. A workflow can be initiated either as the result of a manual action by the end user or triggered from an event. This form is presented to the user only when a workflow is started manually.
Task form Allows end users to specify the details of tasks assigned to them.
Modification form Allows users to define new tasks or delegate the task to a different end user.
Windows SharePoint Services defines the previous forms as ASP.NET 2.0 pages. While designing OBAs, you have the option to create custom ASP.NET 2.0 Web forms to define the behavior you need for association, initiation, task, and modification forms. Additionally, you can use custom forms to control the Web design of the forms. For example, you can configure a set of forms using your company logo and Web site style sheets. You can use master pages in custom association, initiation, task, and modification forms to display the same user interface you use for your internal Web sites.
Windows SharePoint Services provides a set of definition and configuration files that contain the information necessary to create a workflow template and instantiate workflows. These files can be represented by different combinations of files, including an XML markup file that includes the declarative metadata of the workflow, in combination with a code-behind file that contains custom code representing the properties and behavior of the workflow.
The workflow definition schema allows you to define multiple settings such as the name, GUID, description, and URLs for custom forms, the name of the workflow assembly and class, and custom metadata of the workflow. Listing 7-1 shows a sample XML markup file for an expense reporting solution. Note how the AssociationUrl, InstantiationUrl, and ModificationUrl attributes of the Workflow element define the custom forms that this workflow solution uses for association, instantiation, and modification, respectively.
Listing 7-1. This is an example of an XML markup file for an expense report solution
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
<Workflow
...
AssociationUrl="_layouts/expenseReportAssociationPage.aspx"
InstantiationUrl="_layouts/expenseReportInitiationPage.aspx"
ModificationUrl="_layouts/expenseReportModificationPage.aspx">
<Categories/>
<AssociationData>
…
</AssociationData>
<MetaData>
…
</MetaData>
</Workflow>
</Elements>
This sample has been edited for clarity.
Keep in mind that Windows SharePoint Services must use ASP.NET 2.0 to define input forms for workflow solutions. Microsoft Office InfoPath forms cannot be used for Windows SharePoint Services workflows. If you have an existing ASP.NET 2.0 application and want to take advantage of workflows in SharePoint, you can use Windows SharePoint Services workflows. On the other hand, Windows SharePoint Services workflow solutions can’t use Office client applications to interact with end users. That includes InfoPath forms. If you want seamless support for 2007 Microsoft Office system clients in custom OBA workflow solutions, Microsoft Office SharePoint Server 2007 is the better option.
Microsoft Office SharePoint Server 2007 provides a true enterprise portal platform and builds upon the Windows SharePoint Services 3.0 infrastructure. In the context of OBA workflow solutions, Office SharePoint Server 2007 provides the following capabilities:
Hosting of InfoPath forms thanks to the integration with InfoPath Forms Services
Integration with 2007 Microsoft Office system client applications such as Microsoft Office Outlook 2007, Microsoft Office Word 2007, Microsoft Office PowerPoint 2007, and Microsoft Office Excel 2007
Customization of out-of-the-box (OOB) workflows
Customization of workflows
Support for InfoPath Forms
Office SharePoint Server 2007 allows users to interact with workflows using InfoPath forms in the same way that users interact with ASP.NET 2.0 Web forms for Windows SharePoint Services workflows. Some developers prefer InfoPath forms over ASP.NET 2.0 forms for a couple of reasons:
InfoPath forms can be displayed by the 2007 Microsoft Office system clients. For example, you can host an InfoPath form in Word 2007, Excel 2007, PowerPoint 2007, or Outlook 2007.
InfoPath forms are easier to create. InfoPath 2007 provides a designer experience that helps you create forms faster and with less code. Also, InfoPath forms provide built-in validation and data proofing.
You can use a workflow definition schema file to define InfoPath forms for association, initiation, tasks, and modification that integrate with Office SharePoint Server 2007 workflows solutions. To define the custom forms you want to use in a workflow template definition, you must set the form URLs of each specific process (association, initiation, modification, edit task) to the appropriate ASP.NET 2.0 predefined page of an Office SharePoint Server 2007 instance as shown in Listing 7-2. Next, you add an element that specifies the URN for the custom InfoPath form you built for each type of process. Listing 7-2 shows a sample XML markup file for an expense reporting solution that uses InfoPath forms.
Listing 7-2. Example of XML markup file for an expense report solution using InfoPath forms
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
<Workflow
...>
<!-- Tags to specify InfoPath forms for the workflow; delete
tags for forms that you do not have -->
<MetaData>
<Association_FormURN>
urn:schemas-OBAExpenseReport-com:workflow:ReviewRouting-Assoc
</Association_FormURN>
<Instantiation_FormURN>
urn:schemas-OBAExpenseReport-com:workflow:ReviewRouting-Init
</Instantiation_FormURN>
<Task0_FormURN>
urn:schemas-OBAExpenseReport-com:workflow:ReviewRouting-Review
</Task0_FormURN>
<Task1_FormURN>
urn:schemas-OBAExpenseReport-com:workflow:ReviewRouting-Review
</Task1_FormURN>
<AssociateOnActivation>false</AssociateOnActivation>
</MetaData>
</Workflow>
</Elements>
To design an InfoPath form for a workflow in Office SharePoint Server 2007, you use InfoPath 2007 and start designing a custom form the same way you would with any other InfoPath 2007 form. In general, to create InfoPath forms for workflow solutions, you start by adding controls on your form. Next, you data bind the controls so that the form can send and receive data to Office SharePoint Server 2007 and the workflow instance. Next, you add and customize a button that submits your form data to Office SharePoint Server 2007. Finally, you set the form's security level to Domain and publish the form. For more information about creating InfoPath forms, see Create an InfoPath form for office supply requisitions.
Figure 7-5 shows an InfoPath form used as an initiation form for an expense reporting workflow solution.
Integration into 2007 Microsoft Office System Client Applications
One of the most powerful features offered by Office SharePoint Server 2007 is the ability to connect custom workflow solutions with Office Client applications. This allows end users to interact in a natural way with OBA workflow solutions. End users can interact with workflows directly from Word 2007, Excel 2007, PowerPoint 2007, and Outlook 2007. As mentioned in the previous section of this chapter, Office client applications can host InfoPath forms.
This option is available for users who installed InfoPath 2007.
For example, you can open, fill out, and submit InfoPath forms from Outlook 2007. InfoPath e-mail forms help streamline workflow processes that you use to collaborate and share data using well-known applications and, therefore, reduce the need for training. Figure 7-6 shows how you can open InfoPath forms in Outlook 2007.
Another powerful possibility offered by Office SharePoint Server 2007 is the ability for end users to initiate a workflow from Word 2007. You can open a Word 2007 document and create a running workflow instance by using the Start New Workflow option. This greatly simplifies the process of starting workflows involving document approval. Earlier in this chapter, we explored a state machine workflow document publishing scenario in which you work for an editorial company that publishes technical articles for software developers and you are asked to build an article publishing workflow application. In this scenario, contributors can use Word 2007 to initiate a document approval workflow right after they finish writing an article. Figure 7-7 shows how a contributor can initiate a document approval workflow instance from Word 2007.
Another integration possibility includes reporting tools that provide an aggregate analysis of workflow history. You may be interested in analyzing workflow processes to identify problems, bottlenecks, and the performance of your organization. Office SharePoint Server 2007 includes several predefined Excel 2007 reports that provide aggregate analysis of workflow history. Additionally, you can use Visio 2007, Access 2007, or custom monitoring solutions to analyze workflow history information stored in SharePoint lists.
Out-of-the-Box Workflows
Office SharePoint Server provides a set of out-of-the-box (OOB) workflow solutions. Information workers (IWs) can use these workflow templates with no developer intervention. However, you can customize predefined workflows based on your business needs. The predefined workflows templates in Office SharePoint Server 2007 include the following:
Approval Provides business logic and predefined settings that help you route documents for approval. When the workflow is initiated, the end user defines a list of approvers.
Collect feedback Provides business logic and predefined settings that help you route documents for review. This workflow is similar to the previous one. However, this workflow allows end users to provide feedback.
Collect signatures Provides business logic and predefined settings that help you collect signatures for a document. This workflow works like the previous ones, but requires end users to provide a signature to complete a workflow. This workflow can be started only in Office client programs.
Disposition approval Provides business logic and predefined settings that help users decide whether to retain or delete expired documents.
Three-state workflow Provides business logic and predefined settings that help users track the status of a list item through three states. It can be used to manage business processes that require organizations to track a high volume of issues or items, such as customer support issues, sales leads, or project tasks.
Translation management workflow Provides business logic and predefined settings that help manage document translation. This workflow allows users to assign tasks and track the status of translated documents.
To create OOB workflow solutions, you start by opening a document library, list, or content type. Next, you select Settings, Document Library Settings, and then Workflow. This opens an Add Workflow page that allows you to configure an OOB workflow. Figure 7-8 shows how you can create an OOB workflow in a preexisting workflow document library in Office SharePoint Server 2007.
For more information, see Business Document Workflow.
Customization of Workflows
In many cases, you may want to create a custom workflow solution that is specific to your business needs. Office SharePoint Server 2007 and Windows SharePoint Services allow you to build custom workflow solutions using Office SharePoint Designer 2007 or Visual Studio 2008. The next section walks you through the process of building custom workflows using both tools.
You can create custom OBA workflow solutions for Windows SharePoint Services and for Office SharePoint Server 2007 using either Office SharePoint Designer 2007 or Visual Studio 2008. Both tools provide design templates that help you build workflow solutions. However, each authoring tool provides different capabilities.
Office SharePoint Designer 2007 provides a simple rules-based approach that enables you to create workflow solutions based on preexisting activities and to associate workflows directly to lists or document libraries. This authoring tool allows you to define workflows with no custom code. Office SharePoint Designer also simplifies the deployment process, since workflow markup, rules, and supporting files are stored in a specific document library. For those reasons, it is a tool commonly used by information workers and Web designers. You can use this authoring tool to build simple OBA workflow solutions that automate business processes in your enterprise such as document approval, document review, task management, and more. Figure 7-9 shows how you can create new workflows or open existing ones using Office SharePoint Designer as an authoring tool for workflows.
Office SharePoint Designer 2007 provides a wizard-like designer that enables you to define workflow solutions that use a set of business rules and predefined rules, for example, sending notifications via e-mail. This greatly simplifies the process of building workflows. However, when you are making a technology decision, there are some considerations you should keep in mind if you plan to use Office SharePoint Designer 2007:
Workflow association You can associate workflows to lists or document libraries, but you cannot associate workflows to content types. Once you associate a workflow, you cannot change which list or document library a workflow is attached to. Instead, you must create a new workflow and attach it to the list that you want.
Forms support You can design custom ASP.NET 2.0 forms, but SharePoint Designer does not provide support for InfoPath forms. Something else to consider is that workflows authored from Office SharePoint Designer 2007 only support initiation and task completion forms. This is because workflows designed in SharePoint designer cannot be modified while running. Therefore, you cannot define modification forms. As explained earlier, you associate workflows directly to lists or document libraries, meaning the association process happens in design time. Therefore, you cannot define association forms.
Workflow authoring styles You can create sequential workflows, but SharePoint Designer does not support state machine workflows.
Expense Report Scenario
Imagine that you work for Adventure Works Cycles, a sports store franchise that sells bicycles worldwide. This company has sales representatives that visit different countries to help new franchisees open new sports stores. You are asked to build a simple Expense Reporting OBA workflow solution. In this solution, sales representatives will start by filling out a simple expense report form that provides their name, the expense purpose, the expense total, and their direct manager’s name and e-mail address. When sales representatives submit the expense report, the workflow business rules need to verify that the expense total does not exceed $5,000. If the expense total exceeds $5,000, the workflow will submit an e-mail to the direct manager asking for approval. If the expense total is less than or equal to $5,000 the system will send an e-mail to the accounting department asking it to reimburse the sales representative with the expense total amount. Since this is a simple scenario that needs no coding, you decide to use Office SharePoint Designer 2007 to author this OBA workflow solution.
To create this workflow solution, first create a document library in Office SharePoint Server 2007. For this scenario, you name the document library Expense Reports. You must add an Expense Total column of type number to the document library. Once that is configured, use Office SharePoint Designer 2007 to associate the OBA workflow solution you are creating. Figure 7-10 shows the document library you use for the expense reporting solution.
To create the workflow solution, you open Office SharePoint Designer 2007. Next, you select the Open Site option from the File menu to select the SharePoint site where you want to create the workflow. Next, in the File menu, select New, and then click Workflow. This step will launch the Workflow Designer window, as shown in Figure 7-11.
The Workflow Designer window allows you to configure the name of the workflow. In this case, you will name the workflow ExpenseReport. Site visitors see this name when they view the Workflow pages and Workflow status in the Web browser. The Workflow Designer window enables you to select the SharePoint list or document that you want to use and associates it with the workflow. In this case, you will select the Simple Expense Reporting Demo document library that you created previously. Finally, the Workflow Designer window provides three check boxes that enable you to select whether you want to:
Allow the workflow to be started manually from an item.
Automatically start the workflow when a new item is created.
Automatically start the workflow whenever an item is changed.
For the expense reporting solution, you must select the first two checkboxes, as shown in Figure 7-11.
The Workflow Designer also provides an Initiation button that enables you to define the workflow parameters to collect data from participants who manually start workflows. In this case, you can define parameter values for the employee name, the expense purpose, the expense total, the manager’s name, and the manager’s e-mail address. Figure 7-12 shows the workflow initiation parameters for the expense reporting workflow scenario.
Once you are done defining the workflow initiation parameters, you can define the set of business rules for the workflow. The requirements of the OBA workflow solution define business rules that are specific for this scenario. We explained earlier that when sales representatives submit an expense report, the workflow business rules need to verify that the expense total does not exceed $5,000. If the expense total exceeds $5,000, the workflow submits an e-mail to the direct manager asking for approval. If the expense total is less than or equal to $5,000 the system sends an e-mail to the accounting department asking it to reimburse the sales representative with the expense total amount. The Workflow Designer window enables you to define business rules by using a set of predefined conditions that apply to your scenario. You can compare fields in a current list and perform actions based on satisfied conditions. For this solution, you need to define an if-then-else condition to compare if the expense total field is greater than $5,000. If this condition is satisfied, the manager will receive an e-mail notification, else the accounting department will receive an e-mail notification. Figure 7-13 shows the business rules definition process.
Once you click the Finish button in the Workflow Designer window, the workflow is saved and attached to the list you specified. Each time a sales representative submits an expense report, Office SharePoint Server 2007 launches a workflow instance and validates the business rules.
At the beginning of this chapter, we discussed the challenges and opportunities that modern enterprises currently face when creating solutions that better integrate back-end systems and LOB information with commonly used information worker applications. Today, developers have in their hands a new set of developer tools and technologies that simplify the process of building solutions that connect back-end systems with Office client applications. As discussed in Chapter 4, you can define Software + Services (S+S) to build services that connect to LOB systems. The Office platform simplifies the integration between software services and software to simplify the consumption of these services.
In addition, Microsoft provides Visual Studio 2008 and Visual Studio Tools for the Microsoft Office system (3.0) as authoring tools that greatly simplify the development of elaborate OBA workflow solutions. Visual Studio Tools for Office 3.0 is a component that ships with Visual Studio 2008. Chapter 2 explains how you can build custom smart client solutions for your OBAs using Visual Studio Tools for Office. In addition, Visual Studio Tools for Office has an improved set of workflow project templates that support Rapid Application Development (RAD) of custom workflow SharePoint Solutions. Together, Visual Studio 2008 and Visual Studio Tools for Office provide tools that help you create custom workflow templates that manage the life cycle of documents and list items in a SharePoint Web site. Some of these tools include a graphic designer, a complete set of drag-and-drop activity controls, and the necessary assembly references you need to build a workflow solution. Visual Studio Tools for Office also includes the New Office SharePoint Workflow wizard, which greatly simplifies the configuration process and steps for creating workflow templates in Visual Studio. Figure 7-14 shows the different Visual Studio installed templates for the 2007 Office system. These templates include a SharePoint Sequential Workflow template and a SharePoint State Machine Workflow template.
Workflow solutions in Visual Studio 2008 allow a greater level of customization than workflow solutions authored in Office SharePoint Designer 2007. Not only can you use a predefined set of activities, but you can create new activities for use as workflow components. Additionally you can define forms for association, initiation, modification, and task editing using either ASP.NET 2.0 Web forms or InfoPath forms. Visual Studio 2008 provides support to design, code, and publish the forms to the server.
As explained earlier, you can use Visual Studio 2008 to build workflow solutions for either Windows SharePoint Services or Office SharePoint Server 2007. Once compiled, workflow solutions are packaged as templates that can later be associated to different lists, document libraries, and content types. This provides another great advantage with respect to workflow solutions built with Office SharePoint Designer 2007. Another great advantage is that Visual Studio 2008 allows you to build workflow solutions and custom activities that you can reuse in different workflow solutions. You can, for example, create a custom activity that adds workflow steps as tasks in Outlook.
Finally, since you are using Visual Studio 2008, you are given all the advantages provided by Visual Studio as an authoring tool for developer solutions. Some of these advantages include the use of code-behind files, IntelliSense, debugging, use of the different workflow object models that enable workflow extensibility, and support for building custom classes and Web services that can bring LOB data.
Note You can build workflow solutions with Visual Studio 2005 using the Visual Studio 2005 Designer for Windows Workflow Foundation add-in. This add-in is available as part of the Microsoft Windows Workflow Foundation Runtime Components and Visual Studio 2005 Extensions for Windows Workflow Foundation. However, Visual Studio 2008 reduces complexity and greatly speeds development of SharePoint workflow OBA solutions. For that reason, this book chapter focuses on showing the latest workflow enhancements added to Visual Studio 2008 and Visual Studio Tools for Office. For more information about the Visual Studio 2005 Designer for Windows Workflow Foundation, see Microsoft Windows Workflow Foundation Runtime Components Beta 2.2 and Visual Studio 2005 Extensions for Windows Workflow Foundation Beta 2.2. For more information about the Visual Studio 2005 extensions for the .NET Framework 3.0 (Windows Workflow Foundation), visit Visual Studio 2005 extensions for .NET Framework 3.0 (Windows Workflow Foundation).
Visual Studio 2008 offers a basic activity library (BAL) that provides a set of predefined activities you can use to define a workflow template. An activity is a step or task that performs an action in a workflow—for example, sending an e-mail, adding tasks to Outlook 2007, adding items to SharePoint lists, or connecting to an LOB database to retrieve or save information. While designing workflows, you can use the toolbox in Visual Studio 2008 to drag and drop activities to your workflow solution. Activities are built as classes, and therefore, have properties, events, and methods as any other class. You can use activities from either the Windows Workflow tab or the SharePoint Workflow tab. The Windows Workflow tab provides a set of activities provided by the Windows Workflow Foundation, while the SharePoint Workflow tab provides a set of activities that are specific to Windows SharePoint Services and Office SharePoint Server 2007. Some of these activities include OnWorkflowActivated, CreateTask, DeleteTask, SendEmail, and CompleteTask. Additionally, you can build your own custom activities by creating a class that derives from the SequentialWorkflowActivity class if you need a sequential activity or from StateMachineWorkflowActivity if you need a state machine activity. Figure 7-15 shows the Windows Workflow 3.0 and the SharePoint Workflow toolbox tabs in Visual Studio 2008.
For more information about creating custom activities, see Tutorial: Create a Custom Activity.
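As a simple illustration (the activity name and message are invented), a custom activity can derive from the base Activity class and override Execute; a real activity might create an Outlook task or call an LOB service instead of writing to the console:

using System;
using System.Workflow.ComponentModel;

// A minimal custom activity that logs a message when the workflow reaches it.
public class LogMessageActivity : Activity
{
    public static readonly DependencyProperty MessageProperty =
        DependencyProperty.Register("Message", typeof(string), typeof(LogMessageActivity));

    public string Message
    {
        get { return (string)GetValue(MessageProperty); }
        set { SetValue(MessageProperty, value); }
    }

    protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
    {
        Console.WriteLine("Workflow step: {0}", Message);
        return ActivityExecutionStatus.Closed;   // report that the activity has finished
    }
}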
The next section of this chapter provides a high-level overview of, and guidance for, creating sequential workflows and state machine workflows using Visual Studio 2008 as an authoring tool.
Sequential Workflows
As we explained earlier, sequential workflows execute activities in a predefined pattern and represent a workflow as a procession of steps that must be executed in order until the last activity is completed. As shown in Figure 7-14, Visual Studio 2008 has a new SharePoint Sequential Workflow project template that provides a graphic workflow designer, a complete set of drag-and-drop activity controls, and the necessary assembly references you need to build a sequential workflow solution.
Previously, we talked about a scenario in which you have to build a vacation leave notification system for your company. When an employee saves a document, the direct manager receives an e-mail notification to let her know that she has to approve vacation time for an employee. If the manager approves the document, the system sends an e-mail to the employee. If the manager rejects the vacation request, the employee receives an e-mail with comments from the manager. In either case, the workflow reaches an end and terminates the execution. Because you want to add more customization and use debugging, you decide to use Visual Studio 2008.
To create this solution, you start by creating a simple Vacation and Time Off document library. You should add the following four columns to the document library, as shown in Figure 7-16.
Employee Name Create this column as Single line of text.
Manager’s Name Create this column as Single line of text.
Planned days off Create this column as Number.
Notes Create this column as Multiple line of text.
Workflow participants will use this document library to store vacation approval forms. Next, in Visual Studio 2008, you start by opening the New Project dialog box and selecting the SharePoint 2007 Sequential Workflow project located under the Office node. In the name box, you type OBAVacationApprovalDemo. This last step opens the New Office SharePoint Workflow wizard, as shown in Figure 7-17.
The next window prompts for a workflow name and a site for debugging the page. Click Next to accept the default settings. The next step requires that you select the document library, task list, and history list you want to use when debugging. In this case, you accept the default settings. The last step of the wizard requires you to define the conditions for how your workflow is started. Visual Studio 2008 allows you to automatically associate a workflow to a document library or list. Additionally, you can choose to handle the association step manually. In this case, you will associate the workflow with the Vacation and Time Off document library you created previously. Finally, the wizard asks for the conditions for how your workflow is started. In this case, you choose Manually by users and When an item is created.
Next, you open the Windows Workflow activities and the SharePoint Workflow activities in the toolbox to drag and drop activities to the Visual Studio 2008 design surface. Visual Studio 2008 enables you to create workflow diagrams just as you can create flowchart diagrams using Visio 2007. Figure 7-18 shows a completed sequential workflow diagram for the vacation leave notification system.
You can double-click each activity to customize code behind just as you can double-click buttons on Windows Forms solutions to add custom code. You can also define properties for each activity using the Properties window in Visual Studio 2008. Listing 7-3 shows the contents of the OBAVacationApprovalDemo sequential workflow class.
Listing 7-3. OBAVacationApprovalDemo sequential workflow class
using System;
using System.ComponentModel;
using System.ComponentModel.Design;
using System.Workflow.ComponentModel;
using System.Workflow.Activities;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Workflow;
using Microsoft.Office.Workflow.Utility;

namespace OBAVacationApprovalDemo {
  public sealed partial class Workflow1 : SequentialWorkflowActivity {
    private bool _taskCompleted = false;

    public Workflow1() {
      InitializeComponent();
    }

    public Guid workflowId = default(System.Guid);
    public SPWorkflowActivationProperties workflowProperties =
      new SPWorkflowActivationProperties();

    // Dependency property that holds the ID of the approval task.
    public static DependencyProperty approveTaskIdProperty =
      DependencyProperty.Register("approveTaskId", typeof(System.Guid),
      typeof(OBAVacationApprovalDemo.Workflow1));

    [DesignerSerializationVisibilityAttribute(DesignerSerializationVisibility.Visible)]
    [BrowsableAttribute(true)]
    [CategoryAttribute("Misc")]
    public Guid approveTaskId {
      get {
        return ((System.Guid)(base.GetValue(
          OBAVacationApprovalDemo.Workflow1.approveTaskIdProperty)));
      }
      set {
        base.SetValue(OBAVacationApprovalDemo.Workflow1.approveTaskIdProperty, value);
      }
    }

    // Dependency property that holds the properties of the approval task.
    public static DependencyProperty approveTaskPropertiesProperty =
      DependencyProperty.Register("approveTaskProperties",
      typeof(Microsoft.SharePoint.Workflow.SPWorkflowTaskProperties),
      typeof(OBAVacationApprovalDemo.Workflow1));

    public Microsoft.SharePoint.Workflow.SPWorkflowTaskProperties approveTaskProperties {
      get {
        return ((Microsoft.SharePoint.Workflow.SPWorkflowTaskProperties)
          (base.GetValue(OBAVacationApprovalDemo.Workflow1.approveTaskPropertiesProperty)));
      }
      set {
        base.SetValue(OBAVacationApprovalDemo.Workflow1.approveTaskPropertiesProperty, value);
      }
    }

    // Initializes the approval task that is assigned to the manager.
    private void approveTaskCreation(object sender, EventArgs e) {
      try {
        approveTaskId = Guid.NewGuid();
        approveTaskProperties =
          new Microsoft.SharePoint.Workflow.SPWorkflowTaskProperties();
        approveTaskProperties.AssignedTo =
          System.Threading.Thread.CurrentPrincipal.Identity.Name;
        approveTaskProperties.Title = "Vacation and Time off Workflow Task";
        approveTaskProperties.Description = String.Format(
          "This is a vacation and time off request " +
          "submitted by {0} [Employee Name] to {1} [Manager's Name]. " +
          "The employee is planning to take {2} days off.",
          CustomFieldValue("Employee Name"),
          CustomFieldValue("Manager's Name"),
          CustomFieldValue("Planned days off"));
        approveTaskProperties.PercentComplete = (float)0.0;
        approveTaskProperties.StartDate = DateTime.Now;
        approveTaskProperties.DueDate = DateTime.Now.AddDays(10);
        approveTaskProperties.EmailBody =
          "Your vacation and time off request was approved by your manager.";
        approveTaskProperties.SendEmailNotification = true;
      }
      catch (Exception ex) {
        throw (new Exception("Unable to initialize workflow task.", ex));
      }
    }

    // Reads a column value from the list item that the workflow instance is running on.
    private string CustomFieldValue(string fieldName) {
      object item = this.workflowProperties.Item[fieldName];
      string s = this.workflowProperties.Item.Fields[fieldName]
        .GetFieldValueAsText(item);
      if (s != null) {
        return s;
      }
      else {
        return String.Empty;
      }
    }

    // While-activity condition: keep waiting until the approval task is completed.
    private void approveTaskNotCompleted(object sender, ConditionalEventArgs e) {
      e.Result = !_taskCompleted;
    }

    // Dependency property that receives the task properties after an OnTaskChanged event.
    public static DependencyProperty afterApproveTaskPropertyChangeProperty =
      DependencyProperty.Register("afterApproveTaskPropertyChange",
      typeof(Microsoft.SharePoint.Workflow.SPWorkflowTaskProperties),
      typeof(OBAVacationApprovalDemo.Workflow1));

    public Microsoft.SharePoint.Workflow.SPWorkflowTaskProperties afterApproveTaskPropertyChange {
      get {
        return ((Microsoft.SharePoint.Workflow.SPWorkflowTaskProperties)
          (base.GetValue(OBAVacationApprovalDemo.Workflow1.afterApproveTaskPropertyChangeProperty)));
      }
      set {
        base.SetValue(OBAVacationApprovalDemo.Workflow1.afterApproveTaskPropertyChangeProperty, value);
      }
    }

    // Marks the task as completed once its PercentComplete reaches 100 percent.
    private void onTaskChanged1_Invoked(object sender, ExternalDataEventArgs e) {
      if (afterApproveTaskPropertyChange.PercentComplete == 1.0) {
        _taskCompleted = true;
      }
    }
  }
}
Once you are done writing code for the vacation leave notification system, you can test the solution by using the Visual Studio debugger. When you debug the solution, Visual Studio deploys the solution to a SharePoint site and it adds the workflow template to a library or list. You can start an instance of the workflow template to test the solution while using standard debugging tools that help you debug your code as you would do with any other Visual Studio solution.
State Machine Workflows
As we explained earlier, state machine workflows respond to external events as they occur and represent a group of states, transitions, and events, which trigger transitions between these states. As shown in Figure 7-14, Visual Studio 2008 also provides a new SharePoint State Machine Workflow project template that provides a graphic workflow designer, a complete set of drag-and-drop activity controls, and the assembly references you need to build a state machine workflow solution.
To create state machine in Visual Studio 2008, start by creating a document library or list that you want to use for a custom solution. Next, in Visual Studio 2008, open the New Project dialog box and select the SharePoint 2007 State Machine Workflow project located under the Office node. In the name box, type the name of your solution. This last step opens the New Office SharePoint Workflow wizard, as shown in Figure 7-16.
The design, development, and debugging of state machine workflows is almost identical to that of sequential workflows. As mentioned before, the only difference is that state machine workflows are event-driven, and therefore, you can create event-driven workflow solutions. Figure 7-19 shows a state machine workflow diagram for the article publishing workflow application scenario we discussed earlier.
Deployment
Visual Studio 2008 simplifies the deployment process by providing a deployment wizard that helps you create a workflow template package that you can use to deploy in different servers. Additionally, when you debug a workflow solution, Visual Studio 2008 deploys the workflow template and required configuration files to the SharePoint development site you used to create your solution. However, if you want to deploy the workflow template to a different server, you must perform additional deployment and configuration steps.
You can create a feature package to encapsulate a workflow solution and deploy it to different servers. A feature package is a CAB file with a .wsp file-name extension that contains the following files:
Feature.xml XML-based file that contains a manifest listing the contents of a workflow solution. It provides high-level information, including the title, description, version, and scope of the workflow. Listing 7-4 shows a sample feature.xml file for the OBAVacationApprovalDemo solution.
Listing 7-4. Sample feature.xml file for the OBAVacationApprovalDemo solution
<?xml version="1.0" encoding="utf-8" ?>
<Feature Id="cf5c48e7-3428-4982-a039-898cbff616c2"
  Title="OBAVacationApprovalDemo feature"
  Description="Vacation and Time Off Approval"
  xmlns="http://schemas.microsoft.com/sharepoint/">
  ...
</Feature>
Workflow.xml XML-based file that specifies details about the workflow assembly, metadata, and the custom forms (InfoPath forms or ASP.NET Web forms) needed for the workflow. Listing 7-5 shows a sample workflow.xml file for the OBAVacationApprovalDemo solution.
Listing 7-5. Sample workflow.xml file for the OBAVacationApprovalDemo solution
<?xml version="1.0" encoding="utf-8" ?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
<Workflow
Name="OBAVacationApprovalDemo"
Description="Vacation and Time Off Approval Workflow"
Id="36cc9d57-d857-42ea-ad90-e461d58203ac"
CodeBesideClass="OBAVacationApprovalDemo.Workflow1"
CodeBesideAssembly="OBAVacationApprovalDemo, Version=1.0.0.0, Culture=neutral,
PublicKeyToken=6f9ec6d2f579b3c8">
<Categories/>
<MetaData>
<StatusPageUrl>_layouts/WrkStat.aspx</StatusPageUrl>
</MetaData>
</Workflow>
</Elements>
Compiled assembly The feature package installs a compiled workflow assembly into the global assembly cache (GAC). We recommend that you sign the assembly using a strong key before you deploy the workflow solution to the server.
Custom forms You must include custom forms needed for the workflow. If the workflow solution uses ASP.NET Web forms, the feature package must provide instructions to deploy the forms to the Layouts folder of the server that will run the workflow solution. On the other hand, if the workflow solution uses InfoPath forms, the feature package must provide instructions to deploy the forms to the Features folder of the server that will run the workflow solution shown in Listing 7-7.
You can install InfoPath forms on the server automatically if you register the forms in the appropriate elements of the feature.xml file.
Building a Feature Package
To build a feature package, you start by defining the solution files and the destination directory of all workflow solutions that must be deployed to the front-end Web server. The previous configuration and installation instructions must be defined using a manifest.xml file and a CAB file.
Manifest.xml XML-based file used as a header file that defines the files that must be deployed to a front-end Web server. Listing 7-6 shows a sample manifest.xml file used for the OBAVacationApprovalDemo solution.
Listing 7-6. Sample manifest.xml file used for the OBAVacationApprovalDemo solution
<?xml version="1.0" encoding="utf-8"?>
<Solution SolutionId="36cc9d57-d857-42ea-ad90-e461d58203ac"
xmlns="http://schemas.microsoft.com/sharepoint/">
<FeatureManifests>
<FeatureManifest Location="OBAVacationApprovalDemo\feature.xml"/>
</FeatureManifests>
<Assemblies>
<Assembly DeploymentTarget="GlobalAssemblyCache"
Location="OBAVacationApprovalDemo.dll"/>
</Assemblies>
</Solution>
Solution.ddf file Diamond Directive File (DDF) that specifies which files to include in the output CAB file. Listing 7-7 shows a sample solution.ddf file used for the OBAVacationApprovalDemo solution.
Listing 7-7. Sample solution.ddf file used for the OBAVacationApprovalDemo solution
.OPTION EXPLICIT
.Set CabinetNameTemplate=OBAVacationApprovalDemo.wsp
.Set DiskDirectoryTemplate=CDROM
.Set CompressionType=MSZIP
.Set UniqueFiles="ON"
.Set Cabinet=on
.Set DiskDirectory1=OBAVacationApprovalDemo
Solution\manifest.xml manifest.xml
.Set DestinationDir=OBAVacationApprovalDemo
OBAVacationApprovalDemo\Feature.xml
OBAVacationApprovalDemo\workflow.xml
OBAVacationApprovalDemo\bin\debug\OBAVacationApprovalDemo.dll
Once you create the previous feature package files, you can create a .wsp package file using the makecab.exe command-line utility, as shown in Listing 7-8.
Listing 7-8. Create a .wsp package file using the makecab.exe command-line utility
makecab /f Solution\solution.ddf
Once you create the feature package, you use the stsadm.exe command-line tool to install and activate the workflow solution. Listing 7-9 shows how to install a feature using the stsadm command-line tool.
Listing 7-9. Use the stsadm.exe command-line tool to install the workflow solution
stsadm -o installfeature -filename <path of the Feature.xml file relative to the
12\TEMPLATE\FEATURES folder>
Listing 7-10 shows how to activate a feature using the stsadm command-line tool.
Listing 7-10. Use the stsadm.exe command-line tool to activate the workflow solution
stsadm -o activatefeature -name <folder in the FEATURES directory containing the
Feature.xml file> -url <URL of the SharePoint site>
It is recommended that a server administrator deploy the workflow solution to a front-end Web server. Once the workflow solution is installed and activated, a site administrator must associate workflows with lists, document libraries, or content types.
So far, we’ve explored sample scenarios that have well-known business rules. For example, in the expense report scenario, the business rules are always the same. This workflow solution defines the direct manager as a single approver, and the content of the notifications is predefined as well. Depending on the expense total value, the workflow solution will either send a notification to the manager or to the accounting department. However, there are many real-world applications that have complex business rules. Routing for approval can depend on many business variables, and notifications can change depending on some other variables. Imagine that in the same expense reporting solution, you have to route an expense report to up to ten different managers, depending on the expense purpose, the expense total, and the date of submission. Additionally, depending on the expense purpose, the content of the notifications sent by the workflow will have some slight differences. This means that there may be multiple workflow solutions with different routing levels and notifications. In all previous solutions discussed in this chapter, the approvers and notifications of the solutions are predefined, meaning that each workflow instance will always follow the same business logic and execute the same predefined activities. However, for OBA workflow solutions in which you need more flexibility, you must build a separate component that encapsulates complex business rules. Depending on the complexity of the business rules of your solution, you may either create a database that stores and validates all these business rules, create custom business layer classes or Web services, or use Excel 2007 as a business rule storage for decision sets.
Excel 2007 simplifies the rules capture and decision process for notifications and approvals. Implementing decision tables in Excel 2007 is inexpensive for companies that have already invested in 2007 Microsoft Office system licenses. To add to the benefits, Excel 2007 provides a comfortable environment for rules administrators and requires minimal investment in training. Figure 7-20 shows routing rules in Excel 2007, and Figure 7-21 shows notification rules in Excel 2007.
You can use Excel 2007 to store decision tables that capture approval, routing, and notification values. Thanks to the Open XML Formats, you can easily extract routing and approval domain information from Excel 2007 spreadsheets. Depending on your business needs, you can create a sequential or state machine workflow template using Visual Studio 2008. As part of your workflow template solution, you can use the Open XML Object Model API to retrieve business rules information from the decision tables stored in Excel 2007 spreadsheets.
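As a rough illustration of that last step, the following sketch reads the raw cell values of a decision table worksheet using the strongly typed Open XML SDK classes (a later SDK than the packaging API available when this chapter was written); the workbook name and sheet layout are assumptions:

using System;
using System.Linq;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Spreadsheet;

class DecisionTableReader
{
    static void Main()
    {
        // "RoutingRules.xlsx" is a placeholder for the workbook that holds the decision table.
        using (SpreadsheetDocument document = SpreadsheetDocument.Open("RoutingRules.xlsx", false))
        {
            WorkbookPart workbookPart = document.WorkbookPart;
            WorksheetPart worksheetPart = workbookPart.WorksheetParts.First();
            SharedStringTablePart sharedStrings = workbookPart.SharedStringTablePart;

            // Walk the rows and cells of the decision table and print the raw values.
            foreach (Row row in worksheetPart.Worksheet.Descendants<Row>())
            {
                foreach (Cell cell in row.Elements<Cell>())
                {
                    string value = cell.CellValue == null ? string.Empty : cell.CellValue.Text;

                    // Text cells store an index into the workbook's shared string table.
                    if (cell.DataType != null && cell.DataType.Value == CellValues.SharedString && sharedStrings != null)
                    {
                        value = sharedStrings.SharedStringTable
                            .Elements<SharedStringItem>()
                            .ElementAt(int.Parse(value)).InnerText;
                    }

                    Console.Write(value + "\t");
                }
                Console.WriteLine();
            }
        }
    }
}

A workflow activity could apply the same pattern to load routing and notification rules before deciding which approvers and notifications to use.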
Now that you have learned about the workflow capabilities offered by the 2007 Microsoft Office system, you can start thinking about different custom OBA workflow solutions that you can create for your company. You can use different Microsoft products and technologies to leverage your current technology investments and greatly simplify your business processes. Creating workflow solutions can be as simple as running configuration wizards using Office SharePoint Designer 2007, and as flexible as you can imagine if you use Visual Studio 2008 as an authoring wizard. What’s most interesting is that you can enable workflow participants to use Office client applications, since they are already comfortable working with these programs. The possibilities are endless and rely completely on your business needs. Enjoy the process of building custom OBA workflow solutions. The results can be gratifying.
The 2007 Microsoft Office system is a true developer platform that helps connect LOB information and Office client applications. Workflow solutions integrate many different Microsoft products and technologies, and we want to provide as many resources as possible. Space is limited, but here is a list of key developer resources that can help you take a deep dive into the workflow capabilities offered by different Microsoft products and technologies.
Introducing Microsoft Windows Workflow Foundation: An Early Look
Windows Workflow Foundation Samples
Simple Human Workflow with Windows Workflow Foundation
Essential Windows Workflow Foundation (Microsoft .NET Development Series) by Dharma Shukla and Bob Schmidt (Addison-Wesley Professional)
Developer Introduction to Workflows for Windows SharePoint Services 3.0 and SharePoint Server 2007
Understanding Workflow in Microsoft Windows SharePoint Services and the 2007 Microsoft Office System
How-To Video: Building a Basic Approval Workflow with SharePoint (MOSS 2007) and Visual Studio
Windows SharePoint Services 3.0: Software Development Kit (SDK)
SharePoint Server 2007 Developer Portal
SharePoint Server 2007 Software Development Kit
Workflow Resource Center
Andrew May's WebLog: SharePoint Workflow Object Model Maps for Download
Microsoft SharePoint Products and Technologies Team Blog: Workflow
Business Document Workflow
Workflow projects in CodePlex
Workflow in the 2007 Microsoft Office System by David Mann (Apress)
InfoPath Forms for Workflows
Building Simple Custom Approval Workflows with InfoPath 2007 Forms
Scenarios for Using InfoPath and InfoPath Forms Services
Use InfoPath E-mail Forms in Outlook
Using InfoPath E-mail Forms
Office SharePoint Designer 2007 Blog
Microsoft Office SharePoint Designer 2007: Create a Workflow
Workflow Development in Office SharePoint Designer
SharePoint Workflow Solutions
Visual Studio Tools for Office Team Blog
Workflow Deployment Using Features
How to: Deploy a Workflow Template
Creating a Solution Package in Windows SharePoint Services 3.0
Stsadm command-line tool (Office SharePoint Server)
Introducing the Office (2007) Open XML File Formats
Open XML SDK Documentation
Manipulating Excel 2007 and PowerPoint 2007 Files with the Open XML Object Model (Part 1 of 2)
Manipulating Excel 2007 and PowerPoint 2007 Files with the Open XML Object Model (Part 2 of 2)
Preparing Open XML documents using MOSS and WF
Office Developer Center
Microsoft Office Interactive Developer Map
Office Business Applications: Price Exception Management
Erika Ehrli Cabral’s Blog | http://msdn.microsoft.com/en-us/library/cc534997.aspx | crawl-002 | en | refinedweb |
Alright, I understand your point.
I can smell my cortex burning. :)
You argue for the efficiency of this solution, but you only speak of speed, not of memory. This solution assumes that you are willing to shoulder the significant extra memory burden of keeping multiple copies of the queue for each queue object. Of course, if you are using immutable objects, then memory concerns are probably not at the top of your priority list, and .NET in general does not have memory efficiency as a primary design goal. However, whereas the immutable stack does not take any more memory than the mutable stack (and can in fact take less when you have multiple instances derived from each other), the immutable queue will always take significantly more memory than its mutable counterpart. I can see the lack of transparency into these kinds of implementation details leading to significant problems down the road.
I'm not following your train of thought here. You agree that an immutable stack takes no more memory than a mutable stack, and sometimes far less when multiple instances share state. An immutable queue is built out of two immutable stacks, so why shouldn't an immutable queue gain the same sharing benefit?
I do not see why an immutable queue would take up significantly more memory than a mutable queue. I agree that an immutable queue spends more _time_ allocating memory than a mutable queue, but the garbage collector will throw away the old state if it is no longer alive. Why should the working set of an immutable queue be any larger than the working set of a mutable queue?
"An immutable queue is built out of two immutable stacks..."
And therefore uses up roughly twice the memory, because it has to keep both the forwards and backwards versions in memory at the same time, whereas with a normal queue the backwards version doesn't even need to exist. Is that not obvious?
I'm still not following you.
Can you show me a sequence of enqueues and dequeues which results in a queue which logically has n items, where the total size of the forward and backwards stacks added together is not n?
David, look at it this way.
Define f as the size of the forwards stack, b as the size of the backwards stack. The memory size of the queue is f + b.
If you enqueue you push something onto the backwards stack. So enqueueing goes from memory size f + b to f + (b + 1).
If you dequeue, there are three cases. The queue starts with memory size f + b:
if f >= 2 then the new queue has size (f - 1) + b
else if f == 1 && b == 0 then the new queue has size 0
else if f == 1 && b > 0 then the new queue has size b + 0
Which in every case goes from f + b to f + b - 1.
So, enqueuing increases the memory size by one, dequeuing decreases the memory size by one.
Is that clear?
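To make the bookkeeping concrete, here is a compact sketch of the kind of two-stack immutable queue being discussed (an illustration only, not Eric's actual implementation):

using System;

public sealed class ImmutableStack<T>
{
    public static readonly ImmutableStack<T> Empty = new ImmutableStack<T>(default(T), null);
    private readonly T head;
    private readonly ImmutableStack<T> tail;
    private ImmutableStack(T head, ImmutableStack<T> tail) { this.head = head; this.tail = tail; }
    public bool IsEmpty { get { return tail == null; } }
    public T Peek() { if (IsEmpty) throw new InvalidOperationException("empty stack"); return head; }
    public ImmutableStack<T> Push(T value) { return new ImmutableStack<T>(value, this); }
    public ImmutableStack<T> Pop() { if (IsEmpty) throw new InvalidOperationException("empty stack"); return tail; }
}

public sealed class ImmutableQueue<T>
{
    public static readonly ImmutableQueue<T> Empty =
        new ImmutableQueue<T>(ImmutableStack<T>.Empty, ImmutableStack<T>.Empty);

    private readonly ImmutableStack<T> forwards;  // next items to dequeue, front on top
    private readonly ImmutableStack<T> backwards; // recently enqueued items, newest on top

    private ImmutableQueue(ImmutableStack<T> forwards, ImmutableStack<T> backwards)
    {
        this.forwards = forwards;
        this.backwards = backwards;
    }

    public bool IsEmpty { get { return forwards.IsEmpty && backwards.IsEmpty; } }

    // Enqueue adds one node: total size goes from f + b to f + (b + 1).
    public ImmutableQueue<T> Enqueue(T value)
    {
        if (IsEmpty)
            return new ImmutableQueue<T>(ImmutableStack<T>.Empty.Push(value), ImmutableStack<T>.Empty);
        return new ImmutableQueue<T>(forwards, backwards.Push(value));
    }

    public T Peek()
    {
        if (IsEmpty) throw new InvalidOperationException("empty queue");
        return forwards.Peek();
    }

    // Dequeue drops one node; when the forwards stack runs out, the backwards
    // stack is reversed to become the new forwards stack.
    public ImmutableQueue<T> Dequeue()
    {
        if (IsEmpty) throw new InvalidOperationException("empty queue");
        ImmutableStack<T> remaining = forwards.Pop();
        if (!remaining.IsEmpty)
            return new ImmutableQueue<T>(remaining, backwards);

        ImmutableStack<T> reversed = ImmutableStack<T>.Empty;
        for (ImmutableStack<T> b = backwards; !b.IsEmpty; b = b.Pop())
            reversed = reversed.Push(b.Peek());
        return new ImmutableQueue<T>(reversed, ImmutableStack<T>.Empty);
    }
}

Enqueue and Dequeue each return a new queue that shares all of its remaining nodes with the old one, which is why a queue of n = f + b items never holds more than n nodes.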
Eric, wouldn't it be simpler to point out that f and b don't contain duplicates of each other's contents?
Dr Calvin: "I make the robots appear more human"
Det. Spooner: "There wasn't that easier to say?"
Dr Calvin: "No, not really"
;)
Eric, your articles are very interesting, thank you!
A stack of pushed objects and a lazily instantiated stack-cache of elements to pop - wow, you're a genius!
But this queue is very unreliable in a multithreaded environment.
Look at this line (enqueuing a new element):
_myQueue = _myQueue.Enqueue(newElement);
We are going to lose elements here when several threads do this!
I think we'll always have to use locks anyway.
No, the problem is that you are narrowly defining what you mean by "reliable in a multithreaded environment". You are defining 'reliable' to mean 'two threads can enqueue "the same" queue and both enqueued items are present in the resulting queue'. This is a very "procedural programming" way to think about thread safety -- thread safety is PRESERVATION OF ALL SIDE EFFECTS.
If that's the kind of reliability you want then you should probably be using a mutable threadsafe queue! If you try to use an immutable queue for this purpose then essentially you will have to synchronize access not to the queue -- the queue is threadsafe -- but to the variable which contains the "canonical" version of the queue.
Immutable queues provide a _different_ kind of reliability, namely "two threads can enqueue the same queue at the same time, and the result on each thread is NOT polluted by the result on the other." This is a very functional programming way of thinking about thread safety -- thread safety is IMMUNITY FROM SIDE EFFECTS. Enqueueing X onto queue Q1 should result in the queue which is Q1 plus X, always, independent of any nonsense happening to Q1 on another thread.
Once you embrace a world where side effects are to be eliminated, not preserved, things get much easier to reason about.
There is a little mess with immutable and lock-free structures in the comments. They are different things. Immutable structures are thread safe by the "don't worry about it" principle, but they don't provide data coherency. Lock-free structures are thread safe in the same sense and are also coherent.
Eric,
Two features that would make this sort of thing easier would be case classes a la Scala, and pattern matching. That way you could write Haskell style algebraic data types easily without all that boiler plate.
I got thinking about immutability when I was trying to use Enums to create lists of things the IDE recognized. As you know,
Example:
public enum fruitType
{
Apple,
Orange,
Pear
}
This is so much better than creating lists of things that the IDE doesn't recognize, i.e. Hashtable dictionaries.
public Hashtable fruitTypeHT = new Hashtable ();
fruitTypeHT["Apple"] = null;
fruitTypeHT["Orange"] = null;
fruitTypeHT["Pear"] = null;
In the case of the Enumerated list, if you wish to change Pear to AsianPear it is simple to use the refactoring to rename all instances of Pear to AsianPear. There is no such help for the Hashtable version, plus, it is possible to still modify the hashtable dictionary contents during run-time.
It would be fantastic if we could have "popsicle" Enums then we could fill them from the database during program startup, yet still refer to concrete items in the Enum list in the code. I know this may seem like wanting to have your cake and eat it too; maybe so. But this is the basic difficulty with working with meta-data that is held in database tables. Your code has to refer to specific items of meta-data, but you want a way to easily find or change all specific instances referenced. This would be a great problem to solve, and I am convinced immutability will be a key part of the solution.
One thing I tried to my hand at was creating an Immutable Hashtable and an Immutable ArrayList. They are wrappers for standard Hashtables and ArrayLists, but there is a makeReadOnly () method in each that freezes the list from that point on. I use this to handle some forms of meta-data from the database, where I don't want my program to ever attempt to change or add items to the list pulled from the DB.
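The freeze-after-construction ("popsicle") wrapper described above can be sketched roughly like this (a hypothetical example, not the commenter's actual code):

using System;
using System.Collections;

// Behaves like a normal dictionary until makeReadOnly is called; after that,
// any attempt to modify it throws.
public class FreezableHashtable
{
    private readonly Hashtable inner = new Hashtable();
    private bool isReadOnly;

    public void makeReadOnly()
    {
        isReadOnly = true;
    }

    public object this[object key]
    {
        get { return inner[key]; }
        set
        {
            if (isReadOnly)
                throw new InvalidOperationException("The collection has been frozen.");
            inner[key] = value;
        }
    }

    public bool Contains(object key)
    {
        return inner.Contains(key);
    }
}

Metadata loaded from the database at startup can be added through the indexer and then frozen with makeReadOnly, so later code can read it but never modify it.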
What do we have to do to get MS to beef up Enums? I'm already standing on my head!!
can someone explain why you wouldn't use a linked list with a front pointer to the next item to be dequeued and a back pointer to the last item queued?
D
There are three possible cases:
1) Singly linked list with links pointing back to front. How are you going to efficiently "back up" the front pointer on dequeue?
2) Singly linked list with links pointing front to back. How are you going to enqueue on the back pointer without mutating the current back link? That link is immutable.
3) Doubly linked list -- same problem as #2; you have to mutate state.
Linked lists make great mutable queues. The goal of this series is to describe how to make immutable data structures.
I know there is some mistake in my code for making student data using a linked list; please help me solve this problem.
////////////////////////////////////////////////
using System;
namespace ConsoleApplication10
{
    /// <summary>
    /// Summary description for Class1.
    /// </summary>
    class Class1
    {
        static void newdata()
        {
            Console.WriteLine("Successful");
            Console.ReadLine();
        }

        static void display()
        {
            Console.WriteLine("There is nothing to Display.");
        }

        //static void e_exit()
        //{
        //}

        [STAThread]
        static void Main(string[] args)
        {
            string input;
            Console.WriteLine("Student Record");
            Console.WriteLine("==============");
            Console.WriteLine("a- Enter Data.");
            Console.WriteLine("b- Display Data.");
            Console.WriteLine("c- Exit.");
            input = Console.ReadLine();
            switch (input)
            {
                case "a": newdata(); break;
                case "b": display(); break;
                // case "c": e_exit(); break;
            }
            // C# is case sensitive, so the original call to main() does not compile;
            // calling Main(args) redisplays the menu.
            Main(args);
            Console.ReadLine();
        }
    }
}
By default, Microsoft Office SharePoint Server 2007 provides the Microsoft Single Sign-On (SSO) service for storage and mapping of credentials for use in connecting with third-party or back-end systems. Many companies already have developed an in-house credential storage system or use a solution other than the Microsoft Single Sign-On service. As an alternative to maintaining credential mapping in two places, Office SharePoint Server 2007 provides a mechanism called pluggable SSO. This feature allows you to specify an alternate SSO provider to the standard SSO provider in Office SharePoint Server 2007.
Before you build the SSO provider, you must set up your environment. This walkthrough assumes you have set up Office SharePoint Server 2007, installed a copy of the AdventureWorks 2000 database from the Microsoft Download Center, and have ensured that the domain name is LITWAREINC. If you are using a different domain name, you must adjust the code examples in this walkthrough.
The domain accounts and groups shown in the following table are assumed to be present.
ExternalPartners - Domain Group
InternalSales - Domain Group
Tom Tompson - Domain User
Jerry Jones - Domain User
InternalAccess - Domain User
ExternalAccess - Domain User
For complete instructions on how to set up the database and necessary user accounts, see the README.txt file provided with the AdventureWorks 2000 database.
Replacing the default SSO provider in Office SharePoint Server 2007 involves implementing the Microsoft.SharePoint.Portal.SingleSignon.ISsoProvider interface, installing it into the global assembly cache, and registering the new SSO provider with Office SharePoint Server 2007.
You can register only one SSO provider for Office SharePoint Server 2007. Registering a new SSO provider replaces the default SpsSsoProvider class in Office SharePoint Server 2007. Because only one SSO provider can be in use at a time, it is recommended you stop the Microsoft Single Sign-On service when using a custom SSO provider.
You will need to implement the GetCredentials and GetSsoProviderInfo methods of the ISsoProvider interface to create a minimally functional SSO provider. This walkthrough shows you how to create a simple SSO provider and use it to access data through the Business Data Catalog.
In this walkthrough, our custom SSO provider maps users who are in the InternalSales group to the InternalAccess user account for retrieving product data from the AdventureWorks 2000 database.
This section shows you how to build and register a simple SSO provider, and describes exception handling for the provider.
Downloads To download the sample provider, see SharePoint Server 2007: Software Development Kit.
You create an SSO provider assembly in Microsoft Visual Studio 2005 by creating a class library project. Add a reference to the Microsoft.SharePoint.Portal.SingleSignon.dll (found in the %ProgramFiles%\Common Files\Microsoft Shared\web server extensions\12\ISAPI directory) to your project. Implement the ISsoProvider interface, as shown in the following example.
using System;
using System.Collections.Generic;
using System.Reflection;
using System.Text;
using System.Web;
using System.Web.Services;
using Microsoft.SharePoint.Portal.SingleSignon;
namespace SampleSSOProvider
{
/// <summary>
/// SimpleSSOProvider
/// </summary>
public class SimpleSSOProvider: ISsoProvider
{
public Application.ApplicationInfo[] GetApplicationDefinitions()
{
//NOTE: Used by SpsSsoProvider, not necessary for SimpleSSOProvider
throw new NotSupportedException();
}
public Application.ApplicationField[] GetApplicationFields(string AppID)
{
//NOTE: Used by SpsSsoProvider, not necessary for SimpleSSOProvider
throw new NotSupportedException();
}
public Application.ApplicationInfo GetApplicationInfo(string AppID)
{
Application.ApplicationInfo applicationInfo = new Application.ApplicationInfo("SimpleSSOProvider", "SimpleSSOProvider", Application.ApplicationType.GroupWindows, "[email protected]");
/* Alternative constructor that also declares which SsoCredentialContents are returned
   (expected by some clients, such as Excel Services):
Application.ApplicationInfo applicationInfo = new Application.ApplicationInfo("SimpleSSOProvider","SimpleSSOProvider",Application.ApplicationType.GroupWindows,"[email protected]", (SsoCredentialContents)((Int32)SsoCredentialContents.UserName + (Int32)SsoCredentialContents.Password + (Int32)SsoCredentialContents.WindowsCredentials));
*/
return applicationInfo;
}
public Uri GetCredentialManagementURL(string AppID)
{
//NOTE: Used by SpsSsoProvider, not necessary for SimpleSSOProvider
throw new NotSupportedException();
}
public SsoCredentials GetCredentials(string AppID)
{
//Note: Used by SpsSsoProvider, necessary for any SimpleSSO Provider. Implementation discussed in detail in the next section of this topic
}
public SsoCredentials GetCredentialsUsingTicket(string Ticket, string AppID)
{
//NOTE: Used by SpsSsoProvider, necessary for Simple SSO Provider when used by Excel Services.
//TODO: Implement Ticket management code; currently just return SsoCredentials
return GetCredentials(AppID);
}
public string GetCurrentUser()
{
//NOTE: Used by SpsSsoProvider, not necessary for SimpleSSOProvider
throw new NotSupportedException();
}
public SsoCredentials GetSensitiveCredentials(string AppID)
{
//NOTE: Used by SpsSsoProvider, necessary for Simple SSOProvider when used by Excel Services
//TODO: Implement Sensitive Credential method, for sample just returning basic credentials
return GetCredentials(AppID);
}
public SsoProviderInfo GetSsoProviderInfo()
{
//TODO: Used by SpsSsoProvider, necessary for any SimpleSSOProvider
}
public string GetTicket()
{
//NOTE: Used by SpsSsoProvider, necessary for SimpleSSOProvider when used by Excel Services
//TODO: Implement Ticket management code; currently just return a string
return "No Ticket Management";
}
public void PutIdentityOnRequest(ref System.Web.Services.Protocols.HttpWebClientProtocol request, string AppID)
{
//NOTE: Used by SpsSsoProvider, not necessary for SimpleSSOProvider
throw new NotSupportedException();
}
public void PutIdentityOnRequestUsingTicket(ref System.Web.Services.Protocols.HttpWebClientProtocol request, string Ticket, string AppID)
{
//NOTE: Used by SpsSsoProvider, not necessary for SimpleSSOProvider
throw new NotSupportedException();
}
}
}
At a minimum, you must implement the GetCredentials and GetSsoProviderInfo methods. The SimpleSSOProvider class that we created returns new credentials based on the current user and the application identifier (AppID) we supplied. We can obtain information about the current user using the CurrentPrincipal property of the thread we are executing on (System.Threading.Thread.CurrentPrincipal). The following code shows the implementation of the GetCredentials method.
public SsoCredentials GetCredentials(string AppID)
{
//NOTE: Used by SpsSsoProvider, necessary for any SimpleSSOProvider
System.Diagnostics.Trace.WriteLine("Entering SimpleSSOProvider::GetCredentials");
System.Diagnostics.Trace.Indent();
// Retrieve the logged in user's information
string domain = System.Environment.UserDomainName;
System.Diagnostics.Trace.WriteLine("User domain is " + domain);
try {
System.Diagnostics.Trace.WriteLine("Context user:" + System.Threading.Thread.CurrentPrincipal.Identity.Name);
// Start building an SsoCredentials object to store two values - UserName and Password
SsoCredentials creds = new SsoCredentials();
creds.Evidence = new System.Security.SecureString[2];
switch (AppID){
case "AdventureWorks":
System.Diagnostics.Trace.WriteLine("Application is AdventureWorks");
if (System.Threading.Thread.CurrentPrincipal.IsInRole("InternalSales"))
{
System.Diagnostics.Trace.WriteLine("User is in InternalSales? " + System.Threading.Thread.CurrentPrincipal.IsInRole("InternalSales"));
// Provide components for the InternalAccess account token
creds.Evidence[0] = MakeSecureString(domain + "\\InternalAccess");
creds.Evidence[1] = MakeSecureString("pass@word1");
}
else
{
// Provide components for the ExternalAccess account token
creds.Evidence[0] = MakeSecureString(domain + "\\ExternalAccess");
creds.Evidence[1] = MakeSecureString("pass@word1");
}
break;
default:
throw new SingleSignonException(SSOReturnCodes.SSO_E_APPLICATION_NOT_FOUND);
}
// Put the UserName/Password values into the credential object
creds.UserName = creds.Evidence[0];
creds.Password = creds.Evidence[1];
System.Diagnostics.Trace.Unindent();
return creds;
}
catch(SingleSignonException ex) {
System.Diagnostics.EventLog.WriteEntry("SimpleSSOProvider", "Caught SSO Exception: " + ex.ToString());
throw;
}
catch(Exception ex) {
System.Diagnostics.EventLog.WriteEntry("SimpleSSOProvider", "Caught Exception: " + ex.ToString());
throw new SingleSignonException(SSOReturnCodes.SSO_E_EXCEPTION, ex);
}
}
The SSO provider implementation does not require SsoCredentialContents, but certain other client applications might expect SsoCredentialContents to be set. In the example provided, Excel Services will attempt to connect to resources with a Windows logon using the UserName and Password that were set. If no value was provided for WindowsCredentials, the UserName and Password would be set on the connection string. The following list shows when each SsoCredentialContents value is set.
None: No evidence provided.
UserName: Set if UserName exists.
Password: Set if Password exists.
Evidence: Set if using extended fields (up to five total, including UserName and Password).
MappedGroup: Set if the application definition is a Group definition.
WindowsCredentials: Set if the application definition is Windows Authentication.
The GetSsoProviderInfo method simply returns information about the provider, such as the Vendor name and Version, as shown in the following code.
public SsoProviderInfo GetSsoProviderInfo()
{
//NOTE: Used by SpsSsoProvider, necessary for any SimpleSSOProvider
SsoProviderInfo ssoProvInfo = new SsoProviderInfo();
ssoProvInfo.AssemblyName = Assembly.GetExecutingAssembly().FullName;
ssoProvInfo.Vendor = "AdventureWorks";
ssoProvInfo.Version = "1.0";
return ssoProvInfo;
}
If the SSO provider will be consumed by Excel Services, you must also provide an implementation for the GetCredentialsUsingTicket and GetTicket methods.
The SimpleSsoProvider class that we created shows a very simple example of an SSO provider. A real-life implementation must retrieve credentials from a secure repository and protect any values while they are stored in memory.
The SsoCredentials object returned by GetCredentials uses the SecureString class to store the UserName and Password properties, as well as all Evidence values. SecureString encrypts its data so it cannot be deciphered easily.
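The GetCredentials listing above calls a MakeSecureString helper that is not shown in this topic. A minimal sketch of such a helper (an assumption, not part of the original sample) might look like the following:

using System.Security;

internal static class SecureStringHelper
{
    // Copies a plain string character by character into a read-only SecureString.
    // Note that the plain-text source string still exists in managed memory;
    // a production implementation would avoid materializing it in the first place.
    internal static SecureString MakeSecureString(string value)
    {
        SecureString secure = new SecureString();
        foreach (char c in value)
        {
            secure.AppendChar(c);
        }
        secure.MakeReadOnly();
        return secure;
    }
}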
Our SimpleSSOProvider throws an instance of SingleSignonException and uses standard SSOReturnCodes fields if it cannot properly determine the AppID. The following table shows some common SSOReturnCodes fields for several error cases.
SSO_E_ACCESSDENIED: Access is denied.
SSO_E_CREDS_NOT_FOUND: Credentials could not be found for the requested user or application.
SSO_E_SSO_NOT_CONFIGURED: The SSO provider service is not configured properly.
SSO_E_APPLICATION_NOT_FOUND: The application definition cannot be found.
SSO_E_EXCEPTION: The SSO provider service threw an exception.
To install the SimpleSSOProvider, you must register it in the global assembly cache and then register it with the ProviderAdmin console application (located in the bin directory of the Office SharePoint Server 2007 installation). The ProviderAdmin application replaces the current SSO provider with the one you specify. In a server farm environment, you must register the new SSO provider with each computer in the farm. The following procedures show you how to register the provider and how to remove a custom provider and reinstate the original.
The ProviderAdmin tool takes the fully qualified assembly name and the name of the class that implements the ISsoProvider interface. To register the SimpleSSOProvider in our example, the ProviderAdmin tool executes the following command.
Microsoft.SharePoint.Portal.SingleSignon.ProviderAdmin.exe
"SampleSSOProvider, Version=1.0.0.0, Culture=neutral,
PublicKeyToken=e447e624e7099fd1"
"SampleSSOProvider.SimpleSSOProvider"
To remove a custom SSO provider and reinstate the original SSO provider in Office SharePoint Server 2007, unregister the SSO provider by using the following command.
Microsoft.SharePoint.Portal.SingleSignon.ProviderAdmin.exe /u
Web Parts or other components that need access to an SSO provider should no longer use the Credentials object. Using the Credentials object retrieves only the default SSO provider included with Office SharePoint Server 2007, even if you have registered a new provider by using the ProviderAdmin tool. To obtain a reference to the currently registered ISsoProvider, use the following procedure.
Use the GetSsoProvider method on the SsoProviderFactory class to obtain a reference to the currently registered ISsoProvider. Your code can use the GetCredentials method on the ISsoProvider interface to obtain application credentials, as follows.
ISsoProvider issop;
issop = SsoProviderFactory.GetSsoProvider();
SsoCredentials ssocred = issop.GetCredentials("AdventureWorks");
The SsoCredentials class provides access to credentials by way of the SecureString class. You can use a number of different methods to convert a SecureString instance to a usable format, such as the SecureStringToBSTR method, as shown in the following example.
ISsoProvider provider = SsoProviderFactory.GetSsoProvider();
SsoCredentials creds = provider.GetCredentials("AdventureWorks");
IntPtr pUserName = IntPtr.Zero;
try
{
pUserName = System.Runtime.InteropServices.Marshal.SecureStringToBSTR(creds.UserName);
//NOTE: After this has been converted to a String object, it remains in
//memory until the garbage collector collects it.
String userName = System.Runtime.InteropServices.Marshal.PtrToStringBSTR(pUserName);
}
finally
{
// Zero out and free the BSTR pointer.
if (IntPtr.Zero != pUserName)
{
System.Runtime.InteropServices.Marshal.ZeroFreeBSTR(pUserName);
}
}
In addition to the access you have from Web Parts to the default SSO provider, you can also use your custom SSO provider from registered Business Data Catalog applications.
To use your custom SSO provider with a database line-of-business (LOB) system, modify the Business Data Catalog schema to add the SsoApplicationId and SsoProviderImplementation properties to your LOBSystemInstance XML tag, as follows.
<LobSystem>
<LobSystemInstances>
<LobSystemInstance>
<Properties>
<!-- Database connection properties elided; add the SsoApplicationId,
SsoProviderImplementation, and AuthenticationMode properties here. -->
</Properties>
</LobSystemInstance>
</LobSystemInstances>
</LobSystem>
Because our provider returns Windows credentials, the AuthenticationMode property is set to receive WindowsCredentials. When Business Data Catalog retrieves credentials from the SSO provider, it will execute a LogonUser() call to set up impersonation prior to attempting to gain access to the database.
In our example, a user is mapped to either the InternalAccess or ExternalAccess accounts for retrieving Product data from an AdventureWorks 2000 database.
For more information on Business Data Catalog schema, including the configuration necessary to implement SSO for Web service LOB systems, see LobSystemInstance in the Business Data Catalog: Metadata Model.
The ability to replace the default Office SharePoint Server 2007 SSO provider allows you to better integrate your SharePoint sites with investments already made in your enterprise. You can make use of pre-existing credential stores developed in-house or supplied as part of a third-party package. Your custom provider can then be accessed from Web Parts or Business Data Catalog objects to take full advantage of the custom SSO provider. | http://msdn.microsoft.com/en-us/library/ms566925.aspx | crawl-002 | en | refinedweb |
Win32, Fusion, CLR, .Net Framework, and others
One of the most common denial-of-service (DoS) attacks in Windows is kernel object name squatting.
For example, two processes want to access some shared resources. In order to keep the integrity of the shared resources, the two processes will cooperate by waiting on a named event object.
The problem is, there is no restriction on who can create the named event object. If the two processes choose a well-known name for the shared event object, a malicious application can create the event object before the two processes run. Now if either of the two processes runs, it will not be able to create the event, and it will fail to run.
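A small C# illustration of the problem (the event name is hypothetical):

using System;
using System.Threading;

class NamedEventDemo
{
    static void Main()
    {
        bool createdNew;

        // Both cooperating processes expect to create or open this well-known name.
        // Whichever process runs first -- including a malicious one -- owns the object.
        using (EventWaitHandle evt = new EventWaitHandle(
            false, EventResetMode.ManualReset, @"Global\WellKnownSharedEvent", out createdNew))
        {
            if (createdNew)
            {
                Console.WriteLine("Created the event; waiting for the peer process...");
            }
            else
            {
                // The name already exists; it may belong to the legitimate peer,
                // or it may have been squatted by an attacker.
                Console.WriteLine("The event already existed; there is no way to tell who owns it.");
            }
        }
    }
}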
The problem is even worse: the owners of the good processes can be administrators, and the owner of the malicious process can be a guest user, and the attack will still work.
Complex steps have to be taken to mitigate this problem. Since you can't choose a well-known name, you have to generate a random name (usually a GUID) and store it somewhere that only the owners of the good processes can read, which requires delicate and sensitive ACL manipulation.
In short, it is very hard to securely share synchronization objects between processes, even for administrators.
A solution to this problem is private object namespaces that only certain people can use. This way, you can safely create named kernel objects in the private namespace, without worrying that someone may hijack it maliciously.
In Windows Vista, the kernel team implements exactly that.
Object Namespaces
This is easily my favorite kernel change in Windows Vista.
Couple of questions for you:
Can a type param be both in and out so would the following be allowed:
public interface ICopier<in out T>
{
T Copy(T item);
}
Also can you explain why the need for the extra keywords? I can't imagine why but I am sure there is a reason.
thanks
Josh
I'm with Josh; please explain to us why we need those in-out keywords. I can't figure out why a keyword is needed to do something that is what you expect, especially when talking about covariance.
I think I understand the background. This is a feature created to support F# functional programming types. Immutable functions without side effects is a HUGE thing in functional programming. The contravariant delegates are a way of representing these functions in C#.
Basically, delegates with "in" parameters are delegates that can produce no side effects. "out" delegates are the more common garden variety delegates we all know and love.
Daedius, in this example the "out" keyword is not a "common delegate". Indeed, it is making something that actually cannot be done "covariance".
delegate T Func1<out T>();
delegate void Action1<in T>(T a);
static void Main(string[] args)
{
// Covariance
Func1<Cat> cat = () => new Cat();
Func1<Animal> animal = cat;
// Contravariance
Action1<Animal> act1 = (ani) => { Console.WriteLine(ani); };
Action1<Cat> cat1 = act1;
}
What I really want to know is why the compiler cannot do this automatically, simply by allowing you to do covariance and contravariance without any keyword at all. If it was already possible in some parts of the language, why not in generics and delegates? As Charlie has said, "In Visual Studio 2010 delegates will behave as expected", so if it's something we expect, why are we forced to use a special keyword to get the expected behavior?
It is a limitation in the name of type safety. You are declaring that the interface or delegate can accept base or derived classes of the generic parameter by explicitly defining that said interface or delegate will either only allow T as input, or only allow T as output. The interface or delegate cannot be both.
If you are using covariance it means that the interface or delegate can output a value that is a class of the specified type parameter or derived from that class. If you pass a Func<Cat> to a method that expects a Func<Animal> it works fine since it will return a Cat, which is derived from Animal. That method can then treat the Cat as if it were an Animal without any ill effects.
If you are using contravariance it means that the interface or delegate will receive input of a value that is a class of the specified type parameter or derived from that class. If you pass an Action<Animal> to a method that expects an Action<Cat> it works fine as it will pass a Cat which is derived from Animal. The body of the called delegate can treat the Cat as if it were an Animal without any ill effects.
These roles are pretty specific and cannot be reversed. A method that takes a Func<Cat> cannot accept a Func<Animal> because the Animal that it returns might not be a valid Cat and it would fail. Similarly a method that takes an Action<Animal> cannot accept an Action<Cat> for the same reason. The new keywords really only apply to the implementers of fairly fundamental structures, such as IEnumerable<out T> or Action<in T>. Their benefit comes from offering flexibility in the consumption of types that consume those structures. You might never use those keywords, but because of them you can now do this:
// Illegal in C# 3.0
IEnumerable<Animal> animals = new List<Cat>();
"IEnumerable<Animal> animals = new List<Cat>();"
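For readers following along, a complete, compilable version of the two conversions being discussed (assuming the .NET 4 definitions IEnumerable<out T> and Action<in T>) might look like this:

using System;
using System.Collections.Generic;

class Animal { }
class Cat : Animal { }

static class VarianceBasics
{
    static void Main()
    {
        // Covariance: a sequence of Cats can be consumed wherever a sequence of Animals is expected.
        IEnumerable<Animal> animals = new List<Cat> { new Cat() };
        foreach (Animal a in animals)
            Console.WriteLine(a);

        // Contravariance: a handler of Animals can be used wherever a handler of Cats is expected.
        Action<Animal> handleAnimal = x => Console.WriteLine(x);
        Action<Cat> handleCat = handleAnimal;
        handleCat(new Cat());
    }
}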
This is HUGE. I can't believe how many special interfaces I have to use to get true "programming to an interface" right now just because of this limitation.
For example, right now I have to do this:
Domain object, with CRUD or Mapper ops (much easier to code to a concrete or abstract class collection than to an interface that has no CRUD methods):
public IList<RuleState> ChildStates
{
    get { return m_ChildStates; }
}
then in the interface for my runtime configurator (on the same class), I have to do this:
IList<IRuntimeRuleState> IRuntimeConfig.GetRuleStates()
{
    List<IRuntimeRuleState> list = new List<IRuntimeRuleState>();
    foreach (IRuntimeRuleState rs in m_ChildStates)
        list.Add(rs);
    return list;
}
return m_ChildStates;
Much easier to code, not to mention far more efficient.
Can't wait.
Your comment about this "acting as the programmer expects" is exactly right. I was very excited about generics, but my excitement was tempered when I found out I couldn't do:
...
private List<SomeObject> m_Field;
public IList<ISomeObject> Prop
get { return m_Field; }
As I posted previously, this covariance fix will alleviate this problem, and make us all better programmers by giving us the tools to be better programmers.
Unfortunately, it won't alleviate your problem specifically, because IList<T> cannot be covariant. The reason is that it uses T both in arguments of methods - such as "void IList<T>.Add(T)"; and in return values - such as "T IList<T>.this[int]". This means that you won't be able to write this in C# 4.0:
interface IFoo { ... }
class Foo : IFoo { ... }
IList<IFoo> list = new List<Foo>();
In fact, at a quick glance, there are only a few basic types in the FCL that would be covered by new variance declarations: Predicate<T>, Comparer<T>, Action<...> and Func<...>, IEnumerable, IEquatable, IComparable. ICollection<T> and anything further down the inheritance chain will have to remain invariant.
As a result, I doubt that we'll see "in" and "out" in application code in class declarations often (if at all). On the other hand, it will probably become good style to always use them when declaring delegate types.
I'm with int19h on this one. Since you cannot have both in and out applied to the parameter, it seems like you won't be able to use covariance/contravariance with the most interesting collections...
btw, should I assume that we won't be getting Design By Contract in C# 4.0? Why isn't this part of the language? It's one of those things that should have been there for a long time!
Really guys, this is so far away from what programming really means, ie working with real people to find out what they want and then implementing it that it's a total waste of time.
Has anyone here actually done programming for a living (and I don't count writing programming books or similar wanky stuff as programming)?
Thanks Halo_Four, you have cleared a lot of things.
So the problem here is with classes that output and input the same generic parameter, so that is why it cannot be done without the "in" and "out" keywords. Also note that as Bruce Pierson and int19h commented, unfortunately this will not cover all the expected cases, like those when the parameter is used both in and out.
OOP Sceptic, I'm in a real-world project right now, and these discussions allow me to better understand the tool that I'm using every day. And my better understanding of the tool means better products for the client, and better support.
Thanks, int19h, for the clarification, even though it's depressing...
@OOP Sceptic
I currently have many customers using my software to actually run their manufacturing businesses (boats, trailers, clothing, and more), and it suffers from being difficult to modify to suit their needs because of the lack of flexible design patterns, and old-style procedural programming. I'm currently re-writing it using expert design pattern guidance, and I cannot believe the difference it makes to my sanity when I take the time to understand and apply these principles.
This is indeed very promising and will be helpful in many scenarios, but I'm disappointed to see that it doesn't solve the problems with ICollection and IList.
Would it not be possible to define a new set of collection interfaces without this problem?
Or alternatively, add a mechanism that allows specifying the variance at the method level instead of at type level, so that when a generic type parameter is used for both in and out the specific use (for a method) can be explicitly specified.
It may be necessary or desired to also specify the variance on consumption, similar to how ref/out must be specified both at declaration and on invocation.
Mind you, I don't usually think about these things and Eric clearly lives in another dimension.. :)
That said, surely there must be some way to allow people to write "IList<IFoo> list = new List<Foo>();" as this is such a common pattern. I'll take compiler magic like for "int? i = null" over any "will not compile" error.
You can't allow "IList<IFoo> list = new List<Foo>();", otherwise you could end up with:
class Foo : IFoo {}
class Bar : IFoo {}
list.Add(new Bar());
The only option would be to split the interface declaration into four parts - non-generic (Clear, Count, IsReadOnly, RemoveAt), covariant (GetEnumerator, indexer), contravariant (Add, Contains, IndexOf, Insert, Remove) and invariant (CopyTo).
Note that CopyTo would have to be invariant because it accepts an array of type T. You can only pass an array of U to a method which expects an array of type T if U is derived from or equal to T, but you can only put an object of type T in an array of type U if T is derived from or equal to U. Therefore, the only type U which satisfies both conditions is the type T, and the method must be invariant.
I think that this also explains why the compiler can't automatically infer the "variance-ness" of a type parameter.
Example:
static void Fill(IFoo[] values)
{
    values[0] = new Foo();
    values[1] = new Bar();
}
Fill(new object[2]); // Compiler error
Fill(new Foo[2]); // ArrayTypeMismatchException
Fill(new Bar[2]); // ArrayTypeMismatchException
Fill(new IFoo[2]); // Works
And if you can follow that gibberish, you've probably had too much coffee! ;)
Sorry - I obviously meant to say that the indexer get would be covariant, while the indexer set would be contravariant.
Nice to finally see this in the language, but the choice of keywords is just horrible:
1.) we already have the out keyword for parameters, but with completely different semantics. And generic type "parameters" are similar enough to function parameters to confuse many people, especially if they are new to the language.
2.) without looking I cannot tell which one of the keywords is used for contravariance and which is used for covariance. "In" and "out" are not vocabulary I'd normally use when talking about type hierarchy relations.
3.) there are far better alternatives. Why not simply use "super" and "sub" instead. It is far easier to remember that "sub T" means you can use subtypes of T and "super T" means you can use supertypes of T.
Actually, "in" and "out" are fairly obvious because they clearly outline the restrictions on the usage of a type parameter. An "in" type parameter can only be used for values that are inputs of methods - that is, non-out/ref arguments. An "out" type parameter can only be used for values that are outputs of methods - return values, and out-arguments
> Would it not be possible to define a new set of collection interfaces without this problem?
It is possible if you basically split each collection interface into three parts: covariant, contravariant, and both. E.g. for a list:
interface IListVariant
{
    int Count { get; }
    bool IsReadOnly { get; }
    void Clear();
    void RemoveAt(int index);
}
interface IListCovariant<out T>
{
    T this[int index] { get; }
    IEnumerator<T> GetEnumerator();
}
interface IListContravariant<in T>
{
    void Add(T item);
    bool Contains(T item);
    void CopyTo(T[] array, int index);
    int IndexOf(T item);
    void Insert(int index, T item);
    bool Remove(T item);
}
interface IList<T> : IListVariant, IListCovariant<T>, IListContravariant<T>
{
}
And so on for all other collection types (and all other interfaces that could also be so split). So far I haven't seen any indication that this is going to happen in .NET 4.0 (at least it's not mentioned on the "what's new in 4.0" poster), and, looking at the code above, I think it is understandable :)
> Or alternatively, add a mechanism that allows specifying the variance at the method level instead of at type level, so that when a generic type parameter is used for both in and out the specific use (for a method) can be explicitly specified.
I will repeat what I said elsewhere, and just say that the proper way to enable full variance is to do what the Java guys did, and use variance markers at the use site, not at the declaration site. For example, here are two methods that use IList<T> differently:
// Method can take IList<T> where T is string or any base class of string
void AddString(IList<in string> list, string s)
{
    // cannot use any methods of list that return values of type T here, only those who take arguments of type T
    list.Add(s);
    //list[0]; // illegal here!
}

// Method can take IList<T> where T is object or any class derived from object
object GetFirst(IList<out object> list, int i)
{
    // cannot use any methods of list that take arguments of type T here, only those that return values of type T
    return list[i];
    //list.Add(s); // illegal here!
}
@ iCe and Bruce Pierson
Thanks for responding. I too have many hundreds of people running their finance businesses on my software.
I appreciate that advances in any particular technology are going to improve that technology, BUT it seems to me that all these somewhat esoteric terminologies are just solutions waiting for problems.
Maybe this new stuff will help you, but it's a little late in the day for magic solutions - surely these wacko extensions merely highlight the flaws in OOP anyway!
@ int19h:
CopyTo can't be contravariant, as I tried to explain above. Passing an array is equivalent to passing a ref parameter. Although the method shouldn't read from the array, there's nothing in the declaration to prevent it.
Although it would be nice if "out" parameters could be used in contravariant interfaces, I don't think the CLR would support it. As I understand it, the only difference between "out" and "ref" parameters is the C# compiler's definite assignment checks.
>As I say, contravariance is a bit confusing.
maybe the names are weird but usage is just polymorphism (some interface is expected).
I can already see this is going to get misused, instead of using a factory pattern for covariance
and aggregation to implement contravariance (pass your concrete object implementing an interface into another object that will make it work).
I wish there was some real-world usage in these examples.
> instead of using a factory pattern for Covariance and aggregation to implement Contravariance. (pass you concrete object implementing an interface into another object that will make it work)
Since covariance and contravariance in C# 4.0 will work only on interfaces (and delegates, which are semantically really just one-method interfaces), I don't see your point.
In fact, I don't understand it at all. How would aggregation help deal with the present problem that IEnumerable<Derived> cannot be treated as (i.e. cast to) IEnumerable<Base>, even though it is clearly typesafe and meaningful to do so?
@OOP: I think learning about additions to the language is a great way for us to learn new ways to apply technology solutions through code to our business problems. While I was very skeptical of LINQ at first, I have come to enjoy LINQ to Objects as a quick way to filter and sort my collections (especially when binding to grids).
@Everyone Else: I still don't get the need for the new keywords, and I'm afraid that unless I sat down with an expert who could pound it into my head with a baseball bat I won't get it. Since I'm the sole developer at my company, I guess I'll have to finally go and attend a community event.
Thanks
> IEnumerable<Derived> cannot be treated as (i.e. cast to) IEnumerable<Base>, even though it is clearly typesafe and meaningful to do so?
just so all can see what you mean, here is the test:
class Test
{
    public Test()
    {
        List<Base> items = new List<Base>(this.GetItems());
    }
    public IEnumerable<Derived> GetItems()
    {
        yield return new Derived();
    }
}
public class Base
{
}
public class Derived : Base
{
}
it fails to compile.
I agree semantically it 'could' compile.
During the semantic analysis stage, the compiler deliberately won't let it compile.
Why?
You should be returning:
public IEnumerable<Base> GetItems()
so you never couple higher layers with concrete types.
not IEnumerable<Derived> which is not meaningful since the Test class only needs the Base interface.
Again, I wish there were some real-world examples of why this is actually needed.
As someone pointed out much earlier (so I am not the first), the keywords are necessary: you can't know, at runtime, what the actual generic type of a generic collection is, and furthermore, using a "generic" cast you will lose the type information which is necessary for compile-time checks, which leads to the kind of stupid runtime failures I had hoped to forget when I first saw generics...
public class Base
{
}
public class Derived : Base
{
}
public class Tests
{
    public static void Test1()
    {
        List<Derived> derivedList = new List<Derived>();
        List<Base> baseList = derivedList; // sounds fair?
        baseList.Add(new Base()); // wrong, the collection is actually of Derived type, and...
        Derived d = derivedList[0]; // should actually make an implicit cast work (and fail in this case)
    }
}
So you can't know, at run time, what type is safe for a generic cast, unless you limit the type to use "T" only in input or only in output method parameters.
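To illustrate that restriction, here is a sketch with hypothetical interfaces in which T appears only in output positions or only in input positions, which is exactly what the in and out keywords declare:

using System;
using System.Collections.Generic;

class Animal { }
class Cat : Animal { }

// T appears only in output positions, so the interface can be declared covariant.
interface IReadOnlyView<out T>
{
    T Get(int index);
}

// T appears only in input positions, so the interface can be declared contravariant.
interface ISink<in T>
{
    void Put(T item);
}

class ListBackedView<T> : IReadOnlyView<T>
{
    private readonly List<T> items;
    public ListBackedView(List<T> items) { this.items = items; }
    public T Get(int index) { return items[index]; }
}

class ConsoleSink<T> : ISink<T>
{
    public void Put(T item) { Console.WriteLine(item); }
}

static class VarianceRestrictionDemo
{
    static void Main()
    {
        // Covariance: a view of Cats may safely be treated as a view of Animals.
        IReadOnlyView<Animal> animals = new ListBackedView<Cat>(new List<Cat> { new Cat() });
        Console.WriteLine(animals.Get(0));

        // Contravariance: a sink that accepts any Animal may safely be used as a sink for Cats.
        ISink<Cat> catSink = new ConsoleSink<Animal>();
        catSink.Put(new Cat());
    }
}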
Wow, that's intense stuff. Thanks for the summary. Sure, I'll learn about lambduh's, dynamic and functional C# programming features, but seriously, I haven't had OOP problems in C# where I even need to know how to spell variance, covariance and contravariance. Oh, and I have personally released hundreds of thousands of lines of pure C# in production right now for my clients; just one app right now is 557,000 LOC. I bet I could reduce the LOC, but at what price, maintainability and readability? No thanks. Small focused classes lead naturally to composition, which is far superior to inheritance for most pattern implementations to get green tests and keep them green.
Adi Oltean's Weblog - Flashbacks on technology, programming, and other interesting things
A friend of mine who works in the Indigo team sent me a link to the latest documentation on Indigo:
The how-to section on self-hosted services caught my attention. Now, it is possible to expose a secure Web service hosted in your own process. Yes, I said secure - in the sense that it is now possible to perform true Windows authentication. For example it is really easy to expose a Web service in your process that allows incoming Web service calls only from other processes running as certain users.
This has far reaching implications on using Indigo for regular Windows applications or services. Until now, exposing a secure interface in managed code was a pain. As we all know, .NET Remoting (the native remoting method in .NET 1.0 and 1.1) was not truly secure if you hosted it in a regular process or service. It was not secure simply because there was no authentication and authorization model built in. The only way to get some security was to use an ASP.NET based hosting, but this would require an additional dependency on IIS 6 and ASP.NET. (Actually, to be fair, there is a way to add Windows-based authentication and authorization, but it's pretty complicated. See this and this link for more details). And DCOM was the only option left.
Indigo changes all that. Even more, as opposed to writing a classical DCOM server, the Indigo programming model is pretty straightforward. Things like defining and implementing a contract and creating endpoints can be done in a few lines of code.
Here is a sample C# console application that exposes a simple interface as a Web service:
using System.ServiceModel;
using System.IO;
using System;
using System.Security.Permissions;
// Define the MathService by creating an interface and applying the
// ServiceContractAttribute and OperationContractAttribute attributes.
[ServiceContract]
public interface IMathService
{
[OperationContract]
[PrincipalPermission(SecurityAction.Demand, Role="Administrators")]
int Add(int x, int y);
}
// Implement the interface to create the service.
public class MathService: IMathService
{
public int Add(int x, int y)
{
return x + y;
}
}

// Host the service.
static class MathApp
{
static void Main(string[] args)
{
// Create a generic ServiceHost object of type MathService.
// Specify a base address for the endpoints.
// Combined with the base address, the endpoint's
// address becomes:.
// Use configuration to set up the relative address.
Uri baseUri = new Uri("");
ServiceHost<MathService> host = new ServiceHost<MathService>(baseUri);
// Opening the host sets up the communications infrastructure.
host.Open();
// Exiting the application would bring down the Service.
Console.WriteLine("Press enter to exit");
Console.ReadLine();
host.Close();
}
}
This needs to be coupled with a special configuration file, located in the same directory as your assembly (remember your web.config file?)
<!--Define the endpoint for MathService in configuration.-->
<configuration>
  <system.ServiceModel>
    <services>
      <service serviceType="MathService">
        <endpoint address="Ep1"
                  bindingSectionName="basicProfileBinding" />
      </service>
    </services>
  </system.ServiceModel>
</configuration>
Security is also nicely designed. The Windows integrated authentication is now the default option. This makes sense since, after all, if you are writing a service which lives (potentially) in the Local SYSTEM logon session, then you must control in a precise manner what users are able to call in. As you can see, the PrincipalPermission attribute above will allow incoming calls from clients running as Local Administrator.
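For reference, the same demand can also be written imperatively inside the method body using the standard .NET PrincipalPermission class; this is a small illustrative sketch (my own, not taken from the Indigo documentation) showing the equivalence:

using System.Security.Permissions;

public class MathService : IMathService
{
    public int Add(int x, int y)
    {
        // Imperative equivalent of the declarative attribute shown above:
        // throws a SecurityException if the current principal is not in the Administrators role.
        PrincipalPermission adminOnly = new PrincipalPermission(null, "Administrators");
        adminOnly.Demand();
        return x + y;
    }
}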
To play with these samples, you first need to download the Indigo Community Technology Preview, but you must be an MSDN Universal subscriber to download it. See this link for more details. Soon, a freely downloadable drop will be made available.
PingBack from | http://blogs.msdn.com/adioltean/archive/2005/03/16/397001.aspx | crawl-002 | en | refinedweb |
Everything that is related to application development, and other cool stuff...
Microsoft ASP.NET AJAX comes with a new method for getting a reference to an object representing an element on the page, e.g. input control, button, etc. -- $get.
However, you might notice that there is another function that appears to do the same thing – $find… So, what's the difference between them, and which should you use and when?
First, let’s see how the two methods are implemented…
$get is an alias for getElementById. In addition, this function has additional code for those browsers that have not implemented getElementById. Here is how it’s defined:
var $get = Sys.UI.DomElement.getElementById = function Sys$UI$DomElement$getElementById(id, element) {
/// <param name="id" type="String"></param>
/// <param name="element" domElement="true" optional="true" mayBeNull="true"></param>
/// <returns domElement="true" mayBeNull="true"></returns>
var e = Function._validateParams(arguments, [
{name: "id", type: String},
{name: "element", mayBeNull: true, domElement: true, optional: true}
]);
if (e) throw e;
if (!element) return document.getElementById(id);
if (element.getElementById) return element.getElementById(id);
    // Implementation for browsers that don't have getElementById on elements
    // (the fallback body is omitted in this excerpt)
}
$find is a shortcut for Sys.Application.findComponent. Components on the client… in JavaScript? Yes!
Here are some characteristics of components that differentiate them from controls and behaviors (source:):
1. Components typically have no physical UI representation, such as a timer component that raises events at intervals but is not visible on the page.
2. Have no associated DOM elements.
3. Encapsulate client code that is intended to be reusable across applications.
4. Derive from the Component base class.
So, the Sys.Application.findComponent function (implementation of the $find alias) is defined as follows:
var $find = Sys.Application.findComponent;
function Sys$_Application$findComponent(id, parent) {
/// <param name="id" type="String"></param>
/// <param name="parent" optional="true" mayBeNull="true"></param>
/// <returns type="Sys.Component" mayBeNull="true"></returns>
var e = Function._validateParams(arguments, [
{name: "id", type: String},
{name: "parent", mayBeNull: true, optional: true}
]);
if (e) throw e;
// Need to reference the application singleton directly because the $find alias
// points to the instance function without context. The 'this' pointer won't work here.
return (parent ?
((Sys.IContainer.isInstanceOfType(parent)) ?
parent.findComponent(id) :
parent[id] || null) :
Sys.Application._components[id] || null);
}
It is recommended that you use the findComponent (or $find) method to get a reference to a Component object that has been registered with the application through the addComponent (or $create) method.
Note: if parent is not specified, the search is limited to top-level components; if parent represents a Component object, the search is limited to children of the specified component; if parent is a DOM element, the search is limited to children components of the specified element.
Looking at the code, you might be wondering about the performance difference… My tests show that as a general rule (i.e. given a page of average complexity and an average number of controls and components) the performance is roughly the same between $get and $find. However, my tests were not comprehensive.
Bottom line:
This scares me. $get is used to overcome browsers that don't support .getElementById(), yet it doesn't handle the fact that this method, although available in IE 5, 6, and 7, is completely broken.
Bug #152
Bug #154
Thus your "proxy" method doesn't actually solve anything these days, because all browsers have the method; IE just doesn't implement it correctly!
I would highly advise incorporating the final fix in Bug 152's workaround section to bring this method up to par.
Thanks,
Trevor W
Please help me to find the controls inside a GridView using $get. I am using a web method as a callback function.
It works with controls on an aspx page, but not with controls inside a GridView. How can I access the controls inside the GridView?
I feel your pain!
Help Me!
[email protected] | http://blogs.msdn.com/irenak/archive/2007/02/19/sysk-290-asp-net-ajax-get-vs-find.aspx | crawl-002 | en | refinedweb |
I have an xls file already created, and I want to open it, add a new sheet to the existing file, and save the xls file. Is there anyone who can help me with this?
Hi Erika,
I am developing an Excel Add-In in Visual Studio 2005 (C#) for Excel XP and higher.
I read somewhere that I need to explicitly release all objects, even the Range object that I get from the get_Range() method.
What else needs to be released in an Add-In?
regards
Abhimanyu Sirohi
I am having problems with the Excel reference in my .NET project.
I added a reference to the Excel object model in my .NET application by including the Excel 11.0 object library. My app worked fine until the following line was added to one of the classes where the Excel operations are performed.
using Excel = Microsoft.Office.Interop.Excel;
When the above line is added, the build throws an error on my machine saying
I:\Transform\Spreadsheet\ExcelLoader.cs(12): Namespace '' already contains a definition for 'Excel'
If the above line is removed, it works fine on my machine, but it throws a different error on my colleague's machine.
Can somebody tell me under what circumstances the above line should be added, and why the .NET environment on my machine is complaining about it?
Any help is greatly appreciated.
-Siya
Siya,
This article may help
Also Office XP PIA's are available at
I am developing an Excel Add-In in Visual Studio 2005 (C#) for Excel 2007.
I would like to know how to dynamically remove the Excel Cells in Memory.
Thannks,
Ram
Hi,
I am facing one problem:
how to catch any Excel file which the user opens.
I want to catch that file from my program, and if any changes are made in that file I want to save those changes as well as the old ones. Also I want ...
I want to write the program in C#.NET or ASP.NET. Can you help me with this?
Contact me on:
[email protected]
Thanks in advance
How can I set the background color for a range of cells?
Thanks
Hi friends,
I need help from you, i.e., how to programmatically generate Microsoft Excel sheets that have dropdown lists in some columns through C#.NET 2005.
If anyone knows, please tell me the solution.
I got to this post by searching for "0x800A03EC". In my case, I had something like:
xlSheet.Cells[0,0] = "a value";
The error went away when I switched this to:
xlSheet.Cells[1,"A"] = "a value";
@Ahmed: it's simple ;)
Excel.Range range = worksheet.get_Range("A1", "I9");
range.Interior.Color = System.Drawing.Color.Green.ToArgb();
Note that the Color property must be set to an RGB integer value, or you will get an exception.
@sfuqua: you got that error because Excel indices start at 1, not 0 as you might expect.
Thus, to get the first (top-left) cell in a worksheet, you would use:
xlSheet.Cells[1, 1] = "a value";
I am developing a web application where the user uses an editor to store text in a database. Then this information is exported to an Excel file.
The problem I am facing is that when I set the text of a cell with new lines, it shows the text on one line and the new-line characters are shown as small boxes. I need to show this text as if the user had entered Alt + Enter in the cell, with the text displayed on multiple rows in the same cell.
Any information will be appreciated.
For everyone who is getting the 0x800A03EC exception:
Excel cell indexing starts from 1 (NOT 0), if you try to access a cell like [0,x] or [x,0], the exception will be raised.
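To make that concrete, here is a tiny hedged sketch (the xlSheet parameter is hypothetical) showing the 1-based access that works and the 0-based access that fails:

using Excel = Microsoft.Office.Interop.Excel;

static void WriteFirstCell(Excel.Worksheet xlSheet)
{
    // Excel's object model is 1-based: [1, 1] is cell A1.
    // xlSheet.Cells[0, 0] would fail at run time with COMException 0x800A03EC.
    Excel.Range cell = (Excel.Range)xlSheet.Cells[1, 1];
    cell.Value2 = "a value";
}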
I am creating some UDFs (in C# automation) and these are working fine. But i can't put descriptions of the functions and arguments.
Please help me.
Thanks
Mousum
Hi Erika, I have this situation: I need to name every sheet in my workbook at the moment of its creation. Is that possible? i.e.:
...
Microsoft.Office.Interop.Excel.Application excel;
excel.ActiveWorkbook.Worksheets.Add( missing, excel.ActiveWorkbook.Worksheets[ excel.ActiveWorkbook.Worksheets.Count ], missing, missing );
// Here I'm creating the necessary sheets in my workbook; if the application needs 3 sheets, this code will automatically generate Sheet1, Sheet2, Sheet3. Is it possible to generate them with other names?
I am struggling to obtain ARGB colors from an Excel sheet to store in a database, to be used later.
Below is the closest I got, but it gives me back the wrong colors (VB6 stored a value and it worked; the same code in VS2005 errors out).
Dim Col As Color
Col = Color.FromArgb(worksheet.Cells(ExcelRow, 4).interior.color.GetHashCode)
Dim a As Byte = Col.A
Dim r As Byte = Col.R
Dim g As Byte = Col.G
Dim b As Byte = Col.B
fld = Format(a, "000") & "," & Format(r, "000") & "," & Format(g, "000") & "," & Format(b, "000")
''''''''''
2nd program
acell = Excel.ActiveSheet.Cells(row, col)
Fld = data1("rowcolor").Value
acell.BackColor = Color.FromArgb(Mid(Fld, 1, 3), Mid(Fld, 5, 3), Mid(Fld, 9, 3), Mid(Fld, 13, 3))
Thanks, any help appreciated.
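One likely culprit in the snippet above is that Excel's Interior.Color is an OLE (BGR) value, while Color.FromArgb/ToArgb work with ARGB channels. A hedged C# sketch of the round trip using ColorTranslator (the method names here are my own illustration, not the poster's code):

using System;
using System.Drawing;
using Excel = Microsoft.Office.Interop.Excel;

static string ReadCellColor(Excel.Range cell)
{
    // Interior.Color comes back as an OLE color (boxed as a double), so convert before splitting channels.
    int oleColor = Convert.ToInt32(cell.Interior.Color);
    Color c = ColorTranslator.FromOle(oleColor);
    return string.Format("{0:000},{1:000},{2:000}", c.R, c.G, c.B);
}

static void WriteCellColor(Excel.Range cell, Color c)
{
    cell.Interior.Color = ColorTranslator.ToOle(c);
}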
Erica you are so awesome! I just realized that I was setting my columns at 0.
Thanks,
Carlos.
Can anyone tell me how to delete a row in Excel through .NET (VB/C#)?
I've tried
excl(Excel object).Rows(i).Delete()
but it is giving an error.
You need to get a Range object and then delete the range as below. (I don't have experience deleting rows, but this is how I delete cells.)
r1.Delete(XlDeleteShiftDirection.xlShiftUp);
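For the row question above, the same idea applied to a whole row might look like this (a hedged sketch; verify against your Excel PIA version):

using Excel = Microsoft.Office.Interop.Excel;

static void DeleteRow(Excel.Worksheet sheet, int rowIndex)
{
    // Take any cell in the target row, expand to the entire row, then delete it.
    Excel.Range row = ((Excel.Range)sheet.Cells[rowIndex, 1]).EntireRow;
    row.Delete(Excel.XlDeleteShiftDirection.xlShiftUp);
}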
Thanks a lot, it is very useful for me!
Can someone tell me how to format the data in an Excel file? For example, I have data in the format "0527" in the dataset, but in the Excel file it is displayed as "527" only. It is skipping the leading zeros. Please help.
Hi friends,
I want to display the data in ASP.NET along with the cell colors as they appear in the Excel sheet. Please can anyone help me? I'm using C# as the code-behind.
I hate excel co-ords, so I made this.
Helps with looping...
private static string ConvertToExcelCoord(int Col, int Row)
{
int c1 = -1;
while (((int)'A') + Col > ((int)'Z'))
{
Col -= 26;
c1++;
}
return (c1 >= 0 ? ((char)(((int)'A') + c1)).ToString() : "") + ((char)(((int)'A') + Col)).ToString() + ((int)(Row + 1)).ToString();
}
There should probably be a check and an exception if the coords are too big...
I need to insert alt+enter in excel programmatically to show data in separate lines within a cell.
Efrain Juarez asked this as well, but nobody replied so far.
As my friends wrote before me, I need to insert Alt+Enter in Excel to show data on separate lines within a cell.
Does anyone know a solution?
Thanks a lot, everybody.
Well, I have the solution to represent Alt + Enter programmatically with VB.NET.
WorkSheet(Row, Column).Value = "TextForFirstLine" & Chr(10) & "TextForSecondLine"
So, Chr(10) represents Alt+Enter.
I hope this solution helps you.
Salutations everybody.
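For the C# questions in this thread, the equivalent of VB's Chr(10) is simply a line-feed character; a small sketch with a hypothetical worksheet parameter:

using Excel = Microsoft.Office.Interop.Excel;

static void WriteTwoLines(Excel.Worksheet sheet)
{
    Excel.Range cell = (Excel.Range)sheet.Cells[1, 1];
    // "\n" is the same line feed that Chr(10) produces in VB.
    cell.Value2 = "TextForFirstLine\nTextForSecondLine";
    cell.WrapText = true;   // make Excel actually display the second line
}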
Can you suggest how to create borders around cells (like a table format) in Excel?
- Shruthi
Is there a way to filter a field with more than two criteria? In VBA you can specify an array of criteria, and I have been trying to implement something similar in C# with no luck. Below is the VBA example:
ActiveSheet.ListObjects("tableOpenedData").Range.AutoFilter Field:=8, _
Criteria1:=Array("BLT / Desktop Tool", "Delivery", "Editorial"), Operator:= _
xlFilterValues
Any suggestions on doing this in C# would be appreciated.
Thanks!
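One hedged possibility (not verified against that exact workbook): the interop AutoFilter method accepts an array for Criteria1 together with xlFilterValues, which requires the Excel 2007 or later object model:

using System;
using Excel = Microsoft.Office.Interop.Excel;

static void FilterByThreeValues(Excel.Range tableRange)
{
    // Mirrors the VBA sample: array criteria plus the xlFilterValues operator on field 8.
    string[] criteria = { "BLT / Desktop Tool", "Delivery", "Editorial" };
    tableRange.AutoFilter(8, criteria, Excel.XlAutoFilterOperator.xlFilterValues,
                          Type.Missing, Type.Missing);
}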
How to programmatically pass ALT+ENTER from C#.NET?
Can you suggest how to create borders around the cells(like a table format) in the Excel ?
chartRange.BorderAround(Excel.XlLineStyle.xlContinuous, Excel.XlBorderWeight.xlMedium, Excel.XlColorIndex.xlColorIndexAutomatic, Excel.XlColorIndex.xlColorIndexAutomatic);
http://csharp.net-informations.com/excel/csharp-format-excel.htm
Thanks.
Privacy Statement | http://blogs.msdn.com/erikaehrli/archive/2005/10/27/excelmanagedautofiltering.aspx | crawl-002 | en | refinedweb |
Evan's Weblog of Tech and Life (2004-10-19T03:02:00Z)

New location same old content more or less :-)

…& Observation
I haven't talked about methodology in a long time (at least not on here). I was recently in a conversation with someone about doing some research using a remote screen capture tool on a PC. The person (who shall remain nameless -- but you know who you are!) basically recruited participants to try out a build of a new concept for a particular software program and was using a tool that allows all of the interaction on the computer to be encoded to a video file that can be sent back over the internet. This way the researcher could watch a video of how the user interacted with the software while sitting back at her office.
I happened upon her in her office when she was watching the video. During the video I noticed that there were gaps, in that there was a pause in what the user was doing while in the midst of the task being accomplished. We talked a little about what we had seen in the video, and I brought up this observation to her. She didn't think that it was all that relevant -- and I disagreed, and suggested that unless she knew why the user wasn't finishing the activity in a continuous fashion, she didn't really understand how the user was doing the task.
For example: Was the user trying to get the task done while the kids were fighting in the background? Was he puzzled by what he was trying to do and just sat there trying to figure out what to click next? Was he asking someone for help? Did he forget what he was doing when he returned to the task, and thus sat motionless for a while before starting up again? Did he really make errors as a result of not remembering where he was in the task, or were the errors the result of something else?
At this point she wasn't very happy. A lot of planning had gone into setting this study up and getting the software properly instrumented and set up for the participants. At the time she was planning the study, it hadn't dawned on her that the context in which the user is performing the tasks is as important as the task at hand. (Design solutions to address where to click next are much different than a solution focused on helping the user remember what they just did if they're being distracted while performing the task.)
Technology is great, it can really help to better understand a situation, but if you're not getting the full context then you're missing a part of the bigger picture. A lot of products are built in a vacuum, not because there's missing data about the user, but rather because they don't take into account the larger ecosystem. If you can capture the user in that context, when they're doing a task you're interested in, then you'll have a much richer set of data about how to design a product.

…onto something completely different
All that said, I am going to start blogging again, but the content might be light or on topics that have little relation to much else, but that's par for the course on the web as we're all amateurs looking to be discovered :-)

There's a party going on...
There's a party going on right now. It's been a while in coming, but the momentum is finally building.....
The 1,000,000th Tablet PC was sold during February!
Well whoever it was, here's to you! Now it's on to the next 9 million. There's all kinds of interesting stuff being done for the platform... when are you going to buy one?

…and small computers...
While my OQO still hasn't arrived, I was able to borrow one from someone on the team who isn't really using his, and thus I wanted to give the device a workout during an event that was in theory designed for the device. Well, let's just say I had some technical difficulties with the machine which I'm working on resolving.... but at the highest level, and since MSFT is paying me to evaluate the device, for now I'll just say that it's an interesting companion.
But on to small computers.... There were 2 devices that caught my eye, not from a mobility perspective but rather from a feature/function/size perspective for the home.
First, from Aurora Multimedia was the XPC Pro. This is a full-function PC with video and surround-sound capabilities in a package 8.5 x 1.75 x 13 with a built-in DVD drive. They were showing it off as a second PC to attach to your TV etc, but I think that there are probably some other potential usages that are pretty interesting. However, I don't have a clue about the performance etc, so maybe I'll get to try one out.
Second was basically a complete DVD player/stereo about the size of a car radio that can be mounted in a drive bay of a PC or used as a standalone component. This was the VPC-2000 from Asour Technology. This reminded me of the days that I worked for Compaq and we had designed a car-stereo-looking component in the front of the PC that did a lot of the same functions, but at the time it was limited by Windows and ultimately failed since the actions didn't happen in real time. You would change the volume and 10 seconds later Windows would respond and then the volume would actually change. However, in the implementation that I saw here, I thought it was a well integrated package that really does a great job at consolidating the functionality.
One other interesting product was a thing called the Pocket Surfer from Datawind. This is basically a thin client that's about the size of a checkbook but really thin and light. It uses bluetooth to your cell phone and connects to a backend server to let you browse the internet. A much larger screen than a blackberry or most of the palm or ppc devices out there, so it has some interesting applicational uses.
I'll say more about my experiences with the OQO once I finish up my overall evaluation...

…on a small computer...
Lora and I exchanged some lengthy emails last night on this subject. Here's the one that kicked it off, basically indicating that my pricing structure was definitely off. While I later concede some of her points, there are some other factors involved that I'll just have to leave to your imagination...
Hey Evan,
Are your daughters going to use Linux? LOL You took me back in time. I felt like I was reading an article from 1999 or 2000 because of the hardware description.
I'm not sure that your costs are right. Was that Microsoft cost or real cost after import taxes, US sales office profit margin, then for sale?
You say $50 for the board, but let's use a board that is available en masse and people are doing what you described:
VIA EPIA-V8000A VIA C3, uses SDRAM, has audio, video, and LAN - Cost $88 in bulk; retails for $99. Case with power supply - 90W minimum required - Cost somewhere around $50 - $100 depending on appearance and quality of power supply. SDRAM memory - non-ECC, unbuffered, 512MB - Cost $89 on the open market, B or C grade, so it is a step up from toy grade. Hard drive - 40GB for around $48. Windows XP Home $83. Total Cost $358 plus freight * 1.05 = $375.
Wow, the system builder just made $17. I'm sure they'll be thrilled. lol Of course, there is keyboard, mouse, optical drive, and display too. Even if the case and motherboard are sub $100, as you suggest, then you're still not any more competitive over current $399 systems. What's compelling about it?
If you want a $399 system, there are plenty to choose from, and some with quite a bit more processing power -- either AMD or Intel based. (Yes, then you have a fan.)
3) Remember, 5+ year old kids want streaming audio, streaming video, play games, play MP3s while chatting, chat, and more chat. Four year old technology can't handle this. Plus, they want USB 2.0, IR, and bluetooth to sync with their other gadgets. Run as much as a kid would, and then see if you think it's fast enough. Go to a few pre-teen sites, the javascript doll making sites, play iTunes, and have 5 chat windows open, plus homework, and then see if it's OK. Adults are usually more careful about what they open than kids.
4) Put it into a robot that can do the things 2-4 year old kids want and OK, I can rationalize it a little better then. Or, just use a smartphone and figure out a way to attach it to a monitor. Either of those ways are cheap and cute.
It's a tough product to sell, and a few companies like AJump, EWiz, Max Group, ASI, etc are building low cost, small boxes. Intel has repeatedly had a problem with microATX, VIA with miniITX, and now with the miniBTX. If anything, VIA with miniITX was able to ride the edge of custom systems and portable gaming machines for LAN parties, but these are not sub-$400 systems.
-- Lora
She's right that it's easy to build a cheap-ish system using commercially available parts, but of course you get what you pay for. But what if I was paying for a different value proposition to begin with? Could you make a market as big as the current PC industry with a different value proposition? Probably not. But maybe, just maybe, you could create something just a little special.

…really small computer
Over the last week, I've been playing with a development board from a chip manufacturer that is a relatively dated product, in that it really isn't all that fast and doesn't use any cutting edge technology, but it runs Windows XP at an acceptable level. Sure, I'm not going to play any games on this machine, but for doing my general work day in and day out, this is an awesome board.
The dev board is about the size of a standard hard drive, but the board has a ton of space on it and could easily be miniaturized to encompass less than the volume of a laptop drive! It has all the ports you'd expect (USB, audio, VGA etc), but it runs at a very low processing speed compared to anything that you'd buy today. What I find really neat about this device is that there is no fan, the power supply is just about non-existent, and I can put my hand right on the CPU while the machine's been on for days on end. Okay, so it's a little warm, but by no means is it burning hot.
Why do I like this little dev box so much? Well, right now I have a second PC at home for my daughters to play games (things like Fredie Fish -- nothing that's a demanding application), but the cheap PC that I put together for them is just so noisy and takes up a lot of space. Just imagine if I built a PC around a board like this, plugged in an external CD (unpowered) and ran the USB cable up to the desk next to the keyboard. An ultra compact machine that does everything that I need it to do.
But it gets better. The cost of the entire board and the existing housing of this device is probably less than $100, maybe as low as $50. If someone were to manufacture this in volume, you could probably have a complete PC (keyboard, mouse, CD, HD, motherboard, etc.) that runs XP, Office and other standard apps plus general games (nothing ultra-intensive) for $100 for the entire package, software extra of course :-)
Would you buy one? Line starts after me!

…and Vegas
So for the first time I actually get to go to Vegas for CES. Guess now that Comdex is dead, you have to go for the major show. But speaking of shows, Vegas has a lot of them, and I know nothing about the ins and outs of getting tickets for these shows. The reason I'm wondering is that my wife is going to come along with me and she most definitely wants to see a show. Probably a magic show or something like that, but what are the tricks to getting really good tickets for cheap prices? I'm figuring someone out there has the knowledge that google just barfs over, giving a gazillion paid links for cheesy sites etc. Anyone with the inside scoop want to fill me in?

…with MSN Spaces
So I've set up a purely personal blog on MSN. The picture management capability is interesting, as it's a lot easier than having my own website (which I haven't updated in eons). I'm thinking that maybe I'll give up my vanity domain and just simply use something like MSN Spaces for posting pictures of the family and kids and giving a quick update on what's going on.
A) It seems like it would be a lot less work to update
B) It's free (for the time being)
C) Is there a C?
Any thoughts on the potential downsides?

…not on its way :-(
Well, it appears that the OQO that I had ordered wasn't really ordered, so I'm not going to get a Hanukkah present this year - maybe in time for New Years, or perhaps in time to take it for a test run at CES. Having had a few minutes to look at the one that one of my co-workers is using (and is frustrated with), it seems to me that perhaps something like this on the floor of CES would be great, particularly when paired with wireless connectivity (either via a cell phone or Wifi). However, for CES a camera is real useful too (but there's no integrated camera). Of course I have to see whether or not this whole pairing of devices works out for this particular application; and of course there is that small little detail of cost-justification...
Oh well, I've got some other devices that I've got to do some more in depth evaluation of.

…a second or third or fourth computer...
I know almost everyone out there who will read this blog has more than 1 computer at their disposal already, so what I really want to know is: if you were to give an additional computer to your brother, sister, mom, dad, great aunt Sally, or whomever who only has a single PC today, what would you want them to experience? That is to say, if you were to give them the computer, what would you want them to get out of having this second computer that they don't get out of having the original computer that they already have? And if you were to do this, I want to know more than "it's a better computer" than the old one (in that you want to replace the old one with the new one); rather, what benefit if any would there be to having these multiple computers for this particular person? Place yourself in their shoes, not your own; of course you want one machine to develop on, one machine to play on, one machine to experiment on, one machine to ...., but does your great aunt Sally?

…on its way?
So in my quest to have time on every small form factor machine out there and to figure out what the real end value is behind these devices, I did order an OQO. It appears it's somewhat backordered, as they got more orders than they expected -- which is either good news for them, or means they produced a very small number to begin with and anything over that is more than they expected :-) Needless to say, I found JK's comments on what he's been seeing relative to this machine troubling. Particularly the comment about the digitizer. Getting these things to work well is tricky - all kinds of little things interfere with the accuracy of the electro-magnetic digitizer, and if you look at the edges on any Tablet PC, you're bound to find a spot or two (or more) where the calibration is just off and you can't do much with it. What worries me here is that since the device is so small, there's only a very limited amount of space that a "human" can target, and that will often be close to the edge on a device like this. Thus this would severely limit the overall usefulness of the digitizer itself. Guess we'll just have to wait and see when I get mine.

…Wall - 1 : Evan - -5
Well, the day before last was a banner day for me... I went to hear a talk about innovation, left the talk to go out to the lobby, and smacked my head right into a glass wall. I must commend the janitors, as it was the cleanest piece of glass I never saw. End result was that the glass wall is still standing, but I took 5 stitches above the eyebrow. Okay, enough wallowing in self pity and utter embarrassment. Back to work...

…on Display...
In my previous post, ... :-)

…Tablet...
Well, not exactly new, but new to me… This afternoon I was helping the Tablet User Research team clean out a storeroom where they've been keeping their equipment that was for studies. Interestingly enough, there are all kinds of interesting tablets in there, particularly for certain studies, but there's also plenty of really old equipment such as pre-production Acer TM100s and some of the early prototypes that we used way back when. As we were going through I spotted this "tablet". So it's now in my possession. It's a one-of-a-kind Compaq Tablet that was created specifically for the announcement that Compaq was entering the tablet market. Of course it kind of looks like a giant ipaq, but it's a real tablet and was nothing at all like what Compaq released. The secret to this tablet is that it's really the same tablet as the prototypes we had been using, but a new "shell" was constructed around it to make this prototype seem real. Way back then, while Compaq was already committed to entering the market, they didn't want their ID revealed to the world, so it was all smoke and mirrors. A little piece of Tablet PC history, rescued from the storeroom…
EvanF | http://blogs.msdn.com/evanf/atom.xml | crawl-002 | en | refinedweb |
Gilbert Corrales and I met up last week here at Redmond, and he and I were talking about how I wanted to make use of an EventDispatcher approach to routing events around the code base (the analogy we came up with was "the difference between a cough and a sneeze" - well, maybe not a sneeze, heh).
EventDispatcher then led to a framework, and before you knew it we were up to around 2am in my office coding, and this is what we ended up with (first cut).
Assume for a second that you have a room full of blind people with no sound, and in the middle you essentially have a machine that handles notifications around Sneezing and Coughing (or any other bodily functions you can think of).
Let's also say that PersonA wants to know if anyone coughs (so he/she can react) and that PersonB wants to know if anyone sneezes.
First things first, let's take a look at the "Room" itself, as it will be the host in this equation (assume it's a really smart room that can detect BodilyFunctions as they happen).
private void TheRoom()
{
    // Add People to the Dark Room.
    Person personA = new Person();
    Person personB = new Person();
    Person personC = new Person();

    // Define the curiousity of all Persons..
    personA.Name = "Scott";
    personA.DefineCuriousity("Sneeze");
    personB.Name = "Gilbert";
    personB.DefineCuriousity("Sneeze");
    personC.Name = "David";
    personC.DefineCuriousity("Cough");

    // If someone Sneezes/Coughs, let's tell everyone in the room
    // about it via a DisplayBoard (in this case, 3 text fields).
    NexusEvent BodilyFunctionEvent = new NexusEvent(OnBodilyFunction);
    EventDispatcher.Subscribe("Sneeze", BodilyFunctionEvent);
    EventDispatcher.Subscribe("Cough", BodilyFunctionEvent);

    // Ok, for arguments sake, lets force a bodily function to occur.
    personA.Sneeze();
    personB.Cough();
    personC.Sneeze();
}
Pretty self-explanatory, right?
Let's now take a look at the anatomy of a Person and see what makes them tick (don't worry, if you're squeamish or can't stand the sight of blood, that's ok, this is rated PG-13 and you won't be offended).
public class Person
{
    public string Name { get; set; }
    private string curiousity { get; set; }

    public void DefineCuriousity(string eventType)
    {
        // You can add your own Subscription Logic here
        // to each individual Person to react to a case of
        // either a sneeze or cough (ie Move Person 20px to the right
        // then let out a speech bubble "eeww!!!")
    }

    public void Sneeze()
    {
        this.PerformBodilyFunction("Sneeze");
    }

    public void Cough()
    {
        this.PerformBodilyFunction("Cough");
    }

    private void PerformBodilyFunction(string eventType)
    {
        PersonEventArgs prsn = new PersonEventArgs();
        prsn.PersonsBodilyFunction = eventType;
        prsn.PersonsName = this.Name;
        EventDispatcher.Dispatch(eventType, prsn);
    }
}
Tip: For those of you who are new to .NET, you will notice that the property Name only has a get; and set; and nothing else. Well, that's the power of Visual Studio working there: when you use the "prop" snippet approach to setters/getters, it automates the battle for you, so there's no more creating set/get accessors that reference hidden private fields.
Tip: I prefer to keep string identifiers, as once you start embedding object references into the various data packets that float around the code, you can sometimes get lost in a garbage-collection nightmare. Play it safe and agree that it's your job as a developer to keep public objects as identifiable and unique as you can, so others can find you!
This concept is something I've used for many years in other languages, and it's quite a nice tool for your day-to-day RIA solutions. Depending on how you implement it, it can at times get you out of a bind fast, and it also does a really nice job of enforcing a layer of abstraction where one doesn't appear to be required (yet later saves your bacon, as the age-old "ooh, glad I had that there now that I think about it" does apply to this ball of code).
/// <summary>
/// Event delegate definition for Nexus related events.
/// </summary>
/// <param name="args">Arguments associated to the event.</param>
public delegate void NexusEvent(NexusEventArgs args);

/// <summary>
/// Implementation of a multi-broadcaster event dispatcher.
/// </summary>
public class EventDispatcher
{
    /// <summary>
    /// Holds a list of event handlers subscribed per event.
    /// </summary>
    private static Dictionary<string, List<NexusEvent>> _subscribers = new Dictionary<string, List<NexusEvent>>();

    /// <summary>
    /// Subscribes a handler to an event for deferred execution.
    /// </summary>
    /// <param name="evtName">Name of the event to which the handler will be subscribed.</param>
    /// <param name="eHandler">Handler to be executed every time the event gets dispatched.</param>
    public static void Subscribe(string evtName, NexusEvent eHandler)
    {
        List<NexusEvent> handlers;

        if (!_subscribers.TryGetValue(evtName, out handlers))
        {
            handlers = new List<NexusEvent>();
            _subscribers.Add(evtName, handlers);
        }

        handlers.Add(eHandler);
    }

    /// <summary>
    /// Removes a handler from an event for deferred execution.
    /// </summary>
    /// <param name="evtName">Name of the event from which the handler will be unsubscribed.</param>
    /// <param name="eHandler">Handler to be removed from being dispatched.</param>
    public static void RemoveSubscription(string evtName, NexusEvent eHandler)
    {
        List<NexusEvent> handlers;

        if (_subscribers.TryGetValue(evtName, out handlers))
        {
            handlers.Remove(eHandler);
        }
    }

    /// <summary>
    /// Broadcasts an event through its corresponding subscribers.
    /// </summary>
    /// <param name="evtName">Name of the event to be broadcast.</param>
    /// <param name="args">Arguments associated to the event being propagated.</param>
    public static void Dispatch(string evtName, NexusEventArgs args)
    {
        List<NexusEvent> handlers;

        if (_subscribers.TryGetValue(evtName, out handlers))
        {
            args.EventName = evtName;

            foreach (NexusEvent d in handlers)
            {
                d(args);
            }
        }
    }
}
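The listings above dispatch a PersonEventArgs but never show it; here is a minimal sketch of what the two argument classes could look like (my assumption, since the original post does not include them), consistent with the EventName property that Dispatch sets:

/// <summary>
/// Base argument type carried by every dispatched Nexus event.
/// </summary>
public class NexusEventArgs
{
    /// <summary>Name of the event, filled in by EventDispatcher.Dispatch.</summary>
    public string EventName { get; set; }
}

/// <summary>
/// Event arguments describing who performed which bodily function.
/// </summary>
public class PersonEventArgs : NexusEventArgs
{
    public string PersonsName { get; set; }
    public string PersonsBodilyFunction { get; set; }
}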
Ping back from Samiq Bits
[... Visited Scott at his office and ended up co-writing the first preview of an MVC framework for Silverlight 2.0 - those kinds of things that happen just because, so don't ask...]
We should chat - I have a similar approach to an event dispatcher, but it uses custom attributes to have a single delegate fire for multiple events, etc.
but as scott warns, approach w/ caution. There are times where an event dispatcher solves lots of problems... but it shouldnt be overused when the current event models work pretty | http://blogs.msdn.com/msmossyblog/archive/2008/06/24/silverlight-how-to-write-your-own-eventdispatcher.aspx | crawl-002 | en | refinedweb |
Ron Jacobs
Enterprise Services
Updated January 2003
Applies To:
Microsoft Windows Server 2003
Microsoft Windows Enterprise Services
Summary: This article discusses the improvements to COM+ on Microsoft Windows Server 2003 and how .NET developers using Microsoft Windows Enterprise Services can take advantage of these enhancements to build highly scalable and reliable web applications. (9 printed pages)
Contents:
Introduction
Scalability Enhancements
Availability Enhancements
Manageability Enhancements
Programming Model Enhancements
Summary
Microsoft® Windows® Enterprise Services provides .NET developers with a powerful set of services for developing robust and scalable server applications. When used from the Microsoft .NET Framework, COM+ services are referred to as Enterprise Services.
Microsoft Windows 2000 includes COM+ version 1.0 with services such as transactions, role-based security, queued components, and loosely coupled events. Microsoft Windows XP and Microsoft Windows Server 2003 includes Enterprise Services (COM+ version 1.5) with many new services to increase the overall scalability, availability, and manageability of your server applications.
These new enhancements with Enterprise Services fall into these categories: scalability, availability, manageability, and programming model enhancements.
Scalable systems can expand to meet future needs. COM+ Services have proven to be the most highly scalable transaction processing middleware in the world today with 6 of the top 10 TPC benchmarks (as of April 2002, see for up to date benchmarks). Now with Windows Server 2003 and Enterprise Services based on COM+, you can build applications with even greater scalability.
The scalability enhancements include configurable transaction isolation levels and application pooling.
One important Enterprise Services feature deals with transactions. Application developers can simply ask for a transaction without worrying about how many resource managers will be involved, or how the components involved in the transaction interact. Transactions ensure the integrity of the applications data by locking particular data records for a certain amount of time.
How much locking you do and for how long is based on the isolation level. A higher isolation level means more locking, and consequently, a lower potential for incorrect data. Additionally, higher isolation levels also mean less concurrency and lower performance. A lower isolation level means less locking and more concurrency. Lower isolation levels also present a higher potential for incorrect data. Choosing a lower isolation level can result in improved performance but you must be aware of the types of concurrency problems you can encounter based on your choice of isolation level.
When running on Windows 2000, COM+ defaults to the highest isolation level (SERIALIZABLE), which guarantees that all work is completely isolated during a transaction. However, this isolation level comes at the price of reduced concurrency and throughput. On Windows 2000 you can change the isolation level after the transaction begins with the SQL command SET TRANSACTION ISOLATION LEVEL, which changes the effective isolation level for the remainder of the transaction. Alternatively, locking hints can set the level for a single query.
For more information on locking hints, see Locking Hints.
Enterprise Services, when running on Windows Server 2003, allows you to specify the isolation level you want to use with your transaction by specifying the Isolation property when you declare the transaction attribute. Specifying the Isolation property when you declare the transaction attribute gives you the option of choosing the isolation level that works best for your application. Additionally, the transaction attribute documents the developer's intended isolation level. For example:
[Transaction(Isolation=TransactionIsolationLevel.ReadCommitted)]
public class Account
{
//...
}
Enterprise Services server applications, running on Windows 2000, always run in a single process. There are certain types of legacy COM objects that are thread unaware and therefore must always run in the main thread of this process. The resulting effect is that the application does not scale very well because all requests must be queued to execute on the main thread. When running this type of application on Windows Server 2003, you can specify that a pool of processes will be used. This approach increases throughput in the same way that a thread pool can increase throughput in a single process. Additionally, having a pool of processes helps protect against process failures since each client request will be dispatched to the next process in the list.
Highly available systems are critical in the world we live in, and Enterprise Services offers enhancements designed to keep servers running trouble free 24 hours a day, 7 days a week. These enhancements include application recycling, the ability to run server applications as Windows NT services, low-memory activation gates, and Web services support.
The performance of most applications degrades over time. This can be a result of memory leaks, reliance on third-party code, and non-scalable resource usage. Enterprise Services running on Windows Server 2003 provides process recycling as a simple solution to gracefully shut down a process and restart it. Process recycling significantly increases the overall stability of your applications, offering a quick fix for known problems and a safeguard against unexpected ones.
You can configure process recycling administratively through the component services UI, or programmatically through the COM+ administrative SDK. You can shut down and recycle processes based on several criteria, including elapsed time, memory usage, the number of calls, and the number of activations.
For more information, see Application Recycling.
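As a rough illustration of the programmatic route, the COM+ administrative catalog can be scripted from managed code through the COMAdmin interop library. The sketch below assumes a COM reference to the COM+ 1.0 Admin Type Library and uses catalog property names documented for COM+ 1.5 (RecycleLifetimeLimit here, plus ConcurrentApps for the application pooling feature described earlier); treat it as a starting point rather than a verified recipe:

using COMAdmin;   // COM reference: COM+ 1.0 Admin Type Library

static void ConfigureRecyclingAndPooling(string applicationName)
{
    COMAdminCatalog catalog = new COMAdminCatalog();
    COMAdminCatalogCollection apps =
        (COMAdminCatalogCollection)catalog.GetCollection("Applications");
    apps.Populate();

    foreach (COMAdminCatalogObject app in apps)
    {
        if ((string)app.Name == applicationName)
        {
            app.set_Value("RecycleLifetimeLimit", 60);  // recycle the process after 60 minutes
            app.set_Value("ConcurrentApps", 3);         // use a pool of 3 processes
        }
    }

    apps.SaveChanges();
}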
Developers using Enterprise Services and Windows Server 2003 now have the ability to configure an Enterprise Services server application as a Microsoft Windows NT service. Previously, only base COM applications could run as a Windows NT service.
This feature gives you more control over when your Enterprise Services application starts. Marking it to run as a Windows NT service means that the application can be loaded into memory when the system boots. This is especially useful if you want to make your Enterprise Services application highly available by installing it on a clustered server. For improved security, you can also run your service under special built-in accounts designed for services such as NetworkService or LocalService. These accounts include the minimum privileges required to run as a service and do not require you to change passwords when they expire.
For more information, see Service User Accounts.
System administrators know it can be difficult to configure a server so that its resources are adequate for peak loads. When a server does not have enough physical resources to make peak demand, the server can exhaust virtual memory.
Exhausting virtual memory becomes a problem if the user code or system code does not properly handle memory allocation failures. The server begins to slow down, and as memory is exhausted, memory allocations fail. The server then executes error paths to handle the allocation failures. Finally, if an error path contains a bug in the system or user code running on the server, it is extremely difficult to trap and handle safely.
Enterprise Services, running on Windows Server 2003, prevents situations in which these error paths might be run on a server. Rather than wait for memory allocations to fail in a piece of code, Windows Server 2003 checks memory before it creates a new process or object. If the percentage of virtual memory available to the application falls below a fixed threshold, the activation fails before the object is created. By failing these activations that would normally run, the low-memory activation gates feature greatly enhances system reliability.
Web services increases availability by making your service available to all kinds of clients. Enterprise Services uses .NET remoting to bring Web services to the world of COM by making it possible for any COM object to be accessed as a Web service, or to call a Web service as though it were any other COM object. You can also use .NET remoting as a simple way to make your .NET class library accessible to Web-service clients by simply selecting a box, or adding an assembly attribute to your application. When registered, this approach will cause your Enterprise Services application to create a Web site on Microsoft Internet Information Server (IIS), and to connect the Web site to your components automatically.
The following example shows how to use the ApplicationActivation attribute with the SoapVRoot attribute to automatically publish your components as a Web service.
using System;
using System.EnterpriseServices;
[assembly: ApplicationActivation(ActivationOption.Server, SoapVRoot="AccountWebService")]
For more information, see COM+ Web Services: The Check Box Route to XML Web Services and Microsoft .NET and Windows XP COM+ Integration with SOAP.
Windows Server 2003 makes managing Enterprise Services applications easier than ever before, and includes the following features: the ability to pause and disable applications, process dumps, and application partitions.
When running on Windows 2000, it is not possible to prevent server applications from being activated. If you stop the application and a request to activate a component from that application is received, the application will immediately start again. Administrators need to be able to pause or disable applications so they can make changes to the application configuration, or assemblies and component dlls, without removing the server from the network.
When running Enterprise Services on Windows Server 2003, you can pause any running process. While the process is paused, it will not accept requests for further activations. If requests are received, a new process will start. This enables administrators to troubleshoot applications by attaching a debugger or doing a process dump while the process is paused. Other clients will have their work routed to a new process so they can continue with their work.
You can also disable an application or component so that activations will not be processed. If an application or component is disabled it will remain disabled until you enable it even if you reboot the server. This allows administrators to disable an application in order to deploy a new version of the assemblies, or to make other changes to the application configuration.
It is not easy to troubleshoot applications in a production environment. How do you gather information on a problem without disturbing the running processes? Enterprise Services running on Windows Server 2003 provides a solution through its new process dump feature. This feature allows the system administrator to dump the entire state of a process without terminating it.
For example, the application service provider (ASP) hosting your application finds a problem, but cannot provide much more information other than the fact that something is not working. With the process dump feature, the ASP can take a non-invasive snapshot of the running processes. The details are saved in a log file, which the ASP can send to you or Microsoft Product Support Services, making it much easier to troubleshoot the Enterprise Services application. Using advanced debuggers like WinDBG, you can quickly identify which thread is causing the problem and then diagnose the problem.
On Windows 2000, COM+ allows you to configure an implementation of a component only one time on a computer. When using Enterprise Services with Windows Server 2003, you can use a feature called application partitions that allows multiple versions of COM and .NET components to be installed and configured on the same machine. This feature can save you the cost and time-consuming effort of using multiple servers to manage different versions of an application.
To see the benefits, consider a hosted application. For example, you have 1000 different customers accessing your application through an ASP. All 1000 customers require a different back-end database and different security settings. To do this with Windows 2000, you would have to install 1000 versions of the application on 1000 physically separate servers.
With the application partitions feature in Enterprise Services, you can create 1000 partitions, one for each version of the application, on a single computer. Each partition acts, in effect, as a virtual server. After installing the application into each partition, you create partition sets that map users to the logical servers. This mapping determines the application version that your customer will access. Users can only access components from partitions in their partition sets. The benefit of partitions is clear. It is much easier and more cost-effective to manage one large server, or even a few larger servers, rather than many small servers.
For more information, see Creating and Configuring COM+ Partitions.
From its inception, Microsoft Transaction Server (MTS), followed by COM+, introduced a new style of component-based development for server applications. Enterprise Services continues this trend, and when you use Enterprise Services on Windows Server 2003, the following features are available to assist you in building server applications: component aliasing, public and private components, process initialization, and services without components.
With COM+ on Windows 2000, you could only configure a particular implementation of a component one time on a computer. The component's CLSID referred to a combination of the component's code and its configuration requirements. Windows Server 2003 provides component aliasing, which allows you to configure one physical implementation of a component many different times.
To illustrate the benefits of component aliasing, consider the example of a component that only takes calls to store procedures in a database. You tested and debugged this component, and now want to use it in 10 different applications. Each of those 10 applications connects to a different back-end database. You can use object construction strings to pass in the DSN information for each database. The problem is that you can only register and configure the component once, so you can only specify one object constructor string.
Previously, you would have cut and pasted the code to create 10 different components, one for each application. This would mean 10 times the amount of code to test, debug, and maintain. Component aliasing enables you to configure the same physical implementation of the component 10 different ways. You get component re-use at a binary level, rather than at the source-code level. This means less code, lower development costs, and faster time to market.
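In managed code the construction-string mechanism mentioned above looks roughly like this (the DSN value is a placeholder):

using System.EnterpriseServices;

[ConstructionEnabled(Default = "Data Source=PlaceholderDsn")]
public class DataAccess : ServicedComponent
{
    private string _connectionString;

    public string ConnectionString
    {
        get { return _connectionString; }
    }

    // COM+ calls Construct at activation with the construction string configured
    // in the catalog for this component (or for an alias of it).
    protected override void Construct(string constructString)
    {
        _connectionString = constructString;
    }
}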
With COM+ on Windows 2000, all components in your application are public. A public component can be activated from other applications. However, you might have several helper components within an application which are meant to be called only from other components within that application. Windows Server 2003 enables you to mark these components as private. A private component can only be seen and activated by other components in the same application.
The public and private components feature provides the developer with more control over what functionality to expose. You need only document and maintain the public components. You also have the option of creating private components that cannot be accessed from outside the application, but may still take advantage of all the features of Enterprise Services.
Many server applications need to do specific initialization and cleanup when they are started and shut down. When running on Windows 2000 (prior to SP3) there is no way to execute code when the hosting process starts and, while you could take care of the initialization on the first activation, dealing with cleanup is even more difficult because cleanup has to be done when the last component is released.
When running on Windows Server 2003 (and Windows 2000 SP3), you can create a class that implements the IProcessInitializer interface. When the process starts up, COM+ calls IProcessInitializer.Startup; when the process shuts down, COM+ calls IProcessInitializer.Shutdown. This gives your component the opportunity to take any necessary action, such as initializing connections, files, caches, and so on.
For example, a component that wants to open a log file could implement IProcessInitializer, open the log file at process start, and close it at process shutdown:
// Assumes: using System.EnterpriseServices;
// OpenLogFile, Write, and CloseLog are the application's own log helpers.
public class ServerLog : ServicedComponent, IProcessInitializer
{
    // ...

    // Called by COM+ when the hosting process starts.
    public void Startup(object punkProcessControl)
    {
        OpenLogFile();
        // Record startup in the log file
        Write("Server application started");
    }

    // Called by COM+ when the hosting process shuts down.
    public void Shutdown()
    {
        // Record shutdown in the log file
        Write("Server application shut down");
        CloseLog();
    }
}
On Windows 2000, if you want to take advantage of COM+, you have to package your managed code into a class derived from ServicedComponent that is registered with component services. This forces developers who want services such as transactions to factor their class designs into transactional and non-transactional classes.
Windows Server 2003 allows you to programmatically enter and leave a Service Domain by making a pair of API calls. When your code executes within the service domain, it behaves as though it is in a serviced component. Services such as transactions will be applied to your component automatically. This scenario makes it possible to build a component that uses transactions for some methods, and does not have to inherit from ServicedComponent.
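A minimal sketch of this pattern is shown below; the method body is a placeholder, but ServiceConfig, ServiceDomain.Enter, and ServiceDomain.Leave are the System.EnterpriseServices APIs that implement service domains (they require Windows Server 2003 and the .NET Framework 1.1 or later).
using System.EnterpriseServices;

public class TransferHelper
{
    public void TransferFunds()
    {
        ServiceConfig config = new ServiceConfig();
        config.Transaction = TransactionOption.Required;

        ServiceDomain.Enter(config);
        try
        {
            // Work done here runs as though it were inside a serviced
            // component, so a COM+ transaction is active.
            ContextUtil.SetComplete();
        }
        finally
        {
            // Leave returns a TransactionStatus describing the outcome.
            ServiceDomain.Leave();
        }
    }
}
Note that TransferHelper does not inherit from ServicedComponent; only the code inside the Enter/Leave block receives COM+ services.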
The .NET platform is a great way to write new applications, with rich tools and languages that make developers more productive than ever. Enterprise Services helps to make these applications enterprise-ready with a number of features.
You can use the .NET Framework on Windows 2000 and create a great application. Building and deploying your application on Windows Server 2003 will give you more options for the design and architecture of your system, and your application will run faster and more reliably than ever before.
The Windows SDK is available for customers to install as either an
This SDK release supports the following platforms:
...and is compatible with:
Here is a small sampling of what’s in this SDK, with a more complete list available in the Getting Started section in the Windows SDK documentation.
Check out the Windows SDK blog and the Windows SDK MSDN Developer Center over the coming days and weeks for more information about the Windows SDK. As always, please look over the Release Notes for a description of known issues before you install the SDK.
The Code Gallery has found its way to the ASP.NET DevCenter home page - Subscribe now and be the first to know when the newest ASP.NET code samples and sample applications are posted.
You can check out all of the latest code snippets and samples from the comfort of your favorite DevCenter home page right above the CodePlex feed. This new feed is scoped to provide only content specific to the ASP.NET developer and as always, you can also subscribe to this feed in your favorite aggregator. From the MSDN Code Gallery homepage you can also navigate code samples via the tag cloud.
This just in on the Dynamics DevCenter:
Microsoft Dynamics CRM 4.0 Implementation Guide
The Microsoft Dynamics CRM 4.0 Implementation Guide contains comprehensive information about how to plan, install, and maintain Microsoft Dynamics CRM 4.0.
Microsoft Dynamics CRM 4.0 Deployment SDK
The Microsoft Dynamics CRM 4.0 Deployment SDK presents information to help you write code using the Deployment Web service.
Microsoft Dynamics Mobile consists of a mobile sales application and tools to enable customers and partners to create and run mobile business solutions. The 2008 release of Microsoft Dynamics NAV 5.0 SP1 contains the integration components for Microsoft Dynamics Mobile 2008. Microsoft Dynamics Mobile 2008 also supports Microsoft Dynamics NAV 4.0 SP3. The following documentation is available:
The MSDN Code Gallery is the latest developer portal offering for code snippets, samples, sample applications, and other great resources - including pages that describe samples, supporting docs with screenshots, and design documents. We've also added hosted discussions about the code samples, sample projects, and other resources that have been added to the gallery. The MSDN Code Gallery is open to everyone to contribute to, and it is a pure storage site with no project management capabilities. If you need to manage a live code project and collaborate with others on it, head over to CodePlex, another offering that we provide for open source project hosting.
For more see Soma's Blog: MSDN Code Gallery - snippets, samples and resources
The Dynamics NAV 5.0 Online Help Toolkit contains tools and information that you need to customize and compile HTML Help for Microsoft Dynamics NAV 5.0. You will need to log into PartnerSource to access the toolkit.
Last."
The .NET Compact Framework 3.5 Redistributable includes everything you need to test .NET Compact Framework 1.0, 2.0 and 3.5 applications.
Here is some other great new content on the DevCenter that has been posted to support this launch:
Tuesday, December 11, 2007 2:22 PM
Download
Microsoft Dynamics CRM 4.0 SDK
Coming soon! The SDK for Microsoft Dynamics CRM 4.0 contains all new information about creating plug-ins, working with custom workflow activities, using the new Web services, and much more.
Microsoft Dynamics CRM 3.0: Update Rollup No. 2
Update Rollup 2 is a tested, cumulative set of updates for Microsoft Dynamics CRM Server 3.0 and Microsoft Dynamics CRM client for Outlook 3.0, including performance enhancements, that are packaged together for easy deployment.
Learn
Microsoft Dynamics CRM 3.0: Extending Marketing Automation
Learn how to programmatically manage marketing or sales events, such as seminars, using the Microsoft Dynamics CRM 3.0 Software Development Kit (SDK). Also, learn how to create and integrate a .NET Framework 2.0 Web application into Microsoft Dynamics CRM.
Microsoft Dynamics CRM 3.0: Creating and Publishing Knowledge Base Articles
Learn how.
Microsoft Dynamics CRM 3.0 SDK
The SDK contains a wealth of resources, including code samples, that are designed to help you build powerful vertical applications using the Microsoft Dynamics CRM platform.
Wednesday, November 28, 2007 10:51 AM
The ongoing improvements to the ASP.NET DevCenter continue with this latest celebrity (Tahiti 1.5.1) update.
We heard what lots of you have been saying loud and clear through your generous feedback and traffic (or lack thereof), so we are rolling out these updates in an effort to unbury some really great learning content that has been hidden behind tabs or buried deep below the surface.
Have a look and let us know what you think, and whether we missed something you would love to see added.
Thanks ~ Eric
Saturday, November 15, 2007 12:05 PM
It is also available as a separate download for ASP.NET 2.0.
Download the latest version here.
Monday, November 19, 2007 10:09 AM
Set your browsers to download...
The latest version of the .NET Framework and VS 2008 are here. Experience the latest release of the most productive and powerful development tool and user interface platform on the planet. Learn about the new features for Visual Studio 2008 and the .NET Framework 3.5, from built-in ASP.NET AJAX support, to the new Visual Studio Web page designer, to the enhanced JavaScript support. Then, download a free copy of Visual Web Developer to try it out yourself!
There’s a whitepaper by David Chappell describing the .NET Framework 3.5 here:
Also check out the .NET Framework 3.5 common namespaces and types poster here:
Thursday, November 08, 2007 10:53 AM
The Dynamics AX team has just made a crop of great new content available on the MSDN DevCenter.
New developer documentation this month includes:
For a full list of new and updated topics, see: DAX_ContentChanges_2007_10_October.doc, available on the Microsoft Dynamics AX 4.0 Documentation Updates site.
Monday, November 12, 2007 12:17 PM
I’m really pleased to announce the launch of the new Tahiti 1.5.1 design layout updates to the ASP.NET Developer Center and the Learn ASP.NET pages. These pages will continue to evolve over the next few weeks!
Eight weeks ago we released a design update (code named "Tahiti") to the Visual Basic and C# developer centers which was intended to:
We listened to our internal (TAGM, DevDiv, Subsidiary) and external stakeholders, and rolled out this design with the intent to monitor its performance and iterate as appropriate. Well, the time to iterate is now! Traffic is up across the life of all of the pilot sites and the key page tasks.
Bundled with this release is the new events control rolled out on the Visual C# Developer Center in the right rail. We’ve also implemented a geo-detection block for non-EN-US, English-speaking subsidiaries, which overrides the US events widget.
Tuesday, November 06, 2007 9:00 AM
The Microsoft Dynamics GP team has made their new developer toolkit available on MSDN. The Developer Toolkit currently is available for CustomerSource users and paid MSDN subscribers and includes Web Services for Microsoft Dynamics GP, Visual Studio Tools for Microsoft Dynamics GP, and eConnect, which allow you to build Microsoft .NET-based solutions, customizations, and extensions.
NAME
auto_load, tcl_startOfNextWord, tcl_startOfPreviousWord, tcl_wordBreakAfter, tcl_wordBreakBefore - standard library of Tcl procedures
SYNOPSIS
auto_load cmd
tcl_startOfNextWord str start
tcl_startOfPreviousWord str start
tcl_wordBreakAfter str start
tcl_wordBreakBefore str start
DESCRIPTION
The library procedures are loaded by sourcing the initialization script:
source [file join [info library] init.tcl]
- auto_load cmd
- This command attempts to load the definition for a Tcl command named cmd from the index files generated by the auto_mkindex command. If cmd is found in an index file, then the appropriate script is evaluated to create the command. The auto_load command returns 1 if cmd was successfully created.
- tcl_endOfWord str start
- Returns the index of the first end-of-word location that occurs after a starting index start in the string str. An end-of-word location is defined to be the first non-word character following the first word character after the starting point. Returns -1 if there are no more end-of-word locations after the starting point. See the description of tcl_wordchars and tcl_nonwordchars below for more details on how Tcl determines which characters are word characters.
- tcl_startOfPreviousWord str start
- Returns the index of the first start-of-word location that occurs before a starting index start in the string str. Returns -1 if there are no more start-of-word locations before the starting point.
- tcl_wordBreakAfter str start
- Returns the index of the first word boundary after the starting index start in the string str. Returns -1 if there are no more boundaries after the starting point in the given string. The index returned refers to the second character of the pair that comprises a boundary.
- tcl_wordBreakBefore str start
- Returns the index of the first word boundary before the starting index start in the string str. Returns -1 if there are no more boundaries before the starting point in the given string. The index returned refers to the second character of the pair that comprises a boundary.
VARIABLES
The following global variables are defined or used by the procedures in the Tcl library:
- auto_execs
- Used by auto_execok to record information about whether particular commands exist as executable files.
- env(TCL_LIBRARY)
- If set, then it specifies the location of the directory containing library scripts (the value of this variable will be assigned to the tcl_library variable and therefore returned by the command info library).
- tcl_nonwordchars
- This variable contains a regular expression that is used by routines like tcl_endOfWord to identify whether a character is part of a word or not. If the pattern matches a character, the character is considered to be a non-word character. Under Unix, everything but numbers, letters, and underscores are considered non-word characters.
- tcl_wordchars
- This variable contains a regular expression that is used by routines like tcl_endOfWord to identify whether a character is part of a word or not. If the pattern matches a character, the character is considered to be a word character.
- unknown_pending
- Used by unknown to record the command(s) for which it is searching. It is used to detect errors where unknown recurses on itself infinitely. The variable is unset before unknown returns.
SEE ALSO
info(n), re_syntax(n)
KEYWORDS
auto-exec, auto-load, library, unknown, word, whitespace
Important: Use the man command (% man) to see how a command is used on your particular computer.
Localization is the customization of applications for a given culture or locale. Localization consists primarily of translating the user interface into the language of a particular culture or locale. Localizing an application involves creating a separate set of resources (such as strings and images) that are appropriate for the users of each targeted culture or locale and that can be retrieved dynamically depending on the culture and locale.
This topic contains the following sections:
Localization and Resource Files
Localizing Out-of-Browser Applications
Localization and String Size
Displaying Chinese, Japanese, and Korean Text
Deploying a Targeted Localized Application
Retrieving Specific Localized Resources
The .NET Framework for Silverlight uses a hub-and-spoke model to package and deploy resources. The hub is the main assembly that contains the nonlocalizable executable code and the resources for a single culture, which is called the neutral or default culture. The default culture is the fallback culture for the application. The spokes connect to satellite assemblies, and each satellite assembly contains the resources for a single supported culture, but does not contain any code. At run time, a resource is loaded from the appropriate resource file, depending on the value of the application's current user interface (UI) culture, which is defined by the CultureInfo.CurrentUICulture property.
To understand how resources are loaded, it is useful to think of them as organized in a hierarchical manner. A localized application can have resource files at three levels:
At the top of the hierarchy are the fallback resources for the default culture, for example, English ("en"). These are the only resources that do not have their own file; they are stored in the main assembly.
At the second level are the resources for any neutral cultures. A neutral culture is associated with a language but not a region. For example, French ("fr") is a neutral culture.
At the bottom of the hierarchy are the resources for any specific cultures. A specific culture is associated with a language and a region. For example, French Canadian (fr-CA) is a specific culture.
When an application tries to load a localized resource, the Resource Manager does the following:
It tries to load the resource that belongs to the user's specific culture. If there is no satellite assembly for the specific culture, or if the resource cannot be found in it, the search for the resource continues with step 2.
If the operating system provides a list of preferred fallback languages, the Resource Manager iterates the list and tries to load the resource for each culture until it finds a match. If there is no match, or if the operating system does not support fallback logic, it continues to the next step.
If there is no satellite assembly for the specific culture or the resource cannot be found in it, or if resources for the cultures defined by operating-system fallback logic cannot be found, the Resource Manager tries to load the resource from the satellite assembly for the parent of the current UI culture (the neutral culture).
If that satellite assembly does not exist, or if the resource cannot be found in it, the Resource Manager loads the fallback resources from the assembly that is specified by the System.Resources.NeutralResourcesLanguageAttribute class.
If no resources are found, resources for the invariant culture are used.
It is best to generalize as much as possible when deciding whether to store resources in satellite assemblies for specific cultures or for the neutral culture. For example, when you localize an application for the French (France) culture ("fr-FR"), we recommend that you store all resources that are common to French cultures in the satellite assembly for the neutral culture. This enables users of other French-speaking cultures, such as the French (Canada) culture ("fr-CA"), to access localized resources instead of the resources for the default culture.
Visual Studio and the .NET Framework common language runtime handle most of the details of compiling the resource files and loading the appropriate resource for the user's current culture, provided that the resource files have been named correctly and the project has been configured properly. For information about including localized resource files in your application, see How to: Add Resources to a Silverlight-based Application.
After you add the localized resource files to your application, you can use the ResourceManager class to retrieve string resources. The following example illustrates how to retrieve a string named Greeting. (The Visual Basic code relies on the My.Resources class, which wraps calls to ResourceManager methods.) When you use the ResourceManager class, you do not have to specify the resources that you want to retrieve, unless you want to retrieve localized resources other than the resources for the current UI culture. The runtime automatically tries to retrieve the correct resource based on the current thread's CultureInfo..::.CurrentUICulture property. If a culture-specific resource for the current culture cannot be found, the runtime uses the method described in The Hierarchical Organization of Resources to provide a resource.
Dim greeting As String = My.Resources.StringLibrary.Greeting
// Requires: using System.Reflection; using System.Resources;
ResourceManager rm = new ResourceManager("SilverlightApplication.StringLibrary", Assembly.GetExecutingAssembly());
string greeting = rm.GetString("Greeting");
When the project is compiled, the resources for the default culture are included in the main assembly for the Silverlight-based application, which is included in the .xap file. In addition, the satellite assembly for each culture that is specified by the <SupportedCultures> property in the project file is included in the application's .xap file. Each satellite assembly is also listed in the <Deployment.Parts> section of the AppManifest.xml file.
In the application's .xap file, each satellite assembly is packaged in a subdirectory of the application directory. The individual satellite assemblies are all named applicationName.Resources.dll and are stored in a subdirectory whose name specifies the culture that is represented by the satellite assembly's resources. For example, resources for the German (Germany) culture ("de-DE") are stored in the de-DE subdirectory.
This approach creates a download that includes all the localizable resources of a Silverlight-based application. If an application has been localized for many cultures or the size of localized resources is significant, this method of deployment will increase the size of the application and lengthen its download time. Instead, you can create separate versions of a Silverlight-based application that target specific cultures.
To make your application localizable, you should also modify the strings that are defined in your application's XAML. For example, your application might include a Button control that is defined as follows:
<Button Content="Click Me!" />
Instead of providing a single static string, you can make the XAML localizable so that the text displayed by the Button control reflects the user's culture.
Making XAML localizable is similar to localizing code: You place all the localizable strings into a separate resource (.resx) file, and you use XAML code that extracts the strings from that file. To extract the strings, you use the {Binding} markup extension to bind a XAML property to the strongly typed wrapper that Visual Studio generates for resource files. For instructions, see How to: Make XAML Content Localizable.
You can also make XML content, rich text, non-string values, and non-dependency properties localizable. These and other scenarios are discussed in the topics listed in the See Also section at the end of this topic.
To support localized out-of-browser applications, you have to create a new .xap file for each localized culture or locale that your application supports. In Visual Studio, you can do this by creating new build configurations. For more information, see Deploying a Targeted Localized Application later in this topic.
For more information about developing out-of-browser applications, see Out-of-Browser Support and How to: Configure an Application for Out-of-Browser Support.
You may also want to make the window title, shortcut name, and description of the out-of-browser application localizable. Ordinarily, an out-of-browser application retrieves its window settings from the application's OutOfBrowserSettings.xml configuration file. To localize the application's window settings, you create a unique OutOfBrowserSettings.xml file for each culture that you support. For example, if you localize your application for the French (France) culture, you would create an OutOfBrowserSettings.fr-FR.xml file. You can then modify the project settings to use the new settings file. For detailed instructions, see How to: Localize Information About an Out-of-Browser Application.
A translated string may take more room on the screen than the original string. If you are developing your Silverlight-based application in English and localizing it, you should assume that strings in some languages are about 40% longer than in English. For this reason, it is recommended that you use automatic layout for elements that contain localized text. This involves the following:
Avoid the Canvas control, which uses hard-coded sizes and positions. Instead, use the Grid control, the StackPanel, or other panel elements that support automatic layout.
Avoid Glyphs. Instead, use the TextBlock or TextBox controls.
For long strings, set TextWrapping="Wrap" on TextBlock controls.
Do not set the Height and Width properties of your TextBlock control. Let the Silverlight runtime decide the size automatically. You may also set either the Height or the Width property and let Silverlight calculate the other property. If you do this, be sure to set TextWrapping="Wrap".
To properly display Chinese, Japanese, and Korean text, Silverlight needs to know which language it is displaying, because the same Unicode characters are displayed differently depending on language. Silverlight determines the language from the FrameworkElement..::.Language property or from the xml:lang attribute. If your XAML applies to only a single language, you can simply set the property at the top of the file, as shown in the following example, which sets the language to Japanese.
<UserControl Language="ja-JP" />
However, if you want your XAML to be localizable, you want to set the Language property to a localized value. To do this, create an entry such as Language in your resource file, and assign it a value that is the name of the localized language or culture. Change the XAML to use a {Binding} markup extension in the same way that you would localize any other property. For example:
<UserControl Language="{Binding Path=SilverlightApp.Language,
                        Source={StaticResource LocalizedStrings}}" />
In most cases, you want users to download a Silverlight-based application that contains only the resources of their neutral and specific culture, instead of the resources in the complete set of localized satellite assemblies. You can use Visual Studio to create a localized Silverlight-based application that contains a designated subset of your application's satellite assemblies. For instructions, see How to: Create a Build that Targets a Specific Culture. You can then use ASP.NET to deploy the application based on the user's culture.
After you create a set of .xap files that contain the localized resources, you must ensure that the user can download the version of the Silverlight-based application that is appropriate for his or her culture. There are many ways to do this. For example, you can let the user select a language from a portal page, you can read the headers in the user request and redirect the user to the localized application, or you can dynamically modify the HTML page served to the user so that it embeds the localized version of the application.
The following example illustrates one possible implementation. When the user requests a portal page, the ASP.NET page's current culture and current UI culture is automatically set to match the language preference from the user's browser settings. If a localized version of the Silverlight-based application is available for that language, the user is redirected to it. If not, and a Silverlight-based application for the user's neutral culture is available, the user is redirected to that application. Otherwise, the user is redirected to the application for the default culture. The example requires that the ASP.NET page's Page tab include the UICulture="auto" attribute.
Imports System.Globalization
Imports System.Threading
Partial Class _Default
Inherits System.Web.UI.Page
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
' Create array of available languages.
Dim availableCultures() As String = {"fr-FR", "en-US"}
Dim availableNeutralCultures() As String = {"fr", "en"}
Dim url As String = "TestPage.html"
' Get user's language preferences.
Dim currentUi As CultureInfo = Thread.CurrentThread.CurrentUICulture
' Redirect if user language preference is available.
For Each culture As String In availableCultures
    If culture = currentUi.Name Then Response.Redirect(culture + "/" + url)
Next
' Get user's neutral culture.
Dim neutralCulture As CultureInfo = currentUi.Parent
' Determine if neutral culture is supported.
For Each culture As String In availableNeutralCultures
    If culture = neutralCulture.Name Then Response.Redirect(culture + "/" + url)
Next
' Fall through to non-localized version of application.
Response.Redirect(url)
End Sub
End Class
using System;
using System.Globalization;
using System.Net;
using System.Threading;
using System.Web;
using System.Web.UI;
public partial class _Default : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
// Create array of available languages.
string[] availableCultures = {"fr-FR", "en-US"};
string[] availableNeutralCultures = { "fr", "en" };
string url = "TestPage.html";
// Get user's language preferences.
CultureInfo currentUi = Thread.CurrentThread.CurrentUICulture;
// Redirect if user language preference is available.
foreach (string culture in availableCultures)
{
if (culture == currentUi.Name)
Response.Redirect(culture + "/" + url);
}
// Get user's neutral culture.
CultureInfo neutralCulture = currentUi.Parent;
// Determine if neutral culture is supported.
foreach (string culture in availableNeutralCultures)
{
if (culture == neutralCulture.Name)
Response.Redirect(culture + "/" + url);
}
// Fall through to non-localized version of application.
Response.Redirect(url);
}
}
In some cases, you may want to retrieve resources from a culture other than the user's current culture. You can do this in two ways:
You can set the current culture and current UI culture of a Silverlight-based application in the Object tag that is responsible for loading the application from an HTML page by adding parameters such as the following:
<param name="uiculture" value="pt-BR" />
<param name="culture" value="pt-BR" />
In this case, the values of uiculture and culture are used to initialize the thread cultures of the Silverlight-based application to "pt-BR" if that culture is supported on the client. If the value of the culture or uiculture parameter is auto or if the parameter is absent, the current culture and current UI culture default to the values defined by the client operating system that is running the browser.
You can set the current culture and current UI culture programmatically, typically in the application's Application.Startup event handler, by assigning a CultureInfo object that represents the specified culture to the System.Threading.Thread.CurrentThread.CurrentUICulture property. The following example illustrates how to do this. It creates an array of culture names, selects one at random, and uses it to set the application's current UI culture.
Private Sub Application_Startup(ByVal o As Object, ByVal e As StartupEventArgs) Handles Me.Startup
' Create an array of culture names.
Dim cultureStrings() As String = {"en-US", "fr-FR", "ru-RU", "en-GB"}
' Get a random integer.
Dim rnd As New Random()
Dim index As Integer = rnd.Next(0, cultureStrings.Length)
' Set the current culture and the current UI culture.
Thread.CurrentThread.CurrentCulture = New CultureInfo(cultureStrings(index))
Thread.CurrentThread.CurrentUICulture = New CultureInfo(cultureStrings(index))
Me.RootVisual = New Page()
End Sub
private void Application_Startup(object sender, StartupEventArgs e)
{
// Create an array of culture names.
string[] cultureStrings = { "en-US", "fr-FR", "ru-RU", "en-GB", "fr-BE", "de-DE" };
// Get a random integer.
Random rnd = new Random();
int index = rnd.Next(0, cultureStrings.Length);
// Set the current UI culture.
Thread.CurrentThread.CurrentCulture = new CultureInfo(cultureStrings[index]);
Thread.CurrentThread.CurrentUICulture = new CultureInfo(cultureStrings[index]);
// Load the main control
this.RootVisual = new Page();
}
You can also retrieve a culture-specific resource on a resource-by-resource basis. To do this, specify the culture whose resource is to be retrieved in the call to the resource retrieval method. For example, to retrieve a string for the Spanish (Spain) culture ("es-ES") when the application's current culture is English (United Kingdom) ("en-GB"), you would call the ResourceManager.GetString(String, CultureInfo) method instead of the ResourceManager.GetString(String) method. Similarly, to retrieve an image or some other binary resource, call the GetObject(String, CultureInfo) method instead of the GetObject(String) method.
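For example, the following sketch retrieves the Greeting string for the Spanish (Spain) culture regardless of the current UI culture; the resource base name matches the earlier example and is otherwise an assumption about your project.
// Requires: using System.Globalization; using System.Reflection; using System.Resources;
ResourceManager rm = new ResourceManager("SilverlightApplication.StringLibrary", Assembly.GetExecutingAssembly());
string spanishGreeting = rm.GetString("Greeting", new CultureInfo("es-ES"));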
This chapter focuses on key Microsoft Application Center 2000 (Application Center) activities—keeping cluster configuration settings and content synchronized and deploying content. Detailed information is provided about each of the features as they are used, including set up and configuration tips. You'll also get an inside look at the sequence of events and processing activities that occur when you use a particular feature.
Synchronization Service Overview and Architecture
Synchronization Reliability
Synchronization Modes and Processing
Synchronizing Cluster Configuration Settings
Synchronizing Content and Applications
Deploying Applications
Special Cases
Configuring, Managing, and Monitoring Synchronization
The Application Center Synchronization Service provides the means for deploying content and applications and synchronizing a cluster—including content, applications, network configurations, and load balancing configuration. As you can see, synchronization is a core technology for maintaining consistent settings and content across Application Center clusters.
The Synchronization Service does not attempt to solve the problems of shrink-wrapped application installation or wide area deployment. However, it does not require any special hardware and implements the following guiding principles:
It works with existing publishing tools, such as Web Distributed Authoring and Versioning (WebDAV) and Microsoft FrontPage 2000.
Synchronization is virtually transparent to the user, and multiple content types are replicated seamlessly across the cluster (for example, files, metabase settings, and system Data Source Names [DSNs]).
At a minimum, synchronization performance is at the same level as the Site Server Content Deployment System (formerly called the Content Replication System) when handling large content updates. Typically it takes about 450 seconds to replicate 250 MB by using the Content Deployment System.
The Application Center "shared nothing" design means that every cluster member can stand alone as a complete unit for serving content and providing access to applications. It also means that any member can assume the cluster controller's role if required to do so, provided that the member contains a complete set of the controller and cluster configuration settings.
As noted in Chapter 2, "Feature Overview," Application Center replicates:
Web content and Active Server Pages (ASP) applications
COM+ applications
Virtual sites (and their associated ISAPI filters) and content
Global ISAPI filters
File system directories and files
Metabase configuration settings
Exportable Cryptographic Application Programming Interface (CAPI) server certificates
Registry keys
System DSNs
Windows Management Instrumentation (WMI) settings that Application Center uses
You can divide all of the items in the preceding list into two broad categories: configuration settings and content (or applications, depending on the cluster). Before examining these two categories in more detail, let's take a look at the underlying architecture that supports the Synchronization Service.
Application Center, like the Content Deployment System, uses a single master replication model, which means that changes to a cluster member are not pushed out to the rest of the cluster; instead, all changes to members come from the cluster controller. Another implication of this model is what happens if changes are made to an individual member. If these changes include data that's contained in the controller's master inventory, they are overwritten the next time synchronization occurs. If, however, these changes are outside the scope of the inventory, the controller isn't aware of them and they're ignored, as illustrated in Figure 6.1.
Figure 6.1 How the single master replication model affects changes to member configuration or content
Note Only files outside the scope of the inventory are ignored; all Internet Information Services 5.0 (IIS) configuration settings on members are overwritten by the controller's IIS configuration settings, which includes all the files that IIS references.
The architecture for the Synchronization Service consists of two parts: the replication engine and replication drivers. Generally speaking, the replication engine manages the synchronization process, while the drivers manage different content types. Figure 6.2 provides a high-level view of the Application Center Synchronization Service architecture.
As noted earlier, the replication engine drives content replication and cluster synchronization. It uses asynchronous communication to support its two-phase synchronization process—first transferring the data, and then writing it. Although synchronization is a two-phase process, the absence of commit and rollback capability means that this process is not truly transactional.
Note The replication engine encrypts DCOM and RPC traffic by using the same encryption mechanism (RPC Packet Privacy) that Microsoft Windows 2000 Server employs between domain controllers and servers. However, HTTP traffic, which only consists of files, is not encrypted. This means that metabase and registry entries are secure, but sensitive information, such as a password, stored in a file is not.
The engine's primary responsibilities include handling administration requests coming from the user interface, coordinating replications among the various drivers, and performing general synchronization tasks. These tasks encompass:
Cross-server communication—The engine handles communications between the controller and any member that's a synchronization target.
Security—The engine uses the ACC_computername account for cross-server communication.
Figure 6.2 The Application Center Synchronization Service architecture
Batching—Changed items are batched during automatic synchronization at the replication source for either a certain period of time (10 seconds) or until the buffer size is exceeded. Because the buffer size is 250 items and the maximum number of buffers is 200, you can theoretically batch up to 50,000 changes. (A conceptual sketch of this batching pattern appears after this list.)
Transport—The engine uses DCOM (or RPC if it's operating in stager mode) to transfer batched updates.
Eventing, logging, and notifications—The engine sends events, as appropriate, in response to specific conditions. (Note: the various drivers also do this.)
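The following sketch is not Application Center code; it simply illustrates the batching pattern described above: accumulate change notifications and flush them when either the item limit or the time limit is reached. The limits shown are the documented defaults, and a real implementation would also flush on a background timer.
using System;
using System.Collections.Generic;

public class ChangeBatcher
{
    private const int MaxItemsPerBatch = 250;                       // documented buffer size
    private static readonly TimeSpan MaxDelay = TimeSpan.FromSeconds(10);

    private readonly List<string> pending = new List<string>();
    private readonly Action<IList<string>> send;
    private DateTime firstChange;

    public ChangeBatcher(Action<IList<string>> send)
    {
        this.send = send;
    }

    // Called by a replication driver whenever it detects a change.
    public void OnChange(string resourcePath)
    {
        if (pending.Count == 0)
        {
            firstChange = DateTime.UtcNow;
        }
        pending.Add(resourcePath);

        if (pending.Count >= MaxItemsPerBatch ||
            DateTime.UtcNow - firstChange >= MaxDelay)
        {
            Flush();
        }
    }

    // Hands the accumulated batch to the transport (DCOM or RPC).
    public void Flush()
    {
        if (pending.Count == 0)
        {
            return;
        }
        send(new List<string>(pending));
        pending.Clear();
    }
}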
The relationship between the replication engine and the various drivers is shown in Figure 6.3, which also shows how data is transferred between the source and target servers.
Figure 6.3 Replication engine and driver architecture
During the synchronization process, two types of data may be sent—either an XML content description list or binary data. There are three types of information that can be expressed as an XML list:
An IHave content list, which describes all the content present on the source.
An Action list, which is a request for updated content that a target sends to the source.
An Update list, which contains all the content that was updated, or a link to the content if it was transferred through the built-in HTTP engine.
For the sake of future synchronization processing examples, let's refer to the preceding lists as IHave, Action, and Update, respectively.
As shown in Figure 6.3, the two protocols for transferring data from the source server to a target server are HTTP and DCOM. RPC is used as an alternative transport to DCOM in staging scenarios. RPC is used because it supports the use of port specification, which is required for deployments through a firewall.
Note The default port specification is port 4243 for HTTP file transfers and port 4244 for RPC data transfers.
When DCOM replication takes place, the driver passes to the engine a well-formed XML object. This object contains the change description for a single resource, such as a path statement for a file or directory.
In cases where replication takes place through a firewall, which is a typical staging scenario, the DCOM data is converted to RPC format and passed through the specified RPC port on the firewall. The replication engine transports the XML object (UTF-8 encoded) to the destination server. The receiving server ensures that the entire batch is received, reconstitutes the RPC data as DCOM, and then instructs the driver to apply the changes by writing the new data to the target.
This method sends the data in binary format by using FastFile transmit, an HTTP API, which is capable of moving large amounts of data quickly. Communication between the source and target servers takes place on a single port and uses HTTP. If used through a firewall, this port has to be enabled as well. (Only the file system driver uses this protocol to copy data.)
Note Although synchronization is accomplished by using HTTP FastFile, configuring bandwidth throttling on the Web server has no affect on synchronization because the replication engine does not use IIS or the front-end adapter.
Each driver shown in Figure 6.3 is responsible for replicating its own particular type of content. Of course, there are several additional activities that each driver handles. Depending on the driver, these activities can encompass:
Monitoring the driver data store.
Reading the driver-specific data.
Writing data to an XML object or to a driver-specific file transfer API.
Comparing the content list on the source and target.
Locking and/or unlocking a resource for writing.
Handling the transfer and committing it to permanent storage for a given resource.
Handling security at the resource level.
Resource replication order
The replication engine calls the appropriate drivers in the order that resources are specified in the metabase (LM\WebReplication\DriverOrder); this order should not be changed.
The default order for resource replication is:
FS—file system
MB—metabase
IIS—IIS sites and bindings
NET—network and Network Load Balancing (NLB) configuration
REG—registry
DSN—DATA SOURCE NAMES
CAPI—CryptoAPI
WMI—Windows Management Instrumentation
COM+—COM+ applications
Assuming that a synchronization or deployment needs to replicate all of the preceding resources, each driver will be called sequentially, up to, and including the NET settings resource group. After the NET settings are applied, the balance of the changes (for example, DSNs and WMI settings) is un-ordered because they are applied concurrently.
Note When an administrator creates a virtual directory that points to an existing path, this approach serves to guarantee that files referenced by virtual directories are applied to each member before the virtual directory settings are written to the IIS metabase.
Let's examine the Application Center replication drivers in more detail, starting with the File System driver.
The File System driver uses standard Microsoft Win32 APIs for read/write functions and picks up change notifications from ReadDirectoryChangesW, which works with FAT- and NTFS-formatted disks. The driver uses a file's last modified date, size, and attributes to create a signature. The File System driver compares the content trees of the controller and individual members to identify content that needs to be replicated.
The driver informs the replication engine of changes as they occur, and when a file is synchronized, the entire file is copied to the target. The File System driver monitors the following areas by default:
All virtual directories identified in the IIS metabase.
All custom errors defined by the metabase.
All ISAPI and IIS filters identified in the IIS metabase.
User-specified directories in an application.
The File System driver assumes that the target has the same disk/directory configuration as the controller and does not attempt to translate drive letters or paths. Therefore, it is necessary to configure all members so that they have identical directory and file structures. The driver detects UNC paths and disk volumes that are using the exchange file system and does not replicate them.
FAT-based file systems do not support ACLs. If you synchronize or deploy from a server that uses a FAT file system to a server that uses NTFS, the files copied to the target inherit the access control lists (ACLs) of the parent directory in which they're written on the target. In the case of file synchronization from an NTFS source to a FAT target, the synchronization or deployment will be aborted and rolled back and an appropriate error event generated.
Note By default, the global replication definition is set to replicate ACLs. You can turn off this feature in the cluster_name Properties dialog box, and then enable/disable ACL replication in a deployment in the New Deployment Wizard.
If possible, you should make sure that your file systems are homogenous across the cluster. The key issue with mixed file systems is that the granularity of the LastModified time differs: it is 2 seconds on FAT and 1 second on NTFS. So, if NTFS is the source and the LastModified time is an odd number of seconds, the file is always replicated, whether or not it has been updated.
The File System driver bypasses most file locking issues by renaming the old version of the file and moving the new version from a temporary location to its final destination. If this operation fails, the driver attempts to repeat this operation several times before stopping that particular write operation and moving on to the next file in its update list.
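A simplified sketch of that apply pattern (swap the old file out of the way, move the new version from its temporary location into place, and retry a few times if the target is locked) might look like the following. This is illustrative only, not the driver's actual code.
using System;
using System.IO;
using System.Threading;

public static class FileApplier
{
    public static bool ApplyUpdate(string tempFile, string targetFile, int retries)
    {
        for (int attempt = 0; attempt <= retries; attempt++)
        {
            try
            {
                string backup = targetFile + ".old";
                if (File.Exists(targetFile))
                {
                    if (File.Exists(backup))
                    {
                        File.Delete(backup);
                    }
                    File.Move(targetFile, backup);      // rename the old version
                }
                File.Move(tempFile, targetFile);        // move the new version into place
                return true;
            }
            catch (IOException)
            {
                Thread.Sleep(1000);                     // target locked; wait and retry
            }
        }
        return false;                                   // caller logs an event and skips this file
    }
}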
The Metabase driver uses the metabase BaseObject to perform read/write functions and utilizes the existing IIS notification code for change notify. The driver uses all the property identifiers and values in a metabase node to create a signature.
The Metabase driver is granular to the node level; that is, when there's notification of a property change, the driver sends all the properties for that node. For example, if you make a change to the ServerComment for site 1, every property in /W3SVC/1/Root will be replicated (this is not recursive).
During a full-synchronization replication, the driver walks the entire metabase and compares its values to those of a member. Any required settings and attributes are copied to the target.
Note Unless you are using a non-NLB cluster, IUSR and IWAM account information is replicated, as well as the IP addresses associated with the controller's virtual sites.
The Registry driver supports read/write functions and obtains change notify by using standard Win32 APIs. This driver doesn't use a signature because of the small size of the data to be transferred, and the default configuration of Application Center does not replicate any registry keys. However, you can identify and add registry keys to an application. The Applications view supports navigation through the entire registry down to the key level. Although the browser doesn't display the key's values, this registry path is sufficient for identifying keys and their values for replication purposes.
Note Two things should be noted about this driver. First, it isn't intended to serve as a registry backup solution (for example, it doesn't handle secured keys). Second, the driver doesn't compare key values; it just sends the entire key content.
The CAPI driver, via the CAPI2 APIs, uses IIS to interrogate the metabase looking for CAPI certificates and a certification trust list (CTL) that is referenced by IIS virtual sites. Then, it will access the CAPI store to extract the relevant information. After receiving this information, the CAPI driver runs these identifiers through a hashing algorithm to generate a signature. See the Signatures sidebar on the following page. When certificates change, the new information is encrypted with the signature, and then it is replicated to members.
Note By default, IIS server certificates are exportable. However, if you disable this capability, they can't be synchronized to other members. The CAPI driver generates an event for each certificate it can't replicate.
The WMI driver performs read/write operations via standard WMI functions (for example, put, get, and delete member of instance) and a custom event consumer provider. It also provides change notify. The driver uses the complete text of an object's instance to derive a signature.
The driver is responsible for comparing, and then moving, the Health Monitor and Application Center namespaces. During a full synchronization, the WMI driver compares controller and member signatures; if they don't match, the driver transfers the newer class/instance text to the target server. After the text is copied, the driver stores the class/instance information in WMI.
The COM+ driver uses the COM+ administrative APIs for read/write functions and uses object properties to create a signature. The driver is also used for replicating selected COM+ Global Catalog settings. Because the Global Catalog doesn't support change notification, these settings are replicated only when a full synchronization takes place or when a COM+ application is replicated. The driver compares COM+ applications on the source and target servers and synchronizes objects that have changed.
Note Only the New Deployment Wizard uses the COM+ driver, so you can synchronize COM+ applications only by using the New Deployment Wizard.
For a more in-depth look at COM+ replication, see "Special Cases" later in this chapter.
Signatures
Application Center uses signatures to compare files or values.
A file signature is created by taking a set of numeric values, such as file size and attributes, and running these values through the MD5 hashing algorithm to compute a value, which is the signature. Every time a file changes, a new signature gets created. By comparing the new signature to the old, you can assume that the file was changed in some fashion.
In most cases, Application Center uses the last write, file size, and file attributes (except for archived files) to create any required signatures.
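As an illustration of the idea (not the product's actual hashing code), a file signature along these lines could be computed like this:
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

public static class FileSignature
{
    public static string Compute(string path)
    {
        FileInfo info = new FileInfo(path);

        // Combine last write time, size, and attributes into one string.
        // (Per the sidebar, the attributes of archived files are treated
        // differently; that detail is omitted here.)
        string raw = info.LastWriteTimeUtc.Ticks + "|" + info.Length + "|" + (int)info.Attributes;

        // Hash the combined value with MD5 to produce the signature.
        using (MD5 md5 = MD5.Create())
        {
            byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(raw));
            return BitConverter.ToString(hash).Replace("-", "");
        }
    }
}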
There are several scenarios that can cause a synchronization failure. For example, the network disconnection of a single server or having a single target server fail and be unreachable can cause a partial failure but would not cause the synchronization to quit. A complete failure is only likely to happen if there is a cluster-wide network or TCP/IP failure.
Note If the cluster controller fails when synchronization is occurring, the synchronization will fail and the content will not be applied. After you bring the controller back online (or designate a new controller) we recommend that you initiate a full synchronization to ensure that the entire cluster is in synchronization.
Application Center uses the existing TCP/IP and DCOM mechanisms for transport recovery if there is a mid-point transfer failure. The only driver responsible for recovery is the File System driver because it implements its own transport.
Apply time recovery is the responsibility of the individual drivers because it occurs when a driver is writing its data stream. If the target is locked, the driver is responsible for retrying the write. Each content driver determines the retry count and interval for its own data. If the apply can't be done within its retry parameters, the driver skips that set of changes and generates an event notification that's sent to the replication engine.
Each synchronization has an end state—that is, a degree of success—and different events are generated and subsequent actions performed depending on that end state. (See Table 6.1.) A "success" end state occurs when all content is replicated and applied to all the targets correctly. Because changes to content are typically set to automatic synchronization mode, full synchronization will be applied only where necessary.
Table 6.1 Replication Results
Success—All changes are replicated and applied to all servers. Corrective action: none. Events generated: successful replication.
Partial Success—One or more, but not all, servers fail to receive or have changes applied. Corrective action: resynchronize the failed items to the targets. Events generated: failure events for the specific target(s).
Complete Failure—Every server fails to receive or have changes applied. Corrective action: repeat the synchronization. Events generated: failure events for every target.
Note Because the Application Center Synchronization Service runs in a separate process from IIS, even a synchronization failure that requires a restart won't affect the cluster's ability to serve content.
As noted earlier in this chapter, the Synchronization Service, running as LocalSystem, uses the replication engine and its drivers to keep cluster members in sync with the cluster controller. This service operates in one of two basic modes: automatic and on-demand. Each mode determines the timing and scope—members and content—of cluster synchronization.
By default, the system is configured for automatic updates. Applications will synchronize changes—configuration or content—as they are made, and will do a full synchronization of the entire cluster on an established time interval.
When you make changes to the cluster controller, such as changing a member's drain time (a configuration change) or modifying part of an application (a content change), automatic synchronization is started. The changes to the controller are then replicated across the cluster. Change-based synchronization is enabled by the automatic session.
When automatic synchronization is first started, a session is created on the cluster controller (for the purpose of illustration, we'll call it the AutoSession). The AutoSession only exists on the controller and its sole purpose is communicating with the replication drivers in order to receive notifications of configuration or content changes. This activity is made possible by the replication engine, which listens for notifications from the replication drivers indicating that a change has occurred in their name space. (You may recall, from "The Replication Drivers" section earlier in this chapter, that each driver is responsible for tracking changes to its namespace[s].)
The AutoSession is a long running session, and it creates all the necessary short-lived synchronization sessions that are required to apply changes to the cluster. When the AutoSession receives a change notification, it creates a new, short-lived session. This new session, which exists on the controller and the members, behaves like any other replication session.
Note Unlike the process described in "Processing Activities During a Full Synchronization" later in this chapter, there is no exchange and comparison of IHave lists, only an Update list is sent to the targets.
The short-lived session only exists for the length of time it takes to complete the synchronization. After the changes are applied to the members, the session is removed.
If there is a failure of either the controller or member, the session is canceled (by either the controller or member). There is no formal retry mechanism for these sessions, except for the hourly automatic sessions that perform a full synchronization.
Warning The settings for interval- and change-based automatic synchronization are combined. As Figure 6.7 indicates, disabling automatic updates also disables hourly full synchronization.
During the setup process, Application Center is configured to automatically perform a "periodic full synchronization" of the cluster every 60 minutes (the default interval). A full synchronization of the cluster means that all the configuration settings and all the applications are synchronized to each member.
Typically, on-demand synchronization is done when you create a new application or deploy an application. This is a partial synchronization, which is to say that only a selected portion of the controller's content is synchronized, rather than a full synchronization of the cluster. However, you can force a full synchronization of the entire cluster at any time. Table 6.2 summarizes both the automatic and on-demand synchronization options that are available to you.
Table 6.2 Synchronizations Summary
Each entry lists the synchronization type, how it is invoked, the resources and target involved, whether system applications are replicated, and whether the target must be in the synchronization loop.
Automatic: change-based. Invoked by: enable in the cluster_name Properties dialog box. Resources: any changed files associated with an application or any changed system settings. Target: all members in the loop. System applications replicated: yes.
Automatic: interval-based. Invoked by: enable and configure in the cluster_name Properties dialog box. Resources: all applications and system settings.
On-demand: member. Invoked by: right-click member_name, and then click Synchronize. Target: the specified member. System applications replicated: no.
On-demand: application. Invoked by: in the Applications view, select the application, and then click Synchronize. Resources: the selected application.
On-demand: cluster. Invoked by: right-click cluster_name, and then click Synchronize Cluster.
Note You can alter the default configuration settings at the cluster, controller, and individual member levels through their Properties dialog box. For more information, see "Configuring, Managing, and Monitoring Synchronization" later in this chapter.
The following processing activities typically take place when the entire cluster is synchronized.
The first thing that takes place is a call from the replication engine to all the drivers, instructing them to walk through their individual namespace (for example, application files and directories, metabase settings, and user-specified DSNs) on the controller and compare this information with each cluster member. The information that's different is then copied to the member and applied.
Note In order to optimize the synchronization process, each driver maintains a token that indicates the time when the driver was last synchronized. When a full synchronization occurs, the token is checked to see if it's already in synchronization or not. If it is, the need for lengthy and resource-consuming content comparisons is eliminated.
There is, of course, more inter-server communication going on than indicated in the preceding summary. Figure 6.4 illustrates the communications and data transfer activities that occur between a controller and member during a full synchronization.
Figure 6.4 Controller and member communications during full synchronization
Before stepping through the major communications and data transfer activities, let's examine the lists that provide the foundation for replication: IHave, Action, and Update.
IHave—lists all the current configuration settings and content with their signature that's stored on the cluster controller.
Action—specifies the data that the member needs to be in synchronization with the controller.
Update—contains all the resources that the member requested in its Action list.
Now let's walk through the main processing activities during a full synchronization. First, authentication occurs between the controller and the member. The user name and credentials for the ACC_computername account are used in intra-cluster replication, or administrative credentials (user name and password specified in the New Deployment Wizard) are used for inter-cluster deployment. In an inter-cluster deployment, the account must have administrative rights on the target. After inter-server communication is validated, a session is started for all the source drivers, and the replication engine instructs the target replication engine to do the same.
After the target replication engine gets a session running for its drivers, the controller sends its IHave list to the member. The member compares this list to its own IHave list and generates an Action list, which is sent to the controller. This Action list contains all the items that the member needs to be perfectly synchronized with the controller. How does it deal with deletes? The cluster member will note that the IHave list did not contain something that it has, and will delete those items during the apply phase.
The cluster controller does three things in response to the Action list. First, it processes the Action list and creates an update list that is sent to the member. This update list contains the actual items that have been requested in the Action list. For all the drivers except the File System driver, the items are actually included in the Update list.
The following sample illustrates the XML code that you'd expect to see in an IHave list:
<SCOPE TYPE="IHAVE" LAST="0" RESTYPE="FS" ROOT="c:\inetpub" ETAGTYPE="STRONG"
ITEMSIZE="8810" ENUM="FULL" ITEMCOUNT="2" DIFFUSION="MULTICAST">
<NODE SUBPATH="wwwroot" ENUM="FULL">
<ITEM NAME="default.asp" ETAG="67ED898" SIZE="345"></ITEM>
<ITEM NAME="myimg.jpg" ETAG="5656365" SIZE="8465"></ITEM>
<NODE SUBPATH="Images"/>
<NODE SUBPATH="Apps"/>
</NODE>
</SCOPE>
<SCOPE TYPE="IHAVE" LAST="0" RESTYPE="FS" ROOT="c:\inetpub\wwwroot" ETAGTYPE="STRONG"
ITEMSIZE="4" ITEMCOUNT="1">
<NODE SUBPATH="Images" ENUM="FULL">
<ITEM NAME="image2.jpg" ETAG="463ED898" SIZE="4"></ITEM>
</NODE>
</SCOPE>
During this phase, the replication drivers on the controller copy the updates to a temporary storage location on the member. Then, after all the content has been transferred, the controller signals the target that it's ready to commit the session. When the transfer is finished, the member signals the controller that this phase is complete and it's ready to commit the changes.
Finally, the session is committed and the files and other resources are copied from their temporary location to the appropriate locations on the member. After the member notifies the controller that the commit was successful, the session is ended.
Most cluster synchronization, whether it's interval-based—which is just an instance of a full synchronization—or user-initiated (for example, the manual synchronization of an application), follows the same basic process as the one just described.
Note Some drivers skip one or more parts of this process. For instance, updates to the registry content on the controller generate the UpdateList directly, without going through the IHave/Action phases.
During synchronization, Application Center replicates controller and cluster configuration settings to all of the cluster members.
Note During synchronization, a member that is in offline state (for load balancing) is still synchronized to the controller unless you explicitly disable synchronization on that member.
Even if synchronization is disabled on a member, there are certain settings that are still copied from the controller to each member on an ongoing basis. This is done to ensure that every member has up-to-date information about the cluster infrastructure and topology. Some examples of this configuration information are: IP address binding, network adapter information, and server computer names.
See Also: Appendix G, "Managing IIS IP Bindings."
The controller replicates several network settings across the cluster, including the following:
The load-balanced IP address on the front-end network adapter in a multi-adapter configuration
IP subnet
Default gateway and gateway cost metric
DNS server search order
DNS domain
Primary and Secondary WINS server
WINS node type
In a cluster that has NLB enabled, the controller replicates the following information:
The cluster name.
The cluster IP address.
The message exchange period (in milliseconds) between cluster nodes.
The number of messages that can be missed from a node before initiating convergence.
The User Datagram Protocol (UDP) port that should be used to receive remote control commands.
A switch that specifies whether or not remote control and enumeration are enabled for the controller.
The start and end port numbers for disabled port rules.
The start port number, end port number, protocol, and affinity for load-balanced port rules.
When dealing with server content and applications, it's important to realize that Application Center treats everything it serves to clients as an application. An application is simply a collection of all the software resources for a Web site, virtual directory, or COM+ application (for example, Web sites, static Web pages, ASP pages, and components). In the context of Application Center, the Default Web Site created by IIS is an "application" that has specific resources associated with it. An application, then, can also be regarded as a resource manifest or inventory.
This model allows administrators to think of their sites in terms of logical groups of resources, and it can be used to optimize deployment and synchronization tasks.
In the Application Center cluster environment, there are default and user-defined applications; each type of application consists of a collection of resources that are required for an application. Let's start by examining the default applications that Application Center identifies after you create a cluster.
In addition to the standard Default and Administration Web sites used by IIS, Application Center adds its own administrative site—the Application Center 2000 Administrative Site. This IIS node contains Application Center–specific Web content that is required for the Web-based Administrative client as well as displaying information in the details pane of the Microsoft Management Console (MMC) user interface.
If you look at the Applications view (right-click the Applications node in the console tree), you'll see that a fourth application, AllSites, is listed. Take note of the fact that this application lists the previous three Web sites as its resources, which means that it contains all the metabase information stored in the LM/W3SVC path. This approach ensures that all the applications (and their resources) on the controller are consistently synchronized across the cluster. By providing a single reference point for all the Web site information on the controller, AllSites can be used as a tool to synchronize the entire metabase, either manually or automatically.
Note If you publish a site to the controller, but don't explicitly identify it and its resources as an Application Center "application," the site and its resources are automatically associated with the AllSites application because the target, Default Web Site, is an AllSites resource.
Application Center lets you define your own applications, which provides a greater degree of granularity than the default applications. It also enables you to accommodate special situations that aren't addressed by conventional Web publishing. Creating a new application is a two-step process that involves naming the new application, and then identifying the resources that are associated with it.
After you name the new application, it appears in the upper part of the Applications view. The application naming convention enforces these rules:
Names must be from 1 through 127 characters.
Leading and trailing white spaces are trimmed from the name.
Invalid characters are: ", ', <, and >
Tip Because a globally unique identifier (GUID) is created for each application, you can use the same friendly name for more than one application. However, this is not recommended because it is likely to create some confusion when managing your applications.
The next step is to use the drop-down list in the lower part of the Applications view and add the resources that you want to have associated with the application.
The following options are available for identifying resource types.
All Resources
COM+ Applications
Data Sources
File System Paths
Registry Keys
Web Sites and Virtual Directories
The All Resources category lists all the resource types that are associated with an application when it's highlighted in the upper part of the Applications view.
The COM+ Applications list displays the COM+ application names that are currently available. By default, Application Center, COM+ (for example, COM+ Utilities), IIS (for example, IIS Out-Of-Process Pooled Applications), and System resources are provided. Resource names followed by Remote Server Names (RSN) are COM+ proxies. In order to change the RSN, you have to run the COM+ administrative tool.
Warning You should never replicate any of the default COM+ applications, for two reasons. First, it isn't necessary; and second, it can result in unpredictable server configurations following synchronization.
The Data Sources category provides a flat list of system DSNs. These DSNs are stored in the HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\Odbc.ini\ODBC Data Sources key. DSNs are used to connect applications to databases, which is why they are very important to replicate with the rest of your Web application. However, the preferred way to reference remote databases is via connection strings. These connection strings, which are embedded directly into an ASP page or an application, eliminate the need for DSNs. The DSN driver is supplied to ensure backward compatibility with earlier applications.
Note Only system DSNs can be added to an application, but you can add a file DSN as a file.
Each entry in the Odbc.ini subkey assigns a logical DSN to a set of attributes that includes:
The name of the ODBC database server.
The name of the database on the server.
Whether multiple databases are supported.
The type of server (for example, Microsoft SQL Server).
A description of network and connection parameters.
Additional information, such as character set conversions.
For File System Paths, an authenticated user can browse the entire file namespace and select the appropriate paths for the application. Each item that's selected will appear as the full path after it's added to the application resource list.
For Registry Keys, the resource enumerator provides the ability to browse down through any of the registry key paths, such as HKEY_LOCAL_MACHINE\SOFTWARE\MICROSOFT APPLICATIONCENTER SERVER, but users are restricted from enumerating the entire hive.
The Web Sites and Virtual Directories resource list consists of IIS Web sites and virtual directories. Only Web sites and virtual directories are displayed because Web directories and files can be browsed and added via the File System resource enumerator.
Once you've created an application, you can rename it, edit each resource category, add more resources, or delete the application.
Note Removing a resource only deletes it from the application resource inventory that you're editing; it does not delete the original resource from the disk.
The ability to create and manage your own application definition is especially useful when you want to identify specific resources that change frequently and are common to several Web sites. You can eliminate the need to replicate entire sites by creating a single custom application that contains these resources. This technique also reduces errors, because you are updating resources in one place rather than trying to manually propagate changes in several locations.
Table 6.3 summarizes the alert notifications associated with application resource list editing.
Table 6.3 Alerts for Application Resource Editing (each entry shows the alert title, followed by the user notification)

COM+ Application Created: A new COM+ application was created with the name: name. The value for name is the COM component.
Resources Not Found: No resources of the specified type could be found to add to the application.
Resource Addition Error: The resource was not added because an error occurred while trying to add the resource.
Application Does Not Exist: The resource was not added because the current application no longer exists.
COM+ Proxy Error: The resource was not added because an error occurred while trying to add the COM+ proxy resource.
Invalid Resource: The resource name is not valid. Verify the resource name, and then try again.
A by-product of the application creation process is the replication definition, which the Application Center synchronization feature uses to replicate content.
After you define an application, Application Center transparently creates a replication definition. Application Center accesses replication definitions—which include, among other things, the application name and GUID—to determine exactly what content to replicate from the controller to members. In addition to the definitions that are created when you publish to the controller's Web site or explicitly create an application, there are special replication definitions for two Application Center applications. For illustration purposes, let's refer to these two applications as "SystemApp" and "DefaultRep."
The first application, SystemApp, defines which resources are always synchronized to the cluster, even if a target is out of the synchronization loop. It includes metabase information, such as a pointer to the current controller and a list of members, and has a corresponding replication definition, which we'll call the System Replication Definition.
The second application, DefaultRep, is synchronized only to members that are in the synchronization loop. It contains information such as cluster configuration information, a list of default monitors, and information about network adapter drivers. DefaultRep also has its own replication definition.
Figure 6.5 provides an overview of the interaction among the Application Center applications, a user-defined application, and their replication definitions when an automatic synchronization occurs on a cluster. The resource lists for the user-defined application (Application 1) and the hidden DefaultRep application (Application 2) are merged into a single replication definition. This definition is a union of all the application resource lists on the system, and it's updated every time an application is added or deleted. When the replication engine creates the replication session, it merges the System and Application replication definitions. This design is economical in terms of resources, because it will send all of the data to the targets while only consuming one session between the controller and members.
Note If a member isn't in the synchronization loop, it still receives all of the content description, but it filters out all items except those that are contained in System Replication Definition.
Figure 6.5 Using replication definitions to create an automatic replication session
Note If an application resource is detected as missing (for example, a file was deleted) during synchronization, a 5047 event is fired.
Once an application exists on a server—regardless of how it was created—you can use the New Deployment Wizard to deploy an application (or applications) to one or more cluster members.
The New Deployment Wizard is a tool that enables you to easily deploy applications from a source server to a specified target. This wizard, like the others you've seen so far, provides a step-by-step method for deploying new content, either within a cluster or to another cluster. The New Deployment Wizard is designed to address the concerns of system administrators who want to maintain a secure, yet usable, buffer between their testing and production environments. (Chapter 8, "Creating Clusters and Deploying Applications," presents a detailed staging and deployment scenario that uses the New Deployment Wizard.)
In terms of processing, content deployment is virtually identical to synchronizing content across a cluster. The New Deployment Wizard uses the synchronization service's replication engine and drivers to move applications from one member to another. The notable difference is that you have to provide administrative-level credentials on the target to launch a deployment.
Let's step through the New Deployment Wizard dialog and examine its processing in detail.
The first page of the wizard tells you what the wizard does and provides the following warnings:
Content and configuration data on the target is overwritten if it's contained in the resource inventory for this deployment.
If the target is a controller, any data that's copied over will be synchronized across that controller's cluster.
The next page lets you assign a label to the deployment and specify the target destination. By default, the deployment is labeled with the current time and date stamp; however, you can enter a custom label to make tracking easier. Two target options are offered:
A target within the current cluster.
One or more targets on an external cluster.
Your selection determines whether additional credentials are required—a destination target outside the cluster requires credentials with administrative privileges. If you're deploying within the current cluster, this isn't necessary because you must be connected to, or logged on to, the source as an administrator to deploy content.
On this page, you must provide credentials for the target. Because you can only provide one set of credentials, they have to be valid for all of the targets that you identify in the next wizard page.
You can build the target list one server at a time, by entering the server name or by browsing the network, and then clicking the Add button. Each time you add a server the wizard contacts the target and verifies that the credentials you previously provided are suitable. If not, the wizard rejects the server as a deployment target.
Note If your deployment target is a cluster member, you will see the Target servers within the cluster page. It is functionally identical to the Target Servers or Controllers page, except that available targets are displayed on screen.
The next page of the wizard lets you specify which applications to deploy. The first option is to deploy the entire server image, which consists of all the content and configuration settings on the source. This choice provides a simple method for creating a backup copy of the controller, if it happens to be the deployment source.
The second option is deploying one or more of the applications that are listed. The wizard builds this list from the master inventory. At a minimum, the list consists of the Administration Web site, AllSites, the Application Center 2000 Administrative site, and the Default Web Site. If any user-defined applications exist on the source, they are displayed as well.
This page of the wizard offers the following deployment choices:
Replicate File and Directory Permissions—You can either leave existing file and directory permissions intact on the target or copy over the file and directory permissions that are in place on the source.
Deploy COM+ Applications—If you're deploying a COM+ application, this flag is necessary because some services on the target will be reset; if this is the case, it may be necessary to restart the target. If you've identified a COM+ application as your source, you have to plan your deployment and consider the consequences.
Note COM+ application deployment is a multi-step process. First, the application is deployed to the target cluster controller. Second, the new application on the target cluster controller is synchronized to the cluster members. The subject of COM+ application deployment is addressed in detail in Chapter 8, "Creating Clusters and Deploying Applications."
Deploy Global ISAPI Filters—This situation is similar to deploying COM+ applications, except that only IIS will be reset, which can potentially affect the entire cluster if the target is the controller.
Warning If you're deploying either a COM+ application or global ISAPI filters, some planning is essential, and you need to be fully aware of the potential consequences of such a deployment.
For a more in-depth look at COM+ and ISAPI filter replication, see "Special Cases" later in this chapter.
The final page lets you initiate a deployment by clicking the Finish button. This page provides a pause that gives you a chance to scroll back through the wizard and double-check your selections—just in case you missed something or failed to take into account what will happen.
After you click the Finish button, the deployment is launched and all of the pertinent information is passed to the Application Center Synchronization Service, which does the actual configuration and content copying from the source server to the target server.
Deploying content over a Wide Area Network (WAN)
A frequent question is whether or not Application Center can deploy content over a WAN—it can—however, we do not recommend using Application Center for WAN deployments.
This position is based on the design criteria that were used for the initial release of the product. During the product design stage, the product team determined that the highest priority was the efficient replication of content and configuration settings within a demilitarized zone (also known as a DMZ perimeter network), which typically involves moving content from a stager to a cluster over a LAN. Accordingly, the Application Center Replication Service, which is the core technology for deployment, is designed to move large amounts of data quickly and securely over a network that is both stable and capable of consistent throughput. In other words, LAN-based replication is predictable.
WAN-based replication, on the other hand, involves moving data over a diverse range of WAN links whose stability and throughput capabilities are unpredictable. In the real world, these links can range anywhere from a 56-Kbps modem connection to an OC-12 line. In this world, a replication system needs to provide transfer restart capability to handle dropped connections, allow the sender to prioritize WAN traffic, and implement bandwidth throttling to accommodate throughput issues. The Site Server Content Deployment System (SSCD) provides many of these capabilities; however, you should be aware that SSCD is designed to handle files and has limited support for configuration replication. (It will replicate a subset of the IIS metabase.)
For the present time at least, we recommend using the Content Deployment System, which is included on the Application Center product CD, as a viable alternative for deploying files over a WAN.
Future releases of Application Center will include the features that are currently found in SSCD.
Of the various types of information that are replicated across a cluster, some elements require special handling:
User accounts
COM+ components
Health Monitor configuration items
Each of these is described in the following sections.
Although Application Center creates its own account (ACC_computername) for cluster administration purposes, which is replicated, other user account replication is limited to the IIS default anonymous user account (IUSR_). This restriction is imposed to allow users to install and work with Application Center clusters without requiring a separate domain server for each cluster.
The IUSR_ account that's replicated is the one found at the W3svc node, and the replication service does not search down this tree for additional IUSR_ accounts. For scenarios that employ multiple anonymous user accounts, a dedicated domain server is the recommended best practice.
Warning If the IUSR_ account is a Windows domain account, it isn't replicated.
In order to synchronize account information between the metabase and the Security Account Manager (SAM), the user has to change account information, that is to say, the password, by using the SAM.
The greatest challenge to deploying components on a cluster is the fact that today's technology locks the DLLs that house objects, making it impossible to rename or update the DLL. Typically, you have to stop all services related to the DLL, copy over the new files, register the new DLL, restart the services and, all too frequently, restart the server.
The COM+ application driver eases the burden of component deployment by stopping the processes associated with the in-use DLL, installing the Windows Installer packages (.msi) file for the application, and restarting the appropriate processes. It's not a perfect solution, but it simplifies component deployment in a cluster by doing most of the work for you.
Note Because there isn't any native support for COM in Application Center, you will have to convert your COM-based applications to COM+ applications.
Before you can deploy COM+ objects (or applications), you have to identify them as an application resource by using the Applications node. At the same time, you have to identify the resources that are associated with the component itself, such as files and registry keys. Although Application Center will allow you to create this application on the controller, it's not recommended; a far less risky approach is creating the application on a stager, and then deploying it to the controller. (For detailed information about COM+ application deployment, see Chapter 8, "Creating Clusters and Deploying Applications.")
Note Unlike Web-based applications, COM+ applications aren't automatically synchronized across the cluster. As you may recall, component replication is an on-demand service because it may require restarting a server.
The COM+ application driver uses information about the COM+ application (by using the Microsoft Installer APIs) to provide a form of versioning for these applications. Because these APIs can enumerate and query a particular application, it's possible to gather Global and Application properties. A hashing algorithm creates a signature from these properties, and the COM+ driver uses this signature to compare versions of an application on the source and target servers.
When you initiate a COM+ application replication, the following sequence of actions takes place:
The target compares its application signature to the signature contained in the controller's replication definition, and if they're different it requests the application in its Action list. The COM+ application driver generates an .msi file for the application and uses the File driver to send it to the member.
Note Application Center uses the ExportApplication and InstallApplication methods of the ICOMAdminCatalog interface to export and install COM+ applications.
Before executing the .msi file and installing the application, user connections are drained and the server is set offline. The main reason for draining connections is to ensure that the DLL is unlocked when it comes time to commit the replication.
The COM+ driver stops the W3svc Service and other services that are listed in the metabase. To obtain a list of all the services that are shut down, from the command line, type: mdutil enum /webreplication.
The driver changes the command-line property to prevent activation from DCOM calls or Component Load Balancing (CLB).
The driver stops any COM+ applications that are associated with the application that's being installed.
The driver makes a call to a COM+ API with the appropriate install level. (It always installs users and roles.)
The driver executes the .msi file and installs the application.
The COM+ driver restarts the services on the target, including application-related services, IISAdmin, and all Web-related services.
Note Application Center does not search for externally dependent DLLs or executables associated with the COM+ application. These files must be associated with the virtual site, added to an application, or already installed on the target. The COM+ driver is only granular to the application level; all the application components are replicated.
Although global ISAPI filters are replicated with the File driver, special handling is required because the servers require an IIS reset when changes are made to a global ISAPI filter.
Note When a new filter is installed at the virtual site level, IIS doesn't need to be reset.
Like components, global ISAPI filters are replicated by using the New Deployment Wizard. Because global ISAPI filter handling can cause a server reset on the target, the global ISAPI filter deployment option is turned off by default.
Detecting an ISAPI filter change at all levels and taking the correct action on the target server involves parsing the metabase driver's IHave list from the source server and looking for the following cases:
A change is made to the global filter load order. In this case no IIS reset is called.
The ISAPI filter file is updated. If a global filter's FilterPath is updated, the file driver detects the change. It will trigger an IIS reset if the change is related to a global filter; if the change is at the virtual site level, no reset is called.
A new ISAPI filter is added. The metabase driver looks for the addition of a filters entry in the W3svc metabase. If a new entry is found, the driver requests an IIS reset; if the change is at the virtual site level, no reset is called.
An ISAPI filter is deleted. The metabase driver looks for the deletion of a filters entry in the W3svc metabase. If an entry has been deleted, the driver requests an IIS reset; if the change is at the virtual site level, no reset is called.
The basic processing sequence for implementing any ISAPI filter changes follows the steps used for installing a new global filter:
Copy the file and metabase setting to the target.
Before the session is committed, stop IIS (call the equivalent of the IISReset command-line utility to do a reset IIS/stop).
Apply the metabase setting.
Restart IIS (call the equivalent of the IISReset command-line utility to do a reset IIS/start).
As noted previously, if you make changes to filters at the virtual site level, IIS will not be reset.
Most of the Microsoft Health Monitor data resides in the Health Monitor namespace, but data is also stored in the Application Center and Windows 2000 namespaces. Because Health Monitor uses WMI extensively to store the Application Center monitoring policies and other configuration data, this driver is used to handle WMI data.
The nature of WMI data
WMI is a non-transactional store that can be viewed as an object-oriented database management system (OODBMS). There are objects organized in classes, which in turn reside in namespaces. Classes are part of a hierarchy and they have qualifiers, properties, and methods. They can also have instances that are uniquely identified by a key.
WMI provides an "intrinsic event" notification for changes to classes and instances that the WMI/Health Monitor driver uses to perform full or automatic synchronization.
During setup, Application Center installs the class definitions—contained in .mof files—for events and providers, and then compiles them. Setup also installs and configures the base classes for the monitoring application, Health Monitor.
The WMI/Health Monitor replication driver has its own namespace and supports change notify by using the WMI event consumer provider, which registers the following class events:
When a class is created.
When a class is deleted.
When a class is modified.
When a class instance is created.
When an instance is deleted.
When an instance is modified.
The driver uses temporary consumers for changes in the three different namespaces where the instances/classes to be replicated are stored. Any change notify event triggers an automatic replication.
During a full synchronization, the driver parses the Health Monitor namespace and applies the signature it derives from instance names and property values. After comparing signatures, any newer instance text is copied to the target. The same processing takes place in the Application Center namespace, where instances of Event Filters and classes derived from the configuration base are stored.
Note All the required .mof files are copied to a temporary directory on the target. Then, the WMI/Health Monitor driver compiles these files, puts the new instances in WMI, and commits the session.
The Health Monitor namespace contains information such as configuration rules, actions, and event filters. There is a System class with a single instance for every computer. Everything else in the hierarchy descends from the System instance. The Health Monitor namespace hierarchy consists of three major nodes: Actions, Non-Synchronized Monitors, and Synchronized Monitors. The latter are Application Center–defined monitors that are used for cluster and member monitoring. These monitors are replicated to all members every time the cluster is synchronized. For more information about monitoring and eventing, please see Chapter 9, "Working with Monitors and Events."
Because some of the configuration settings are specific to individual servers on the cluster, this driver recognizes the difference between cluster-wide and local monitors, and synchronizes only the members to the global monitors (the Synchronized Monitors shown in Figure 6.6). This collection of monitors includes:
ApplicationCenter Monitors: The status of specific replication events, and request forwarding initialization.
ApplicationCenter Log Monitors: The size of the Application Center Event and Performance Logging database.
Online/Offline Monitors: The status of the Web server (online or offline).
Processor: Processor utilization.
Web Site Monitors: Result of a request for the default Web page from the local_host.
Figure 6.6 The Health Monitor monitors that ApplicationCenter synchronizes across the cluster
Note The Email, Take Server Offline, and Bring Server Online actions are replicated only if they are associated with a synchronized monitor—also known as a synchronized data group or data collector.
Performance data is provided by a set of performance counters that you add, delete, or modify through the user interface when viewing either a cluster or member performance display. The Application Center namespace stores configuration data for a Performance log consumer and an Event log consumer.
The WMI replication driver replicates all the performance counter classes so that this logging will occur over the entire cluster. Although the driver has to replicate all the classes derived from the Application Center performance configuration base (which records the performance counters to watch), the base itself is a Perfmon provider class whose instances are dynamically created and updated at run time. As a result, instances don't have to be replicated. There is also a counter class that is used for controlling what is actually displayed on the performance display. The driver has to replicate all instances of this class.
Eventing is enabled through filters that are bound to event consumers in all three namespaces. The instances that require replication, along with the filter and binding information, are stored in the three different namespaces that the driver is concerned about: Application Center, Health Monitor, and Windows 2000 Events (root\Cimv2).
During synchronization, more than 90 events are fired that provide and record information, warning, or error messages. One example of how you can use this event information is the synchronized monitor (Application Center Monitors) that's installed by default. Its query, Select * From MicrosoftAC_Replication_Session_General_Event, flags a critical condition and sends out an email notification if the EventID is 5038. This Error event is fired if there is a failure during a synchronization session.
You can use the MicrosoftAC_Replication_Event class and its children to access the extensive data that Application Center collects during synchronization and deployment. The schema for the MicrosoftAC_Replication_Event class is documented in Chapter 9, "Working with Monitors and Events," and detailed event information is documented in Appendix D, "Application Center Events."
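To give a concrete idea of how this event data can be consumed, the following sketch issues a WMI query for the replication session event class from a JScript (Windows Script Host) script. The class name and the EventID value 5038 come from the preceding paragraphs; the namespace path used in the sketch is an assumption made for illustration, so check your installation for the actual Application Center WMI namespace before using it.

// Sketch only: query Application Center replication session events through WMI.
// The namespace path below is an assumption; the class name and EventID come
// from the text above.
var wmi = GetObject("winmgmts:\\\\.\\root\\MicrosoftApplicationCenter");
var query = "SELECT * FROM MicrosoftAC_Replication_Session_General_Event " +
            "WHERE EventID = 5038";
var results = wmi.ExecQuery(query);

// Echo a short summary for each matching failure event.
for (var e = new Enumerator(results); !e.atEnd(); e.moveNext()) {
    WScript.Echo("Replication failure event, EventID=" + e.item().EventID);
}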
See also: Chapter 9, "Working with Monitors and Events," and Chapter 11, "Working with the Command-Line Tool and Scripts."
Application Center enables you to configure synchronization on the cluster-wide and individual member levels. As shown in Figure 6.7, the clustername Properties dialog box for the cluster lets you specify:
Whether or not automatic updates are enabled, as well as the time interval between full synchronizations.
Whether or not file and folder permissions are replicated.
Figure 6.7 The RKStager Properties dialog box for configuring synchronization on a cluster
At the individual member level, you can use the membername Properties dialog box to take the member out of the synchronization loop by disabling synchronization for the member.
The synchronization exclusions feature provides increased flexibility by allowing you to synchronize part of a file tree by creating an exclusion list. This list is granular down to the individual file and file extension level.
This feature addresses two common scenarios:
First, it isn't necessary to synchronize certain types of files across a cluster. This is particularly true for files which are created dynamically, for example *.log files, and server-specific configuration files such as *.ini files.
Second, there are cases, particularly on a stager, where you don't want to replicate a test site. Through the exclusion list, you can ensure that a vsite's directories/subdirectories are not copied to a cluster, nor across a cluster. However, the vsite's metabase information will be replicated.
You can create a synchronization exclusions list by using the Synchronization Properties dialog box (Figure 6.8), which is opened by right-clicking the Synchronizations node in the Application Center console tree.
Note You have to create the exclusion list on the target rather than the source system. All files are transferred during synchronization, and exclusions are applied during the apply phase by the target. In cases where an exclusion list exists on both the source and target, the exclusions configured for the target are implemented.
This feature gives you the ability to exclude settings and content from replication on the basis of:
File name and path (C:\Samplefolder\Somefile.ext).
File name extension (C:\Samplefolder\*.ini).
Directory (C:\Samplefolder\Sampledirectory\).
You can add as many items to the exclusions list as you want, provided that you use the preceding parameters for your exclusions.
Note Exclusion by directory also includes specified subdirectories.
Figure 6.8 The Synchronization Properties dialog box for configuring synchronization exclusions
Note Implemented via a File System driver attribute, the exclusion list can be created at the individual member level.
The exclusions list is a very useful tool; however, it should be used cautiously on any cluster because the list overrides all the default replication settings. There is the potential to completely disable all replication for critical settings and content on a cluster.
In addition to displaying real-time synchronization status information in the cluster and controller nodes' details pane (see sidebar), Application Center also provides a Synchronizations node in the console tree where you can view detailed synchronization and deployment information (Figure 6.9).
Replication phases during synchronization
During synchronization several status messages are displayed in the Synchronizations view that indicate which phase the synchronization is in. These phases and the major activities (Figure 6.4) for each are as follows:
Initializing—The source server is starting a replication job.
Scanning—The IHave lists are created on both the source and target(s), the lists are compared, and the target generates an Action list.
Transferring—The Update list is sent to the target, and the required files are copied to a temporary directory on the target. The target notifies the source that it's ready to commit the session.
Applying—The replicated files and configuration settings are written to the target.
Figure 6.9 The Synchronizations node and its details view
The upper part of the Synchronizations view displays summary information for a synchronization or deployment that's finished. (During synchronization, status indicators show the current stage of the synchronization: reading, transferring, or applying files.) The following summary is provided:
Start date
Start time
Name (of the deployment; synchronizations default to the cluster name)
Status (success, partial success, or failure)
Errors (number of)
Elapsed time (for the entire synchronization)
The lower part of the Synchronizations view provides detailed information for a specific synchronization by using a General tab and an Events tab. The General tab displays:
The target name.
The synchronization status.
The number of errors.
The number of files (and their total size) that were transferred.
The number of files (and their total size) that were applied.
The Events tab, illustrated in Figure 6.10, enables you to access event information that's generated for the synchronization. You can view All, Errors Only, Errors and Warnings, or a specific event by providing an event identifier as a filter.
Figure 6.10 Detailed error and warning event information for synchronization
Application Center's Synchronization and Deployment feature set provides a robust tool for quickly deploying complex business applications throughout your organization; these services also ensure that servers in both Web and COM+ application clusters remain synchronized with each other. Additionally, any activity related to synchronization and deployment is easy to monitor and audit.
Summary: Review the recommended best practices for developing forms-based applications in Microsoft Office Groove 2007. (11 printed pages)
Joshua Mahoney, Microsoft Corporation
Josh Goldman, Microsoft Corporation
November 2007
Applies to: Microsoft Office Groove 2007
Contents:
Using Scripts in the Groove Forms Tool
Using a Separate Groove Workspace for All Development
Identifying Tools with a Unique Tool Name and Version Number
Design Name
Design Version
Naming Fields Explicitly
Defining Programmatic Aliases
Using Clear Names to Organize Script Files
Maintaining Version Number of Script Files
Placing System Callout Implementations in Form Script File
Using CSS-Defined Style File
Guidelines for Scripts
Considering UI Modes in System Callouts
Enclosing Top-Level Scripts in Try-Catch Blocks
Handling Errors and Providing Error Messages
Using Documented, Supported API Variables and Functions
Using Scripting Conventions
Adding Comments in Script Code
Conclusion
Additional Resources
The Microsoft Office Groove 2007 Forms tool provides a framework that enables you to create customized forms for displaying and entering data. You can create fields, forms, and views and enhance the look of your forms using available styles or by creating custom styles.
You can also use form scripts to add custom script code to your Groove Forms tool. These scripts run when the user is accessing the form. Scripts make it possible for you to extend the capabilities of the Forms tool to handle specialized application requirements.
This article describes how to develop the design and how to organize the design elements so that the design is easier to maintain. You can use the best practices for any Forms tool design even if you do not add scripts.
In addition to standard HTML Document Object Model (DOM) script programming, your scripts can do the following:
Run when certain Forms events occur, such as form initialization and submitting new or updated data.
Access Groove services to get context information such as the members of the workspace, or to perform methods such as sending an instant message.
Access data in the field values in the currently displayed form and underlying Forms record.
Perform lookups on data in the current tool, or in other Groove Forms tools, or in InfoPath Forms tools in the same or other workspaces.
Define a function that is executed to provide an initial value for a field.
You can associate each form script with a form, and execute it on HTML DOM events or on Groove Forms tool events such as form initialization. Forms scripts have access to the HTML DOM, to the underlying record, and to the Groove environment.
Follow these recommendations to make it easy to update and maintain Groove forms that you create for your team or for others.
During the process of developing a Groove Forms tool, it is a good idea to make changes and publish new versions of the tool often, for testing purposes and to refine the tool’s design.
Be sure to work on your tools in a separate development workspace because any changes to the tool you save are sent to all members of the workspace that contains the tool.
Limit the membership of your development workspace to only those users participating in the development or testing of the tool; otherwise members of the space who are not working on the tool receive and process data unnecessarily.
Use the Replace Design feature to show an iteration of the tool to end users for review when you make significant changes. The Replace Design feature of the Forms tool updates an end user workspace with the latest version of the tool. If you only make small changes, you can make the changes in the existing tool and not replace the design.
Be sure to save incremental versions of the template of the tool as you develop it so you can revert to a previous version, if necessary, or discard changes you do not want to keep. Template backups make it easy to get back to the last working version of the tool.
The Form Design Name, Version, and Description properties provide a mechanism for external applications (including scripts) to identify a Forms tool and to document the design for end-users and other forms designers.
Be sure to specify a design name, version, and description even though these fields are optional. There is no automated mechanism for incrementing the design version number; it is the responsibility of the forms designer to increment the version number when appropriate.
From the Designer list in the workspace toolbar, click Open Design Sandbox.
On the Settings and Options page, click the About This Tool tab.
In the text boxes, type a design name, design version, and description.
Click Apply and Publish Sandbox to save your changes.
From the Help menu, choose About This Tool to display the design name, design version, and a description for a tool.
Follow these guidelines when specifying a design name and incrementing the design version number.
The design name is a unique string that identifies the organization that designed the Forms tool and the application class or purpose of the Forms tool. Use the same URN format that is used in Groove to identify other templates with the general format of “urn:your-company-namespace:unique-name”. Design names cannot contain spaces. If you are modifying an existing Forms tool template to create a new version and use of the tool, give it a new URN.
The design version provides a way for form designers to indicate and document changes they make to a Forms tool template. After a Forms tool template is deployed and external applications are developed to access the tool, it is very important to provide information about modifications to the tool using the design version number. Application developers can ensure that their applications are working with the correct version of the Forms tool using this number. External applications can be affected when fields, views, and forms are added to the Forms tool, or when an existing one is modified. Typically, external applications are not affected by modifications to the user interface design, such as moving a field in a form.
The design version is an ordered set of four digits delimited by periods, and is stored as a string. The design version conforms to the version identification scheme: (0.0.0.0). From left to right, the four version digits represent:
Major version
Minor version
Custom version
Build number
Use the following guidelines to increment the design numbers:
Major version indicates an incompatible Forms tool schema change. For example, if you modified the Issue Tracker template and deleted the AssignedToIndividual field and added IssueManager, IssueTester, and IssueDocumenter fields, you should change the major version number. A major version upgrade of a Forms application should be considered a different application altogether. While functionally the application remains the same application class, (it is still an Issue Tracker application), it uses a completely different record data schema.
Minor version indicates no incompatible schema changes that would affect existing external applications. A minor version upgrade can add new fields, forms, and views to a Forms tool but cannot delete or make schema modifications to existing ones.
Custom version is considered a patch release, which has specific functionality or modifications to serve a specific purpose.
Build number is a designer-specified increasing sequence used to associate a particular software build sequence to the forms application.
For more information about these properties, see Design Names, Versions, and Descriptions for Groove Forms Tools.
Be sure to name all fields in a form. You cannot change field names after the field is created. If you do not enter a field name explicitly, the name is derived from the contents of the Label property of the field.
Field names generated from the Label property are difficult to work with in a script because any spaces, punctuation, or other characters are replaced.
For example, the field label Enter Customer First Name: results in the field name “Enter_32Customer_32_First_32Name_58”.
It is difficult to work with field names generated from the Label properties in script, external integration code, and within the Designer UI.
If a Forms tool is localized for multiple languages, the form and view names should be translated into the local language because they are displayed to the user. If you use the form or view name in a call to the OpenFormIDByAlias Method or OpenViewIDByAlias Method, you have to modify the script files each time the tool is localized. To avoid this problem, define a programmatic alias for each form and view defined in your tool. Since this alias is not displayed to the user, it does not need to be translated when a tool is localized. You can define a programmatic alias on the form or View Options tab in the Forms tool designer. It is a good practice to define an alias whenever you create a form or view.
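As an illustration of why aliases matter to scripts, the sketch below resolves a form through its programmatic alias instead of its localized display name, so the script keeps working after the tool is translated. The alias and the ShowError helper are hypothetical, and the exact calling convention for OpenFormIDByAlias should be confirmed in the Forms Developer Reference.

// Sketch only: open a form by its programmatic alias rather than its
// localized name. "contactFormAlias" is a hypothetical alias defined on the
// Form Options tab.
function OpenContactFormByAlias()
{
    try
    {
        var formID = OpenFormIDByAlias("contactFormAlias");
        // ...pass formID to the Forms API call that opens or switches forms...
    }
    catch (err)
    {
        ShowError(err);   // hypothetical shared helper defined in Common.js
    }
}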
Use one script file to contain all the functions and variables in a tool that are commonly used in all forms in a tool and name the file Common.js or Shared.js.
Any script that is unique to a specific form should be saved in a single script file that uses the same name as the form.
For example, if the form is named Contact, save all scripts in a file named Contact.js or ContactFormFunctions.js and keep the naming consistent across all scripts.
This naming convention makes it easier to maintain scripts and also ensures that the form only loads the script file that it uses. This speeds up tool loading and performance.
The Forms script editor is very simple and does not provide color coding or type-ahead coding help. Typically, you develop a script in an external editor and then copy and paste the script into the script editor. Be sure to maintain a version number at the top of each script file, especially if there is more than one developer working on a tool.
Developers can easily determine which script file (if any) is updated using the version number and can verify that their local copy is up to date before making any additional changes. Maintaining the script version number at the top of the file is also helpful if you work with a common script library that is across multiple tools. It is easier to determine when the script library file is updated and which version of the library tools are in use.
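As a minimal sketch of these two conventions, the top of a shared script file might look like the following; the header format and the helper function are illustrative assumptions rather than part of the Forms API, and the sketch assumes the form's HTML environment exposes the standard alert function.

// Common.js -- shared functions and variables used by all forms in this tool.
// Version: 1.0.4  (increment whenever this file changes)
// Updated: <date> by <developer>

// Shared error-reporting helper called by the top-level functions in the
// form-specific script files (Contact.js, Issue.js, and so on).
function ShowError(err)
{
    // err.description holds the standard JScript error text.
    alert("An unexpected error occurred: " + err.description);
}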
Place system callout implementations in their respective form-specific script file to make scripts easier to find and maintain.
Enter a function reference in the system callout script file and define the function in the specific script file defined for the form. This provides a single file that contains the body of all the functions that are used on the script.
Use the following naming convention for the function names of the system callout implementations:
form_name_system_callout_name
For example, if you are implementing the following callout OnAfterInitialize for the Contact form, name the function Contact_OnAfterInitialize and place it in the form-specific-file, Contact.js.
Any calls to the script in the system callout script editor are calls to the implementation in the form-specific script files. This provides developers with one location to make changes to any script used by the Contact form.
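The following sketch shows the convention for a hypothetical Contact form: the system callout script holds only a reference to the implementation, and the body lives in the form-specific file. Whether your callout editor expects a call or a bare function name may vary, so treat the first line as illustrative.

// In the OnAfterInitialize system callout script for the Contact form:
Contact_OnAfterInitialize();

// In Contact.js: the implementation, named form_name_system_callout_name.
function Contact_OnAfterInitialize()
{
    try
    {
        // Form-specific initialization work goes here.
    }
    catch (err)
    {
        ShowError(err);   // hypothetical shared helper from Common.js
    }
}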
Assign class names defined in Style files to fields in the Designer or use other cascading style sheets selectors to achieve the desired visual effect. This makes it easier to maintain visual formatting and allows a designer familiar with cascading style sheets to alter visual formatting without editing a script.
Follow these general guidelines to create easy to maintain scripts.
It is important to remember that your system callout script can be called when the user is performing different actions with the form. For example, the OnBeforeInitialize script can be called when the user is:
Creating a record
Updating an existing record
Viewing an existing record
Viewing the record in the preview panel
Using the form to search the records
Your script may need to function differently in each context. For example, your script may need to enable or disable a field based on whether the user is viewing or updating the record.
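A sketch of what mode-aware initialization might look like appears below. The Forms API gives scripts the context they need, but the specific check is not shown in this article, so IsFormInEditMode is only a placeholder name; "Status" is a hypothetical field, and the sketch assumes the field's HTML element can be reached through the standard DOM.

// Sketch only: behave differently depending on how the form is being used.
function Contact_OnBeforeInitialize()
{
    try
    {
        var statusField = document.getElementById("Status");   // hypothetical field
        if (statusField != null && !IsFormInEditMode())         // placeholder check
        {
            // Viewing or previewing a record: make the field read-only.
            statusField.disabled = true;
        }
    }
    catch (err)
    {
        ShowError(err);
    }
}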
A top-level function is any function that is the first entry point where a script performs an operation. System callout implementations and event handlers for buttons or fields are considered top-level functions by this definition.
The Forms tool catches errors that occur in user-defined functions with no indication to the end user or developer that an error occurred. This is why it is good to wrap any script in top-level functions within try…catch blocks and display, or otherwise handle any resulting errors. This puts the developer in control of how to handle errors (display them or not or take some other action) which can make troubleshooting easier.
…
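As a sketch of the pattern, a top-level event handler (here, a hypothetical button handler) wraps all of its work in a try...catch block and routes any error to a shared handler:

// Sketch: a top-level handler wrapped in try...catch so that failures surface
// instead of being silently swallowed by the Forms tool.
function SubmitButton_OnClick()
{
    try
    {
        // Validate input, update fields, perform lookups, and so on.
        DoSubmitWork();   // hypothetical worker function
    }
    catch (err)
    {
        // The developer decides how the error is reported.
        ShowError(err);   // hypothetical shared helper from Common.js
    }
}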
Be sure to handle errors and display error messages.
Generally, it is not a good idea to hide an error and not display an error message. Users are confused if a feature is not working as expected but there is no indication to the user or description of the problem; the user cannot correct his or her mistake.
Ignoring or hiding errors can also make troubleshooting more difficult. Be sure to add comments that fully document any circumstance or situation that leads to an error that you track but do not display an error message.
The Microsoft Office Groove 2007 Forms Developer Reference contains the complete list of supported API variables and functions. For example, you may see the definitions (when using Microsoft Visual Studio) of global variables with the "g_" prefix, such as g_IsCreateAnother. These global variables and functions that are visible in the JavaScript sources are internal variables and functions and are not part of the supported API. While it is technically possible to use these variables and functions, you do so at the risk of those variables and functions being changed or removed in a later version of Groove (causing your tool not to function).
You should avoid depending on system implementation details for the same reasons. Examples of implementation details are the format of URLs returned from script lookups, and custom attributes on form fields such as ISREQUIRED.
Follow these scripting conventions.
Declare all variables using the var statement so that they stay within the intended scope instead of implicitly becoming global.
End all statements with a semicolon.
Place global variable declarations at the top of script files.
Avoid using the g_ prefix on variables
Use consistent casing conventions:
For variable names, use lower case for the first word and initial capitals for following words, such as "someVariable".
For function names, use initial capitals for all words, such as "SomeFunction".
Use consistent spacing, indenting, and formatting.
The following script presents an example of the conventions in use.
Example of consistent formatting:
... parent record could not be opened",
GrooveMessageBoxIcon_Information);
}
}
catch (err)
{
ShowError(err);
}
}
Example of inconsistent formatting:
... parent record could not be opened", GrooveMessageBoxIcon_Information);
}
}
catch ( err )
{
ShowError(err);
}
}
Use descriptive and meaningful comments to describe script statements and sections of logic that are complicated or unclear. Avoid comments that are vague or state the obvious, such as //getting data or //setting the variable.
Use comments to help you remember why you wrote the script the way that you did. Comments can help you since they clarify decisions made at the time of development and they help other developers understand the code when they work on it and make changes.
You can use scripts to add custom script code to your Groove Forms tool and to extend the capabilities of the Forms tool to handle specialized application requirements. This article detailed a number of best practices to use to develop the design and to organize the design elements so that the design is easier to maintain. These best practices are useful for any Forms tool design even if you do not add scripts.
For more information about Microsoft Office Groove 2007 Forms development, see:
Welcome to the Microsoft Office Groove 2007 Forms Developer Reference. | http://msdn.microsoft.com/en-us/library/bb894664.aspx | crawl-002 | en | refinedweb |
Provides authorization access checking for service operations.
Public Class ServiceAuthorizationManager
Dim instance As ServiceAuthorizationManager
public class ServiceAuthorizationManager
public ref class ServiceAuthorizationManager
This class is responsible for evaluating all policies (rules that define what a user is allowed to do), comparing the policies to claims made by a client, setting the resulting AuthorizationContext to the ServiceSecurityContext, and providing the authorization decision whether to allow or deny access for a given service operation for a caller.
The CheckAccessCore method is called by the Windows Communication Foundation (WCF) infrastructure each time an attempt to access a resource is made. The method returns true or false to allow or deny access, respectively.
The ServiceAuthorizationManager is part of the WCF Identity Model infrastructure. The Identity Model enables you to create custom authorization policies and custom authorization schemes. For more information about how the Identity Model works, see Claims and Authorization.
This class does not perform any authorization and allows users to access all service operations. To provide more restrictive authorization, you must create a custom authorization manager that checks custom policies. To do this, inherit from this class and override the CheckAccessCore method. Specify the instance of the derived class through the ServiceAuthorizationManager property.
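For illustration, a minimal self-hosting sketch (not part of the original reference topic) that assigns such a derived manager is shown below; "CalculatorService" and the base address are placeholder names, and the same type could instead be registered in configuration through the <serviceAuthorization> element mentioned later.

using System;
using System.ServiceModel;

class Program
{
    static void Main()
    {
        // "CalculatorService" and the base address are placeholders for this sketch.
        using (ServiceHost host = new ServiceHost(typeof(CalculatorService),
                                                  new Uri("http://localhost:8000/calc")))
        {
            // ServiceHostBase.Authorization exposes the ServiceAuthorizationBehavior;
            // assigning a custom manager routes every request through CheckAccessCore.
            host.Authorization.ServiceAuthorizationManager =
                new MyServiceAuthorizationManager();

            host.Open();
            Console.WriteLine("Service is running. Press ENTER to stop.");
            Console.ReadLine();
        }
    }
}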
In CheckAccessCore, the application can use the OperationContext object to access the caller identity (ServiceSecurityContext).
By getting the IncomingMessageHeaders property, which returns a MessageHeaders object, the application can access the service (To), and the operation (Action).
By getting the RequestContext property, which returns a RequestContext object, the application can access the entire request message (RequestMessage) and perform the authorization decision accordingly.
For an example, see How To: Create a Custom AuthorizationManager for a Service.
To create custom authorization policies, implement the IAuthorizationPolicy class. For an example, see How To: Create a Custom Authorization Policy.
To create a custom claim, use the Claim class. For an example, see How To: Create a Custom Claim. To compare custom claims, you must compare claims, as shown in How To: Compare Claims.
For more information, see Custom Authorization.
You can set the type of a custom authorization manager using the <serviceAuthorization> element in a client application configuration file.
The following example shows a class named MyServiceAuthorizationManager that inherits from the ServiceAuthorizationManager and overrides the CheckAccessCore method.
public class MyServiceAuthorizationManager : ServiceAuthorizationManager
{
    protected override bool CheckAccessCore(OperationContext operationContext)
    {
        // Extract the action URI from the OperationContext. Match this against the claims
        // in the AuthorizationContext.
        string action = operationContext.RequestContext.RequestMessage.Headers.Action;

        // Iterate through the various claim sets in the AuthorizationContext.
        foreach (ClaimSet cs in operationContext.ServiceSecurityContext.AuthorizationContext.ClaimSets)
        {
            // Examine only those claim sets issued by the system.
            if (cs.Issuer == ClaimSet.System)
            {
                // Iterate through the claims in the claim set.
                foreach (Claim c in cs)
                {
                    // If the Claim resource matches the action URI then return true to allow access.
                    if (action == c.Resource.ToString())
                        return true;
                }
            }
        }

        // If this point is reached, return false to deny access.
        return false;
    }
}
Windows 7, Windows Vista, Windows XP SP2, Windows Server 2008 R2, Windows Server 2008, Windows Server 2003 | http://msdn.microsoft.com/en-us/library/system.servicemodel.serviceauthorizationmanager.aspx | crawl-002 | en | refinedweb |
Roger Wolter
Microsoft Corporation
May 2001
Summary: This article describes how to use the SOAP client components included in Microsoft Windows XP to build a simple Web Service client application. (16 printed pages)
Contents
Introduction
What Is SOAP?
What Is WSDL?
Building A Simple Client Application
Next Steps
The SOAPClient Object
Mssoapinit Method
ClientProperty Property
ConnectorProperty Property
Detail Property
HeaderHandler (SOAPClient)
Faultactor Property (SOAPClient)
Faultcode Property (SOAPClient)
Faultstring Property (SOAPClient)
For example, a SOAP request that calls an Add operation looks like this:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<SOAP-ENV:Envelope
SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
<SOAP-ENV:Body>
<!-- the Add element and its parameters appear here -->
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
The SOAP-ENV:encodingStyle attribute defines the encoding style used in the message. In this case, standard section 5 encoding is used. The Body sub element of the Envelope element contains the SOAP Message. As defined in section 7 of the SOAP specification, the Add element represents a call on an operation named Add. The sub elements of Add are the parameters of the Add method call.
The response to this message would look like this:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<SOAP-ENV:Envelope
SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
<SOAP-ENV:Body>
<SOAPSDK1:AddResponse
xmlns:
<Result>1221</Result>
</SOAPSDK1:AddResponse>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
WSDL stands for Web Services Description Language. Here is the WSDL file that describes the currency exchange service used in the example later in this article:
<?xml version="1.0" ?>
<definitions ...>
<message name="getRateRequest1">
<part name="country1" type="xsd:string" />
<part name="country2" type="xsd:string" />
</message>
<message name="getRateResponse1">
<part name="Result" type="xsd:float" />
</message>
<portType name="net.xmethods.services.
currencyexchange.CurrencyExchangePortType">
<operation name="getRate" parameterOrder="country1 country2">
<input message="tns:getRateRequest1" />
<output message="tns:getRateResponse1" />
</operation>
</portType>
<!-- The binding and service elements, described below, appear here. -->
</definitions>
Note that the xsd namespace is the 2001 version. The SOAP Toolkit defaults to the 2001 Recommendation version of the XSD Schema namespace but it can understand WSDL files written with older namespace versions.
If the WSDL used any complex types, the XSD Schema definition for these types would be contained in a <types></types> element. Since this WSDL file doesn't use any complex types, that element is missing.
The next section defines the messages used by the service defined in this WSDL:
<message name="getRateRequest1">
<part name="country1" type="xsd:string" />
<part name="country2" type="xsd:string" />
</message>
<message name="getRateResponse1">
<part name="Result" type="xsd:float" />
</message>
The names and datatypes of the method parameters are defined here.
The portType element defines a mapping between the messages defined in the previous section and the operations that use them:
<portType name="net.xmethods.services.currencyexchange.CurrencyExchangePortType">
<operation name="getRate" parameterOrder="country1 country2">
<input message="tns:getRateRequest1" />
<output message="tns:getRateResponse1" />
</operation>
</portType>
The binding element defines a binding between the abstract operations defined in the portType element and how they are implemented in SOAP. This is done in a separate element because WSDL is intended to define other non-SOAP protocols. A few things to note here: the style="rpc" means that this message uses the rpc rules defined in section 7 of the SOAP standard. If style="document" was specified, the contents of the SOAP message would be an XML document. The transport attribute indicates that the SOAP messages will be sent in SOAP HTTP messages, the soapAction attribute defines the contents of the soapAction header in the HTTP packet, and the use="encoded" attribute indicates that SOAP section 5 encoding is used for parameter values.
The service element ties a SOAP binding to a physical implementation of the service. This is where the URL of the service is defined. Notice that this service is found at port 9090—SOAP doesn't just work through port 80. (There is also a copy of the service running on port 80 if your firewall requires you to use port 80.)
This section walks through the process of building a simple SOAP application using the SOAP Client included in Windows XP. The Web Service used in our example is the currency exchange rate service located at the XMethods site, an online listing of available SOAP services.
This service accepts two countries as parameters and returns the exchange rate between them. The WSDL file that describes this service is available at:.
The steps necessary to call this SOAP method using the high-level SOAP API are: create a SOAPClient object, initialize the SOAPClient object with the WSDL file, and call the method.
Here's the simple VBScript code that accomplishes this:
dim SOAPClient
set SOAPClient = createobject("MSSOAP.SOAPClient")
on error resume next
SOAPClient.mssoapinit("
CurrencyExchangeService.wsdl")
if err then
wscript.echo SOAPClient.faultString
wscript.echo SOAPClient.detail
end if
wscript.echo SOAPClient.getRate("England","Japan")
if err then
wscript.echo SOAPClient.faultString
wscript.echo SOAPClient.detail
end if
The three bold lines correspond to the three steps mentioned previously. The parameter to mssoapinit is a WSDL file specification, which can be either an URL if the WSDL file is located on a remote system or a file path if the WSDL file is present on the local machine. A local WSDL file is more efficient because no network round-trips are necessary to retrieve the file. Still, it is easier to administer a single WSDL file that gets loaded to all the clients as it is used.
To run this SOAP method, type the code into a file called "currency.vbs" and then execute it with "cscript currency.vbs". The results should look something like this:
C:\SOAPDemo>cscript currency.vbs
Microsoft (R) Windows Script Host Version 5.1 for Windows
173.9434
You should easily be able to apply this to VB, C++, or any other COM-enabled language. Here's the same service implemented in VB:
Private Sub Form_Load()
Dim SOAPClient As SOAPClient
Set SOAPClient = New SOAPClient
On Error GoTo SOAPError
SOAPClient.mssoapinit _
("")
MsgBox Str(SOAPClient.getRate("England", "Japan")), _
vbOKOnly, "Exchange Rate"
Exit Sub
SOAPError:
MsgBox SOAPClient.faultstring + vbCrLf + SOAPClient.detail, vbOKOnly,_
"SOAP Error"
End Sub
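The same late-bound COM call can also be made from C#. The following is only a sketch, assuming the SOAP Toolkit is installed, a local copy of CurrencyExchangeService.wsdl is available, and C# 4.0 dynamic binding is used for the IDispatch calls:

using System;

class CurrencyExchangeClient
{
    [STAThread]
    static void Main()
    {
        // Late-bound creation of the MSSOAP.SOAPClient COM component.
        Type soapClientType = Type.GetTypeFromProgID("MSSOAP.SOAPClient");
        dynamic soapClient = Activator.CreateInstance(soapClientType);

        // Initialize from a local copy of the WSDL file (a URL works as well).
        soapClient.mssoapinit("CurrencyExchangeService.wsdl");

        double rate = soapClient.getRate("England", "Japan");
        Console.WriteLine("The exchange rate is {0}", rate);
    }
}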
Note: The version of the SOAPClient that is included in Windows XP is not suitable for running in an ASP application. To do that, you must download the full release of the SOAP Toolkit 2.0.
Now that you know how to build a Web Services client application using SOAP in Windows XP, you might want to expand your SOAP knowledge and build Web Services of your own. Two good sources of information are and. These sites have links to SOAP information, white papers, other SOAP sites, and the download page where you can download the complete SOAP Toolkit 2.0 to start building your own Web Service applications. Please note that when you install the full release you will get some warnings about the inability to overwrite the current files because of system file protection. It is safe to ignore these errors. You can also go to XMethods and try some more of the services there—just keep in mind that in order for the SOAP Toolkit to work, the service must supply a WSDL file.
This section reviews the object model exposed by the SOAPClient object. Each method and property exposed by the SOAPClient is described. For more information, see the SOAP Toolkit 2.0 documentation.
The mssoapinit method initializes the SOAPClient object using the Web Services Description Language (WSDL) file as input. All of the operations in the identified service are bound to the SOAPClient object during initialization. Thus, you can call the operations defined in the service using the SOAPClient object.
HRESULT mssoapinit(
[in] BSTR bstrWSDLFile,
[in, optional, defaultvalue("")] BSTR bstrServiceName,
[in, optional, defaultvalue("")] BSTR bstrPort,
[in, optional, defaultvalue("")] BSTR bstrWSMLFile);
bstrWSDLFile
bstrWSDLFile is the URL of the WSDL file that describes the services offered by the server.
bstrServiceName
bstrServiceName is the optional service in the WSDL file that contains the operation specified in the Simple Object Access Protocol (SOAP) request. If this parameter is missing, null, or an empty string, the mssoapinit method uses the first service in the specified WSDL file when initializing the SOAPClient object.
bstrPort
bstrPort is the optional port (defined in the WSDL file) that exposes the operation specified in the SOAP request. If this parameter is missing, null, or an empty string, the mssoapinit method uses the first port in the specified service when initializing the SOAPClient object.
Sub mssoapinit(bstrWSDLFile As String,_
[bstrServiceName As String],_
[bstrPort As String],_
[bstrWSMLFile As String])
set soapclient = CreateObject("MSSOAP.SOAPClient")
call soapclient.mssoapinit("DocSample1.wsdl", "", "", "")
wscript.echo soapclient.AddNumbers(2,3)
wscript.echo soapclient.SubtractNumbers(3,2)
The ClientProperty property sets and retrieves properties specific to the SOAPClient object.
[propget] HRESULT ClientProperty(
[in] BSTR PropertyName,
[out, retval] VARIANT* pPropertyValue);
[propput] HRESULT ClientProperty(
[in] BSTR PropertyName,
[in] VARIANT pPropertyValue);
PropertyName
PropertyName is the name of the property to set or retrieve. See the following Remarks section for a list of properties. The following properties are supported (the property names are case sensitive):
When ServerHTTPRequest is set to true, the WinHTTP Proxy Configuration Utility must be used to configure WinHTTP. This is necessary even if a proxy server is not being used. Download the utility and follow the usage instructions in the ReadMe.txt file provided with the download.
pPropertyValue
pPropertyValue is the property's value.
Property ClientProperty(PropertyName As String) As Variant
Dim Client As New SOAPClient
Client.ClientProperty("ServerHTTPRequest") = True
Sets and retrieves properties specific to the transport protocol connector used by a SOAPClient object.
[propget] HRESULT ConnectorProperty (
[in] BSTR PropertyName,
[out, retval] VARIANT* pPropertyValue);
[propput] HRESULT ConnectorProperty (
[in] BSTR PropertyName,
[in] VARIANT pPropertyValue);
PropertyName
PropertyName is the name of the property to set or retrieve. Which properties are supported depends on the connector being used. The protocol specified by the <soap:binding> transport attribute in the Web Services Description Language (WSDL) file determines the connector to use.
pPropertyValue is the value of the property.
Property ConnectorProperty(PropertyName As String)
Dim Client As New SOAPClient
Client.mssoapinit WSDLFile, Service, Port
Client.ConnectorProperty("ProxyUser") = User
Client.ConnectorProperty("ProxyPassword") = Password
[CURRENT_USER | LOCAL_MACHINE\[store-name\]]cert-name
with the defaults being CURRENT_USER\MY (the same store that Microsoft Internet Explorer uses).
The detail property is read-only. It provides the value of the <detail> element of the <Fault> element in the Simple Object Access Protocol (SOAP) message.
[propget] HRESULT detail(
[out, retval] BSTR* bstrDetail);
bstrDetail
bstrDetail is the detail of the fault.
Property detail As String
Sets the header handler for the next call against this client instance.
HRESULT HeaderHandler([in] IDispatch* rhs);
rhs
rhs is the reference to the COM class interface that implements the IHeaderHandler.
Property HeaderHandler As Object
Set sc = WScript.CreateObject("MSSOAP.SOAPClient")
sc.mssoapinit ""
sc.HeaderHandler =
WScript.CreateObject("SessionInfoClient.clientHeaderHandler")
Sc.SomeMethod "param1", "param2"
The faultactor property is read-only, and provides the Universal Resource Identifier (URI) that generated the fault.
[propget] HRESULT faultactor(
[out, retval] BSTR* bstrActor);
bstrActor
The bstrActor is the URI that generated the fault.
Property faultactor As String
wscript.echo soapclient.faultactor
The faultcode property is read-only. It provides the value of the <faultcode> element of the <Fault> element in the Simple Object Access Protocol (SOAP) message.
[propget] HRESULT faultcode(
[out, retval] BSTR* bstrFaultcode);
bstrFaultcode
bstrFaultcode is the value of the <faultcode> element.
Property faultcode As String
wscript.echo soapclient.faultcode
This faultstring property is read-only. It provides the value of the <faultstring> element of the <Fault> element in the Simple Object Access Protocol (SOAP) message.
[propget] HRESULT faultstring(
[out, retval] BSTR* bstrFaultstring);
bstrFaultstring
bstrFaultstring is the value of the <faultstring> element.
Property faultstring As String
wscript.echo soapclient.faultstring | http://msdn.microsoft.com/en-us/library/ms997641.aspx | crawl-002 | en | refinedweb |
' Set up the PresentParameters which determine how the device behaves
Dim presentParams As New PresentParameters()
presentParams.SwapEffect = SwapEffect.Discard
' Make sure we are in windowed mode when we are debugging
#If DEBUG Then
    presentParams.Windowed = True
#End If
' Now create the device
device = New Device(adapterOrdinal, DeviceType.Hardware, Me, _
    createFlags, presentParams)
private Device device;
device.Clear(ClearFlags.Target, Color.DarkBlue, 1.0f, 0);
device.Present();
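In context, those two calls typically go inside the form's paint handler so the scene is redrawn continuously; a rough sketch of that handler (the Invalidate call simply forces another paint message):

protected override void OnPaint(System.Windows.Forms.PaintEventArgs e)
{
    // Clear the back buffer to dark blue and show it.
    device.Clear(ClearFlags.Target, Color.DarkBlue, 1.0f, 0);
    device.Present();

    // Request another paint so the render loop keeps running.
    this.Invalidate();
}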
Public Shared Function CalculateFrameRate() As Integer
    If System.Environment.TickCount - lastTick >= 1000 Then
        lastFrameRate = frameRate
        frameRate = 0
        lastTick = System.Environment.TickCount
    End If
    frameRate += 1
    Return lastFrameRate
End Function 'CalculateFrameRate

Private Shared lastTick As Integer
Private Shared lastFrameRate As Integer
Private Shared frameRate As Integer
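For readers following along in C#, a direct translation of the Visual Basic FrameRate helper above:

public static class FrameRate
{
    private static int lastTick;
    private static int lastFrameRate;
    private static int frameRate;

    public static int CalculateFrameRate()
    {
        // Once per second, publish the counted frames and reset the counter.
        if (System.Environment.TickCount - lastTick >= 1000)
        {
            lastFrameRate = frameRate;
            frameRate = 0;
            lastTick = System.Environment.TickCount;
        }
        frameRate++;
        return lastFrameRate;
    }
}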
this.Text = string.Format("The framerate is {0}", FrameRate.CalculateFrameRate());
Just thought I'd suggest linking to the previous/next sections of this tutorial. I've also have troubles finding each section of this article via your web page. After I did section 1 I couldn't find section 2 without typing 'Beginning Game Development: Part II' into google. Perhaps I'm blind and just missed a very visible link to the articles but right now I'm not seeing it. Thanks,
Adam
After Putting in Private Device device .......... It said It couldn't find a Namespace Device
Sean, you must right-click it and choose "using Microsoft.DirectX" to link it to that class ;)
make sure you put in -
using Microsoft.DirectX.Direct3D;
- on around line 13 in the GameEngine.cs
"After Putting in Private Device device .......... It said It couldn't find a Namespace Device"
u just need to put
using Microsoft.DirectX;
in GameEngine class
I have had the same issue as Sean (April 7, 2007)
I added the code: using Microsoft.DirectX.Direct3D;
and the error dissapeared. I hope that this would be sufficient to use for the rest of the game.
Anton
I still get the error about namespace samples is not part of class microsoft... I've done everything as you said... Pleas help...
Hi!
Sean, you must put the following line:
See ya!
re: Namespace Device error, make sure to add the "using Microsoft.DirectX;" and "using Microsoft.DirectX.Direct3D;" directives to the top of your GameEngine.cs file
I get the same error as Sean...
Error 1 The type or namespace name 'Device' could not be found (are you missing a using directive or an assembly reference?)
Never mind, fixed the problem.
You must add this at the top:
I plugged in the Private Device device, and is said that it coudn't find a namespace device.
Same problem,Can't find the Namespace Device
if it couldn't find a Namespace...add to code
yea it says to be exact.. The type or namespace name 'Device' could not be found(are you missing a using directive or an assembly reference.
those words pop up in my compiler. it is saying to me as i think i may be wrong but your trying to create a type of something we havent got down.. #include<somefile> anyone?
also any chance you can post the actual code. this will enable us to compare it. copy paste the gameengine class would be appreciated
I am interested in learning how to use models in this content, I am a new person to this your help would be apreciated.
Visual C# didn't like the int's in declarations in the FrameRate class
yeah!....i'm having the same problem....it said it couldn't find a namespace Device....
Device is located under Microsoft.DirectX.Direct3D
Sean: you must add
"using Microsoft.DirectX.Direct3D;"
to the GameEngine-class.
># Sean said on April 7, 2007 2:40 AM:
>After Putting in Private Device device .......... It said It couldn't find a Namespace Device
You should add 'using Microsoft.DirectX.Direct3D;' to the others, than it'll work.
Silicon Brain
private Device device; gave an error, adding Microsoft.DirectX and Microsoft.DirectX.Direct3D to the references did not work for me (c# express). I had to add them in the using section:
The type of namespace name 'Device' could not be found.
It occured after putting in the: private Device device;
I'm a total noob in this and am stuck here, can anybody help me?
Use
in the using region
if you get the namespace error for
To Sean
Just float your mouse on the device word in your code page.Click on the arrow that appears you'll get a small box next to the word device.Click on Direct 3D Device.
You just need to include directX libraries
i had the same problem as sean
Sean,
Add the following using directive to the top of your GameEngine form:
Nevin
Type this line:
and the problem is solved
I've just made the program upto the paragraph "3d Graphics Terminology" and my program just crashes. After commenting some code out is seems that the line with "new Device(etc.)" causes the problem, any suggestions?
you need to make sure you have using Microsoft.DirectX.Direct3D; at the top when u have ur input of "private device device" to work and not bring up the namespace not found message
For whatever reason, I get an exception thrown from the constructor of Device if I don't set presentParams.Windowed = true. I don't know why, but when this is not set on my system, the constructor throws an unhelpful (Message = "Error in Application") exception...
Any ideas?
Dast, in order to you Windowed mode you must setup BackBuffers in the PresentParameters.
When I try code for VB from part I or part II, I get the same error message: 'Samples' is not a member of 'Microsoft'. Do I have to install C# for support of DirectXSampleFramework?
When i get to the point of commenting out the stuff in dxmutmisc.cs and try to build i get the following errors:
This one is from the GameEngine.cs file, with reference to the OnPaint method parameters:
Error 1 The type or namespace name 'PaintEventArgs' could not be found (are you missing a using directive or an assembly reference?)
And i get a few of these every time i have something to do with the Drawing object.
Error 7 The type or namespace name 'Drawing' does not exist in the namespace 'System' (are you missing an assembly reference?)
I have followed everything as explained and i am using the DirectX SDK (April 2007)
I get an error when trying to compile and the debugger stops at the line of code: Application.Run(new GameEngine() );
The error is:
BadImageFormatException was unhandled
"is not a valid Win32 application. (Exception from HRESULT: 0x800700C1)"
Anyone else get this or know how to resolve it? I assume it may have something to do with the 64-bit OS.
Caps caps = Manager.GetDeviceCaps(adapterOrdinal, DeviceType.Hardware); throws an exception
An unhandled exception of type 'Microsoft.DirectX.Direct3D.NotAvailableException' occurred in Microsoft.DirectX.Direct3D.dll
Any Ideas
I made the program up to "3d Graphics Terminology", and I'm using Microsoft.DirectX.Direct3D but the program will only run in Debug mode. Otherwise it crashes.
I get this exception when I try and debug or run the program, this is with downloaded battletank files so no mistyping from me. Can anyone help me (email:luke321321{at]gmail DoT com, exception:
System.BadImageFormatException was unhandled
Message=" is not a valid Win32 application. (Exception from HRESULT: 0x800700C1)"
Source="BattleTank2005"
StackTrace:
at BattleTank2005.GameEngine..ctor()
at BattleTank2005.Program.Main() in C:\Users\Luke\Documents\Visual Studio 2005\Projects\BattleTank2005\BattleTank2005\Program.cs:line 19()
I cant build at the step right before "My GPU is Bigger than Yours". I get this error:
Error 1 'BattleTank2005.GameEngine.Dispose(bool)': no suitable method found to override C:\Users\Angel\AppData\Local\Temporary Projects\BattleTank2005\Form1.Designer.cs 14 33 BattleTank2005
and it points to the line:
protected override void Dispose(bool disposing)
Ok, I fixed the problem (I didnt add .cs when I renamed form1 to GameEngine). But now, after adding "Microsoft.Samples.DirectX.UtilityToolkit;" at the top of the code, I get an error that says:
Error 1 A namespace does not directly contain members such as fields or methods C:\Users\Angel\AppData\Local\Temporary Projects\BattleTank2005\GameEngine.cs 1 1 BattleTank2005
P.S. Is it the placement? I put it at the very top
When I put in the code:
this.Text = string.Format("The framerate is {0}",
FrameRate.CalculateFrameRate());
C# comes up with a bunch of errors.
I put the code right after the code that says:
private double deltaTime;
Am I putting it in the wrong place or what. Please help
Anyone solved Dast's problem? I am having the same issue.
Hello,
I get an error after I press f6: "Error 1 The type or namespace name 'Samples' does not exist in the namespace 'Microsoft' (are you missing an assembly reference?) "
I tried to add: using Microsoft.DirectX.Direct3D;
The problem remains.
my email:[email protected]
Do you know what to do in this case?
thank u...
I am getting Microsoft.DirectX.Direct3D.NotAvailableException at Caps caps = Manager.GetDeviceCaps(adapterOrdinal, DeviceType.Hardware);
Any suggessions?
Thanks
I've got exception on the Device Constructor as Dast said it. Any useful ideas from those of who has worked it out after removing(commenting out) the
#if DEBUG
presentParams.Windowed=true;
#endif
section.
hey there,
at first i wan't to thank you for this very good article(s). I'm a programmer since two years now but i've never tried to to desing a came yet. YOU got me realy into that now ;)
Now my question:
is there any specific reason why you create global variables such as "deltaTime" or "device" at the very END of a class or is it just some sort of style?
I learned to create global variables always at the BEGINNING of a class.
It doesn't matter anyhow where i create them, does it?
Seems like i just learned a diffrent style... please correct me if i'm wrong.
chris from germany
When I run it in debug mode I get the blue window... but if I just run it (full-screen) it crashes.
I suspect Vista and DX10 are my problems... and fixes?
I am getting a similar exception at
Caps caps = Manager.GetDeviceCaps(adapterOrdinal, DeviceType.Hardware)
The error string :"D3DERR_NOTAVAILABLE" and errror code -2005530518.
I am up to the part where it says to complile the program and you get a blue screen. I have done all the suggestions made including DirectX libraries and remcoing a few "unsafe" words from the dxmutmisc.cs file but I still get an error message about not finding the namespace "Framework" in that very file. I am using Visual C# Express Edition.
Hey just FYI, if you are running on 64 bit, you have to change the build tab on the project properties to build for x86, otherwise you get an error every time you try to run.
I get an error that says cannot process code for
if (caps.DeviceCaps.SupportsHardwareTransformAndLight) {
createFlags = CreateFlags.HardwareVertexProcessing;
}
else
{
createFlags = CreateFlags.SoftwareVertexProcessing;
}
it says you shouldn't modify the code in the Initialize Component method.
I get the same exception if I don't set presentParams.Windowed = true
I got the same error Sean did... I added using Microsoft.DirectX.Direct3D; but then it threw a warning:
Warning 1 Field 'BattleTank2005.GameEngine.device' is never assigned to, and will always have its default value null C:\Users\Sean\Documents\Visual Studio 2005\Projects\BattleTank2005\BattleTank2005\GameEngine.cs 32 24 BattleTank2005
Any one else get this warning also??
I really like your writing style and I feel like I am learning a lot. At time it seems like you are trying to cram a lot of information into a little space but I understand where you are coming from. This is a tutorial and not a book. With that said I would like to suggest that you write a book and try to get one of those big publishers to pick you up. I'm sure it's easier said then done but after reading two parts of your tutorial I am hooked and would purchase your book and recommend it to others in a heart beat. I have wanted to write games for a long time now and this is the first time any tutorials have made since to me.
You stated you removed Application.EnablRTLMirroring() from the Program.cs class. My program.cs class did not have this method call in it. I assume this is because I am running a newer update of VS2005
My only question is: Can you elaborate more on why you added a using block in the program.cs class. Or better yet can you explain why or how it works. I understand the purpose but not why it does that.
Just so i understand, what do you mean by "wraping the creation of the GameEngine class into a using statement"? That was mentionned in the Code Housekeeping section.
Thank-you!
Same problem as Dast - without adding
presentParams.Windowed = true;
to the GameEngine() class, the program crashes.
OS I'm using is Windows Vista.
Problem that I am having is in the dxmutmisc.cs file provided with the 2007 SDK. FramWork is not identiefied:
Error 1 The type or namespace name 'Framework' could not be found (are you missing a using directive or an assembly reference?) C:\Projects C#\BattleTank2005\BattleTank2005\DirectXSupport\dxmutmisc.cs 2189 61 BattleTank2005
Anyone figure out why it Crashes?
I've added all the includes and what not... runs but crashes..
Would anyone please tell me why the window stays grey when I start it? I told the computer to change the color to dark blue in the color part of the device's setup near the framework timer start function but it just won't do it! Any suggestions?
| http://blogs.msdn.com/coding4fun/archive/2006/11/03/940223.aspx | crawl-002 | en | refinedweb |
Inspired by a few recent posts I’ve run across ( and). I decided to write a simple threading library so, I can run the Erlang ring benchmark in C#.
Here is the code of the ring benchmark itself; it uses my own ThreadIterators library, which I’ll get around to posting one day once I clean it up a bit.
using System.Collections.Generic;
using ThreadIterators;

static public class RingBenchmark
{
    static IEnumerable<Event> ForwardMessage(Mailbox<int> inbox, Mailbox<int> outbox)
    {
        for (; ; )
        {
            var evt = inbox.Recv();
            yield return evt;
            outbox.Send(evt.Value);
        }
    }

    static Mailbox<int> SpawnForwaders(int n, Mailbox<int> inbox)
    {
        var prev = inbox;
        var next = inbox;
        for (int i = 0; i < n; i++)
        {
            next = new Mailbox<int>();
            var thIter = ForwardMessage(prev, next);
            ThreadIterator.Spawn(thIter);
            prev = next;
        }
        return next;
    }

    static IEnumerable<Event> PumpRing(int m, Mailbox<int> outbox)
    {
        for (int i = 0; i < m; i++)
        {
            outbox.Send(i);
        }
        outbox.Send(-1);
        yield return null;
    }

    static IEnumerable<Event> DrainRing(Mailbox<int> inbox)
    {
        for (; ; )
        {
            var evt = inbox.Recv();
            yield return evt;
            if (evt.Value < 0)
            {
                yield break;
            }
        }
    }

    public static IEnumerable<Event> RingBenchMark(int n, int m)
    {
        var inbox = new Mailbox<int>();
        var outbox = SpawnForwaders(n, inbox);
        var thIter1 = PumpRing(m, inbox);
        var thIter2 = DrainRing(outbox);
        ThreadIterator.Spawn(thIter1);
        var th = ThreadIterator.Spawn(thIter2);
        yield return th.ExitedEvent;
    }
}
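Since the ThreadIterators library itself isn't shown here, it can help to contrast the iterator-based ring with a naive baseline that burns one operating-system thread per ring node. The sketch below is only for comparison and is not part of the library; it assumes .NET 4's BlockingCollection<int> as the mailbox, and the gap between this and the iterator version is exactly what motivates lightweight threads.

using System;
using System.Collections.Concurrent;
using System.Threading;

static class OsThreadRing
{
    public static void Run(int n, int m)
    {
        var first = new BlockingCollection<int>();
        var inbox = first;

        // Build the ring: each node forwards from its input queue to its output queue.
        for (int i = 0; i < n; i++)
        {
            var input = inbox;
            var output = new BlockingCollection<int>();
            var t = new Thread(() =>
            {
                for (; ; )
                {
                    int v = input.Take();
                    output.Add(v);
                    if (v < 0) return;   // -1 is the shutdown token
                }
            });
            t.IsBackground = true;
            t.Start();
            inbox = output;
        }

        // Pump m messages plus the shutdown token into the ring...
        for (int i = 0; i < m; i++)
        {
            first.Add(i);
        }
        first.Add(-1);

        // ...and drain them at the far end.
        for (; ; )
        {
            if (inbox.Take() < 0) break;
        }
    }
}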
Here is a table with the raw data, where I’ve fixed M to 1000 and exponentially increased N.
Here’s a graph that compares the ratio of runtimes.
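A hypothetical harness for producing numbers like these; OsThreadRing is the baseline sketch above, and the iterator-based benchmark would be timed the same way once the library is published.

using System;
using System.Diagnostics;

static class Harness
{
    static void Main()
    {
        const int m = 1000;
        // Larger rings quickly become expensive with one OS thread per node.
        foreach (int n in new[] { 10, 100, 1000 })
        {
            var sw = Stopwatch.StartNew();
            OsThreadRing.Run(n, m);
            sw.Stop();
            Console.WriteLine("N = {0,5}: {1} ms", n, sw.ElapsedMilliseconds);
        }
    }
}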
If you're writing a custom control, it's nice to be able to lock off the controls collection from the outside world.
The first place one might think of going to accomplish this is the "Controls" property off of System.Windows.Forms.Control. Unfortunately, this is not a virtual property. But wait! Not all hope is lost - there is a protected method called "CreateControlsInstance" which is called when the control wants to make a child controls collection.
CreateControlsInstance can be overridden to provide a custom control collection:
protected override Control.ControlCollection CreateControlsInstance() {
    return new MyReadonlyControlCollection(this);
}
Here's a really simple "read-only" control collection example:
public class MyReadonlyControlCollection : Control.ControlCollection {
    public MyReadonlyControlCollection(Control owner) : base(owner) {
    }

    public override void Add(System.Windows.Forms.Control value) {
        throw new NotSupportedException();
    }

    public override void Remove(System.Windows.Forms.Control value) {
        throw new NotSupportedException();
    }

    internal void AddInternal(System.Windows.Forms.Control value) {
        base.Add(value);
    }

    internal void RemoveInternal(System.Windows.Forms.Control value) {
        base.Remove(value);
    }

    internal void RemoveAtInternal(int index) {
        RemoveInternal(this[index]);
    }

    internal void ClearInternal() {
        int count = this.Count;
        for (int i = 0; i < count; i++) {
            if (Count <= 0) {
                break;
            }
            RemoveInternal(this[0]);
        }
    }
}
...and if you want an easy way of working with this class within your control you could add a property like so:
private MyReadonlyControlCollection PrivateControls {
    get {
        return this.Controls as MyReadonlyControlCollection;
    }
}
So that working with your readonly collection is as easy as:
this.PrivateControls.AddInternal(b);
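Putting the pieces together, a custom control that uses the read-only collection might look something like this; LockedPanel and its Button child are illustrative names only, and the class is assumed to live in the same assembly as MyReadonlyControlCollection so the internal helpers are visible.

using System;
using System.Windows.Forms;

public class LockedPanel : Control {
    private Button closeButton = new Button();

    public LockedPanel() {
        closeButton.Text = "Close";
        // Internal code can still manage children through the internal helpers.
        this.PrivateControls.AddInternal(closeButton);
    }

    protected override Control.ControlCollection CreateControlsInstance() {
        return new MyReadonlyControlCollection(this);
    }

    private MyReadonlyControlCollection PrivateControls {
        get { return this.Controls as MyReadonlyControlCollection; }
    }
}

// Outside callers hit the NotSupportedException instead:
// new LockedPanel().Controls.Add(new Button());    // throws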
| http://blogs.msdn.com/jfoscoding/articles/450843.aspx | crawl-002 | en | refinedweb |
Vesa "vesku" Juvonen"Circumstances may cause interruptions and delays, but never lose sight of your goal." Server2007-03-23T11:28:00ZWCM enabled collaboration sites<p>Usually collaboration sites (team sites) and WCM functionalities (publishing sites) are seen as two totally separate functionalities provided by SharePoint. However by combining these both options, we can provide even more sophisticated functionalities.</p> <p>One of the challenges in the team sites is the fact that there’s no easy way to store metadata in “site level”, since all of the stored information is managed in the lists. Wouldn’t it be nice, if we could dynamically list for example all of the team or project sites of one particular company division or group team sites based on the project manager assigned to the particular project site.</p> <p>Usually this kind of requirements are solved by implementing a custom web part, which is used to store the “<em>site level metadata</em>” to database. This is good solution, but actually we can implement the same functionality just by utilizing out of the box functionalities.</p> <h3>Combining best of the both</h3> <ul> <li>WCM provides us a way to store “site level” metadata. This can achieved by storing metadata information in the <em>welcome page</em> <ul> <li>This way we can utilize for example CQWP to list the content dynamically within site collections </li> </ul> </li> <li>From team sites we can combine the great collaboration tools, which usually are not provided in WCM sites </li> </ul> <p>Following chapters define one example usage model, which I prepared for development oriented trainings back in spring 2007. We have also used this same model successfully in multiple customer engagements. </p> <h3>Introduction to the solution</h3> <p>Following project catalog and project site functionalities are part of the more complex development methods solution, which demonstrates different possibilities of the SharePoint. Branding of the site has been done in matter of hours, so so don’t concentrate on the actual look and feel, rather to the functionalities. I didn’t want to spend too much time with CSS.</p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="544" height="355" /></a> </p> <p> </p> <h3>Project site</h3> <p><em><strong>Requirement</strong>– Customer needs project sites, which should store also site level metadata. This information is used to aggregate site information in multiple places on the Intranet.</em> <em>Project manager, as the site owner, should be able to manipulate the site metadata to indicate progress of possible issues in the project. </em></p> <p><em><strong>Requirement</strong>– Customer wants to have project phases as the document libraries in the project site. These document libraries have different document templates.</em></p> <p>Following steps defines the different features and functionalities developed to be able to provide the requested functionality.</p> <p>1. Create a necessary site columns, which will be used to store the site specific metadata. Let’s create a feature, which defines the necessary site columns based on the requirements. 
Below is example of single site column required to store the organization division information.</p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="544" height="198" /></a> </p> <p>2. Create a content type used for the welcome page of the project site. Since we are creating a publishing page content type, the content type is inherited from the out of the box Page content type and the document template is set as a specific aspx page. </p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="544" height="211" /></a> </p> <p>3. Create necessary document content types used in the project site. Notice that we use the _cts –folder in the target definitions of the template upload (module element). Underneath this folder there will be specific content type folder created, when ever a new content type is provisioned to the site collection. This is good place to store the document templates, unless the templates are frequently updated (another story, another time…).</p> <p> <a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="544" height="182" /></a> </p> <p>4. Create document libraries used in the project site, from where the document content types are provided. In the below image, we introduce a new document library template and create a new instance from it to be used to store documents in the <em>Execute</em> phase of the project. “<em>Why new list template?”</em> – If we only bind the document content types to default document library, the out of the box <em>document</em> template would be still available. </p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="544" height="308" /></a> </p> <p>5. Bind the content types to the specific libraries. In the following image, we bind the specific document content types to the execute phase library. <em>“Why aren’t we binding the content types directly in list template schema</em>?” – this approach allows use to use the same document library template for multiple different instances and just bind the specific document templates depending on the instance.</p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="544" height="328" /></a> </p> <p>6. Let’s not forget to bind the project welcome page content type to the pages library, so that the necessary fields will be provisioned to the list.</p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="544" height="76" /></a> </p> <p>7. Now when the project site welcome page content type is available in the pages library, we can provision initial values for the site created. This can be done by setting the default.aspx page’s metadata appropriately as in the image below. Notice that we the the ContentType value as the content type declared earlier in the feature. 
We also define the initial status values to be “Green”.</p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="544" height="249" /></a> </p> <p>8. We need to create a page layout, which is responsible of rendering the metadata information for the end users (I wont’ declared the detail steps to save some space). Now we can create the site definition used to provision the site. When the site is provisioned, we can see the initial values defined in the module above. “<em>Why a custom site definition?</em>” - If customer would like to utilize the oob team sites and other templates as they are, but they would also like to have this new definition available, custom definition is the only way to go. Since the WCM features are enabled, the site template option is not supported.</p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="544" height="346" /></a> </p> <p>9. Site specific metadata can now be managed directly within the welcome page as long as the fields are rendered correctly in edit mode. </p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="344" height="310" /></a> </p> <p>10. One advantage of this approach is also the possibility to utilize the field controls for content editing. This improves the content editing experience.</p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="544" height="339" /></a> </p> <p>11. When the values have been updated and information is published, we have standardized site model for all of the project sites. This way the end users of the portal can easily see the key information of the specific project from the standardized place in the layout.</p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="544" height="364" /></a> </p> <p>12. As part of the requirements we also created document libraries for each project phase and used content type binding to associate the specific document content types to the specific libraries. As an outcome we are providing different document templates from different document libraries. This way we make the document creation process more efficient and save time for the site end users.</p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="444" height="171" /></a> </p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="444" height="171" /></a> </p> <p> </p> <h3>Project catalog site</h3> <p><em><strong>Requirement</strong> – Customer wants to list dynamically all of the project sites created underneath the particular catalog site. Aggregation should list the sites based on the organization division and the phase of the project. 
Only those sites on which the specific user has access to should be shown.</em></p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="544" height="458" /></a> </p> <p>Solution is to use Content By Query Web parts (CQWP) as part of the site provisioning process and set the web part configuration values appropriately. Since the organization division and the project phase are stored as the metadata of the project site welcome page (created with specific content type), we can simply utilize the standard out of the box functionalities.</p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="544" height="177" /></a> </p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="544" height="110" /></a> </p> <p>Now since we are using CQWP to aggregate the information, all the updates are automatically updated to the lists. Out of the box Site Directory does provide similar functionalities, but the list is not dynamically updated and the information is not stored in the specific site, so that the site owner (for example project manager) could not update the information.</p> <p> </p> <p><em><strong>Requirement</strong>– Only project sites should be created underneath the project catalog site.</em></p> <p>Define the publishing <a href="">features appropriately</a>, so that only the project site is visible in the Create Site functionality.</p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="544" height="148" /></a> </p> <p> </p> <p><em><strong>Requirement</strong>– Customer wants indicate possible issues in the project sites for the managers based on the status updated by the project manager.</em></p> <p>Luckily we had three status indications stored to the the metadata of the sites, which we can utilize. Let’s also create a audience for the managers and show the web part only to those persons, whoe belongs to it. Below is the picture to show dynamically those sites, which have any of the status metadata values selected as “<em>red</em>”.</p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="404" height="183" /></a> </p> <p> </p> <h3>Putting it all together</h3> <p>Of course all of the functionalities are packaged to the solution package (wsp-file), which can be used to reproduce the functionalities in any MOSS deployment. 
This means that we need to create the manifest file, which is used to explain the SharePoint the content of the customizations, as in what features, site definitions, assemblies, web parts etc are included in the solution.</p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="534" height="276" /></a> </p> <p>As addition to the manifest file, we need to create the ddf-file which is used when the functionalities are packaged to the solution package.</p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="544" height="204" /></a> </p> <p>After this, we can deploy the same functionalities to any MOSS deployment and we can be sure that it works identically as in the development environment.</p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="544" height="272" /></a> </p> <p> </p> <h3>Independent site collections</h3> <p>Example design declared in this article works when the project sites are created to same site collection as the project catalog site. This has both advantages and disadvantages. One common design is the have the individual team sites created as own site collection. Also in this case, the WCM features enhance the functionality. If the site metadata information would be stored to external database, we’d have to create custom search functionality.</p> <p>Since the “site metadata” is stored to welcome page based on content type, we can create managed properties of the information stored. Managed properties can then be directly utilized in the search results and we can provide same user experience as in example declared in this article.</p> <h3>Possible enhancements</h3> <p>There are multiple different ways to improve the defined functionality. Here’s few simple ideas.</p> <p>1. <strong>Site approval </strong>– since we are utilizing the WCM functionalities, we could easily attach site approval process before the site would be visible for other portal users. By using minor versioning in the pages library, the site would not be visible for other persons than the site owners before the site is published as a major version. For the publishing process, we can specify any kind of workflow.</p> <p>2. <strong>PKI values for the status</strong> - To indicate the project status more efficiently, we could create a custom field control, which renders the status values using traffic lights in the project site. For the catalog site, this is even more easier, since we can do conditional rendering in the xslt used by the CQWP. This way we could extremely easily create a list of all projects in organization and indicate their status using traffic light model. </p> <p>WCM functionalities provides us numerous different functionalities, so you can easily invent more possibilities.</p> <h3>Considerations</h3> <p>“<em>Why would I create standard team sites, rather than WCM enabled collaboration sites?</em>” – good question. If the requirements can be met with more simplified solution, like feature stapling to out of the box team sites, than there’s no reason to enable WCM in team sites. 
If you need to provide more enhanced functionalities, enabling the WCM functionalities might just be the solution you are looking for.</p> <p>“<em>I use database to store the metadata values of the site, is this bad solution?</em>” - No, but remember to consider he overall consequences of your architecture choice. The solution to choose always depends on the requirements and functionalities to be provided. The downside of the database based model is however the fact that the metadata is not stored within the site and therefore we need to do additional work to ensure that the values are in sync. Also the database based model requires additional work on the operational point of view in sense of disaster recovery and maintenance. The WCM based model keeps the metadata in the site and even though the site location in the hierarchy would be changed, the metadata would be intact, since it’s stored as part of the site.</p> <p>Hopefully this was useful.</p><img src="" width="1" height="1">sonofthesun facing MOSS sites without content deployment<p>Lately there has been quite a lot of discussions concerning Internet facing MOSS sites and <a href="">content deployment</a>. Quite often there’s misconception that MOSS cannot be used as Internet facing platform without separate authoring farm and utilizing content deployment. Assumption is understandable, but completely wrong.</p> <p>By utilizing possibility to have multiple zones for single <em>SharePoint application</em>, we can setup an environment, which is can be accessed by both anonymous users from Internet and content editors from corporate network using windows authentication. </p> <h3>Conceptual model</h3> <p>Zone in the SharePoint basically means different access points to access the same content. Each of these access points can have their individual configuration for the network and for the authentication. When we create a new zone, we actually create new IIS application, which is pointing to the same <em>SharePoint application</em>. </p> <p>This model is often used in extranet scenarios, where the external users are authenticated using forms based authentication (FBA) and internal users are authenticated using windows authentication (NTLM). Same model can of course be used to provide content editing functionality for Internet facing site.</p> <p>Following picture defines the elements more detailed.</p> <p><a href=""><img style="border-right-width: 0px; display: inline; border-top-width: 0px; border-bottom-width: 0px; border-left-width: 0px" title="image" border="0" alt="image" src="" width="544" height="258" /><>External users are seeing only the www.[sitename].com address. This is physically one IIS application, which accesses the MOSS application.</p> <p>Anonymous access is configured to the zone and the <a href="">ViewFormPagesLockDown feature</a> is activated for site collection to enhance security.</p> </td> </tr> <tr> <td valign="top" width="53"> <p align="center"><strong>2</strong></p> </td> <td valign="top" width="472"> <p>Internal users will have access to , which is exposed only to the internal network. 
</p> <p>This secondary IIS application is used for creating the content in windows authentication mode to same MOSS application</p> </td> </tr> <tr> <td valign="top" width="55"> <p align="center"><strong>3</strong></p> </td> <td valign="top" width="472"> <p align="justify">Actual <em>SharePoint</em> <em>application</em> is in the database and both zones access the same content.</p> </td> </tr> </tbody></table> <p> </p> <h3>Infrastructure architecture</h3> <p>Following picture defines the model in infrastructure level. There are multiple different variations of this kind of setup depending on the network policies and possibilities. Each of the elements and few possibilities are declared also in detail.</p> <p><a href=""><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="image" border="0" alt="image" src="" width="522" height="549" /><Internet facing network zone for external people to access the service. Anonymous zone(s) are accessed from this direction. </p> </td> </tr> <tr> <td valign="top" width="53"> <p align="center"><strong>2</strong></p> </td> <td valign="top" width="472"> <p align="left">Internet facing firewall and network load balancer, which permits only necessary ports to be used when accessing the portal.</p> </td> </tr> <tr> <td valign="top" width="55"> <p align="center"><strong>3</strong></p> </td> <td valign="top" width="472"> <p align="justify">Web front end servers actually serving the content for the external url, which is configured to the NLB and to the <a href="">alternate access mapping</a>.</p> <p align="justify">Depending on the <a href="">networks and the security setup</a>, these can actually also be in the corporate network.</p> <p align="left">Since all of the zones created to the MOSS applications are synchronized between the servers, the IIS application for NTLM zone exists in these servers, but there’s no way to external people access this IIS application.</p> </td> </tr> <tr> <td valign="top" width="55"> <p align="center"><strong>4</strong></p> </td> <td valign="top" width="472"> <p align="justify">Optional firewall depending on the network, if the web front ends are in DMZ zone and the farm is divided between network segments.</p> <p align="justify">This is so called <a href="">split-back-to-back farm</a>.</p> </td> </tr> <tr> <td valign="top" width="56"> <p align="center"><strong>5</strong></p> </td> <td valign="top" width="472"> <p align="justify">Optional internal servers, which can be used to access the internal zone configured to use Windows authentication. Zone is only available for corporate network using internal DNS entries.</p> <p align="justify">Depending on the requirements and network, the NTLM zone could actually also exists in the WFE servers as declared partly already in step 3.</p> <p align="justify">This is not usually seen as a valid option, but actually the model is justified with security considerations and with cost saving since we would not need separate authoring farm.</p> </td> </tr> <tr> <td valign="top" width="56"> <p align="center"><strong>6</strong></p> </td> <td valign="top" width="472"> <p align="justify">Internal access point to the portal using NTLM. From here the content editors can access the portal and make necessary changes. </p> </td> </tr> <tr> <td valign="top" width="56"> <p align="center"><strong>7</strong></p> </td> <td valign="top" width="472"> <p align="justify">Database cluster for the MOSS farm. 
8. Index server crawling the sites. Depending on the content and the load, the index server can also act as a WFE server, so that it can crawl the content directly from itself rather than put load on the other WFE servers. There are multiple options here, however, so that is a discussion of its own.

### Considerations

#### Customization development and deployment

When a separate staging or authoring farm is used, more detailed processes are needed to keep the farms in sync so that content publishing works properly. In this option there is only one farm, so the deployment practices are simpler (a separate QA farm is of course still recommended).

Another advantage of the single farm setup is that custom solutions become simpler. Take a simple feedback functionality as an example (a code sketch follows below):

- Objective: collect feedback from end users by providing a simple form for information entry, and store the submitted feedback in a secured list in SharePoint that only the web masters can access.
- Single farm with zones: create the feedback list in the root web and set up its access control appropriately. Use elevated privileges in the custom form to write the feedback to the list. Web masters can access the list directly using the NTLM zone.
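The post does not include the code for the feedback form itself; the following is a minimal sketch of the idea, not from the original post. The list name ("Feedback") and field names are hypothetical, and error handling is omitted. It simply shows how a custom form could use elevated privileges to write to a list that anonymous users cannot read.

```csharp
using System;
using Microsoft.SharePoint;

public static class FeedbackWriter
{
    // Writes a feedback entry to a secured list using elevated privileges.
    // "Feedback" and the field names are hypothetical; adjust to the actual list schema.
    public static void SaveFeedback(string siteUrl, string sender, string comments)
    {
        SPSecurity.RunWithElevatedPrivileges(delegate()
        {
            // New SPSite/SPWeb objects must be created inside the elevated block
            // so that they run under the application pool identity.
            using (SPSite site = new SPSite(siteUrl))
            using (SPWeb web = site.OpenWeb())
            {
                SPList list = web.Lists["Feedback"];
                SPListItem item = list.Items.Add();
                item["Title"] = sender;
                item["Comments"] = comments;

                // Allow the update even though the request comes from an anonymous zone.
                web.AllowUnsafeUpdates = true;
                item.Update();
                web.AllowUnsafeUpdates = false;
            }
        });
    }
}
```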
#### Management

Since there is only one farm to manage and monitor, the overall cost of the MOSS deployment is considerably smaller than with two farms, and keeping separate farms in sync is more complex than configuring a single farm. The model also decreases the storage requirements, since the databases of the individual MOSS farms are not duplicated.

#### Security

All the concepts described in this article depend on the network configuration and security settings. Especially when we expose services to the Internet, we really need to ensure that the network and the farm are properly configured. All customizations also have to follow the security recommendations, so that no sensitive information is exposed.

#### Content editing

Since publishing is instant when the *Publish* button is clicked, the content can be easily modified and managed. There is no delay in getting information released to the Internet, apart from any caching configuration.

It is of course important to cover the model in the prepared training material, for example by instructing content editors to use relative addresses for resources such as images and for links within the portal.

### Real life experiences

We have used this model successfully in multiple Internet facing sites over the past few years. Administrators have been pleased that there is only one farm to manage and monitor, content editors have been pleased with the simplified authoring model, and customer directors have been pleased that the model reduces the investment in hardware and in possible customizations.

Hopefully this provides more insight into the flexibility of SharePoint and the different possibilities it provides. Every enterprise architecture is independent and has its own environmental characteristics, so the guidance provided on TechNet or in numerous blogs should always be adapted to the specific engagement.

## 14 for Web

This was such a cool video that I had to publish the link. Pretty interesting functionality is coming up.

## English and SharePoint - localization techniques for WCM sites

While working with locale-specific RESX resources for a WCM site, the following error came up:

*Compilation Error. Description: An error occurred during the compilation of a resource required to service this request. Please review the following specific error details and modify your source code appropriately. Compiler Error Message: CS0101: The namespace 'Resources' already contains a definition for 'CUSTOMER'*

Well, that's interesting, since based on the .NET 2.0 documentation the en-CB locale is officially supported. A little more digging turned up a KB article about issues with a few locale codes when a certain security fix has been installed. The KB article states that in this case the locale should actually be en-029. After renaming the RESX file accordingly, everything started working again and the translations were successful.

### Improving the translation logic

Obviously the situation described above is scary: you can pull down a whole farm just by providing a wrongly named resource file. In this case the problem did not cause any catastrophic issues in production.

What can you do? One option is to take control of the resource resolution yourself (a small defensive sketch follows at the end of this post):

- MSDN - Extending the ASP.NET 2.0 Resource-Provider Model

Hopefully this was useful.
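The linked MSDN article describes building a full custom resource provider. A lighter-weight defensive measure, shown below as a sketch that is not from the original post, is to wrap runtime resource lookups so that a missing translation degrades gracefully. It will not catch build-time RESX naming collisions like the CS0101 error above, but it keeps a missing resource from failing a page at runtime. The class key "CUSTOMER" and the resource key are only illustrative.

```csharp
using System;
using System.Globalization;
using System.Web;

public static class SafeResources
{
    // Looks up a global resource and falls back to a default value if the
    // resource class or key cannot be resolved for the current culture.
    public static string Lookup(string classKey, string resourceKey, string fallback)
    {
        try
        {
            object value = HttpContext.GetGlobalResourceObject(
                classKey, resourceKey, CultureInfo.CurrentUICulture);
            return value as string ?? fallback;
        }
        catch (Exception)
        {
            // A missing .resx should degrade gracefully instead of failing the page.
            return fallback;
        }
    }
}

// Usage (illustrative): label.Text = SafeResources.Lookup("CUSTOMER", "WelcomeText", "Welcome");
```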
## Planning and application lifecycle management

This post describes the environments and phases of a SharePoint development and deployment process, continuing from the earlier post on continuous integration (Continuous integration in MOSS development using TFS).

The steps and phases are the following:

1. Developers develop individual features and functionalities based on the technical specification using their independent virtualized environments, which have access to the TFS server for work items, source control and so on. The virtualized environments have to be kept in sync with the production environment concerning licensing and patching: customizations developed against an Enterprise license do not necessarily work in a Standard environment, and patches and service packs should also be kept in sync. Make sure it is somebody's responsibility to keep the virtualized environments up to date.
2. TFS server used to store source code and other project related information. Developers can also sync their environments using the artifacts stored in TFS. Obviously, developers must have a reliable connection to the source control repository.
3. SQL Server instance of the TFS, used as the actual storage for the different artifacts and documents. Ensure that this database is fully managed, monitored and operational; otherwise development cannot proceed.
4. Development integration server, which is used to verify the builds from TFS, preferably using an automated build process. The server should be kept in sync with the production environment. The integration server is used for integration and deployment testing; the initial functional testing of the full package should also be conducted in this environment.
5. Project members (for example the project manager, testers, and in some cases customer representatives) can follow the progress of the project and give feedback based on the deployed builds. Define the ground rules for accepting deployments from the integration environment to the quality assurance environment; there have to be clear rules to follow for accepting a deployment into the following phases.
6. Quality assurance environment used for functional testing and acceptance testing. Ideally this environment is identical to the production environment, so that it can also be used for load testing. Quite often, though, the environment is virtualized for more convenient maintenance. Load testing can nevertheless be conducted in a virtualized environment if the project performs initial baseline load testing during the first version of the portal; future load testing results can then be compared to these baseline values. In terms of licensing and configuration, the environment should be completely identical to production, which ensures that a deployment that succeeds in this environment will also work in production.
7. SQL Server of the quality assurance environment. Obviously the configuration should be identical to production.
   The operational setup should also follow production, so that full portal behavior can be observed in this environment as well.
8. MOSS environment used for production. There should be clear responsibilities and operational guidance for this environment, to ensure that possible issues caused for example by customizations can be solved in a timely fashion. One of the key things to document and follow is the version handling model of the customizations (how things are updated); as written earlier, there has to be a crystal clear model for deploying new versions and guidance to follow if any issues are encountered.

### Considerations

The following chapters list a few points to consider when the full process is planned, including some pointers based on issues I have personally run into with multiple partners and customers.

#### Are the code and SharePoint artifacts in a safe place?

Basically this means that the source control system should be highly available, or that there is at least a backup plan for accessing the source code in a timely fashion, so that for example possible SLAs can be met.

#### Is the process highly available?

#### Is there a clear model for updating the customizations?

Personally I have seen way too many SharePoint deployments that have been built up nicely, but where life gets overly complicated as soon as any of the customizations needs to be updated.

#### Is the customization deployment model scalable?

### Summary

## Microsoft Certified Master (MCM) for SharePoint

Have you already completed the four MCTS certifications available for SharePoint and would like more of a challenge?

- MCTS: Windows SharePoint Services 3.0 - Configuration
- MCTS: Microsoft Windows SharePoint Services 3.0 - Application Development
- MCTS: Microsoft Office SharePoint Server 2007 - Configuration
- MCTS: Microsoft Office SharePoint Server 2007 - Application Development

Would you like to distinguish yourself as a real subject matter expert for SharePoint? Check the following links for more information concerning the upcoming Microsoft Certified Master (MCM) for SharePoint.

- Microsoft Certified Master Program
- Climbing the Ladder of Success with Microsoft Certification
- More on the Certified Master programs from me, Per, the program owner...
- So you think you qualify for the Microsoft Certified Master Program
- Live Meeting recording

## SharePoint Server 2007 TechNet content released as .chm file

I just noticed that the SharePoint IT pro documentation team has released the Office SharePoint Server 2007 TechNet content as a downloadable file, so the content can also be accessed off-line (intercontinental flights and so on). According to the announcement, the package will be updated monthly.
This is also a good reference if you are at a customer's premises and want to verify something without access to the actual site.

- Blog entry: You asked for it, you got it: .chm builds of library content
- Download: Office SharePoint Server 2007 Technical Library in Compiled Help format

## Continuous integration in MOSS development using TFS

I've been delivering quite a few technical trainings during the past year, and one of the most discussed topics is the setup of development environments for large scale projects. Especially large ISVs are really interested in the practicalities of using TFS as the continuous integration (CI) and/or application lifecycle management (ALM) platform. For standard .NET projects this has been the way to manage large projects, and it's obvious that the same investment and practices are wanted for SharePoint based development as well. A similar setup can also be built on other tools, for example CruiseControl.NET (CCNet).

### Setting up the Visual Studio solution for the TFS

- MSDN - Automating Solution Package Creation for Windows SharePoint Services by Using MSBuild

### Creating the auto build project for TFS

The build project is extended so that, after compilation, the generated solution package is copied to the drop location and a deployment script is executed:

```xml
<Target Name="AfterCompile">
  <Copy SourceFiles="$(SolutionRoot)\[TFSProjectName]\[ProjectName]\SolutionFiles\Package\[SolutionPackageName].wsp"
        DestinationFolder="$(DropLocation)" />
  <Exec Command="C:\TFS\rebuild.bat"
        WorkingDirectory="$(DropLocation)" />
</Target>
```
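The content of rebuild.bat is not shown in the post. As a rough illustration only, under the assumption that it wraps standard stsadm operations (the solution name and paths below are hypothetical), the script or a small console helper called from the Exec task could retract and redeploy the package like this:

```csharp
using System;
using System.Diagnostics;

// Rough illustration only: the original rebuild.bat is not published in the post.
// Shells out to stsadm to retract and redeploy a solution package.
class RedeploySolution
{
    static int RunStsadm(string arguments)
    {
        ProcessStartInfo psi = new ProcessStartInfo(
            @"C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN\stsadm.exe",
            arguments);
        psi.UseShellExecute = false;
        using (Process p = Process.Start(psi))
        {
            p.WaitForExit();
            return p.ExitCode;
        }
    }

    static void Main(string[] args)
    {
        string wsp = args.Length > 0 ? args[0] : "MyCompany.Intranet.wsp"; // hypothetical name

        // Retract and delete any previous version; failures here are expected
        // when the solution has not been deployed yet.
        RunStsadm("-o retractsolution -name " + wsp + " -immediate");
        RunStsadm("-o execadmsvcjobs");
        RunStsadm("-o deletesolution -name " + wsp);

        // Add and deploy the freshly built package.
        RunStsadm("-o addsolution -filename " + wsp);
        RunStsadm("-o deploysolution -name " + wsp + " -immediate -allowgacdeployment");
        RunStsadm("-o execadmsvcjobs");
    }
}
```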
*Note: the use of rebuild.bat in the MSBuild target above assumes that SharePoint is installed on the same server where the build runs, which most of the time is not the case. An alternative solution is described below.*

The build report in TFS shows the result of each automated build, and the build log (BuildLog.txt) contains a huge amount of detail about the actions taken in a particular build. All of the MakeCab output is also included in the log, which helps when analyzing the SharePoint solution package compilation.

### Adding a rebuild of the portal to the scenario

Once the wsp package has been created, it of course has to be deployed to the portal before it can be tested. This can be done manually, but it can also be automated, so that the portal is recreated automatically as part of the auto build process.

Personally I have done this in a few different ways. Initially I created a console application, which was executed as a scheduled task by the Windows OS. A more convenient way to do the same is to create a few new extensions to stsadm.

The following describes one approach used; the tasks depend on the type of development and can be customized based on your requirements.

**Objectives**

1. Redeploy the new solution package to the farm, removing any previous versions if they exist
2. Recreate the portal hierarchy using portal site definitions
3. Grant access to the newly created hierarchy for the project managers and testers

The custom stsadm commands used for this are the following (a skeleton of such a command follows after the table):

| Command | Description |
| --- | --- |
| deploysolutionadv | Deploys the new solution package to the farm, retracting and removing any previous versions if they exist. Used to redeploy the solution package as part of the daily builds. |
| recreatesitecollection | Recreates a site collection using a specific template given as a parameter; if the site collection already exists in the farm, it is deleted first. Used to recreate the site collection for the daily builds. Portal site definitions are a great way to provision full hierarchies for the newly created site collection. |
| assignuserstogroup | Grants access to the defined site collection for the users given as a parameter. Used to give access to the newly created site to the people responsible for verification tasks. |
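The implementation of these commands is not published in the post. As a rough sketch of what a custom stsadm extension generally looks like (assuming the MOSS 2007 ISPStsadmCommand extensibility interface; the class name, parameters and creation details are hypothetical, and the command still has to be registered through a stsadmcommands.*.xml file under 12\CONFIG), a command along the lines of recreatesitecollection might be shaped like this:

```csharp
using System;
using System.Collections.Specialized;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;
using Microsoft.SharePoint.StsAdmin;

// Hypothetical skeleton of a custom stsadm command similar to "recreatesitecollection".
public class RecreateSiteCollectionCommand : ISPStsadmCommand
{
    public string GetHelpMessage(string command)
    {
        return "-url <site collection url> -template <site definition, e.g. MYPORTAL#0> -owner <domain\\user>";
    }

    public int Run(string command, StringDictionary keyValues, out string output)
    {
        string url = keyValues["url"];
        string template = keyValues["template"];
        string owner = keyValues["owner"];

        SPWebApplication webApp = SPWebApplication.Lookup(new Uri(url));

        // Delete the existing site collection, if there is one.
        try
        {
            webApp.Sites.Delete(url);
        }
        catch (Exception)
        {
            // Nothing to delete; ignore for the purposes of this sketch.
        }

        // Recreate the site collection from the given (portal) site definition.
        using (SPSite site = webApp.Sites.Add(url, "Daily build", string.Empty,
            1033, template, owner, "Build owner", string.Empty))
        {
            output = "Recreated " + site.Url + " from template " + template;
        }
        return 0; // stsadm convention: 0 indicates success
    }
}
```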
</p> <p><a href=""><img style="border-top-width: 0px; border-left-width: 0px; border-bottom-width: 0px; border-right-width: 0px" height="418" alt="CI for MOSS development" src="" width="520" border="0"></a> </p> <p>Following table defines the steps and phases one-by-one.</p> <table cellspacing="0" cellpadding="2" width="518" border="1"> <tbody> <tr> <td valign="top" width="26"> <p align="center"><strong>#</strong></p></td> <td valign="top" width="490"> <p align="justify"><strong>Phase / Element</strong></p></td></tr> <tr> <td valign="top" width="31"> <p align="center"><strong>1</strong></p></td> <td valign="top" width="490"> <p align="justify">Developers develop individual features and functionalities based on module plan (part of the technical specification) using their independent virtualized environments, which have access to the TFS server for work items, source control etc.</p></td></tr> <tr> <td valign="top" width="36"> <p align="center"><strong>2</strong></p></td> <td valign="top" width="490"> <p align="justify">TFS Server used to store source code and other project related information. TFS is scheduled to build the integrated version of the package using build automation functionalities.</p> <p align="justify">Developers can also sync their environment using the artifacts stored in TFS.</p></td></tr> <tr> <td valign="top" width="41"> <p align="center"><strong>3</strong></p></td> <td valign="top" width="490"> <p align="justify">Development integration server, which is used to setup the outputs from the TFS. If required, this server environment can be utilized by multiple projects as long as they have separate application on which the solution is automatically deployed (often the case in ISV environments).</p></td></tr> <tr> <td valign="top" width="45"> <p align="center"><strong>4</strong></p></td> <td valign="top" width="490"> <p align="justify">Project members (for example project manager, testers and even customers representatives in some cases) can follow the progress of the project and give feedback based on the builds deployed.</p> <ul> <li> <div align="justify">By providing instance access to daily builds, project will get instant feedback for the developed functionalities</div> <li> <div align="justify">Daily builds provide flexibility to follow the progress and to discover any required changes in the design as early as possible</div></li></ul></td></tr></tbody></table> <p. </p> <p.</p> <p.</p> <p> </p> <h3>SharePoint artifact development</h3> <p).</p> <p.</p> <p> </p> <h3>Real life experiences</h3> <p. </p> <p>Similar setup would be however extremely useful for also any ISV, which does SharePoint development. Since the recommended deployment method for any customizations in the SharePoint landscape is to use <a href="" target="_blank">solution packages</a>, this process would be useful to any development project, no matter the amount of the customizations (from one web part to enterprise projects with tens of developers).</p> <p.</p> <p> </p> <h3>Summary & more information</h3> <p. 
Links to the concepts covered in this blog post:

- Overview of Team Foundation Build
- How to: Extend the STSADM Utility
- SharePoint Solutions Overview
- Automating Solution Package Creation for Windows SharePoint Services by Using MSBuild

I'll write more guidelines concerning ALM (Application Lifecycle Management) and other project practices for SharePoint development in upcoming posts.

## Central Administration database to use in MOSS installation

By default, when you install a new MOSS/WSS farm, the content database for the Central Administration application is named automatically using a standard prefix (SharePoint_AdminContent_) and a randomly generated GUID, to avoid any problems if multiple MOSS farms are installed against the same SQL Server.

Usually the companies hosting the databases would also like to know the exact names of all the databases created as part of the installation process beforehand, so that they can establish the necessary operational activities (backups and so on).

To control the names, run the following from the command line instead of the configuration wizard (change the values, especially the farm account details, to match your environment):

```
psconfig -cmd configdb -create -server servername -database MOSS_Config -user domain\farmaccount -password accountpwd -admincontentdatabase MOSS_Content_Admin
```

*Note: make sure that the account you are using has sufficient access rights to the database server, as declared on TechNet.*

When the configuration has finished, the configuration database (in this case MOSS_Config) has been created on the SQL Server (in this case server "servername") and you can restart the configuration wizard. In the wizard you can see that the initial values have already been set and that the server has already been attached to the newly created MOSS farm.

## Governance and training material for SharePoint

I'm not even counting anymore the times customers have been amazed that Microsoft provides this kind of guidance and training material... Yes, we actually do something else besides installing the products and servers, especially with MOSS.
### Governance and general guidance

- SharePoint Gear Up: a huge amount of good material on how to plan an overall project based on the SharePoint technologies.
- SharePoint Governance site on TechNet: an excellent site with numerous links to relevant documents, guidance and information.
- SharePoint Governance Checklist Guide: an excellent checklist of what to plan and decide as part of governance planning.
- SharePoint Governance and Manageability (CodePlex): code examples and tools to manage sites more easily.
- Example governance plan: a sample by Mark Wagner and Joel Oleson.

### Training material

#### Microsoft Office SharePoint Server 2007 Training Standalone Edition

- Steps through beginning to advanced features, including Collaboration, Business Processes and Forms, Portals and Personalization, Search, Business Intelligence and Enterprise Content Management.
- Includes videos, interactive tutorials, and articles.
- Accessed through your browser after you install the application on your personal computer.

#### Microsoft Office SharePoint Server 2007 Training Portal Edition

- Built on the Microsoft SharePoint Learning Kit.
- Designed for server administrators who want to help their end users learn how to use the features of Microsoft Office SharePoint Server 2007.
- Steps through beginning to advanced features, including Collaboration, Business Processes and Forms, Portals and Personalization, Search, Business Intelligence and Enterprise Content Management.
- Includes videos, interactive tutorials, and articles.
- The material is SCORM compliant.
- Training topics can easily be added or removed to fit your business needs.
- Includes a reporting function that allows an administrator or trainer to track learners' completed training topics.
- The training can be customized to fit the look and feel of your own Office SharePoint Server site.

## Days 2008 - Example solution package

The example solution package is available on SkyDrive. I'll write more about the functionality demonstrated in the VS solution in upcoming blog posts, including instructions and guidelines for extending and modifying the solution based on project requirements.

*If you have any questions concerning the structures, feel free to add comments on this blog entry...*

## A WebCreated event of your own in onet.xml

### Possible workarounds

1. Create your own custom site definition using the browser and export it using the WSS 3.0 Extensions for VS 2005. As part of the export process the tools generate the necessary code and associations. Unfortunately the extensions do not support WCM enabled sites, so this cannot be used for publishing sites.
2. Use the ExecuteUrl element to redirect the user to a custom aspx page after the site has been created, and customize the site there.
   Your custom aspx page could be located under the _layouts folder, so it is easy to point to it using the xml elements within the onet.xml.
3. Create the WebCreated event using the powerful feature framework. The steps are described below in more detail.

### Steps for a WebCreated event using a feature receiver

A step-by-step guide for manually creating a WebCreated event:

1. Create a feature that deploys default.aspx to the root of the site using a Module element
2. Create a feature with a feature receiver class; this acts as the "site created" event
3. Create a site definition that does not contain default.aspx directly, but uses the feature developed above instead
4. Set the order of the features so that the default.aspx feature is activated before the receiver feature
5. In the feature receiver, access the default aspx page, for example by getting the web part manager of the welcome page using the following code

```csharp
// look for the default page so we can mess with the web parts
SPFile thePage = curWeb.RootFolder.Files["default.aspx"];

// get the web part manager
SPLimitedWebPartManager theMan = thePage.GetLimitedWebPartManager(
    System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared);
```

A more complete receiver skeleton follows at the end of this post.

### Final words

This was an extremely quick sample, but hopefully it's useful to you. I'll try to find some time to make a more comprehensive example of this.
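The snippet above assumes an existing `curWeb` variable. For completeness, here is a minimal sketch, not from the original post, of a feature receiver that obtains the web from the activation properties and then works against the welcome page's web part manager. A Web-scoped feature and the activation order described above are assumed.

```csharp
using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.WebPartPages;
using System.Web.UI.WebControls.WebParts;

// Minimal sketch of the "WebCreated" receiver idea described above.
// Assumes a Web-scoped feature that is activated after the feature
// provisioning default.aspx.
public class WebCreatedReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        SPWeb curWeb = properties.Feature.Parent as SPWeb;
        if (curWeb == null)
        {
            return; // not activated at web scope
        }

        // Look for the default page so we can adjust its web parts.
        SPFile thePage = curWeb.RootFolder.Files["default.aspx"];
        SPLimitedWebPartManager manager =
            thePage.GetLimitedWebPartManager(PersonalizationScope.Shared);

        // ... add, remove or configure web parts here, then save the changes ...
    }

    public override void FeatureDeactivating(SPFeatureReceiverProperties properties) { }
    public override void FeatureInstalled(SPFeatureReceiverProperties properties) { }
    public override void FeatureUninstalling(SPFeatureReceiverProperties properties) { }
}
```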
</p> <p><span style="color: rgb(0,0,255)"><</span><span style="color: rgb(163,21,21)">Feature</span><span style="color: rgb(0,0,255)"> </span><span style="color: rgb(255,0,0)">Id</span><span style="color: rgb(0,0,255)">=</span>"<span style="color: rgb(0,0,255)">22A9EF51-737B-4ff2-9346-694633FE4416</span>"<br><span style="color: rgb(0,0,255)"> </span><span style="color: rgb(255,0,0)">Title</span><span style="color: rgb(0,0,255)">=</span>"<span style="color: rgb(0,0,255)">Publishing</span>"<br><span style="color: rgb(0,0,255)"> </span><span style="color: rgb(255,0,0)">Description</span><span style="color: rgb(0,0,255)">=</span>"<span style="color: rgb(0,0,255)">Enable Publishing in a web.</span>"<br><span style="color: rgb(0,0,255)"> </span><span style="color: rgb(255,0,0)">Version</span><span style="color: rgb(0,0,255)">=</span>"<span style="color: rgb(0,0,255)">12.0.0.0</span>"<br><span style="color: rgb(0,0,255)"> </span><span style="color: rgb(255,0,0)">Scope</span><span style="color: rgb(0,0,255)">=</span>"<span style="color: rgb(0,0,255)">Web</span>"<br><span style="color: rgb(0,0,255)"> </span><span style="color: rgb(255,0,0)">Hidden</span><span style="color: rgb(0,0,255)">=</span>"<span style="color: rgb(0,0,255)">TRUE</span>"<br><span style="color: rgb(0,0,255)"> </span><span style="color: rgb(255,0,0)">DefaultResourceFile</span><span style="color: rgb(0,0,255)">=</span>"<span style="color: rgb(0,0,255)">core</span>"<br><span style="color: rgb(0,0,255)"> </span><span style="color: rgb(255,0,0)">ReceiverAssembly</span><span style="color: rgb(0,0,255)">=</span>"<span style="color: rgb(0,0,255)">Microsoft.SharePoint.Publishing, Version=12.0.0.0, <br> Culture=neutral, PublicKeyToken=71e9bce111e9429c</span>"<br><span style="color: rgb(0,0,255)"> </span><span style="color: rgb(255,0,0)">ReceiverClass</span><span style="color: rgb(0,0,255)">=</span>"<span style="color: rgb(0,0,255)">Microsoft.SharePoint.Publishing.PublishingFeatureHandler</span>"<br><span style="color: rgb(0,0,255)"> </span><span style="color: rgb(255,0,0)">xmlns</span><span style="color: rgb(0,0,255)">=</span>"<span style="color: rgb(0,0,255)"></span>"<span style="color: rgb(0,0,255)">><br> <</span><span style="color: rgb(163,21,21)">ElementManifests</span><span style="color: rgb(0,0,255)">><br> ...</span><span style="color: rgb(0,0,255)"><br> </</span><span style="color: rgb(163,21,21)">ElementManifests</span><span style="color: rgb(0,0,255)">><br></</span><span style="color: rgb(163,21,21)">Feature</span><span style="color: rgb(0,0,255)">></span></p><a href=""><a href=""></a> <p.</p> <h2>Master page setting<)">ChromeMasterUrl</span>"<span style="color: rgb(0,0,255)"> <br> </span><span style="color: rgb(255,0,0)">Value</span><span style="color: rgb(0,0,255)">=</span>"<span style="color: rgb(0,0,255)">~SiteCollection/_catalogs/masterpage/MBaseMaster.master</span>"<span style="color: rgb(0,0,255)">/></span></pre><a href=""></a> <p.</p> <p.</p> <p><a href=""><img style="border-top-width: 0px; border-left-width: 0px; border-bottom-width: 0px; border-right-width: 0px" height="338" alt="image" src="" width="546" border="0"></a> </p> <p> </p> <h2>Welcome Page)">WelcomePageUrl</span>"<span style="color: rgb(0,0,255)"> <br> </span><span style="color: rgb(255,0,0)">Value</span><span style="color: rgb(0,0,255)">=</span>"<span style="color: rgb(0,0,255)">$Resources:cmscore,List_Pages_UrlName;/default.aspx</span>"<span style="color: rgb(0,0,255)">/></span></pre> <p. 
In the user interface, the welcome page can be set using the Welcome page link, which can be found under the Look and Feel section of the Site Settings page. On the welcome page settings page, you can browse to the file you want using the standard asset picker.

### Page list

```xml
<Property Key="PagesListUrl" Value=""/>
```

You can use this property to define some other list to be used as the pages library. By default the WCM pages are stored in the Pages list, but if you like, you can change the setting by adding the list name to this property.

### Available web templates

```xml
<Property Key="AvailableWebTemplates" Value="*-Microsoft.Intranet.POC.Project#1"/>
```

This setting can be used to filter the site definitions shown on the *Create Site* page, regardless of the application used. If the property is left empty, all of the installed site definitions are available. Multiple templates can be configured using the following syntax; in this example two different site definitions would be available:

```xml
<Property Key="AvailableWebTemplates"
          Value="*-Microsoft.Intranet.POC.Generic#1;*-Microsoft.Intranet.POC.News#1;"/>
```

In the user interface, you can configure the same setting from the *Page layouts and site templates* functionality found under the *Look & Feel* section of the *Site Settings* page. Using this functionality you can manually select the site definitions to be shown; here too, all of the site definitions installed on the MOSS farm are listed, regardless of the particular application.

The *Create Site* page then shows only the configured site definitions. In this example we are under the *Projects Catalog* site and, based on the portal design, it has been decided that only *Project* sites should be created under it.
</p> <p><a href=""><img style="border-top-width: 0px; border-left-width: 0px; border-bottom-width: 0px; border-right-width: 0px" height="173" alt="image" src="" width="544" border="0"></a> </p> <p> </p> <h2>Available Page Layouts<PageLayouts</span>"<span style="color: rgb(0,0,255)"> <br> </span><span style="color: rgb(255,0,0)">Value</span><span style="color: rgb(0,0,255)">=</span>"<span style="color: rgb(0,0,255)">~SiteCollection/_catalogs/masterpage/MGenericBodyOnly.aspx:<br> ~SiteCollection/_catalogs/masterpage/MGenericImageLeft.aspx:<br> ~SiteCollection/_catalogs/masterpage/MGenericImageRight.aspx:<br> ~SiteCollection/_catalogs/masterpage/MGenericImageTop.aspx:<br> ~SiteCollection/_catalogs/masterpage/MGenericLinks.aspx:<br> ~SiteCollection/_catalogs/masterpage/MSectionPage.aspx</span>"<span style="color: rgb(0,0,255)">/></span></pre> <p>This property is similar as the AvailableWebTemplates, but it applies on the page layout level. Using this property, you can filter the page layouts to be shown in the <em>Create Page</em>.</p> <p>From the user interface, the similar would be configured using the the <em>Page layouts and site templates</em> functionality found under the <em>Look & Feel</em> section of the <em>Site Settings</em> page.</p><pre class="code"><a href=""><img style="border-top-width: 0px; border-left-width: 0px; border-bottom-width: 0px; border-right-width: 0px" height="166" alt="image" src="" width="551" border="0"></a> </pre> <p> </p> <p>So when the configuration has been done for the site and you select <em>Create Page</em> from the <em>Site Actions</em> menu, we can see only the configured page layouts to be shown. </p><a href=""></a> <p><a href=""><img style="border-top-width: 0px; border-left-width: 0px; border-bottom-width: 0px; border-right-width: 0px" height="218" alt="image" src="" width="551" border="0"></a> </p> <p> </p> <h2>Simple Publishing<)">SimplePublishing</span>"<span style="color: rgb(0,0,255)"> </span><span style="color: rgb(255,0,0)">Value</span><span style="color: rgb(0,0,255)">=</span>"<span style="color: rgb(0,0,255)">true</span>"<span style="color: rgb(0,0,255)"> /></span></pre><a href=""></a> <p.</p> <p><a href=""><img style="border-top-width: 0px; border-left-width: 0px; border-bottom-width: 0px; border-right-width: 0px" height="132" alt="image" src="" width="535" border="0"></a> </p> <p> </p> <h2>Final words</h2> <p>Using these settings and properties, you can fairly easily control the different publishing settings for the particular site. On the upcoming post, I'll declare the detailed steps to write a custom feature receiver to be able to configure also those properties, which are not by default available. 
Hopefully this helps.

## Navigation options from the onet.xml

### Introduction

If you have played around with the onet.xml files included out-of-the-box in MOSS (the Publishing template and so on), you have most likely noticed the publishing navigation feature and its properties in the WebFeatures element, as in the xml block below.

```xml
<WebFeatures>
  ...
  <Feature ID="541F5F57-C847-4e16-B59A-B31E90E6F9EA">
    <Properties xmlns="">
      <Property Key="InheritGlobalNavigation" Value="true"/>
      <Property Key="ShowSiblings" Value="true"/>
      <Property Key="IncludeSubSites" Value="true"/>
    </Properties>
  </Feature>
  ...
</WebFeatures>
```

The feature ID refers to the feature stored by default in the following folder:

> C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\FEATURES\NavigationProperties

And the feature.xml file found there contains the following information:

```xml
<Feature Id="541F5F57-C847-4e16-B59A-B31E90E6F9EA"
         ...>
  <ElementManifests>
    <ElementManifest Location="NavigationSiteSettings.xml"/>
  </ElementManifests>
</Feature>
```

### Supported parameters

So what are the parameters supported by the NavigationFeatureHandler class, and how do they compare to the settings made from the user interface (*Site Actions -> Site Settings -> Navigation*)? The supported parameters are described one by one below.

#### IncludeInGlobalNavigation, IncludeInCurrentNavigation

These control the IncludeInGlobalNavigation and IncludeInCurrentNavigation properties of the PublishingWeb.

#### InheritGlobalNavigation

This parameter controls the global navigation options. If set to true, we get the same outcome as selecting "*Display the same navigation items as the parent site*" for the global navigation.

#### InheritCurrentNavigation

This controls the inheritance of the current navigation. If set to true, we get the same result as selecting "*Display the same navigation items as the parent site*" for the current navigation in the user interface.

#### ShowSiblings

If set to true, the outcome is the same as the "*Display the current site, the navigation items below the current site, and the current site's siblings*" option in the user interface. Note that the IncludeSubSites and IncludePages parameters also affect the outcome.
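The NavigationFeatureHandler itself ships with MOSS, but the same property-passing mechanism is available to custom features as well. As a generic illustration only, not from the original post and assuming that the properties declared for the feature are surfaced through SPFeature.Properties at activation time, a receiver can read them like this:

```csharp
using System;
using Microsoft.SharePoint;

// Generic sketch: reading the <Properties> block that onet.xml or feature.xml
// passes to a feature, the same mechanism the navigation feature relies on.
public class NavigationLikeReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        SPFeaturePropertyCollection props = properties.Feature.Properties;

        bool inheritGlobal = ReadBool(props, "InheritGlobalNavigation");
        bool showSiblings = ReadBool(props, "ShowSiblings");
        bool includeSubSites = ReadBool(props, "IncludeSubSites");

        // ... apply the values to the web's navigation settings here ...
    }

    private static bool ReadBool(SPFeaturePropertyCollection props, string key)
    {
        foreach (SPFeatureProperty property in props)
        {
            if (string.Equals(property.Name, key, StringComparison.OrdinalIgnoreCase))
            {
                return string.Equals(property.Value, "true", StringComparison.OrdinalIgnoreCase);
            }
        }
        return false;
    }

    public override void FeatureDeactivating(SPFeatureReceiverProperties properties) { }
    public override void FeatureInstalled(SPFeatureReceiverProperties properties) { }
    public override void FeatureUninstalling(SPFeatureReceiverProperties properties) { }
}
```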
<p><a href=""><img style="border-top-width: 0px; border-left-width: 0px; border-bottom-width: 0px; border-right-width: 0px" height="109" src="" width="515" border="0"></a> <p><b></b> <h3><b></b> </h3> <h3><b>IncludeSubSites<>IncludePages<>OrderingMethod</b> </h3> <p>This option affects to ordering of the navigation items. Note that the final outcome depends also from the <em>AutomaticSortingMathod </em>and the <em>SortAscending</em> properties.</p> <p><a href=""><img style="border-top-width: 0px; border-left-width: 0px; border-bottom-width: 0px; border-right-width: 0px" height="93" src="" width="240" border="0"></a> <p><em>Possible values</em> <p><b></b> <p><b>Automatic</b> - Sort all node types automatically, and group pages after other types. <p><b>Manual</b> - Sort all types manually. <p><b>ManualWithAutomaticPageSorting</b> - Sort all types except pages manually. If pages are included, sort them automatically and group them after all other types. <p><b></b> <h3><b></b> </h3> <h3><b>AutomaticSortingMathod and </b> <b>SortAscending</b> </h3> <p>These controls the sorting of the navigation items. Possible outcome depends on numerous other properties, since for example the AutomaticSortingMathod property has only meaning, if the <em>OrderingMethod</em> has been set to <em>ManualWithAutomaticPageSorting</em>.</p> <blockquote> <p><em>Note. It's not a typo... it's really AutomaticSortingMathod...</em></p></blockquote> <p><a href=""><img style="border-top-width: 0px; border-left-width: 0px; border-bottom-width: 0px; border-right-width: 0px" height="90" src="" width="240" border="0"></a> <p>Possible values for the <em>AutomaticSortingMathod</em> property <p><b>CreatedDate</b> - Sort items by time of creation. <p><b>LastModifiedDate</b> - Sort items by time of last modification. <p><b>Title</b> - Sort items alphabetically by title. <p><a href=""><img style="border-top-width: 0px; border-left-width: 0px; border-bottom-width: 0px; border-right-width: 0px" height="90" src="" width="240" border="0"></a> <p> </p> <h2><strong>Final words</strong></h2> <p).</p> <p>More information concerning the functionalities declared here can be found from the SDK.</p> <ul> <li><a href="" target="_blank">Feature.xml schema</a> <li><a href="" target="_blank">Onet.xml schema</a> <li><a href="">SPFeatureReceiver class</a> - All the FeatureReceiver's are inherited from this class</li></ul> <p> </p> <p>PS. I'll try to find some time to write similar article concerning the other possibilities of the WCM features (how to limit the page layouts, how to limit the web templates shown in UI, how to configure master page etc.). Stay tuned... </p> <p>[<font color="#ff0000">Update</font>] - The following post with information concerning the other publishing feature configurations has been released. Check the details from <a href="">here</a>. </p><img src="" width="1" height="1">sonofthesun | http://blogs.msdn.com/vesku/atom.xml | crawl-002 | en | refinedweb |
For those of you interested in the policies/politics side of file formats, I've seen a couple of folks point out this bill currently in place in Texas.
As all of you know by now, I think it's very cool to see this attention being paid to file formats, and the important role they play in all of our lives. I've been working on this stuff for years, and it's always fun to see other folks talking about your work. Here are the traits they'd like to see in a file format in Texas:
Each electronic document created, exchanged, or maintained by a state agency must be created, exchanged, or maintained in an open, Extensible Markup Language based file format, specified by the department, that is:
It's great to look at things like this and think about the scenarios folks have in mind. Rather than talk about motivations in terms of "levels of openness", I think it's easier to discuss them in terms of scenarios or use cases. Most policies around file formats are there to ensure the following:
There are a lot of other factors that can help you achieve these four goals, but those are all implementation decisions, and don't necessarily prevent you from achieving your goals. For example, using existing technologies like ZIP and XML helps you achieve #3 because there are already tools out there that support them (they aren't necessary for success though). You could go invent your own technology as well, and still achieve #3 assuming you fully document that new technology, but it's often easier to leverage what's already there, which can help you achieve a more rapid level of adoption in the community.
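To illustrate the point about reusing ZIP and XML, here is a minimal sketch, not part of the original post, that lists the parts of an Open XML package with the generic .NET System.IO.Packaging API; no word processor or format-specific library is involved. The sample.docx path is hypothetical.

```csharp
using System;
using System.IO;
using System.IO.Packaging;   // requires a reference to WindowsBase.dll

// Lists the parts of an Open XML package using generic ZIP/packaging tooling.
class ListParts
{
    static void Main()
    {
        string path = "sample.docx";  // hypothetical input file

        using (Package package = Package.Open(path, FileMode.Open, FileAccess.Read))
        {
            foreach (PackagePart part in package.GetParts())
            {
                // Each part is addressed by a URI and carries a content type,
                // e.g. /word/document.xml with an XML content type.
                Console.WriteLine("{0}  ({1})", part.Uri, part.ContentType);
            }
        }
    }
}
```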
If you look at the bill in Texas, you can see that they have these goals in mind, and have set out four criteria to help meet them:
As I said at the beginning, it's fun seeing so much attention being paid to file formats. It's always important to remove the more "religious" aspects from the debate, and really drill into the scenarios: what are you trying to do with the documents, and what do you want to see put in place to help you succeed?
-Brian
So, how does OpenXML hold up against the proposed bill? Specifically, how would it hold up right now and on December 1, when the bill would go into effect (especially considering that the ISO approval might fail on the fast track)?
Thanks,
Patrick
Speaking of scenarios, why did you guys choose NOT to rely on Microsoft Office's own test suite to "validate" the CleverAge translator?
Could it be because it draws all the wrong conclusions?
"It's always important to remove the more "religious" aspects from the debate, and really drill into the scenarios."
Really? You won't win this game, Brian. One counter-example is enough to silence the claims you've made over and over on this blog.
Again, you prefer to create a reality distortion field, hoping that your audience does not understand. Your blog reads like a marketing brochure.
In this regard there are two simple steps Microsoft can take to demonstrate their commitment to customers over their commitment to protecting the Windows and Office monopolies.
1. Publish the secret specifications of the MS-Office binary blob formats.
2. Include native ODF support in MS-Office.
Why does Microsoft need the "billions and billions" of existing Office documents to remain unreadable or poorly readable outside of MS-Office?
Why does MS-Office support dozens of alternative file formats, including poorly defined "standards" such as RTF and CSV, yet fail to support OpenDocument Format natively?
Sean DALY.
Patrick,
I think that even without the ISO approval it still meets the bill's criteria, as it's already an Ecma standard. I can't think of a single application or scenario that would actually require the ISO approval. The ISO submission was something that a number of governments had requested, which is why Ecma went that route. I view that more as an additional endorsement, as well as an alternative approach for the maintenance of the spec, rather than something that actually opens up more scenarios and use cases.
Stephane,
That's really a question that's best for the translator team's blog:
I believe they looked to a number of different sources for their test cases. For instance, there is an ODF conformance test suite out there that they leveraged, as well as some government compliance documents. Again, that's a question that I would take up with them if you feel they didn't pick the right set.
Sean,
1. The binaries are already documented and freely available (they have been for some time). Here is the most up to date article describing how you can get to it:
2. There are plenty of file formats out there that meet the needs of this bill. There aren't many folks out there that specifically want ODF. Instead they want formats that offer long term archivability, royalty free access, and interoperability, and that meet their user needs. ODF fits the bill some of the time, but so do HTML, PDF, plain text, OpenXML, DocBook, etc.
The ODF format was just a small blip on everyone's radar until the news around the commonwealth of Massachusetts. You can see this just by looking at the OASIS committee's meeting minutes. Leading up to the point where they submitted the standard for final approval, the attendance rate was very low; only 2 people attended more than 75% of the meetings. It wasn't until all the press around Massachusetts hit that they got more people participating, but at that point the 1.0 version was already done (which is the one that went to ISO).
We were already well on the way to the first Beta of Office 2007 when all this picked up, and it was far too late to add an additional format, even if customer demand increased. That's one of the reasons why we went with the open source converter approach. The other reason was that by making it an open source project, everything would be transparent, and there would be no chance of people misinterpreting the results.
-Brian
Brian, that Texas bill is very interesting but also could end up stifling document creation and technological innovation as well as raising costs.
The bill's aims are laudable. However, mandating XML for all government uses is not the best way to achieve them. In fact, it's incredibly short-sighted.
It has been shown time and time again that
prescribing a certain technology often proves costly and counterproductive in the long run:
1. What happens if the technology stipulated isn't up to the task at hand? XML is not suitable for all uses (a fact evinced by the XLSB format in Excel 2007). Forcing XML onto inappropriate applications will only increase the compliance burden without necessarily achieving the law's enumerated goals.
2. What if better technologies appear? XML may be today's darling, but there is no guarantee that it will be the best solution twenty or thirty years down the road. Laws stay on the books a long time. At some point, technology will have moved on: XML will be obsolete. Mandating its use even under such circumstances will make data more arcane and less accessible. Such a hard-coded stipulation (instead of a call for best management practices) will do the exact opposite of what the law intends.
The law is also impractical. Here are a few questions:
1. What is an electronic document? What about files that are not destined for interchange? E.G., do temporary/swap/cache, raw input files (e.g. from cameras and scientific instruments), and database store files count?
2. Do the file formats for limited use, custom in-house solutions have to be published? Does this apply to legacy systems (e.g. mainframes?) That could be a documentation nightmare. Public benefit would be minimal.
3. It is not realistic to expect every format used by an entity as large as government to be implemented by multiple providers on multiple platforms. There is a lot of specialty and custom software in use. What happens if the government fails in this? Will the government be forced to subsidize additional software development to ensure that every piece of software it uses is available on at least one other platform and from one other company?
4. Are open industry organizations necessary for legacy formats? What about for special-use software? Does it make sense to encumber the development of custom software with the need for such corporatist structures?
A better law would call for open formats without mentioning XML. It would state that the formats to be used must adhere to best management practices (i.e. being open and accessible) but allow exceptions in cases where documents are not intended for interchange, where optimization is necessary, etc.
Francis, those are great points. I agree that it's very important to not mandate specific technologies but to instead stay focused on the actual goals.
Let's take long term archivability as an example. People view PDF as an archival format, and it obviously doesn't use XML. Also, as you point out, not every file you create needs to be archived.
Whenever we start to work on a version of Office, we focus on the scenarios we want to solve. We try to not think about the technologies until we have first agreed on which scenarios we think are more important. For example, one of the broad scenarios behind the new file formats was that we wanted developers outside of Microsoft to easily write solutions that could read and write our formats, which would increase the value of Office as a platform. We then created more detailed scenarios around the types of solutions we wanted to enable for those developers. We then took those scenarios as well as a few other broad scenarios and we reached the final decision of going with XML and ZIP.
The same approach needs to be applied here. That's why I tried to talk about the actual goals first, rather than the specific technologies.
Brian: PDF does in fact use XML. The specification defines a number of pieces that are represented as XML. To name just two examples: XMP metadata and XFA forms. When you think about it, this isn't that different from OOXML, which uses a binary container (ZIP) to store data in binary (PNG, etc.) and XML forms. Of course, it goes without saying that as a much newer format OOXML represents much more of its data as XML, but it isn't nearly as black and white as you claim.
Brian said "That's really a question that's best for the translator team's blog:"
Hmmm.
Microsoft sponsored CleverAge, not the other way around.
Microsoft declared the CleverAge plugin "complete", not the other way around. You even used that word as the headline of one of your blog posts.
Therefore it's your responsability to define what you mean by "complete".
Well, at least if you believed in it a nanosecond.
To claim that this thins is "complete" is very strong. That sets the expectations very high you know. The blogosphere is already laughing at the CleverAge guys just two days after the announcement. This is Microsoft's problem.
And, by analogy, when you say "fully documented", "fully xml" or "100% ZIP", perhaps you want to use different words. It would be sad that such strong words fly back in your face.
As I said above, one counter-example kills your claims. Since Office 2007 shipped, I guess you understand it's very easy for someone to show you the problem.
It's good to see this topic addressed here.
I'm stopping by since the /. coverage makes claim that OOXML won't meet the bills definitions. As usual, great humor was had in reading the opposite.
More humor and great wonderment was had in reading the complaints about previous and now deprecated technologies and the additional and hyper-generalized spurious claims and zealotry.
Cheers!
Andrew,
Great point, I did know that PDF makes use of XML, and I didn't mean to imply otherwise. My point was more to show that PDF at it's core is *not* based on XML, but that doesn't prevent it from being a great format for long term archival.
Stephane,
I could go and talk to the tranlator team to get a better understanding of the methodology they used to come up with their test files and then report that back to you. I thought it would be more efficient though if you just asked them directly. I don't work directly on that project, so I don't have the knowledge to drill into their design decisions.
Brian -
The binary blob MS-Office formats have not been published. They are available for licensing to "companies or agencies", under terms which are secret. Why is this so? Why not just publish the formats?
This secrecy threatens the MS-OOXML specification's candidacy for standardization, since the document refers in many instances to previous versions of Office and the closed binary blob formats. Hasn't the time come to publish them?
Sean
"The binary blob MS-Office formats have not been published."
They have been published, in fact those specs were part of MSDN for a long time. Just buy MSDN Library CD March 1998. I guess you have to get in touch with MS Ireland for MSDN CD orders.
Brian said "I could go and talk to the tranlator team to get a better understanding of the methodology they used to come up with their test files and then report that back to you."
Don't you think I can read the source code and come up with my own answers? (I did, of course)
That's not my point. I see you are side- stepping not just your responsabilities, again Microsoft claims this translator is "complete", and this must mean something. But it also strikes me as obvious that you refuse to enter technical discussions, despite the fact that you know me, I value merit and truth.
Microsoft does not ship software without a measurement of quality. I think that those quality standards are of very high, otherwise Microsoft would sink. How come you subsidized this project to a third-party, and now leave it for others to worry about whatever work they are done?
I don't get it.
What sections are there references to binary blobs? If you could point out what you're referencing I think I could better understand what you're looking for.
I'm not trying to sidestep the technical issues, and I'm well aware of you're knowledge in the area (that's why I enjoy these discussions with you).).
Wow. You are amazingly generous. I must confess a certain cynicism around these latest legislative adventures, and I have probably gone too far in that direction in my own appraisal:
I think your view of the policies that open-format adoption can be part of is very clear and would be important in establishing a legislative history too. Your four points are excellent.
Sun Microsystems just raised its hand:
This is an intriguing development. I think, at this point, this is mostly good news. It could be used in an argument that OOX is unnecessary, but we'll have to see what the caveats are concerning the Sun Conversion Technology.
You said:
"Whenever we start to work on a version of Office, we focus on the scenarios we want to solve."
And Microsoft's most basic scenario is how to lock-in customers and lock-out major competitive solutions.
Brian said )."
My original question was whether Microsoft had used their own test suite (i.e. corpus of documents) to validate this project before you made the announcement. Not getting my answer, I thought a different angle would be to ask what you meant by "complete" in this blog post :
I also note that the announcement comes with no strings attached, meaning that it should do what it is expected to do, and do it right.
CleverAge's own test suite means nothing to me. I have seen the list of features they support that goes into their test suite. Who made that list? How do you go from that arbitrary list (i.e. they are big features not even mentioned) to extrapolate into the billions of Office documents out there? I mean, how can this project make sense at all without relying on Microsoft's own Office test suite?
You are clearly trying to make a mountain out of a molehill here, for your own purposes.
I think it's pretty clear here that Brian was simply announcing the completion of release 1.0 of the converter by a non-Microsoft group(CleverAge, plus other contributors to this open source project). Microsoft may have kicked off the project, but it does not mean it owns or controls this converter, nor is it required to do acceptance testing. Further, as a 1.0 release, the development group is not guaranteeing that it is bug free; Microsoft cannot guartee this either, because it is not their converter, despite what you claim to believe.
If it bothers you that Microsoft does not own the converter, and thus you cannot then hold them to task for any issues with it, then just learn to live with it.
Ian,
I beg to differ. Microsoft has made a mountain of this project. Have you been following it since last year? And they are one who made PR lately announcing all sorts of things, including that it was "complete". When Microsoft say, it's "complete", one may understand it under the typical light of Microsoft declaring a product complete.
It's very simple. Why have they participated in something that they knew from day one would not hold waters?
They dare criticize IBM and others there's this politics, FUD, and so on. Well, sorry, but there are participating in it too.
Now that, I think, the technical merit of the whole project are pretty questionable, I would not be surprised that Microsoft brings their top spin doctors to add a layer of politics on top of it.
There is no technical merit in this project for one simple reason (and I know they worked hard, so it's a heart breaker) : reading/writing in full-fidelity mandates a lot more knowledge of the internals which in fact is much like rendering the document in memory by taking core of the semantics. Those who originally thought they would do a couple translations and would be done with it were wrong. The smart brass in the Microsoft Office team knew that since day one.
Plus, you have to understand that this project will just fly in Microsoft's face now that government are being told that it's the official solution to the interoperability problem.
There is a ton more to be said about it. I was just asking an innocent question, and apparently this is already too much being asked...
Your logic is totally wrong. The fact that Microsoft has chosen to get a lot of publicity from this does not imply that this is a product that they own or control or are responsible for. Nothing you say can alter this basic fact, no matter how often you repeat it. So, if you have any issues with the quality of the 1.0 release (I know I do!!!), it's not Microsoft's responsibility at all; it's the development teams'.
As for your comment about the project having no technical merit, you seem not to understand that it was never meant to do reading/writing in full fidelity because that is technically impossible -- the ODF specification and the ECMA OOXML spec are not in one-to-one correspondance.
I have and likely will have no need for the converter but I have checked it out from time to time. One of the things I read on their blog/release notes is that they test via "double conversion". I'm not sure of the correct technical term for this, but they convert a document and then convert it back and verify the results. Surely this doesn't say anything about supporting every feature, but it does place doubt on any kind of requirement for "mirroring" or understanding an in-memory binary image of it.
Just rolling with the FUD machine, hahahaha
Have you read the press? CleverAge has done work on behalf of Microsoft. Governments are being told that this project is a solution to the interoperability problem.
"full fidelity because that is technically impossible"
Eh, eh, I knew someone would address this. I will not comment whether or not it is impossible as you say. It's in fact more complicated than that.
Because you have markup (+ blobs) on both side you may think the translator is that clever thing which, from ODF to OOXML does the following :
ODF ==> translator ==> OOXML
If you state this, it's because you believe the semantics is entirely captured by the markup. You believe there is no discrepancy.
Here is now the reality :
ODF ==> intermediate representation(1) ==> translate ==> intermediate representation(2) ==> OOXML
Problem : the intermediate representations hold pretty much all of the semantics. I won't comment on whether representation (1) is doable. But I'll assert that representation (2) is only doable by Microsoft own's application. Why? because there is semantics that OOXML does not store at all.
And it is the same for translating formats the other way around.
I would have hoped that OOXML would have been that wonderfully designed format that captures itself 100% of the semantics. It's not the case. And it puts the burden on CleverAge. That's why it's a mistake to have delegated this part of the work outside the fence.
You may not agree by that's how I see it. And, while the translator only tries to do something with Word documents, it's going to get worse with Excel spreadsheets (this is my area of expertise).
@Stephane
So, if spreadsheets are your hot topic, are you aware that the problem with ODF (including the just-approved 1.1) is that it is not just spreadsheet semantics that are missing, there is no formula syntax.
There is a way for a calc attribute to identify a namespace that appeals to an agreement on what the following formula is, but ODF doesn't define any of those, either.
Oo.o uses a namespace of their own. I don't know if it is documented, although certainly the formulas that are seen and written by users are documented for the product.
I don't know how much OpenFormula will harmonize with the Oo.o formula definition, but I am certain the namespace will be different.
The facts that 1) CleverAge has done other projects for Microsoft in the past, and 2) that Microsoft is making some PR from this, does not make Microsoft responsible for this project. Can we end this argument at that? You're just repeating yourself.
As for the technical imposibility of 100% fidelity, we are in agreement with the final conclusion. However, you have introduced a spurious argument to reach that conclusion. You seem to be referring to the well-discussed legacy rendering aspects of the ECMA spec. (Correct me if I am wrong.)
As you undoubtedly know, those are optional minor details of layout as done in older versions of Word, and are ignored by Word 2007 itself when converting old files. So, they are basically irrelevant. I myself wouldn't make such a big deal out of them. People who do, seem to have other agendas. For example, like you, they omit to mention that the ODF spec has a similar set of attributes whose rendering is unspecified. This omission of a similar ODF behaviour allows them to make it seem that it's all Microsoft's fault, which of course, is their real aim.
Dennis,
Spreadsheet formulas have been defined pretty well by MS Excel a number of years ago don't you think? OpenOffice, or any other competing product, has no chance to stand if it does not accurately replicate MS Excel run-time behaviors in formulas, including the bugs. So it's not like there is no specs. It's very unfortunate that the MS Excel's implementation is the specs.
Here is an excerpt of the MS press pass : "(...)Microsoft Committed to Interoperability
The Open XML Translator is one among many interoperability projects Microsoft has undertaken. Microsoft continues to work with others in the industry to deliver products that are interoperable by design(...)"
Very bold statements MeSay.
Ian said "You seem to be referring to the well-discussed legacy rendering aspects of the ECMA spec."
No. I am not talking about rendering in general. To render something, you need to know what you are doing. It's just not in the same league than reading and writing markup and binary blobs. Unless the markup and binary blobs are so well designed that they capture all of the semantics.
I'm sure most of you have had those annoying conversations with folks on a topic where that person views
I just saw that the Novell folks have released a version of OpenOffice with support for the Ecma Office | http://blogs.msdn.com/brian_jones/archive/2007/02/06/texas-looks-at-the-interoperability-of-file-formats.aspx | crawl-002 | en | refinedweb |
Channel Definition Format (CDF) files give Web authors another way to organize their sites. The hierarchical structure of CDF files makes it easy to offer users a meaningful subset of a site's contents.
Microsoft Active Channel technology, which is supported by Microsoft Internet Explorer 4.0 or later, is one implementation of CDF files. Smart offline Favorites, which are supported by Internet Explorer 5 or later, are another implementation of CDF files. A smart offline Favorite is a Web page that specifies a subset of additional content to be cached when the page is selected as an offline Favorite. The additional content to be cached is defined in a CDF file.
Smart offline Favorites, like channels, enable users to view Web pages from the cache. Cached content is easier for users to find, and the pages can be viewed independent of a Web connection.
Active Channel sites can employ all of the CDF elements. Unlike channels, however, the smart offline Favorites feature in Internet Explorer 5 and later uses a subset of the CDF vocabulary, ignoring the CDF elements it doesn't use. Smart offline Favorites, for example, don't employ the SCHEDULE element, which provides an author-defined schedule for updating, or "synchronizing" content.
It is not necessary to author separate CDF files to fulfill different roles. One CDF file, for example, can be used for channels and offline Favorites. This article demonstrates how easy it is to create a single, "generic" CDF file to take advantage of CDF support.
Support for CDF files began with Internet Explorer 4.0. Earlier versions of Windows Internet Explorer don't recognize CDF files. This tutorial includes a script you can copy and paste to your Web pages to check the user's browser version, enabling you to prevent undesirable results.
Support for smart offline Favorites began with Internet Explorer 5. The "Offline" value for the REL attribute of the LINK element is supported by Internet Explorer 5 and later. This value is key to the implementation of smart offline Favorites. Earlier versions of Internet Explorer ignore LINK REL="Offline".
You can create a CDF file with any text editor.
The only other thing you'll need is a Web site. Any Web site will work; even a single Web page will do. Basic HTML knowledge is helpful here, but you don't need to be an expert to implement this technology.
In this section, you'll learn, step-by-step, how easy it is to author a generic CDF file. You'll create the CDF file first and then link a page on your Web site to the CDF file.
Follow these steps to create a CDF file:
The XML processing instruction indicates an XML document. For readability, it's a good idea to specify the character set and version numbers even when default values are assigned, as in the following example:
<?XML ENCODING="UTF-8" VERSION="1.0"?>
Everything you add to the CDF file will be placed between the beginning and ending CHANNEL tags. In the following example, the CHANNEL element defines the top level of the CDF file's hierarchy. Its HREF attribute points to the top-level Web page in the CDF file. The other attributes of the example CHANNEL element specify the date the page was last modified, and the number of levels deep the browser should site crawl and precache the content of the CHANNEL.
<CHANNEL HREF=""
LASTMOD="1998-11-05T22:12"
PRECACHE="YES"
LEVEL="1">
.
.
.
</CHANNEL>
In the preceding example, Internet Explorer is instructed to cache pageOne and all the pages linked to it. This is all you really need in a "no frills" CDF file where pageOne represents the Web site's entry page, and all the pages linked to pageOne should be cached for offline use.
If it isn't appropriate to cache all the pages linked to pageOne, set the value of the CHANNEL element's LEVEL attribute to 0. You'll learn how to cache individual pages in Step 6.
The text strings within these elements will represent your channel in the Windows Media Showcase on your users' desktops and also in the Channel bar in Internet Explorer 4.0, or the Favorites bar in Internet Explorer 5 and later. The title appearing in the head section of your Web page will represent offline Favorites on the Favorites bar in Internet Explorer 5 and later. The TITLE and ABSTRACT elements are ignored by the offline Favorites feature in Internet Explorer 5 and later.
<TITLE>Title of your Channel</TITLE>
<ABSTRACT>Synopsis of your channel's contents.</ABSTRACT>
Smart offline Favorites in Internet Explorer 5 will not use the update schedule you create in your CDF file. Like TITLE and ABSTRACT, the SCHEDULE element is ignored by the smart offline Favorites feature in Internet Explorer 5 and later. The SCHEDULE element does work with channels. Users can choose to automatically update a Favorite's content daily, weekly, monthly, or when they choose Synchronize from the Tools menu.
In the following example, the content will be synchronized every two weeks:
<SCHEDULE>
<INTERVALTIME DAY="14"/>
</SCHEDULE>
Using the EARLIESTTIME and LATESTTIME elements, you can also specify a range of valid update times to help manage server loads.
The images you specify will represent your channel in the Windows Media Showcase on your users' desktops and also in the Channel bar in Internet Explorer 4.0, or the Favorites bar in Internet Explorer 5 and later. Default images will be provided for you wherever necessary.
<LOGO HREF="" STYLE="IMAGE-WIDE"/>
<LOGO HREF="" STYLE="IMAGE"/>
<LOGO HREF="" STYLE="ICON"/>
Additional information on logo design guidelines can be found in Creating Active Channel Logo Images.
The Web pages you specify will constitute the "intelligent subset" of your content that enhances your users' browsing experience. With Internet Explorer 5 and later, authors have the ability to choose which pages are cached when users add pages to Favorites. Previously, browsers only gave users the option of caching all of a page's links, or none of them. Now you can offer your users a more meaningful alternative.
By setting the values of the PRECACHE attribute to Yes, and the LEVEL attribute to 1, pageTwo will be cached in the following example, as will all of the pages linked to pageTwo. Note that TITLE, ABSTRACT and LOGO elements can be nested inside the ITEM element to represent the page on the Windows Media Showcase and also in the Channel bar in Internet Explorer 4.0, or the Favorites bar in Internet Explorer 5 and later.
Yes
1
<ITEM HREF=""
LASTMOD="1998-11-05T22:12"
PRECACHE="YES"
LEVEL="1">
<TITLE>Page Two's Title</TITLE>
<ABSTRACT>Synopsis of Page Two's contents.</ABSTRACT>
<LOGO HREF="" STYLE="IMAGE"/>
<LOGO HREF="" STYLE="ICON"/>
</ITEM>
Without this extension, Internet Explorer will not perform the expected actions for CDF files. For example, the CDF file might display as text in the browser window rather than launching the Add Channel dialog box. This step ensures that the server will return the correct MIME type to the browser.
Here's a generic CDF file that includes each of the elements outlined above.
<?XML ENCODING="UTF-8" VERSION="1.0"?>
<CHANNEL HREF=""
BASE=""
LASTMOD="1998-11-05T22:12"
PRECACHE="YES"
LEVEL="0">
<TITLE>Title of your Channel</TITLE>
<ABSTRACT>Synopsis of your channel's contents.</ABSTRACT>
<SCHEDULE>
<INTERVALTIME DAY="14"/>
</SCHEDULE>
<LOGO HREF="wideChannelLogo.gif" STYLE="IMAGE-WIDE"/>
<LOGO HREF="imageChannelLogo.gif" STYLE="IMAGE"/>
<LOGO HREF="iconChannelLogo.gif" STYLE="ICON"/>
<ITEM HREF="pageTwo.extension"
LASTMOD="1998-11-05T22:12"
PRECACHE="YES"
LEVEL="1">
<TITLE>Page Two's Title</TITLE>
<ABSTRACT>Synopsis of Page Two's contents.</ABSTRACT>
<LOGO HREF="pageTwoLogo.gif" STYLE="IMAGE"/>
<LOGO HREF="pageTwoLogo.gif" STYLE="ICON"/>
</ITEM>
</CHANNEL>
There are two ways to link your Web page to the CDF file. The way you choose will depend on whether you are creating a channel or an offline Favorite. If you want your CDF file to fulfill both of these roles, follow the steps in both of the following sections.
<!--
<SCRIPT LANGUAGE="JavaScript">
function isMsie4orGreater() {
var ua = window.navigator.userAgent; var msie = ua.indexOf ("MSIE");
if (msie > 0) {
return (parseInt (ua.substring (msie+5, ua.indexOf (".", msie))) >= 4)
&& (ua.indexOf ("MSIE 4.0b") <0); }
else {return false; }}
</SCRIPT>
-->
This script is used to place the image button on your Web page, allowing users to install your channel.
<A NAME="uniqueName"
HREF="">
<IMG SRC="urlOfImageButton" border=0 width=136 height=20></A>
<SCRIPT LANGUAGE="JavaScript">
if ( isMsie4orGreater()) { uniqueName.href ="urlToCDF"; }
</SCRIPT>
Edit the above script as follows:
Add the following HTML between the beginning and ending HEAD tags in your Web page. LINK is an HTML element that specifies a relationship with another object. REL is an attribute of LINK that sets or retrieves the specified relationship. For offline Favorites, the value of the REL attribute is "Offline."
<LINK REL="Offline" HREF=""> | http://msdn.microsoft.com/en-us/library/aa768024(VS.85).aspx | crawl-002 | en | refinedweb |
Scott Mitchell
April 2004
Applies to
Microsoft ASP.NET
Microsoft Internet Information Services
Summary: Learn how to provide templated URL driven content with HTTP Handlers, using three real-world scenarios presented in this article. (29 printed pages)
Download the source code for this article.
Introduction
Routing Requests Based on File Extension
Creating and Configuring an HTTP Handler
Dissecting Some Real-World HTTP Handlers
Conclusion
Related Books
Whenever a request reaches a Microsoft Internet Information Services (IIS) Web server, IIS determines how to handle the file by examining the requested file's extension. Static files, like HTML pages, images, Cascading Style Sheet (CSS) files, and the like, are handled directly by IIS. Requests for Microsoft ASP.NET Web pages or Web services—files with extensions .aspx or .asmx—are handed off to the ASP.NET engine. Requests for files with the extension .asp are handed off to the classic ASP engine. The ASP.NET and ASP engines are responsible for generating the markup for the requested resource. For ASP.NET and classic ASP Web pages, this markup is HTML; for Web services, the markup is a SOAP response. Once the engine has successfully rendered the markup for the requested resource, this markup is returned to IIS, which then sends the markup back to the client that requested the resource.
This model of serving content—having IIS directly serve only static content, while delegating the rendering of dynamic content to separate engines—has two distinct advantages:
1. Each piece of the pipeline can concentrate on the content it serves best: IIS serves static files directly, while the rendering of each type of dynamic content is delegated to an engine built for that task.
2. The model is extensible: support for new types of dynamic content can be added by plugging a new engine into IIS, without changing IIS itself.
To have this model of serving content work, IIS needs a mapping of file extensions to programs. This information exists in the IIS metabase and can be configured via the Internet Services Manager, as we'll see in the next section. When a request comes into IIS, then, this mapping is consulted to determine where the request should be routed. Extensions like .aspx, .asmx, .ashx, .cs, .vb, .config, and others are all configured, by default, to be routed to the ASP.NET engine.
Whenever a request is routed from IIS to the ASP.NET engine, the ASP.NET engine performs a similar series of steps to determine how to properly render the requested file. Specifically, the ASP.NET engine examines the requested file's extension and then invokes the HTTP handler associated with that extension, whose job it is to render the requested file's markup.
Note Technically, the ASP.NET engine will invoke either an HTTP handler or an HTTP handler factory. An HTTP handler factory is a class that returns an instance of an HTTP handler.
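For reference, an HTTP handler factory is a class that implements the System.Web.IHttpHandlerFactory interface, whose two members are shown below; its GetHandler() method returns the HTTP handler instance that will actually service the request. We'll see handler factories again later in the article.
public interface IHttpHandlerFactory
{
    IHttpHandler GetHandler(HttpContext context, string requestType,
        string url, string pathTranslated);
    void ReleaseHandler(IHttpHandler handler);
}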
An HTTP handler is a class that knows how to render a particular type of Web content. For example, there is a different HTTP handler class in the .NET Framework for rendering ASP.NET Web pages than there is for rendering Web services. Just as IIS relies on external programs to serve dynamic content, the ASP.NET engine relies on a different class to render each particular type of content.
By having the ASP.NET engine pluggable like IIS, the same advantages discussed earlier are realized by ASP.NET. Of particular interest is the fact that this model allows for developers to create new HTTP handler classes, and plug them into the ASP.NET engine. In this article, we'll examine precisely how to create custom HTTP handlers and use them in an ASP.NET Web application. We'll start with an in-depth look at how the ASP.NET engine determines what HTTP handler should service the request. We'll then see how to easily create our own HTTP handler classes with just a few lines of code. Finally, we'll look at a number of real-world HTTP handler examples that you can start using in your Web applications today.
As discussed in the Introduction, both IIS and the ASP.NET engine route incoming requests to an external program or class based on the request's file extension. In order to achieve this, it is imperative that both IIS and ASP.NET have some sort of directory, mapping file extensions to external programs. IIS stores this information in its metabase, which is able to be edited through the Internet Services Manager. Figure 1 shows a screenshot of the Application Configuration dialog box for an IIS application. Each provided extension maps to a specific executable path. Figure 1 shows some of the file extensions that are mapped to the ASP.NET engine (.asax, .ascx, .ashx, .asmx, and so on).
Figure 1. Configured file extensions
Specifically, IIS maps the ASP.NET-related extensions to \WINDOWS_DIR\Microsoft.NET\Framework\VERSION\aspnet_isapi.dll. Just as ASP.NET maps file extensions to HTTP handlers, IIS maps file extensions to ISAPI Extensions. (An ISAPI extension is an unmanaged, compiled class that handles an incoming Web request, whose task is to generate the content for the requested resource.) The ASP.NET engine, however, is a set of managed classes in the .NET Framework. aspnet_isapi.dll serves as a bridge between the unmanaged world (IIS) and the managed world (the ASP.NET engine).
For a more in-depth look at how IIS handles incoming requests, including how to customize the IIS-specific mappings, check out Michele Leroux Bustamante's article Inside IIS and ASP.NET.
While IIS stores its directory of file extensions and ISAPI Extensions in its metabase, this directory for ASP.NET is stored in XML-formatted configuration files. The machine.config file (located in \WINDOWS_DIR\Microsoft.NET\Framework\VERSION\CONFIG\) contains the default, Web server-wide mappings, while the Web.config file can be used to specify mappings specific to a Web application.
In both the machine.config and Web.config files, the mappings are stored in the <httpHandlers> element. Each mapping is represented by a single <add> element, which has the following syntax:
<add verb="verb list" path="extension | path"
type="HTTP handler type" />
The verb attribute can be used to limit the HTTP handler to only serve particular types of HTTP requests such as GETs or POSTs. To include all verbs, use *. The path attribute specifies an extension to map to the HTTP handler, such as *.scott, or can specify a particular URL path. Finally, the type attribute specifies the type of the HTTP handler that is responsible for rendering this content.
The following is a snippet of default HTTP handler assignments in the machine.config file:
<httpHandlers>
<add verb="*" path="*.aspx"
type="System.Web.UI.
<add verb="*" path="*.config"
type="System.Web.HttpForbiddenHandler" />
<add verb="*" path="*.cs"
type="System.Web.HttpForbiddenHandler" />
<add verb="*" path="*.vb"
type="System.Web.HttpForbiddenHandler" />
...
</httpHandlers>
The first <add> element maps all requests to ASP.NET Web pages (*.aspx) to the HTTP handler factory PageHandlerFactory. The second maps all requests to Web services (.asmx) to the WebServiceHandlerFactory class. The remaining <add> elements map certain extensions to the HttpForbiddenHandler HTTP handler. This HTTP handler, then, is invoked if a user attempts to browse to a .config file, such as your application's Web.config file. The HttpForbiddenHandler simply emits a message indicating that files of that type are not served, as shown in Figure 2.
Note You can prevent visitors from directly accessing sensitive files by mapping those files' extensions to the HttpForbiddenHandler HTTP handler in either the machine.config or Web.config files. For example, if you run a Web hosting company you could configure IIS to route requests for .mdb files (Microsoft Access database files) to the ASP.NET engine, and then have the ASP.NET engine map all .mdb files to the HttpForbiddenHandler. That way, even if your users put their Access database files in a Web-accessible location, nefarious users won't be able to download them. For more information see my article Protecting Files with ASP.NET.
Figure 2. Restricting viewing web.config
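Returning to the note above, a mapping along the following lines in machine.config (or in an application's Web.config) is all it takes to have .mdb requests served by the HttpForbiddenHandler; the .mdb extension is just an illustration, and any sensitive extension works the same way:
<add verb="*" path="*.mdb"
type="System.Web.HttpForbiddenHandler" />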
The machine.config file specifies the default mappings for all Web applications on the Web server. The mappings, however, can be customized on a Web-application by Web-application basis using the Web.config file. To add an HTTP handler to a Web application, add an <add> element to the <httpHandlers> element. The <httpHandlers> element should be added as a child of the <system.web> element, like so:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<system.web>
<httpHandlers>
<add verb="verb list" path="extension | path" type="type" />
</httpHandlers>
...
</system.web>
</configuration>
Specific HTTP handlers can also be removed from a Web-application using the <remove> element like so:
<httpHandlers>
<remove verb="verb list" path="extension | path" />
</httpHandlers>
When customizing the ASP.NET engine's mapping of file extensions to HTTP handlers, it is important to understand that the file extensions being set in the machine.config or Web.config files must be mapped to the aspnet_isapi.dll in the IIS metabase. In order for the ASP.NET engine to be able to route a request to the proper HTTP handler, it must first receive the request from IIS. IIS will route the request to the ASP.NET engine only if the requested file's extension is mapped to the aspnet_isapi.dll file in the IIS metabase. This is something you'll always need to keep in mind when creating custom HTTP handlers. We'll see an example later in this article where an addition needs to be made to the IIS metabase, mapping the .gif and .jpg extensions to the aspnet_isapi.dll ISAPI Extension.
Now that we've examined how IIS maps incoming requests to ISAPI Extensions, and how the ASP.NET engine maps incoming requests to HTTP handlers (or HTTP handler factories), we're ready to examine how to create our own HTTP handler classes.
Adding an HTTP handler to an ASP.NET Web application requires two steps. First, the HTTP handler must be created, which entails creating a class that implements the System.Web.IHttpHandler interface. Second, the ASP.NET Web application needs to be configured to use the HTTP handler. In the previous section we saw how to configure a Web application to use an HTTP handler—by adding an <httpHandlers> section to the application's Web.config file or the Web server's machine.config file. Since we've already looked at configuring an application to use an HTTP handler, let's focus on building an HTTP handler.
The IHttpHandler interface defines one method, ProcessRequest(HttpContext), and one property, IsReusable. The ProcessRequest(HttpContext) method takes in a System.Web.HttpContext instance which contains information about the request. It is the ProcessRequest(HttpContext) method's responsibility to emit the correct markup based on the request details.
The HttpContext class that is passed into the ProcessRequest(HttpContext) method exposes many of the same vital properties that the System.Web.UI.Page class provides: the Request and Response properties allow you to work with the incoming request and outgoing response; the Session and Application properties can be used to work with session and application state; the Cache property provides access to the application's data cache; and the User property contains information about the user making the request.
Note The seeming similarities between the HttpContext and Page classes are not coincidental. The Page class is, in fact, an HTTP handler itself; it implements IHttpHandler. At the start of the Page class's ProcessRequest(HttpContext) method, the Page class's Request, Response, Server, and other intrinsic objects are assigned from the corresponding properties of the passed-in HttpContext.
The IsReusable property of the IHttpHandler interface is a Boolean property that indicates whether one instance of the HTTP handler can be used for other requests to the same file type, or whether each request requires an individual instance of the HTTP handler class. Sadly, there is little information in the official documentation as to what the best practices are for using IsReusable. This ASP.NET Forums post, by Dmitry Robsman, a developer on the ASP.NET Team at Microsoft, sheds some light on the subject: "A handler is reusable when you don't need a new instance for each request. Allocating memory is cheap, so you only need to mark a handler as reusable if one-time initialization cost is high." Dmitry also points out that the Page class—which is an HTTP handler, recall—is not reusable. With this information, you can feel confident having your HTTP handler return false for IsReusable.
To illustrate creating an HTTP handler class, let's build a simple HTTP handler, one that merely displays the current time along with request information. To follow along, create a new Class Library project in your language of choice (I'll be using C#; I named my project skmHttpHandlers). This will create a new project with a default Class.cs (or Class.vb) file. In this file, add the following code (your namespace might differ):
using System;
using System.Web;
namespace skmHttpHandlers
{
public class SimpleHandler : IHttpHandler
{
public void ProcessRequest(HttpContext context)
{
// TODO: Add SimpleHandler.ProcessRequest implementation
}
public bool IsReusable
{
get
{
// TODO: Add SimpleHandler.IsReusable
// getter implementation
return false;
}
}
}
}
The code above defines a class called SimpleHandler that implements IHttpHandler. The SimpleHandler class provides a single method—ProcessRequest(HttpContext)—and a single property—IsReusable. We can leave the IsReusable property as-is (since it returns false by default), meaning that all we have left to do is write the code for the ProcessRequest(HttpContext) method.
We can have this HTTP handler render the current time and request details by adding the following code to the ProcessRequest(HttpContext) method:
public void ProcessRequest(HttpContext context)
{
context.Response.Write("<html><body><h1>The current time is ");
context.Response.Write(DateTime.Now.ToLongTimeString());
context.Response.Write("</h1><p><b>Request Details:</b><br /><ul>");
context.Response.Write("<li>Requested URL: ");
context.Response.Write(context.Request.Url.ToString());
context.Response.Write("</li><li>HTTP Verb: ");
context.Response.Write(context.Request.HttpMethod);
context.Response.Write("</li><li>Browser Information: ");
context.Response.Write(context.Request.Browser.ToString());
context.Response.Write("</li></ul></body></html>");
}
Notice that this code emits its content using a series of Response.Write() statements, spitting out the precise HTML markup that should be sent back to the requesting Web browser. Realize that the goal of the ProcessRequest(HttpContext) method is to emit the markup for the page to the Response object's output stream. A simple way to achieve this is through Response.Write(). (In the "Protecting Your Images" section we'll see how to write the binary content of image files directly to the Response object's OutputStream property.)
For HTTP handlers that render HTML markup, you might be a bit remiss if you use Response.Write() statements. You'll want to use Web controls to emit HTML markup instead. You can use Web controls in an HTTP handler, although it's not as simple or straightforward as using Web controls in an ASP.NET Web page. There are two techniques that can be employed:
1. Use an existing ASP.NET Web page as a template, having an HTTP handler factory load and render the template page on the handler's behalf.
2. Programmatically create instances of the Web control classes in the handler's ProcessRequest() method and render them to the response's output stream.
The first approach is the ideal one as it provides a clean separation of code and content. That is, the HTML markup an HTTP handler generates can be tweaked by just modifying the template ASP.NET Web page rather than mucking around with Response.Write() statements in the HTTP handler. It takes a bit of work to get this technique working properly, though. In the "Using an HTTP Handler Factory for Displaying URL-Driven Content" section we'll look at an HTTP handler factory that uses this technique.
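To give a flavor of the first approach, here is a bare-bones sketch of a handler factory that hands each request off to a compiled instance of a template .aspx page. The template path (~/Template.aspx) and the class name are placeholders for this sketch only; the factory examined later in the article is more involved.
using System.Web;
using System.Web.UI;
public class TemplatedContentHandlerFactory : IHttpHandlerFactory
{
    public IHttpHandler GetHandler(HttpContext context, string requestType,
        string url, string pathTranslated)
    {
        // Load a compiled instance of the template page and return it as
        // the HTTP handler that will service this request.
        string templatePhysicalPath = context.Server.MapPath("~/Template.aspx");
        return PageParser.GetCompiledPageInstance(url,
            templatePhysicalPath, context);
    }
    public void ReleaseHandler(IHttpHandler handler)
    {
        // Nothing to clean up in this simple sketch.
    }
}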
The second technique involves programmatically creating instances of the Web control classes that you want to have rendered in the ProcessRequest() method ofyour HTTP handler. This technique proves a bit challenging since if you want to add Web controls nested inside of other ones, you have to manually build the Web control hierarchy yourself. Once the control hierarchy has been constructed, you need to generate the HTML for the control hierarchy using the RenderControl() method.
The following code snippet illustrates how Web controls can be rendered programmatically in an HTTP handler:
public void ProcessRequest(HttpContext context)
{
// build up the control hierarchy - a Panel as the root, with
// two Labels and a LiteralControl as children...
Panel p = new Panel();
Label lbl1 = new Label();
lbl1.Text = "Hello, World!";
lbl1.Font.Bold = true;
Label lbl2 = new Label();
lbl2.Text = "How are you?";
lbl2.Font.Italic = true;
p.Controls.Add(lbl1);
p.Controls.Add(new LiteralControl(" - "));
p.Controls.Add(lbl2);
// Render the Panel control
StringWriter sw = new StringWriter();
HtmlTextWriter writer = new HtmlTextWriter(sw);
p.RenderControl(writer);
// Emit the rendered HTML
context.Response.Write(sw.ToString());
}
(For the above code to work the System.IO, System.Web, System.Web.UI, and System.Web.UI.WebControls namespaces will need to be included via Imports or using statements.)
Clearly building and rendering a control hierarchy by hand is not nearly as easy as adding Web controls by dragging and dropping, or using the declarative Web control syntax. Realize that each time an ASP.NET Web page is visited after its HTML portion has been changed, the HTML portion is converted into a class that programmatically builds up the control hierarchy, akin to our example above.
After creating the HTTP handler class, all that remains is configuring an ASP.NET Web application to use the handler for a particular file extension. Assuming you created a new Microsoft® Visual Studio® .NET Solution for the HTTP handler Class Library project, the easiest approach is to add a new ASP.NET Web application project to the Solution. You'll also need to add the HTTP handler project to the ASP.NET Web application's References folder. (Right-click on the References folder and choose Add Reference. From the Add Reference dialog box, select the Projects tab and pick the Class Library project created in the previous section.) If you are not using Visual Studio .NET you'll need to manually copy the HTTP handler's assembly to the /bin directory of the ASP.NET Web application.
Recall that to configure a Web application to use an HTTP handler we need to map some file extension (or a specific path) to the HTTP handler. We could make up our own extension, like .simple, but in doing so we'd have to configure IIS to map the .simple extension to the ASP.NET engine's ISAPI Extension (aspnet_isapi.dll). If you are hosting your site on a shared Web server, chances are the Web hosting company doesn't allow you to add custom mappings to the IIS metabase. Fortunately, when the .NET Framework is installed on a Web server the extension .ashx is automatically added and mapped to the ASP.NET engine's ISAPI Extension. This extension, then, can be used for custom HTTP handlers if you do not have access or permissions to modify the IIS metabase.
For this first example of configuring a Web application to use a custom HTTP handler, let's use the .ashx extension, which is accomplished by adding the following <httpHandlers> section to the Web application's Web.config file.
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<system.web>
<httpHandlers>
<!-- Simple Handler -->
<add verb="*" path="*.ashx"
type="skmHttpHandlers.SimpleHandler, skmHttpHandlers" />
</httpHandlers>
</system.web>
</configuration>
Notice that the <add> element indicates that a request coming in on any HTTP verb for any file with a .ashx extension should be handled by the skmHttpHandlers.SimpleHandler HTTP handler. The type attribute's value specifies the type of the HTTP handler class to use (namespace.className), followed by a comma, and then followed by the assembly name where the HTTP handler class resides.
After adding the above <httpHandlers> section to the Web application's Web.config file, visiting any path with a .ashx extension displays the page shown in Figure 3.
Figure 3. Simple HTTP handler
Figure 3 shows a screenshot of a browser visiting the Web application's HelloWorld.ashx file. Realize that this file, HelloWorld.ashx, does not actually exist. What happens is the following:
1. The browser requests the file HelloWorld.ashx from the Web server.
2. IIS receives the request and examines the requested file's extension; in its metabase, the .ashx extension is mapped to the aspnet_isapi.dll ISAPI Extension.
3. The request is therefore handed off to the ASP.NET engine.
4. The ASP.NET engine consults its <httpHandlers> configuration and finds that the .ashx extension is mapped to the skmHttpHandlers.SimpleHandler HTTP handler.
5. The ASP.NET engine obtains an instance of SimpleHandler and invokes its ProcessRequest() method, passing in an HttpContext object describing the request.
6. The ProcessRequest() method emits the markup, the current time along with the request details, to the Response output.
7. The rendered markup is returned to IIS, which sends it back to the requesting browser.
These seven steps would proceed in the same manner had the visitor requested ASPisNeat.ashx, myfile.ashx, or any file with an .ashx extension.
Now that we have looked at the steps necessary for creating an HTTP handler and configuring a Web application to use the handler, let's examine some realistic HTTP handlers that you can start using in your ASP.NET Web applications today. (Many of these HTTP handler ideas come from comments on a blog entry I made requesting suggestions for real-world HTTP handler examples.)
The remainder of this article walks through three such HTTP handlers:
1. An HTTP handler that displays a syntax-highlighted view of an ASP.NET Web page's code-behind class (.cs and .vb files).
2. An HTTP handler that watermarks the images on your site and prevents other Web sites from linking directly to them.
3. An HTTP handler factory that serves templated, URL-driven content.
The complete source for all three handler examples—along with the source code for the ASP.NET Web application demonstrating their use—is available to download from this article.
Have you ever wanted to quickly see the source code for one of your ASP.NET Web page's code-behind classes, but didn't want to have to take the time to load up Visual Studio .NET and open the associated project? If you're like me, you likely already have several instances of Visual Studio .NET open, along with Microsoft® Outlook, Microsoft® Word, the .NET Framework documentation, SQL Enterprise Manager, and a Web browser, so opening another instance of Visual Studio .NET is usually the last thing I want to do.
Ideally, it would be nice to be able to visit the code-behind class that resides on the server to view its source. That is, to see the code for the page WebForm1.aspx, I could just point my browser at the page's code-behind file on the server. If you try this, however, you'll get the "This type of page is not served" message (see Figure 2) since, by default, the .cs and .vb extensions are mapped to the HttpForbiddenHandler HTTP handler. This is a good thing, mind you, since your code-behind classes may contain connection strings or other sensitive information that you don't want to allow any random visitor to view. Ideally, though, when moving the ASP.NET Web application from a development server to a production server you'd not copy over the code-behind class files—just the .aspx files and the required assemblies in the /bin directory.
On your development server, though, you might want to be able to view the code-behind source code through a browser. One option would be to just remove the mapping from .cs and .vb files to the HttpForbiddenHandler HTTP handler. (This could be done for the entire development server by modifying the machine.config file, or on an application-by-application basis by using a <remove> element in the <httpHandlers> section of the Web.config file.) While this works, the source code is displayed as plain text, in an unformatted manner (see Figure 4).
Figure 4. Displaying code with an HTTP handler
While this definitely works, the source code display is anything but ideal. Fortunately there are free .NET libraries available that perform HTML formatting of code, such as squishySyntaxHighlighter by squishyWARE. With just a couple of lines of code, squishySyntaxHighlighter takes in a string containing Visual Basic.NET code, C# code, or XML content and returns a string of HTML that displays the passed-in content akin to how Visual Studio .NET renders code and XML content. To accomplish this we will use an HTTP handler. This is, after all, what HTTP handlers are designed to do—to render a specific type of content. In this case we're providing a formatted rendering of code-behind classes.
The following code shows the ProcessRequest() method of the CodeFormatHandler HTTP handler. The SyntaxHighlighter class used in the code is the class provided by squishySyntaxHighlighter. To highlight code all we have to do is create an instance of the SyntaxHighlighter class using the GetHighlighter() static method, specifying how we want the code to be formatted (as Visual Basic.NET, C#, or XML). Then, the created instance's Highlight(contents) method takes in a string input (contents) and returns an HTML-formatted representation of that content. At the end of this method, the formatted content is emitted using Response.Write().
public void ProcessRequest(HttpContext context)
{
string output = string.Empty;
// grab the file's contents
StreamReader sr = File.OpenText(context.Request.PhysicalPath);
string contents = sr.ReadToEnd();
sr.Close();
// determine how to format the file based on its extension
string extension = Path.GetExtension(
context.Request.PhysicalPath).ToLower();
SyntaxHighlighter highlighter;
if (extension == ".vb")
{
highlighter = SyntaxHighlighter.GetHighlighter(
SyntaxType.VisualBasic );
output = highlighter.Highlight( contents );
}
else if (extension == ".cs")
{
highlighter = SyntaxHighlighter.GetHighlighter(
SyntaxType.CSharp );
output = highlighter.Highlight( contents );
}
else // unknown extension
{
output = contents;
}
// output the formatted contents
context.Response.Write("<html><body>");
context.Response.Write(output);
context.Response.Write("</body></html>");
}
Figure 5 shows a screenshot of the same code in Figure 4, but when highlighted using the CodeFormatHandler.
Figure 5. Code formatted by HTTP handler
Note The CodeFormatHandler HTTP handler available for download has a slightly more robust ProcessRequest() method, allowing only those accessing the page through localhost to view the code-behind class's source code.
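The downloadable handler implements that check in its own way; a guard along these lines at the top of ProcessRequest(), shown here only as an illustration, is one simple possibility:
// Only serve the source code when the site is being browsed
// through localhost (i.e., from the development machine itself).
if (string.Compare(context.Request.Url.Host, "localhost", true) != 0)
{
    context.Response.Write("This content can only be viewed via localhost.");
    return;
}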
Configuring the HTTP handler in a Web application requires just adding <add> elements to the <httpHandlers> section of the application's Web.config file (or the machine.config file, if you wish to use the handler for the entire Web server). As the following illustrates, both the .cs and .vb extensions are routed to the CodeFormatHandler (as opposed to the HttpForbiddenHandler):
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<system.web>
<httpHandlers>
<!-- Code Format handler -->
<add verb="*" path="*.cs"
type="skmHttpHandlers.CodeFormatHandler,
skmHttpHandlers" />
<add verb="*" path="*.vb"
type="skmHttpHandlers.CodeFormatHandler,
skmHttpHandlers" />
</httpHandlers>
...
</system.web>
</configuration>
There's a saying about content on the Web: half of the content is original, the other half is stolen. Computers make it very easy to take the work of others and replicate it with minimal effort. For example, a professional photographer might want to display some of his best images on his Web site, but wants to prevent other people from simply saving the picture and putting it up on their Web site, as if it were their own work. Even if you don't mind if other sites use your images, you want to make sure that they save your images on their site, rather than simply adding an <img> tag that points back to your Web server. (An example would be if another site had a Web page with an <img> tag whose src attribute pointed directly at an image on your Web server. It would be nice to prevent this, since by serving the image from your Web server, you have to bear the bandwidth expense, even though the user viewing the image is visiting the other site.)
To help protect images, let's create an HTTP handler that does two things:
1. Adds a watermark, a short text message, to each image it serves.
2. Detects when an image is being requested from a page on another Web site and, in that case, returns an alternate image rather than the real one.
When an image is requested, most browsers send the URL of the Web page that contains the image in the referrer HTTP header. What we can do in our HTTP handler, then, is check to make sure that the host of the referrer HTTP header and the host of the image URL are the same. If they are not, then we have a bandwidth thief on our hands. In this case, rather than returning the original image (or even the watermarked image), we'll return an alternate image with a message along the lines of "YOU CAN VIEW THIS IMAGE BY GOING TO" followed by your site's address.
Note Some browsers do not send a referrer HTTP header when requesting images; others provide an option to disable this feature. Therefore, this technique is not foolproof, but will likely work for the vast majority of Web surfers, thereby dissuading nefarious Web site designers from linking to images directly on your Web site.
To add a watermark we'll use the System.Drawing namespace classes to add a text message in the center of the image. The .NET Framework contains a number of classes in the System.Drawing namespace that can be used to create and modify graphic images at runtime. Unfortunately, a thorough discussion of these classes is far beyond the scope of this article, but a good starting place for more information is Chris Garrett's articles covering GDI+ and System.Drawing.
The following code snippet shows the ProcessRequest() method for the ImageHandler HTTP handler.
public void ProcessRequest(HttpContext context)
{
if (context.Request.UrlReferrer == null ||
context.Request.UrlReferrer.Host.Length == 0 ||
context.Request.UrlReferrer.Host.CompareTo(
context.Request.Url.Host.ToString()) == 0)
{
// get the binary data for the image
Bitmap bmap = new Bitmap(context.Request.PhysicalPath);
// determine if we need to add a watermark
if (ImageConfiguration.GetConfig().AddWatermark)
{
// Create a Graphics object from the bitmap instance
Graphics gphx = Graphics.FromImage(bmap);
// Create a font
Font fontWatermark = new Font("Verdana", 8, FontStyle.Italic);
// Indicate that the text should be
// center aligned both vertically
// and horizontally...
StringFormat stringFormat = new StringFormat();
stringFormat.Alignment = StringAlignment.Center;
stringFormat.LineAlignment = StringAlignment.Center;
// Add the watermark...
gphx.DrawString(ImageConfiguration.GetConfig().WatermarkText,
fontWatermark, Brushes.Beige,
new Rectangle(10, 10, bmap.Width - 10,
bmap.Height - 10),
stringFormat);
gphx.Dispose();
}
// determine what type of file to send back
switch (
Path.GetExtension(context.Request.PhysicalPath).ToLower())
{
case ".gif":
context.Response.ContentType = "image/gif";
bmap.Save(context.Response.OutputStream, ImageFormat.Gif);
break;
case ".jpg":
context.Response.ContentType = "image/jpeg";
bmap.Save(context.Response.OutputStream, ImageFormat.Jpeg);
break;
}
bmap.Dispose();
}
else
{
string imgPath =
context.Server.MapPath(
ImageConfiguration.GetConfig().ForbiddenFilePath);
Bitmap bmap = new Bitmap(imgPath);
// determine what type of file to send back
switch (Path.GetExtension(imgPath))
{
case ".gif":
context.Response.ContentType = "image/gif";
bmap.Save(context.Response.OutputStream, ImageFormat.Gif);
break;
case ".jpg":
context.Response.ContentType = "image/jpeg";
bmap.Save(context.Response.OutputStream, ImageFormat.Jpeg);
break;
}
bmap.Dispose();
}
}
The ImageHandler works in tandem with an ImageConfiguration class, which contains configuration information used by the HTTP handler. The ImageConfiguration class is populated by its static GetConfig() method, which deserializes the appropriate Web.config section. The ImageConfiguration has three properties:
1. ForbiddenFilePath, the path of the image to serve when an image is requested from a page on another Web site.
2. AddWatermark, a Boolean indicating whether the watermark text should be added to the images served.
3. WatermarkText, the text of the watermark.
These settings are specified in the Web application's Web.config file in the <ImageHandler> section, as shown below:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<configSections>
<section name="ImageHandler"
type=
"skmHttpHandlers.Config.ImageConfigSerializerSectionHandler,
skmHttpHandlers" />
</configSections>
<ImageHandler>
<forbiddenFilePath>
~/images/STOP-STEALING-MY-BANDWIDTH.gif
</forbiddenFilePath>
<addWatermark>true</addWatermark>
<watermarkText>Copyright Scott Mitchell</watermarkText>
</ImageHandler>
<system.web>
...
</system.web>
</configuration>
The above settings dictate that if an image is detected as being requested from an external site, the end user will see the image ~/images/STOP-STEALING-MY-BANDWIDTH.gif. Furthermore, the settings indicate that all images should be watermarked with the text "Copyright Scott Mitchell."
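The classes that turn the <ImageHandler> section into an ImageConfiguration instance ship with the download; the following is only a rough sketch of one way they could be written. The property names match those used by the handler, but the implementation details, including the use of IConfigurationSectionHandler, are assumptions made for illustration.
using System.Configuration;
using System.Xml;
public class ImageConfiguration
{
    private string forbiddenFilePath;
    private bool addWatermark;
    private string watermarkText;
    public string ForbiddenFilePath
    {
        get { return forbiddenFilePath; }
        set { forbiddenFilePath = value; }
    }
    public bool AddWatermark
    {
        get { return addWatermark; }
        set { addWatermark = value; }
    }
    public string WatermarkText
    {
        get { return watermarkText; }
        set { watermarkText = value; }
    }
    // Returns the deserialized <ImageHandler> section from Web.config.
    public static ImageConfiguration GetConfig()
    {
        return (ImageConfiguration)
            ConfigurationSettings.GetConfig("ImageHandler");
    }
}
public class ImageConfigSerializerSectionHandler : IConfigurationSectionHandler
{
    public object Create(object parent, object configContext, XmlNode section)
    {
        // Pull the three settings out of the <ImageHandler> element.
        ImageConfiguration config = new ImageConfiguration();
        config.ForbiddenFilePath =
            section.SelectSingleNode("forbiddenFilePath").InnerText.Trim();
        config.AddWatermark =
            bool.Parse(section.SelectSingleNode("addWatermark").InnerText.Trim());
        config.WatermarkText =
            section.SelectSingleNode("watermarkText").InnerText.Trim();
        return config;
    }
}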
The ProcessRequest() method starts out by checking to see if this image is being requested from a remote host by determining if the referrer HTTP header's Host and the image URL's Host match up. If not, then the image is being requested from a page on a different Web server, and the image specified in the ImageConfiguration class's ForbiddenFilePath property is returned instead. Otherwise, the code checks to see if the AddWatermark property is true and, if it is, watermarks the image using the specified WatermarkText.
To use the ImageHandler HTTP handler in an ASP.NET Web application you'll need to first add the ImageConfiguration properties through the Web.config in the <ImageHandler> section, and then include <add> elements in the <httpHandlers> section associating the .gif and .jpg extensions with the HTTP handler. We examined the <ImageHandler> section above, so let's just look at the <httpHandlers> section. As the following shows, you need two <add> elements—one mapping .gif files to the handler, and one mapping .jpg files.
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
... <ImageHandler> section ...
<system.web>
<httpHandlers>
<!-- ImageHandler handlers -->
<add verb="*" path="*.jpg"
type="skmHttpHandlers.ImageHandler, skmHttpHandlers" />
<add verb="*" path="*.gif"
type="skmHttpHandlers.ImageHandler, skmHttpHandlers" />
</httpHandlers>
...
</system.web>
</configuration>
In addition to adding the <httpHandlers> section to your Web.config file you must also configure IIS to send all requests to .gif and .jpg files to the aspnet_isapi.dll ISAPI Extension. If you forget to do this, then anytime a request comes in for a GIF or JPEG file, IIS will handle the request itself. We need to add this mapping to the IIS metabase so that when a GIF or JPEG request comes in, it is routed to the ASP.NET engine, which will then route the request to the ImageHandler HTTP handler.
Now that we've examined how to create the HTTP handler and configure a Web application (and IIS) to use the handler, let's see the handler in action! Figure 6 shows a Web page displaying two images of my dog Sam. Notice that the images are watermarked with the specified watermark text.
Figure 6. Watermarked images
Figure 7 shows a Web page that attempts to access an image from another server. For this example, the Web page contains an <img> tag whose src attribute points at an image hosted on mitchellsvr (the name of my computer), while the page itself is served from a different host. When the page is requested, the Host in the referrer HTTP header sent by the browser differs from the Host in the image's requested URL. The ImageHandler HTTP handler detects the difference between the two and therefore displays the STOP-STEALING-MY-BANDWIDTH.gif image.
Figure 7. Limiting image requests
All Web developers have created a single page that displays different data based on some set of parameters, such as a Web page that displays information about an employee based on the employee ID passed through the querystring. While offering a /DisplayEmployee.aspx?EmpID=EmployeeID Web page is one way to show employee information, you might want to be able to provide employee information using a more memorable URL, like /employees/name.info. (With this alternate URL, to see information about employee Jisun Lee you'd visit /employees/JisunLee.info.)
There are a couple of techniques that can be used to achieve a more readable and memorable URL. The first is to use URL rewriting. In a previous article of mine, URL Rewriting in ASP.NET, I showed how to perform URL rewriting using HTTP modules and HTTP handlers. URL rewriting is the process of intercepting a Web request to some non-existing URL and rerouting the request to an actual URL. URL rewriting is commonly used for providing short and memorable URLs. For example, an eCommerce Web site might have a page titled /ListProductsByCategory.aspx, which lists all products for sale in a specific category, where the category of products to display is indicated by a querystring parameter. That is, /ListProductsByCategory.aspx?CatID=PN-1221 might list all widgets for sale. With URL rewriting you could define a "pseudo" URL like /Products/Widgets.aspx, which really doesn't exist. When a request comes into the ASP.NET engine for /Products/Widgets.aspx, URL rewriting would transparently reroute the request to /ListProductsByCategory.aspx?CatID=PN-1221. The user, however, would still see /Products/Widgets.aspx in their browser's Address bar. URL rewriting in the ASP.NET engine is typically accomplished by using the RewritePath(newURL) method of the HttpContext class, which can be employed in either an HTTP module or an HTTP handler.
The second technique is to create an HTTP handler that knows how to render .info files. This HTTP handler would display employee information by examining the requested URL and picking out the employee's name. Having the employee's name, a quick lookup to a database table would retrieve information about the employee in question. The final step would be to somehow render this information as HTML markup.
In the ImageHandler example we saw how an HTTP handler can inspect the URL of the requested resource, so picking out the employee's name and accessing her information from a database should be relatively straightforward. The real challenge lies in rendering the employee information. The simplest approach would be to hard-code the HTML output in the HTTP handler, using Response.Write() statements to emit the precise HTML markup, inserting the employee's information where needed. (This behavior is akin to that of the first HTTP handler example we looked at, SimpleHandler.)
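For illustration only (the class name and markup here are invented, not taken from the article), such a hard-coded handler might look like this:

using System.Web;

public class HardCodedEmployeeHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/html";
        context.Response.Write("<html><body>");
        context.Response.Write("<h1>Employee Information</h1>");
        // ... emit the employee's name, SSN, and biography here ...
        context.Response.Write("</body></html>");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}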
A better approach is to use an ASP.NET page as a template to separate the code and content. To accomplish this you'll need to first create an ASP.NET Web page in your Web application. This page should have a mix of HTML markup and Web controls, like any other ASP.NET Web page. For our example, we'll be displaying an employee's name, social security number, and a brief biography. This page will therefore have three labels for these three fields. Figure 8 shows a screenshot of the ASP.NET Web page DisplayEmployee.aspx, when viewed in the Design tab in Visual Studio .NET.
Figure 8. Employee Information page
Next, we need to create an HTTP handler factory. An HTTP handler factory is a class that implements the System.Web.IHttpHandlerFactory interface, and is responsible for returning an instance of a class that implements IHttpHandler when invoked. A Web application is configured to use an HTTP handler factory in the exact same way that it is configured to use an HTTP handler. Once a Web application maps an extension to an HTTP handler factory, when a request comes in for that extension the ASP.NET engine asks the HTTP handler factory for an IHttpHandler instance. The HTTP handler factory provides such an instance to the ASP.NET engine, which can then invoke this instance's ProcessRequest() method. The ASP.NET engine requests an IHttpHandler instance by calling the HTTP handler factory's GetHandler() method.
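For reference, the interface itself is small; the declaration below mirrors the one in System.Web:

public interface IHttpHandlerFactory
{
    IHttpHandler GetHandler(HttpContext context, string requestType,
                            string url, string pathTranslated);
    void ReleaseHandler(IHttpHandler handler);
}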
For this example, the GetHandler() method needs to do two things: first, it needs to determine the name of the employee being requested and retrieve his employee information. Following that, it must return an HTTP handler that can serve the ASP.NET Web page we created earlier (DisplayEmployee.aspx). Fortunately, we don't need to create an HTTP handler to serve the ASP.NET Web page—there already exists one in the .NET Framework that can be retrieved by calling the System.Web.UI.PageParser class's GetCompiledPageInstance() method. This method takes in three inputs: the virtual path to the requested Web page, the physical path to the requested Web page, and the HttpContext used in requesting this Web page. The full details of how GetCompiledPageInstance() works are not essential to understand; realize, though, that the method compiles the ASP.NET HTML portion into a class, if needed, and returns an instance of this class. (This autogenerated class is derived from your code-behind class, which is derived from the System.Web.UI.Page class. The Page class implements IHttpHandler, so the class being returned from GetCompiledPageInstance() is an HTTP handler.)
The last challenge facing us is how to pass along the employee information from the HTTP handler factory to the ASP.NET Web page template. The HttpContext object contains an Items property which can be used to pass information between resources that share the same HttpContext. Therefore, we'll want to store the employee's data here.
The last step is to return to the ASP.NET Web page template (DisplayEmployee.aspx). In the template's code-behind class we need to retrieve the employee information from the Items property of the HttpContext and assign the employee data to the respective Label Web controls in the page's HTML portion.
For the demo included with this code's download, I provide a set of classes in the EmployeeBOL project that provide a class representation of an employee (Employee), along with a class for generating employees based on their name (EmployeeFactory). The HTTP handler factory's GetHandler() method is shown below. Notice that it first determines the employee's name and adds the Employee object returned by EmployeeFactory.GetEmployeeByName() to the Items collection of HttpContext. Finally, it returns the IHttpHandler instance returned by PageParser.GetCompiledPageInstance(), passing in the DisplayEmployee.aspx ASP.NET Web page template as the physical file path.
public class EmployeeHandlerFactory : IHttpHandlerFactory
{
...
public IHttpHandler GetHandler(HttpContext context,
string requestType, string url, string pathTranslated)
{
// determine the employee's name
string empName =
Path.GetFileNameWithoutExtension(
context.Request.PhysicalPath);
// Add the Employee object to the Items property
context.Items.Add("Employee Info",
EmployeeFactory.GetEmployeeByName(empName));
// Get the DisplayEmployee.aspx HTTP handler
return PageParser.GetCompiledPageInstance(url,
context.Server.MapPath("DisplayEmployee.aspx"), context);
}
}
The code-behind class for the DisplayEmployee.aspx ASP.NET Web page template accesses the Employee class instance from the Items property of HttpContext and assigns the Label Web controls' Text properties to the corresponding Employee properties.
public class DisplayEmployee : System.Web.UI.Page
{
// three Label Web controls in HTML portion of page
protected System.Web.UI.WebControls.Label lblName;
protected System.Web.UI.WebControls.Label lblSSN;
protected System.Web.UI.WebControls.Label lblBio;
private void Page_Load(object sender, System.EventArgs e)
{
// load Employee information from context
Employee emp = (Employee) Context.Items["Employee Info"];
if (emp != null)
{
// Assign the Employee properties to the Label controls
lblName.Text = emp.Name;
lblSSN.Text = emp.SSN;
lblBio.Text = emp.Biography;
}
}
}
To configure the Web application to use the HTTP handler factory, place an <add> element in the <httpHandlers> section, just like you would for an HTTP handler:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<system.web>
<httpHandlers>
<!-- EmployeeHandlerFactory -->
<add verb="*" path="*.info"
type="skmHttpHandlers.EmployeeHandlerFactory,
skmHttpHandlers" />
</httpHandlers>
</system.web>
</configuration>
Of course, since this HTTP handler factory uses the .info extension, you'll also need to map the .info extension in IIS to the aspnet_isapi.dll ISAPI Extension. Figure 9 shows a screenshot of the employee HTTP handler factory in action.
Figure 9. Using the Employee Information handler
The benefit of this template approach is that if we wanted to change the Employee Information page, we'd just need to edit the DisplayEmployee.aspx page. There'd be no need to alter the HTTP handler factory, or to have to recompile the HTTP handler factory assembly.
Note .Text, an open-source blog engine written in C# by Scott Watermasysk, uses HTTP handler factories to provide URL driven content. .Text provides a much more in-depth template system than the one I presented in this article. It has a single master template page, DTP.aspx, which is used as the template for all requests. This template, though, can be customized based on the type of request made. For example, if a request is made to a URL like /blog/posts/123.aspx, .Text can determine that based on the URL path (/posts/) the user is asking to view a particular post. Therefore, the master template page is customized (at runtime) by loading a set of User Controls specific to displaying a single blog post. If, on the other hand, a request comes in for /blog/archive/2004/04.aspx, .Text can determine based on the path (/archive/2004/XX.aspx) that you want to view all posts for a given month (April 2004 in this example). The master template page will therefore have those User Controls loaded which are pertinent to displaying a month's entries.
Like ISAPI Extensions in IIS, HTTP handlers provide a level of abstraction between the ASP.NET engine and the rendering of Web content. As we saw, an HTTP handler is responsible for generating the markup for a particular type of request, based on a specified extension. HTTP handlers implement the IHttpHandler interface, providing all the heavy lifting in the ProcessRequest() method.
This article looked at three real-world HTTP handler scenarios: a code formatter, an image protector, and an HTTP handler factory for providing templated, URL driven content. Some other uses for HTTP handlers include:
If you come up with other cool ideas or uses for HTTP handlers, I invite your comments at this blog entry: REQUEST: Ideas for a Useful HTTP Handler Demo?
Happy Programming! | http://msdn.microsoft.com/en-us/library/ms972953.aspx | crawl-002 | en | refinedweb
Charlie Russel
Microsoft MVP for Server 2003 Interoperability
Abstract
This paper describes the features and benefits of Microsoft® Windows® Services for UNIX (SFU) 3.5, the award-winning interoperability toolkit from Microsoft. SFU enables Windows and UNIX clients and servers to share network resources, integrates account management, simplifies cross-platform management, and provides a full UNIX scripting and application execution environment that runs natively on Windows.
Contents: Introduction; Interoperability Requirements; Features of Services for UNIX; Summary; Related Links
Microsoft® Windows® Services for UNIX (SFU) 3.5 allows Windows–based and UNIX–based computers to share data, security credentials, and scripts. And SFU 3.5 includes technology to provide a high performance scripting and application execution environment that enables UNIX applications and scripts to be retargeted to run natively on Windows.
Administrators are looking for solutions to integrate a heterogeneous network and share information seamlessly between their Windows and UNIX systems. Users should not face impediments when moving among networked computers running different operating systems. Businesses are looking for ways to use their investments in UNIX applications, resources, and expertise while minimizing the total cost of ownership (TCO) as they move forward.
The TCO of Windows–based computers is compelling. Microsoft Windows XP and Microsoft Windows Server™ 2003 have added new features and have improved security, reliability, availability, and scalability. CPU performance continues to advance at exponential rates, as does the price–performance of the PC as a standardized, high-volume server and workstation platform. Businesses have integrated Windows–based computers into their traditionally UNIX–based enterprise networks, and Windows–based systems are being used in concert with and as a replacement for UNIX–based computers.
Businesses have significant investments in both UNIX–based and Windows–based applications, databases, and business processes, so there is a need for comprehensive integration between these two environments. Staff skilled in one environment need to be able to translate experience and knowledge to the other so that they can work constructively there. SFU delivers the protocol support, interoperability tools, execution environment, and administrative framework to make doing so as simple as possible.
SFU’s primary objective is to provide interoperability tools that bridge the gap between UNIX and Windows for users, administrators, and developers. As a result, enterprise networks can be created in which resources can be shared seamlessly. Access to resources is determined by enterprise policies and must accommodate the sharing of credentials, authorization, and authentication information from either the Windows or UNIX domain.
The design goals that shape SFU are:
Seamless sharing of data between Windows and UNIX network protocols.
Remote command-line access to both Windows–based computers and UNIX–based computers using existing UNIX practices and protocols.
Heterogeneous network administration, including common directory management and user password synchronization.
Full UNIX scripting support on Windows, including shells, utilities, hard and soft (symbolic) links, and a single rooted file system
High performance application development and execution environment to permit easy retargeting of key business applications.
A single, integrated installation process.
Simplicity of administration and management for all SFU components.
This paper describes SFU’s features and benefits in typical deployment scenarios.
SFU easily and seamlessly integrates networks that have both UNIX–based and Windows–based computers. This section describes the issues faced in the environment, and the next section explains how SFU’s features address those issues.
In a seamless logical network, users share resources regardless of location. Access to resources is controlled by corporate policies and is independent of implementation technology. Users can access applications and databases residing anywhere in a network. Because the two systems use different authentication mechanisms, however, users have separate user identities for each system, even when their user names appear identical. These different identities and their separate passwords are a problem for both users and administrators. In a logical enterprise network, users have a single sign-on mechanism to access resources transparently on both systems. Without it, access to resources on UNIX from a Windows–based computer, or vice versa, requires separate authentication. Consequently, the network is split into disjointed Windows–based and UNIX–based domains, creating an artificial barrier that divides a single enterprise network.
Many applications that were previously available only on UNIX–based computers are now available on Windows computers, while existing UNIX applications are being ported to Windows. Similarly, new high-end workstation applications are being developed on Windows.
With many UNIX developers now working on applications for Windows–based computers, they expect a UNIX-like environment to ease the transition. UNIX users expect the shell and tools, such as grep, find, ps, tar, and make, to support the more complex scripting requirements they’ve come to depend on. The availability of a familiar environment on Windows is needed to reduce the cost of learning different tool sets. The availability of UNIX-like tools also helps increase the efficiency of UNIX programmers when they switch to Windows software development.
The system administration tools and mechanisms on Windows and UNIX are significantly different. UNIX system administrators typically use command-line tools and shell scripts, while Windows system administrators rely primarily on graphical tools supplemented by command-line scripts using the Windows Scripting Host (WSH). Without a common language or toolset, administration and management of heterogeneous enterprise networks is a challenge.
For administrators, managing two different networks entails a significant workload in both acquiring and maintaining the requisite skills, adding significantly to overall IT costs. Administrators of a heterogeneous network need similar mechanisms and tools to administer both Windows–based and UNIX–based computers. Such tools should allow remote administration as well as automation of repetitive administration tasks.
In addition to the differences in administration mechanisms, UNIX and Windows operating systems use different directories to store users, groups, and other network objects. Managing them separately and keeping them synchronized is a time-consuming burden.
Adding, changing, or deleting network resources such as users and devices requires changes to two separate directories using two separate processes. This duplication of effort increases costs and reduces efficiency while greatly increasing the risk of inconsistencies.
Windows–based network administrators increasingly use scripting and command-line tools to simplify routine system management tasks. GUI-based tools are useful for novice users to help discover and increase their productivity quickly. However, command-line tools and scripts are required for automating repetitive tasks and ensuring consistent results. Scripting is a useful mechanism for experienced users seeking increased productivity through reuse and automation.
SFU provides a single, comprehensive package to meet the interoperability requirements described above. SFU implements the following features:
File sharing between UNIX and Windows.
Symbolic and hard links on NTFS and NFS file systems.
Single rooted file system.
Common network administration by providing NIS server functionality using the Windows Active Directory® service.
Password synchronization between Windows and UNIX. Includes precompiled binaries for Solaris 7 and 8, HP-UX 11i, Redhat Linux 8.0, and IBM AIX 5L 5.2, and source code to support compilation on other platforms.
Installation using Microsoft Windows Installer.
Administration of SFU components and services using Microsoft Management Console (MMC) or a fully scriptable command line
Management of SFU components using Windows Management Instrumentation (WMI).
Installation on computers running Windows 2000, Windows XP Professional, and Windows Server 2003.
Compatibility with a variety of UNIX–based computers, but specifically tested against Solaris 7 and 8, HP-UX 11i, Redhat Linux 8.0, and IBM AIX 5L 5.2.
SFU supports the NFS file sharing protocol, versions 2 and 3, and provides three separate NFS components: Server for NFS, Client for NFS, and Gateway for NFS. In addition, the User Name Mapping service allows access control based on existing Windows and UNIX identities.
Server for NFS is a Windows–based NFS server sharing files using NFS exports. Server for NFS allows NFS clients to access files on Windows–based computers. For the UNIX–based NFS client, this process is completely transparent and requires no additional software to be installed on the UNIX client. File access is regulated by user and group identities (UID and GID). Server for NFS supports NFS exports from CDFS and NTFS file systems only. FAT and FAT32 are not supported.
NFS versions 2 and 3 support. Server for NFS allows UNIX and other NFS clients to access files stored on Windows file servers and provides complete support for NFS version 2 and 3. It supports NFS file locking specified by the Network Lock Manager (NLM) protocol.
Simple sharing. Server for NFS provides an easy way to share directories and set NFS access permissions on Windows–based computers. The NFS Sharing tab is a GUI interface accessible from the directory context menu (by right clicking the directory in Windows Explorer.) The command-line utility, nfsshare, allows scripted share management from either Windows or UNIX shells. NFS access permissions can be set to read, read/write, and to control root access to individual computers or groups of computers.
Access control and authentication. Server for NFS integrates UNIX and Windows access control mechanisms in a natural manner. The credentials and permissions of both local users and domain users are honored. UNIX UIDs and GIDs presented using the NFS protocol are mapped to a corresponding Windows security principal (user or group identity), known as a SID, by the SFU User Name Mapping service. File access is managed by using a mapped user’s context. Server for NFS Authentication provides authentication of NFS requests. This process ensures that a user’s file access is consistent with the UID/GID settings of the UNIX computer and prevents circumvention of the Windows security settings.
Simple administration. Server for NFS provides both graphical and command-line administration tools with options for configuring server settings and for logging all NFS activities. It also enables monitoring and reclaiming of NFS locks.
Client for NFS enables Windows computers to access files and directories that are exported from NFS file servers and requires no additional software on UNIX hosts. Client for NFS supports NFS versions 2 and 3 and has the following features:
Simple access mechanism. For a user, accessing an NFS export is the same as accessing a Windows share. Users browse NFS servers in the NFS network by using Windows Explorer and can access NFS exports by either mapping them to a drive letter or using Universal Naming Convention (UNC) names. NFS exports can also be accessed by using the net or mount commands. (Windows implementations of the showmount and umount commands are also provided.)
Authentication. Authentication of NFS requests uses the User Name Mapping service that provides single sign-on for Windows users accessing NFS exports and takes advantage of the Windows authentication process to access NFS resources. NFS requests are sent using the UID or GID of a mapped user, and thus a Windows user operates in the context of the mapped UNIX user when accessing NFS resources. Users with accounts on both UNIX and Windows receive the same privileges whether accessing files from a UNIX NFS client or from a Windows NFS client.
Performance tuning. Administrators can tune the performance characteristics of an NFS mount by using the administration tools of Client for NFS. They can set or change the read/write buffer size and select soft or hard mount. In addition, the Autotune tool helps detect optimum read/write buffer sizes for connecting to a particular NFS share.
Gateway for NFS provides access to NFS exports without requiring additional software on downstream Windows clients. It does so by acting as a gateway between the Windows network and the UNIX (NFS) network.
Gateway for NFS mounts NFS exports and then exposes them as Windows shares. Windows computers on the network access the NFS exports indirectly by using standard windows file sharing protocols, and they see the NFS exports as shared Windows drives on the Gateway Server.
Gateway for NFS also uses the User Name Mapping service to map Windows credentials to UNIX UIDs or GIDs before forwarding file access requests to NFS servers. Each gateway request from a separate user is properly identified and forwarded under that user's mapped UNIX credentials.
The User Name Mapping service is an SFU component providing bidirectional, one-to-one, and many-to-one mapping among UNIX UIDs/GIDs and Windows user and group identities (SID).
Note: The many-to-one mapping allows many UNIX identities to map to a single Windows identity, but not the reverse.
User Name Mapping administration, performed through the graphical user interface or a command-line tool, mapadmin, associates user names or identifications between the Windows–based and UNIX–based domains. All the NFS components of SFU utilize the User Name Mapping service. The User Name Mapping service can be installed on any machine that has SFU installed. User Name Mapping equates an authenticated Windows user’s identity, or SID, to a UID/GID pair. Client for NFS and Gateway for NFS use User Name Mapping to control a Windows user’s access to NFS resources, while Server for NFS uses the mappings when processing NFS requests originating from UNIX–based computers. Such requests contain UNIX UID/GIDs. (It is the Windows NT Security subsystem that actually performs the authorization checks.)
User Name Mapping can be deployed as a service on any server connected to the network, simplifying administration and deployment. All SFU NFS components that use the central User Name Mapping service receive consistent access to NFS resources from anywhere on a network. User Name Mapping also provides the following features:
Support for NIS or PCNFS. User Name Mapping retrieves UNIX user names from an NIS domain or reads them from PCNFS-style passwd and group files. With support for NIS, User Name Mapping can be used with very little disruption to the rest of the NIS infrastructure. For Windows users, it obtains user names from the Windows Domain Controllers. It also periodically refreshes user names from both UNIX and Windows domains.
Support for simple and advanced mappings. By default, User Name Mapping equates Windows domain users and UNIX users with the same names. Additionally, administrators can map users with different Windows and UNIX names (or multiple Windows user names to the same UNIX user name) using advanced mappings.
Squashing support. User name mapping supports the ability to squash a Windows or UNIX user’s identity—that is, to map all or selected users to an unmapped user context. (Unmapped users are assigned a UID/GID pair of “nobody” which is –2/–2 by default. Squashing and the squash to UID/GID value can be specified on a per mount basis.) This feature is useful to override automatically mapped users (due to simple mappings) or those users who should be explicitly squashed. This behavior results in a squashed user being explicitly treated as an anonymous user.
Group mapping. User Name Mapping also maps groups (GID to SID and vice versa) between Windows and UNIX. Thus, group membership and access rights are preserved.
SFU includes both a Windows–based and Interix–based Telnet server and client. Telnet is a TCP/IP-based protocol that allows platform independent remote terminal access through a network or dialup connection.
The SFU Windows Telnet client and server are installed on Windows 2000 machines as part of SFU, but are not installed on Windows XP and Windows Server 2003, because those operating systems already include essentially the same Telnet client and server. However, SFU can administer the included Telnet server. In addition to ANSI, VT100, and VT52 terminal support, the Windows Telnet client and server also support the VTNT terminal type to provide access to all functions of the Windows command console.
Windows NT LAN Manager (NTLM) authentication is supported to allow logon without sending a password over the network. Traditional plain-text logons are also supported. Telnet server supports both console mode and stream mode, and console mode supports screen-oriented programs such as vi. Stream mode operates similarly to a UNIX dumb terminal and is not suitable for use with programs such as vi.
Telnet server logs a variety of telnet-related activities, such as auditing, monitoring Telnet sessions, and sending messages to Telnet connections. Telnet server enables a user to keep applications running even after disconnecting.
Interix includes a Telnet daemon (server), telnetd, that can be enabled instead of the Windows–based Telnet server. The Interix Telnet server is more UNIX-like in its behavior and administration. The Interix telnetd is started by inetd and establishes a remote session under the Interix subsystem. More details can be found in the appropriate man pages.
SFU includes Server for NIS, based on Active Directory. Server for NIS is installed on a Windows 2000 Server or Windows Server 2003 domain controller and can be used as a master NIS server to administer a UNIX NIS domain by using the NIS 2.0 protocol. It supports both UNIX–based NIS subordinate (slave) servers as well UNIX–based NIS clients.
Server for NIS stores NIS objects in Active Directory. Further, any users common to both UNIX and Windows networks can be represented uniquely in Active Directory. Doing so creates a common namespace, reducing the administrative overhead involved in managing two separate namespaces and directories. Using Server for NIS has the following benefits:
Use of Active Directory to store NIS data. Active Directory has the advantages of a secure data store, multi-master data replication, and schematized data storage and access. Through Active Directory, NIS data may be accessed by using the COM-enabled Active Directory Services Interface (ADSI) and LDAP protocols, in addition to NIS tools, such as ypcat, ypget, and others.
Migration of NIS domains to Server for NIS . Server for NIS includes the NIS Migration Wizard, a tool to migrate NIS domains or maps. In addition to simple domain migrations, multiple domains can be merged into existing NIS domains. This tool simplifies the migration of existing UNIX NIS domains to Active Directory domains.
Supports yppasswd and user password synchronization. Server for NIS keeps Windows and UNIX passwords synchronized. Whenever a user’s Windows password is changed, the corresponding UNIX password is also changed. At the same time, it supports yppasswd, which enables UNIX users to change their passwords from UNIX client computers. However, if a password change is initiated from a UNIX NIS client, the Windows password is not changed unless Password Synchronization (see the next section) is also installed on the UNIX system. Password Synchronization keeps passwords in sync between Windows domains and the UNIX computers that administrators select. For UNIX-to-Windows synchronization, the ssod.conf file controls the password synchronization behavior. Configuration of the Windows-to-UNIX synchronization uses the SFU Administration snap-in.
Password change requests can be restricted to specific users. Administrators can stipulate users for whom passwords should be synchronized and others whose passwords should not be synchronized. The password change requests from Windows to UNIX and UNIX to Windows are encrypted by using strong encryption. Password Synchronization uses the Triple-DES algorithm. The source code for the UNIX components, PAM and SSOD, is included with SFU.
SFU 3.5 includes Interix, a complete POSIX development environment tightly integrated with the Windows kernel. Both Korn and C shells are included, and the Bash shell is available as a free download. With more than 350 UNIX utilities and a single rooted file system, SFU provides a familiar environment for UNIX developers, users, and administrators.
The Interix SDK includes more than 2,000 APIs that are consistent with the ISO 9945-1 / ANSI IEEE 1003.1 POSIX specification. As a result, existing business applications and scripts can be retargeted without a performance penalty.
SFU is well integrated with Windows and utilizes many underlying Microsoft technologies. Supported technologies include:
Microsoft Windows Installer. Provides a consistent installation across a wide variety of Microsoft products and supports custom installation scripting, adding and removing features and repair of an existing installation.
Microsoft Management Console (MMC). Provides a consistent and feature-rich management console that supports remote management and customization.
Windows Management Instrumentation (WMI). Provides programmatic and scripting access to automate data gathering and system configuration.
Windows Explorer interface. Provides a consistent look and feel for NFS client and server operations, enabling users to use and share NFS resources transparently.
SFU 3.5 provides a comprehensive set of tools to bridge the gap between UNIX and Windows computers for both users and administrators. With SFU, a consistent, logical enterprise network can be created in which resources are shared seamlessly and access control is determined by enterprise policies instead of the platform.
SFU includes many benefits, such as the following:
Seamless sharing of data between servers and clients running either a UNIX or Windows operating system.
Remote command-line access to both UNIX or Windows computers from either Windows or UNIX computers.
Scripting and application development. The Interix subsystem technology provides familiar UNIX shells and utilities, along with a full set of UNIX APIs.
Heterogeneous network administration, including password synchronization.
Simple, integrated installation.
Easy-to-use, single-point administration and management of SFU components.
See the following resources for more information:
For the latest Open Source tools compiled for SFU, and a forum for developers discussing porting issues, see the Interop Systems website at.
For more information about configuring and using SFU and to share your interoperability stories with other users, visit the online community of the Usenet newsgroup at news://msnews.microsoft.com/microsoft.public.servicesforunix.general.
For the latest information about the Windows Server System family of products, see the Windows Server System website at.
For the latest information about Windows Services for UNIX 3.5, see the Services for UNIX website at. | http://technet.microsoft.com/en-us/library/bb463212.aspx | crawl-002 | en | refinedweb |
.
At this point, we're getting really close to PDC and I can't wait. At PDC, I'm going to go through some examples of the new formats in all three applications (Word, Excel, and PowerPoint). I'll continue to talk about Office 2003 as well, but there will be more focus on the 12 formats from that point on. That's still a few weeks away though, so I figured today I would still focus on Office 2003. I want to write really quickly about Excel's ability to map XML structures as both the input and output of a spreadsheet. In the Intro #2, I showed how you could use an XSLT to transform your data into SpreadsheetML for rich display. Most folks who read that and knew about the XML support in Excel 2003 realized that there was a much easier way to do this. You can use the XML mapping functionality to completely skip the XSLT step, which makes it a lot easier.
Let's start with that same example we used in Part 2. Take this XML and save it to a file on your desktop:
<?xml version="1.0"?>
<p:PhoneBook xmlns:p="...">
  <p:Entry>
    <p:FirstName>Brian</p:FirstName>
    <p:LastName>Jones</p:LastName>
    <p:Number>(425) 123-4567</p:Number>
  </p:Entry>
  <p:Entry>
    <p:FirstName>Chad</p:FirstName>
    <p:LastName>Rothschiller</p:LastName>
    <p:Number>(425) 123-4567</p:Number>
  </p:Entry>
</p:PhoneBook>
Open up a blank Excel spreadsheet (you need to be using a version of Excel 2003 that supports custom defined schema), and go to the Data menu. Find the XML flyout, and choose XML Source. The XML Source task pane should now be up on the side. It's currently blank because we haven't specified an XML schema to map yet. Click on the XML Maps... button and it will bring up a dialog that lets you specify the XML schema that you want to map. Click the Add... button and find the XML file you saved to your desktop. You will be notified that there isn't a schema but that Excel can infer a schema for you. In this example we're starting with an XML instance, so we want Excel to infer a schema. We could have also just started with a schema file if we had that. Go ahead and press OK, and you will now have a tree view of the inferred schema in the XML Source task pane.
Click on the node for the Entry element and drag it out onto the spreadsheet. This will map the child nodes and give them titles. After doing this, you've told Excel where you want the elements to be mapped to in the grid. You can change the titles of the columns if you want so that they have a more user friendly title. By default they have the namespace prefix and element name in the title.
Now that the nodes have been mapped, you can tell Excel to import the data. Right click on the mapped region, navigate to the XML fly-out menu, and select Refresh XML data. That will import the data from your XML file. The region that the data was imported into has a blue border around it. This is a new feature in Excel 2003 called a "list". A list is a structured region in Excel that consists of repeating content. The list was automatically generated for us when we mapped the Entry element into the spreadsheet.
Now that we have our list mapped to the XML schema, we can also choose to import multiple XML files at once if you have a couple XML files that adhere to your schema. Just make a copy of the XML file you saved to the desktop, open it in notepad and make some changes. Now let's import both of the files. Right-click on the list and under the XML flyout choose Import... Now just select both of your XML files and hit OK. Now both sets of data are imported into the list.
If you want to export your data, it's just as easy. Right click on the list again and this time under the XML flyout choose Export... You can choose to export to a brand new XML file, or to overwrite one of the files you imported.
This example shows how easy it is to bring your own XML data into Excel, work on it, and then output it back into its original XML schema. One common use I've seen of this functionality is that people will have two schemas. The first schema is used to import a huge data set that comes from a web service or some other external data source. Using the XML mapping functionality you can bring that data into Excel, and then run whatever models you want to on the data. The 2nd schema is used to map the results of the model in Excel. Map the result regions with the 2nd schema, and use that to export the results as XML. This allows Excel to serve as a very powerful transformation tool with rich UI. It's pretty cool.
-Brian | http://blogs.msdn.com/brian_jones/archive/2005/08/25/456298.aspx | crawl-002 | en | refinedweb |
I left one thing unsaid in the serialization rules and Aaron's sharp eyes caught it promptly. As he mentioned in his blog, mixing interface programming model (such as IXmlSerializable or ISerializable) with DataContract programming model is disallowed in V1. Here is an example of such a class.
[DataContract]
public class XmlDataContractType : IXmlSerializable
{
    [DataMember]
    public int MyDataMember;

    // IXmlSerializable members (stub implementations shown so the sample compiles)
    public System.Xml.Schema.XmlSchema GetSchema() { return null; }
    public void ReadXml(System.Xml.XmlReader reader) { /* ... */ }
    public void WriteXml(System.Xml.XmlWriter writer) { /* ... */ }
}
As Aaron mentioned, we could have chosen IXmlSerializable in this case. I agree that it is natural for the interface to trump the attribute, since the interface impacts derived types as well. However, that would mean the DataContract attribute has no effect at all. On the other hand, picking DataContract enables scenarios where a type is serialized with multiple serializers. Both IXmlSerializable and ISerializable are programming models that are also supported by other serializers (XmlSerializer and BinaryFormatter). A user might want to make a type IXmlSerializable or ISerializable to be used by ‘legacy’ serializers and make it a DataContract type for WCF. But this approach fails as soon as we consider inherited types. Consider two derived types of XmlDataContractType as defined below.
On the other hand, picking DataContract enables scenarios where a type is serialized with multiple serializers. Both IXmlSerializable and ISerializable are programming models that are also supported by other serializers (XmlSerializer and BinaryFormatter). A user might want to make a type IXmlSerializable or ISerializable to be used by ‘legacy’ serializers and make it a DataContract type for WCF. But this approach fails as soon as we consider inherited types. Consider two derived types of XmlDataContractType as defined below.
public class DerivedXmlType : XmlDataContractType { }

[DataContract]
public class DerivedDataContractType : XmlDataContractType { }
The class DerivedXmlType does not define a DataContract but is IXmlSerializable by virtue of inheritance. In this case the IXmlSerializable projection of XmlDataContractType should be used. Since DerivedDataContractType is a DataContract, it will end up using the DataContract projection of XmlDataContractType. We definitely did not want to encourage this dual projection of XmlDataContractType.
I don't understand why you don't want to encourage dual projection of XmlDataContractType. Let the developer decide.
Since you can use [DataMember] and [XmlElement] attributes on the same property of an entity, you actually do allow dual projection.
This is not consistent. Is there any better reason?
| http://blogs.msdn.com/sowmy/archive/2006/05/14/597476.aspx | crawl-002 | en | refinedweb
Algorithms, functional programming, CLR 4.0, and of course, F#!
With..
Hi Chris, thanks, for this example.
VS 2008, F# 1.9.6.0.
Just downloaded, unzipped, Built...
Error 1 The tag 'TankGame' does not exist in XML namespace 'clr-namespace:BurnedLandGame;assembly=BurnedLandGame'. Line 8 Position 4. C:\Users\Art\Documents\MS F#\F# CTP samples AUG08\Burnedland\BurnedLand\BurnedLandUI\Window1.xaml 8 4 BurnedLandUI
For the data binding on the WPF side to work, the F# library needs to be built first.
If you build the F# project first and then open the WPF designer (or click 'Reload'), things should work as you expect. Is that not the behavior you are seeing?
I've had issues with removing VSLab F# 1.9.4... maybe this is what I'm seeing. Prob needed to confirm that is OK first.
Regedit'd to fully uninstall VSLab. BurnedLand Builds OK now; Vista VS 2008 F# 1.9.6.0. Thanks.
| http://blogs.msdn.com/chrsmith/archive/2008/09/04/simple-f-game-using-wpf.aspx | crawl-002 | en | refinedweb
He's not as angry as he looks
People have been discovering that the VS Team System profiler can collect allocation data for an application. It isn't long after that they discover that it only works on managed code, not native. Sadly, the documentation is not clear on this.
The memory alloction profiling support in VSTS uses the profiler API provided by the CLR. This gives us a rich set of information that allows us to track the lifetimes of individual objects. There is no such off-the-shelf support in native memory management, since there are nearly as many heap implementations as there are applications in the world.
Memory allocation profiling is a much bigger deal for managed code, as the CLR has effectively turned what used to be a correctness issue (leaks, double frees, etc) into performance issues (excessive GCs and memory pressure)*. This does not mean that some kind memory allocation profiling wouldn't benefit the world, but the combination of it being less important and more difficult keeps it out of the product for now.
If you think you are facing memory issues in native code, there is at least one utility I can offer up: In the server resource kits, the vadump utility can give you information about your virtual address space, including memory allocated by VirtualAlloc.
Unfortunately, this is a pale shadow (if that) of what you can get from the managed side.
* The idea that correctness issues become performance issues as we develop more advanced runtimes was something I heard David Detlefs mention somewhere.
Note that both the native and managed versions return zero on success, so it is possible to detect whether the call succeeded.
On the forums, someone was using the /INCLUDE option in VsInstr.exe. It is possible to use multiple instances of this option to include different sets of functions. For a big chunk of functions, you might want to use dozens of function specifications. Who the heck wants to do all that typing? You could make a batch file, but a response file would be better.
Response files are text files where each line in the file is a single command line option. Since each line holds exactly one option, quotes are not necessary. They are much easier to edit than a batch file (which would have single, really long line). To use a response file, simply use @filename on the command line of the tool. For example:
VsPerfCmd /start:sample "/output:c:\Documents and Settings\AngryRichard\foo.vsp" "/user:NETWORK SERVICE"
can be turned into a response file like this:
Startup.rsp:
/start:sample
/output:c:\Documents and Settings\AngryRichard\foo.vsp
/user:NETWORK SERVICE
VsPerfCmd @Startup.rsp
All of the command line tools for the profiler accept response files. It beats all that error prone typing, and if you run scenarios from the command line a lot, it can pay to have some response files laying about for common scenarios.
I've just posted an article on the pitfalls of profiling services with the Visual Studio profiler. It includes a sample service with a quick walkthrough. Enjoy.
Profiling Windows™ Services with the Visual Studio Profiler
Typically, one can use the sampling profiler to nail down the hot spot in an application. Having done that, what does one do when the sampling data doesn't provide enough information? The trace profiler can offer up more detail, particularly if the issue revolves around thread interaction. However, if you profile a heavily CPU bound application, you may find that you are getting huge trace files, or that the profiler is significantly impacting the performance of your application. The VisualStudio profiler offers a mechanism to stem the avalanche of data.
I'll illustrate the general idea with an example.
Suppose we'd like to run trace profiling on the following highly useful piece of code. We've decided that we only care about profiling in the context of the function "OnlyProfileThis()"
using System;
public class A
{
    private int _x;

    public A(int x)
    {
        _x = x;
    }

    public int DoNotProfileThis()
    {
        return _x * _x;
    }

    public int OnlyProfileThis()
    {
        return _x + _x;
    }

    public static void Main()
    {
        A a;
        a = new A(2);
        Console.WriteLine("2 square is {0}", a.DoNotProfileThis());
        Console.WriteLine("2 doubled is {0}", a.OnlyProfileThis());
    }
}
The VisualStudio profiler provides an API for controlling data collection from within the application. For native code, this API lives in VSPerf.dll. A header (VSPerf.h) and import library (VSPerf.lib) provided in the default Team Developer install allows us to use the profiler control API from native code. For managed code, this API is wrapped by the DataCollection class in Microsoft.VisualStudio.Profiler.dll. We can update the example to the following to control the profiler during our run:
using System;
using Microsoft.VisualStudio.Profiler;

public class A
{
    private int _x;

    public A(int x)
    {
        _x = x;
    }

    public int DoNotProfileThis()
    {
        return _x * _x;
    }

    public int OnlyProfileThis()
    {
        return _x + _x;
    }

    public static void Main()
    {
        A a;
        a = new A(2);

        int x;
        Console.WriteLine("2 square is {0}", a.DoNotProfileThis());

        DataCollection.StartProfile(
            DataCollection.ProfileLevel.DC_LEVEL_GLOBAL,
            DataCollection.PROFILE_CURRENTID);

        x = a.OnlyProfileThis();

        DataCollection.StopProfile(
            DataCollection.ProfileLevel.DC_LEVEL_GLOBAL,
            DataCollection.PROFILE_CURRENTID);

        Console.WriteLine("2 doubled is {0}", x);
    }
}
We still need to instrument the application as normal. We also have one additional step. When running the code above, data collection will be enabled by default, so the API won't appear to do anything beyond stopping data collection after the call to OnlyProfileThis. We need to disable data collection before running the application. The profiler control tool, VsPerfCmd, has options to do this.
To run this scenario from the command line:
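The exact invocation depends on your configuration, but a typical sequence (binary and output names assumed here) looks something like the following; for managed binaries you may also need to set up the CLR profiling environment with VsPerfCLREnv before launching the application.

VsInstr ControlledApp.exe
VsPerfCmd /start:trace /output:ControlledRun.vsp /globaloff
ControlledApp.exe
VsPerfCmd /shutdown

The /globaloff option ensures collection starts disabled, so only the region bracketed by StartProfile and StopProfile is recorded.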
StartProfile(ProfileLevel level, UInt32 id)
StopProfile(ProfileLevel level, UInt32 id)

Starts or stops data collection at the given profiling level for the given process or thread id (PROFILE_CURRENTID selects the current one).
SuspendProfile(ProfileLevel level, UInt32 id)
ResumeProfile(ProfileLevel level, UInt32 id)
This works very much like StartProfile and StopProfile, however, calls to these functions are reference counted. If you call SuspendProfile twice, you must call ResumeProfile twice to enable profiling.
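For example (a sketch using the managed DataCollection wrapper named earlier), the second SuspendProfile call below must be balanced by two ResumeProfile calls before collection restarts:

DataCollection.SuspendProfile(DataCollection.ProfileLevel.DC_LEVEL_GLOBAL,
                              DataCollection.PROFILE_CURRENTID);
DataCollection.SuspendProfile(DataCollection.ProfileLevel.DC_LEVEL_GLOBAL,
                              DataCollection.PROFILE_CURRENTID);
// ... work we do not want in the trace ...
DataCollection.ResumeProfile(DataCollection.ProfileLevel.DC_LEVEL_GLOBAL,
                             DataCollection.PROFILE_CURRENTID);
// still suspended here; the count is back down to one
DataCollection.ResumeProfile(DataCollection.ProfileLevel.DC_LEVEL_GLOBAL,
                             DataCollection.PROFILE_CURRENTID);
// the count reaches zero, so data collection resumes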
MarkProfile(Int32 markId)
CommentMarkProfile(Int32 markId, String comment)
CommentMarkAtProfile(Int64 timeStamp, Int32 markId, String comment)
Inserts a 32-bit data value into the collection stream. Optionally, you can include a comment. With the last function, the mark can be inserted at a specific time stamp. The id value and the optional comment will appear in the CallTrace report.
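For instance (the mark ids and comment text are invented here), bracketing a region of interest with comment marks makes it easy to find in the CallTrace report:

DataCollection.CommentMarkProfile(100, "Entering checkout loop");
// ... region of interest ...
DataCollection.CommentMarkProfile(101, "Leaving checkout loop");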
The profiler control tool provides similar functionality through a command line interface, though the notion of a "current" process or thread id is obviously not relevant.
VSPerfCmd /?
-GLOBALON Sets the global Start/Stop count to one (starts profiling).
-GLOBALOFF Sets the global Start/Stop count to zero (stops profiling).
-PROCESSON:pid Sets the Start/Stop count to one for the given process.
-PROCESSOFF:pid Sets the Start/Stop count to zero for the given process.
-THREADON:tid Sets the Start/Stop count to one for the given thread. Valid only in TRACE mode.
-THREADOFF:tid Sets the Start/Stop count to zero for the given thread. Valid only in TRACE mode.
-MARK:marknum[,marktext] Inserts a mark into the global event stream, with optional text. [...]
If you find yourself buried under a ton of trace data, investigate the profiling API to help focus on the important parts of your application.
I've frequently heard the question asked, "Can I use the profiler on a Virtual PC?" It has even come up on the blog feedback a few times. My answer has always been, "Theoretically, yes." I didn't want to post this answer externally until I'd actually gotten around to trying it myself.
I've finally been nagged into it.
In my limited experience with our VirtualPC product, it has quite impressed me with its functionality. However, it does not emulate the hardware performance counters upon which the profiler implicitly depends. For this reason, you cannot run the sampling profiler using a performance counter based interrupt. My colleague Ishai pointed out* that you should be able to use page-fault or system-call based sampling, but the VPC has a different problem with these modes that is still under investigation.
Instrumentation based profiling will work on the VPC. However, as I've already mentioned, there is a bug check issue with the driver when it unloads. Fortunately, instrumentation based profiling doesn't rely on the presence of the driver.
By renaming the driver to prevent the profiling monitor from installing it at startup, I was able to use instrumentation based profiling on the VPC. This is obviously just a workaround, but I hope this will allow you to investigate some of our tools in the comfort of a VirtualPC environment.
Here is how you can prevent the driver from loading on your VPC installation.
Happy hunting!
* I'm pretty sure Ishai was hired to point out things I do wrong. Fortunately, he usually points out solutions as well.
I had a nice long email chat with members of the Virtual PC team.
The good news: The Virtual PC emulates the host processor well enough that our kernel-mode driver can detect what features are enabled.
The bad news: The Virtual PC does not emulate an APIC or performance counters.
So, if you were planning on running the profiler inside a Virtual PC, the best you can hope to do is get function trace data on an instrumented app. Sampling will not work at all, and collecting perfomance counter data in the instrumentation will fault the application.
Bummer. I wish I had better news.
I'm so pleased. Someone did something exciting and dangerous with the profiler. In case you're not reading the newsgroups, an intrepid customer tried to profile on a Virtual PC, and discovered that it only leads to pain and misery via the BSOD.
So don't do that.
Seriously, is this something people want to do? I mean, VPC is about the coolest thing ever, but we do use hardware performance counters by default, and VPC is not exactly a real life environment for performance analysis and measurement. Still, maybe y'all have good reasons for this.
At the very least, we'll fix that BSOD thing. I mean, how 1990's is that?
This is an excellent time to point out that the profiler does in fact install a kernel-mode device driver in order to play with the hardware counters on your Intel and AMD processors. There are some fun implications from this:
A bunch of the guys on the team I work for have been starting up blogs. I started feeling left out, which made me very angry.
It appears all blogs start with "Hi, I'm a developer who does X and I'm going to talk about Y and maybe Z." It's all part of Microsoft's new image -- we're transparent now.
Transparency's good, right?
Go down to your local German car dealer, or, if you own a newer German car, go out to your driveway. Shiny, pretty, isn't it? Open the hood. Look at that -- a big, fat, intake pipe that goes into a big box that says "BMW" in a 4" Century Gothic font. Cool. Now pop that plastic thing off the top. Go ahead, I dare you. Not so pretty now. Look at all those wires and tubes. Look at all that stuff you could cut yourself on. That's why that plastic thing is there; it makes owning all that power a little less scary.
Some of us spend all our time under the shiny plastic thing, hands full of wires and tubes and spark plugs. That'd be me and some of my less 'transparent' friends, working on the instrumentation and data collection engine in the profiler.
Of course, with great power, comes great potential for disaster. Some day you'll have your turn with the profiler. Trust me, something's always too slow. If it works, great. If you find yourself upside down in a ditch, tell us, we want to know where we need to cover up the sharp edges. | http://blogs.msdn.com/angryrichard/default.aspx | crawl-002 | en | refinedweb |
Max Feingold's blog
PDC05
The other day I mentioned some of the issues surrounding the appropriate scenarios for atomic transactions deployment. I also mentioned the compensating model for long-running activities. Choosing between these models inside distributed applications is a common dichotomy when building real-life systems.
Atomic transactions are a fundamental building block solution for a vast number of software problems. It is difficult to overstate the benefits provided to developers by the atomic transaction model, in terms of simplifying the reliability, concurrency and error handling of a transactional program. However, atomic transactions require strong assurances of trust and relative liveness between all participants. In scenarios where these assurances are not possible, a more loosely-coupled model must be used.
An example of such a scenario is the painfully real-life sequence of events and decisions with which anyone building a business application backend will be familiar. A business may wait for supplies ordered from different vendors, require human input to make decisions, await confirmations from customers, tolerate partial failures, resubmit orders, offer customers alternative products, etc. When such a sequence is cancelled, compensating actions are applied to undo every action that was taken as part of the sequence of events leading up to the cancellation. Both actions and compensating actions are likely to use atomic transactions in their internal implementations, but that is a detail at a lower level of granularity than the high-level business process.
While the end result of a compensating workflow may be a concrete outcome or result, it would be limiting to label the entire set of states, events and decisions a “transaction”. In fact, the use of that word in this context is so misleading that you should immediately slap anyone who uses “transaction” to refer to a long-running activity. An atomic transaction is an aggregate set of operations that either complete or fail in unison, usually in timeframes approaching a few milliseconds. Furthermore, an aborted transaction leaves no changes in its wake: the final state of the world is necessarily identical to the initial state. On the other hand, a cancelled workflow activity is likely to leave the world state in the most equivalent state possible to the initial state. The quality of that equivalence will depend on the nature of the actions and compensating actions taken during the workflow activity.
The greater complexity of the compensating model leads to an interesting conclusion. From the user’s perspective, an infrastructure that provides support for atomic transactions can provide a highly transparent interface. However, an infrastructure that provides support for compensating activities has no obvious transparent interface.
A good example of the former is the new System.Transactions namespace in .NET 2.0, which provides a simple and lightweight veneer on top of what is actually a highly complex system for propagating and coordinating distributed transactions. All of the complexity is buried underneath the veneer: in a very real sense, we implemented System.Transactions and MSDTC so that you didn’t have to.
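To give a rough sense of that veneer, here is a minimal sketch of the System.Transactions programming model; the two helper methods are placeholders for real transactional work, not anything from MSDTC itself:
using System;
using System.Transactions;

class TransferExample
{
    static void Main()
    {
        // Everything inside the using block becomes part of one atomic transaction.
        // If Complete() is not called (say, because an exception is thrown),
        // all enlisted work rolls back when the scope is disposed.
        using (TransactionScope scope = new TransactionScope())
        {
            DebitAccount();   // placeholder: e.g., ADO.NET work that enlists automatically
            CreditAccount();  // placeholder: more enlisted work

            scope.Complete(); // vote to commit
        }
    }

    static void DebitAccount() { }
    static void CreditAccount() { }
}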
On the other hand, a framework for long-running compensating activities is likely to be an exercise in delegation to user code: both the logic that makes forward progress and the logic that performs compensating operations fall within the domain of the user’s business needs. Consequently, a compensating infrastructure is far less able to isolate a user from his own system’s complexity.
With that said, the workflow modeling environment provided by a sister component of Indigo/WCF, the Windows Workflow Foundation, provides an excellent basis for designing applications that use the compensating model for long-running activities. The combination of WCF for distributed messaging and WWF for state machine and message exchange pattern (MEP) design, as well as general implementation modeling, is extraordinarily powerful.
As WCF and WWF evolve towards their eventual release next year, you should expect to see guidelines and best practices from us on how to build distributed compensating workflows using these components.
I should point out that I enjoyed last night’s Birds of a Feather session. It was a lively and spirited discussion. I appreciated the opportunity both to correct some misconceptions and to hear some good feedback. Thank you for that, William.
A.
Those of you who pay attention to these things might have noticed that we recently published new versions of the WS-Transaction family of specifications. These versions are dated August 2005 and supersede the previous versions from October 2004.
You can find the new specifications on MSDN: WS-Coordination, WS-AtomicTransaction, and WS-BusinessActivity.
So what's new in these specifications, you might ask? Great question. We made a number of changes, all for good:
We've added some new co-proposers, including representatives from Hitachi, IONA and Arjuna. These companies have been involved in our WS-Transaction interoperability events for a long time and have made significant contributions in this space.
We addressed one of the more common interoperability problems by defining specific actions for WS-C and WS-AT fault messages, instead of leveraging the WS-Addressing fault action. This allows implementations to easily distinguish faults generated by the actual protocols from faults generated by the infrastructure or other binding-specific elements.
We also made the fault message schemas more precise and clarified the meaning of each standard fault code.
We clarified the use of WS-Trust in the WS-C security model for authorizing participants. Instead of leaving the precise details of this composition to the implementation, we specified the use of the <wst:IssuedTokens> header to propagate security tokens alongside coordination contexts. This should greatly simplify the interoperability of the model.
Last, but far from least, we removed the old (and rather obsolete) WS-Policy sections from all three specifications, introducing a new policy model for WS-AT and WS-BA. It is now possible to create policy documents for applications that specify in an interoperable manner the requirements of a given web service with respect to coordination context flow. For example, by using the new policy assertions, one can now write a WS-AT-aware service that uses WS-Policy to tell clients whether they MUST, MAY or SHOULD NOT flow WS-AT transactions to the service. This is an area in which proprietary mechanisms and frameworks have been used forever, so having this policy in place is a highly significant step for the WS-* model and for web service interoperability in general.
One interesting fact: while we corrected some minor problems with the 2004/10 schemas, we did not make any changes that were significant enough to merit a namespace change. Consequently, the good old namespace and friends are still current. Our intention here was to preserve the interoperability gains we made in our January 2005 interoperability event, preserving the possibility of continuing to interoperate with implementations that targeted the previous version of the specifications.
And yes, in case you were wondering, Indigo (the Windows Communication Foundation) has supported WS-AT since its first public releases and Beta 2 will be no exception. We have spent a lot of time working on transaction flow functionality, security and performance in these last few months, and that includes a fully secure and interoperable implementation of WS-AT. I was tempted to post what would apparently be the first snippet of code + configuration demonstrating how to enable WS-AT transaction flow between services, but I think I should probably wait until our public disclosure at the PDC. Let me just say that switching to WS-AT as your transaction flow protocol is a single line of configuration. Enabling a service for interoperable WS-AT transaction flow is as simple as selecting an interoperable standard binding and turning on transaction flow with another line of configuration.
Did I mention that I love the transaction flow features in WCF Beta 2? I'm sure I have. One of these days, I'll spend some time talking about the most interesting parts of WS-C and AT and why I think they're so important.
Speaking of the PDC, if you're going to be there and you'd like to talk to me about WCF transactions, or if you have questions about (or plans for !) WS-Transaction implementations or interoperability, drop me an email and let me know that you'll be looking for me there.
I suspect that I have managed to set a record for the longest silence after an initial blog post. Except, that is, for all the people who posted once and never came back.
Like all poor students, I do have some excuses. I was on vacation in Alaska, an utterly excellent sojourn which I may talk about sometime if there’s nothing else to write down. Suffice to say that if you have a chance to drive the Denali Highway, just do it. Never mind those car rental agencies and their pesky girly-man warnings. We had the time of our lives out there. Just make sure you slow down for other cars and respect the ptarmigans as they slowly moonwalk across the road.
My other excuse is that our team has been working like ants to meet the dates for our next milestone. It has not been a true death march, although there were some minor casualties. Our mission? To boldly make Indigo Beta 2 safe for human use. Obviously nothing can be said about this until the Los Angeles PDC, but even so, I should warn you: the transactions support is really good. Much better than in Beta 1, in the same way that Tom Yum is a much better soup than Caldo Verde.
Speaking of the PDC, I will be attending this year. Not as a talker like some of my more illustrious co-workers, but as a free-lancer eager to see what other Microsoft teams are working on and what our customers make of it all. Maybe I’ll even see you there.
Hello, world. Be kind. Be fair.
I should introduce myself, just to get that part out of the way.
My name is Max Feingold. I am a development lead on Microsoft’s Transactions team, which is a core component of the Indigo team. I work mostly on transaction protocols, most notably on the WS-Coordination and WS-AtomicTransaction specifications. I am also involved in a number of Indigo features, including Indigo’s COM+ integration support, the transaction flow model, the transactional programming model and so forth.
As the last developer in building 42 to begin a blog, I’m not entitled to give you the long explanation of what Indigo is. The short explanation is this: Indigo is not love; it’s a healthy long distance relationship. It’s also an interoperable messaging system, an application server, an enterprise framework and (in a mystical sort of way) a reincarnation of COM+.
So what is it that I do at work? I used to debug and write code. I still do that, but these days find myself slipping more and more into various roles requiring more maturity and savoir faire: designer, enabler, driver, writer, negotiator, process manager, people manager, general sinkhole of responsibility... Life was simpler when I could just play on the monkey bars. These days I have to worry about helping those crazy kids when they hurt themselves.
Next time, I’ll talk about the things I worked on before Indigo came along. | http://blogs.msdn.com/esperpento/default.aspx | crawl-002 | en | refinedweb |
Designing Reusable Frameworks!
Thank you for just referring to lambda expressions as a shorthand for anonymous delegates. It drove me nuts trying to grasp the concept early on due to all the space dedicated to the origins in the lambda calculus (it served to confuse more than inform). In fact, that goes for a lot of these guidelines; I would have saved a bit of time/frustration if these were around a few months ago rather than the long drawn out MSDN articles introducing LINQ. Straight and to the point is great (lambdas are delegates, LINQ is a bunch of extension methods, the nature of Expression<>, and so on).
One thing, though: for us language purists, could you include alternate method call versions of sample code instead of just the language extensions? I find the not-quite-SQL format of the LINQ part to be more confusing than helpful due to the odd structuring of the SELECTs.
Don't seem to be able to comment against that post so I'll post here...
Just wondering whether there should be explicit treatment of extension methods against enumerations in the guidelines. It seems to me that it will be quite common to see something like:
public enum MyEnum {
ValueOne, ValueTwo
}
public static class MyEnumExtensions {
public static string ToHumanReadableString(this MyEnum myEnum) {
// A real implementation would map each value to a friendly display string.
return myEnum == MyEnum.ValueOne ? "Value one" : "Value two";
}
}
According to the guidelines, this is bad because the extensions are in the same namespace as the enumeration. But is it really such a bad thing for enumerations? They're a special case, IMO - these members cannot be added directly to them.
| http://blogs.msdn.com/kcwalina/archive/2008/03/12/8178467.aspx | crawl-002 | en | refinedweb |
This year at Tech·Ed 2005 in Orlando, FL, USA we will not only have individual sessions covering internationalization, but a whole virtual track! We have everything from pre-conference sessions covering the basics for someone getting started to in-depth technical sessions about new Whidbey features. I'll be talking about custom cultures and international data together with Matt Ayers. And we aren't completely done planning yet - we might add hands-on labs and webcasts, stay tuned.
I'm looking forward to meet you there, answer your questions and listen to your suggestions.
Register before April 15 and get the Tech·Ed 2005 early-registration rate of just $1,695.
Michele wrote a thorough whitepaper about our new ASP.NET 2.0 localization functionality. Great job! Thanks for the kudos!
She also did a webcast on the same topic last week - mainly focusing on ASP.NET 1.x, but also covering the new features for v2.0. Michele is certainly providing the best guidance and up-to-date information on the ASP.NET Internationalization today.
The command prompt in Windows has a number of limitations when it comes to dealing with Unicode – it does not support complex scripts (like Arabic, Hebrew, Indic languages), the raster font selected by default depends on the system locale and only supports glyphs for the current OEM system codepage, redirected output cannot be encoded in Unicode, TrueType fonts used in the command prompt do not do font linking etc. Therefore when you output a string with System.Console.Write[Line] it gets converted to the console codepage for the command prompt.
When creating a multilingual command line application with .NET Framework v2.0 how can you make sure to load resources that are actually displayable? Here is code that demonstrates this (it is using some new features only available in Whidbey):
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading;
using System.Globalization;
using System.Resources;
using System.Reflection;
[assembly: NeutralResourcesLanguageAttribute("en")]
namespace loccons
{
class Program
{
static void Main(string[] args)
{
Thread.CurrentThread.CurrentUICulture = CultureInfo.CurrentUICulture.GetConsoleFallbackUICulture();
if ( (System.Console.OutputEncoding.CodePage != 65001) &&
(System.Console.OutputEncoding.CodePage !=
Thread.CurrentThread.CurrentUICulture.TextInfo.OEMCodePage) &&
(System.Console.OutputEncoding.CodePage !=
Thread.CurrentThread.CurrentUICulture.TextInfo.ANSICodePage) )
{
Thread.CurrentThread.CurrentUICulture = new CultureInfo("en");
}
Console.WriteLine(loccons.strings.txtHello);
}
}
}
This code assumes that you have added strongly typed resources with the base name strings to your project and that the default resources are in U.S. English (this could be any other language too). The resources need to contain a string resource with the name txtHello.
The code first eliminates complex scripts from the UI culture by calling GetConsoleFallbackUICulture – this function falls back to a language most appropriate for complex script cultures, e.g. to French if Arabic (Morocco) was the original UI culture. For non-complex script UI cultures this line is a no-op.
After that the code makes sure that the selected UI culture is actually displayable in the console the code is running in – if not it is falling back to the default UI culture.
Some remarks on how you can actually tweak the console in a limited way to allow display of a language other than the one prescribed by the system locale:
If you are going to Amsterdam next week and are interested in globalization/localization check out these sessions:
You can read more detailed descriptions here (hm, why don't they allow deep linking to the session descriptions?)
Yesterday in the morning I saw a new Cisco ad on TV advertising their newest color IP phone - in Italian with English subtitles. It seems to be a trend these days to do non-English ads. Does this mean the US becomes more multi-lingual now? Beats me, the advertisers must know something I don't - then again, they always do. My favorite non-English TV ad is still the Chinese Office ad by FedEx with no subtitles at all!
Compare this to the mostly English ads that you see in Europe - maybe people aren't so scared of globalization after all.
Christian Forsberg wrote an article on how to create multi-lingual and globalized .NET CF applications. Nice. The sample application looks a bit similar to the WorldClock sample application I wrote a while ago, but of course with more info and without using complicated to set up custom controls.
Note that you can also use the form properties Localizable and Language in the designer to create multi-lingual forms. What that doesn't provide by default is a language switching menu and a dynamic refresh of the form with a newly selected language. You can resize the forms in the designer though and the language user interface switches along with the device language on multi-lingual Windows Mobile 2003 devices.
| http://blogs.msdn.com/achimr/ | crawl-002 | en | refinedweb |
I've been reading a few comments and questions floating around in the Silverlight ethos around DataBinding and basically the how. The most common examples I've seen usually show the Binding capabilities within XAML but don't really break open the "code-behind" or more to the point, "how to bind to a custom model with code only".
Here's how (think ShoppingCart as your example):
using System.ComponentModel;   // needed for INotifyPropertyChanged

// Placeholder marker interface assumed by the post; it has no members here.
public interface IModel { }

public class ShoppingCart : INotifyPropertyChanged, IModel
{
    string mycartname;

    public string MyCartName
    {
        get { return mycartname; }
        set { mycartname = value; Notify("MyCartName"); }
    }

    // boilerplate INotifyPropertyChanged code
    void Notify(string propertyName)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }

    public event PropertyChangedEventHandler PropertyChanged;
}
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;

// The page class is named ShoppingCartPage here so it doesn't collide with the
// ShoppingCart model class above; the x:Class in the matching XAML (which
// declares txtData and btnUpdateCart) uses the same name.
public partial class ShoppingCartPage : UserControl
{
    ShoppingCart sh;

    public ShoppingCartPage()
    {
        InitializeComponent();
        sh = new ShoppingCart();

        Binding bnd = new Binding("MyCartName");
        bnd.Mode = BindingMode.OneWay;
        bnd.Source = sh;
        txtData.SetBinding(TextBox.TextProperty, bnd);
        sh.MyCartName = "I was added after Binding";

        // Bind the RoutedEvent so a button click updates the model attribute only.
        btnUpdateCart.Click += new RoutedEventHandler(btnUpdateCart_Click);
    }

    void btnUpdateCart_Click(object sender, RoutedEventArgs e)
    {
        sh.MyCartName = "I was added after a Click";
    }
}
That's it, DataBinding 101 with Code Only - Look mah, no XAML.
I'm going to explore more about DependencyProperty later next week as I think we haven't covered them off as well as we could, no biggy, it's still beta 2 so all is forgiven.
Any suggestions, feedback around DataBinding please drop me a comment or email and will be more than happy to discuss them.
| http://blogs.msdn.com/msmossyblog/archive/2008/06/21/silverlight-how-to-bind-controls-to-your-own-custom-model.aspx | crawl-002 | en | refinedweb |
Basetypes, Collections, Diagnostics, IO, RegEx...
I see a lot of complaints about counters in the Logical Disk/Physical Disk categories always returning zero when you try to read them. Specifically, using PerformanceCounter.NextValue with these counters always returns zero:
% Disk Read Time
% Disk Write Time
% Idle Time
% Disk Time
Avg. Disk Queue Length
Avg. Disk Read Queue Length
Avg. Disk Write Queue Length
Two things happen when you call NextValue. First, we read the raw value of the performance counter. Second, we do some calculations on the raw value based on the counter type that the counter says it is. If the calculated value is always zero, either we failed to read the counter or we failed to calculate it properly. If we failed to read, well, you're out of luck and there's probably no way to work around the problem. But if we failed to calculate it properly, then you have the option of doing the calculation yourself.
For the "% ..." counters, unfortunately the bug is in the first step of reading the counters, so doing the calculations yourself won't help. However, the "Avg Disk ..." simply have bugs in their calculations, and I'm going to walk through the process of doing the calculations manually.
First, you need to find the counter type. The first thing I found was that the Avg counters appear to be of type PERF_COUNTER_LARGE_QUEUELEN_TYPE; after some trial and error, I discovered that they are actually PERF_COUNTER_100NS_QUEUELEN_TYPE.
The next step is to figure out what the calculation for PERF_COUNTER_100NS_QUEUELEN_TYPE is. A quick MSDN search yields the documentation for this counter type, which tells us that the calculation looks like (X1-X0)/(Y1-Y0), where X is the counter data and Y is the 100ns time.
Now we can finally implement the calculation ourselves:
public static double Calculate100NsQueuelen(CounterSample oldSample, CounterSample newSample)
{
    ulong n = (ulong) newSample.RawValue - (ulong) oldSample.RawValue;
    ulong d = (ulong) newSample.TimeStamp100nSec - (ulong) oldSample.TimeStamp100nSec;
    return ((double) n) / ((double) d);
}
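Here is a sketch of how that helper might be used end to end; the one-second pause between samples is just an illustrative choice, and Calculate100NsQueuelen is the method defined above, repeated so the snippet stands on its own:
using System;
using System.Diagnostics;
using System.Threading;

class QueueLengthSample
{
    static void Main()
    {
        // "_Total" aggregates across all physical disks; a specific instance name works too.
        PerformanceCounter counter = new PerformanceCounter(
            "PhysicalDisk", "Avg. Disk Queue Length", "_Total");

        CounterSample first = counter.NextSample();
        Thread.Sleep(1000);   // let some disk activity accumulate between samples
        CounterSample second = counter.NextSample();

        Console.WriteLine(Calculate100NsQueuelen(first, second));
    }

    public static double Calculate100NsQueuelen(CounterSample oldSample, CounterSample newSample)
    {
        ulong n = (ulong) newSample.RawValue - (ulong) oldSample.RawValue;
        ulong d = (ulong) newSample.TimeStamp100nSec - (ulong) oldSample.TimeStamp100nSec;
        return ((double) n) / ((double) d);
    }
}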
This process is somewhat of a pain, as documentation is not always good and it can be difficult to figure out which values to use in the calculations. But if you compare your results to what perfmon reports and keeping trying, you can get it eventually. | http://blogs.msdn.com/bclteam/archive/2005/03/15/395986.aspx | crawl-002 | en | refinedweb |
David Chappell
Chappell & Associates
September 2006
Applies to:
Windows Vista
Windows Presentation Foundation
Microsoft .NET Framework 3.0
Summary: The primary goal of Windows Presentation Foundation (WPF) is to help developers create attractive and effective user interfaces. Learn how the WPF unified platform helps make designers active participants in creating user interfaces, and provides a common programming model for standalone and browser applications. (34 printed pages)
Describing Windows Presentation Foundation
Illustrating the Problem
Addressing the Problem: What Windows Presentation Foundation Provides
Using Windows Presentation Foundation
The Technology of Windows Presentation Foundation
Applying Windows Presentation Foundation
Tools for Windows Presentation Foundation
For Developers: Visual Studio
For Designers: Expression Interactive Designer
Windows Presentation Foundation and Other Microsoft Technologies
Windows Presentation Foundation and Windows Forms
Windows Presentation Foundation and Win32/MFC
Windows Presentation Foundation and Direct3D
Windows Presentation Foundation and AJAX/"Atlas"
Windows Presentation Foundation and "WPF/E"
Conclusion
About the Author
By definition, technical people care most about technology. Many software professionals are far more interested in how an application works than in how it interacts with its users. Yet those users—who are, after all, the ones paying for all of this—care deeply about user interfaces. An application's interface is a major part of the complete user experience with that software, and to its users, the experience is the application. Providing a better user experience through a better interface can improve productivity, help create loyal customers, increase sales on a Web site, and more.
Once happy with purely character-based interfaces, users have now become accustomed to graphical interfaces. Yet the requirements for user interfaces continue to advance. Graphics and media have become more widely used, and the Web has conditioned a generation of people to expect easy interaction with software. The more time people spend interacting with applications, the more important the interfaces to those applications become. To keep up with increasing expectations, the technology used to create user interfaces must also advance.
The goal of Windows Presentation Foundation (WPF) is to provide these advances for Windows. Included in version 3.0 of the Microsoft .NET Framework, WPF allows building interfaces that incorporate documents, media, two- and three-dimensional graphics, animations, Web-like characteristics, and much more. Like everything else in the .NET Framework 3.0, WPF will be available for Windows Vista, Windows XP, and Windows Server 2003, and it's scheduled to be released when Windows Vista ships. This paper introduces WPF, describing its various parts. The goal is to make clear the problems this technology addresses, then survey the solutions that WPF provides.
Suppose a hospital wants to create a new application for examining and monitoring patients. The requirements for this new application's user interface might include the following:
Present patient information using a combination of text, images, and two- and three-dimensional graphics on a single screen.
Show live video, such as an ultrasound feed, and let clinicians annotate it.
Display documents, such as treatment notes and current medical research, in a highly readable format that also supports annotations.
These requirements are ambitious, but they're not unreasonable. User interfaces that present the right information in the right way at the right time can have significant business value. In situations such as the health care example described here, they can actually save lives. In less critical scenarios, such as on-line merchants or other consumer-oriented applications, providing a powerful user experience can help differentiate a company's offerings from its competitors, increasing both sales and the value of the firm's brand. The point is that many modern applications can benefit from providing interfaces that integrate graphics, media, documents, and the other elements of a modern user experience.
Building this kind of interface on Windows is possible with the technologies of 2006, but it's remarkably challenging. Some of the major hurdles are:
Creating a modern interface requires combining several distinct technologies, each with its own programming model.
Designers and developers have no effective way to work together on the interface itself.
Interfaces for standalone Windows applications and for Web browsers must be built in entirely different ways, even when they provide much the same functionality.
There's no inherent reason why creating powerful, modern user interfaces should be so complex. A common foundation could address all of these challenges, offering a unified approach to developers while letting designers play an important role. As described next, this is exactly the intent of WPF.
Three aspects of what WPF provides stand out as most important. They are: a unified platform for modern user interfaces, the ability for developers and designers to work together, and a common technology for Windows and Web browser user interfaces. This section describes each of these three.
In a pre-WPF world, creating a Windows user interface like the one described earlier would require using several different technologies.
To create the forms, controls, and other typical aspects of a Windows graphical user interface, a developer would most likely choose Windows Forms, part of the .NET Framework. If the interface needs to display documents, Windows Forms has some support for on-screen documents, while fixed-format documents might use Adobe's PDF. For images and two-dimensional graphics, that developer will use GDI+, a distinct programming model that is also accessible via Windows Forms. To display video and audio, he might rely on Windows Media Player, and for three-dimensional graphics, he'll use Direct3D, a standard part of Windows.
This complicated situation exists solely for historical reasons. No one would argue that it makes much sense. What does make sense is to provide a single unified solution: WPF. Developers creating applications for machines with WPF installed will likely use it to address all the areas listed above. After all, why not use one coherent foundation for creating user interfaces rather than a diverse collection of independent technologies?
WPF doesn't replace everything on this list, of course. Windows Forms applications will continue to have value, and even in a WPF world, some new applications will continue to use Windows Forms. (It's worth noting that WPF can interoperate with Windows Forms, something that's described in more detail later in this paper.) Windows Media Player continues to have an independent role to play, and PDF documents will continue to be used. Direct3D also remains an important technology for games and some other kinds of applications. (In fact, WPF itself relies on Direct3D for all rendering.)
Yet by providing a broad range of functionality in a single technology, WPF can make creating modern user interfaces significantly easier. To get a sense of what this unified approach allows, here's a typical screen that a WPF-based version of the health care application described above might present to a user:
Figure 1. A WPF interface can combine images, text, 2D and 3D graphics, and more.
This screen contains text and images along with both two- and three-dimensional graphics. All of this was produced using WPF—the developer doesn't need to write code that uses specialized graphics technologies such as GDI+ or Direct3D. Similarly, WPF allows displaying and perhaps annotating video, such as the ultrasound feed shown below.
Figure 2. A WPF interface can include video, allowing the user to make text annotations.
WPF also allows displaying documents in a readable way. In the hospital application, for instance, a physician might be able to look up notes about a patient's treatment or access current medical research on a relevant topic. Again, the physician might be able to add annotations as the screen below shows.
Figure 3. A WPF interface can display multi-column documents, including annotations.
Notice that the document is displayed in readable columns and that the user can move through it a page at a time rather than by scrolling. Improving on-screen readability is a worthy aim, and it's an important goal of WPF. Useful as on-screen documents are, however, fixed-format documents can sometimes be the right choice. Because they look the same on screen and on a printer, fixed-format documents provide a standard look in any situation. To define this type of document, Microsoft has created the XML Paper Specification (XPS). WPF also provides a group of application programming interfaces (APIs) that developers can use to create and work with XPS documents.
Yet creating modern user interfaces means more than just unifying what were once diverse technologies. It also means taking advantage of modern graphics cards, and so WPF exploits whatever graphics processing unit (GPU) is available on a system by offloading as much work as possible to it. Modern interfaces also shouldn't be constrained by the limitations of bit-mapped graphics. Accordingly, WPF relies entirely on vector graphics, allowing an image to be automatically resized to fit the size and resolution of the screen it's displayed on. Rather than creating different graphics for display on a small monitor and a big-screen television, the developer can let WPF itself handle this.
By unifying all of the technologies required to create a user interface into a single foundation, WPF can make life significantly simpler for the people who create those interfaces. By requiring those people to learn only a single environment, WPF can make creating and maintaining applications less expensive. And by making it straightforward to build interfaces that incorporate graphics, video, and more, WPF can improve the quality—and business value—of how users interact with Windows applications.
Providing a unified technology foundation for creating full-featured user interfaces is a good thing. Yet expecting developers to use this power wisely, creating comprehensible, easy-to-use interfaces, is probably asking too much. Creating good user interfaces, especially when they're as comprehensive as the hospital example just described, often requires skills that most software professionals just don't have. Even though many applications are built without them, the truth is that building great user interfaces requires working with professional interface designers.
But how can designers and developers work together? The way the two disciplines interact today is problematic. Most commonly, a designer uses a graphical tool to create static images of the screen layouts that an application should display. Those images are then handed to developers, who must translate them into working code, a slow and error-prone process. To improve on this, WPF relies on the eXtensible Application Markup Language (XAML). XAML defines a set of XML elements such as Button, TextBox, Label, and many more to define exactly how a user interface looks. XAML elements typically have attributes as well, allowing various options to be set. For example, this simple XAML snippet creates a red button containing the word "No":
<Button Background="Red">
No
</Button>
Each XAML element corresponds to a WPF class, and each of that element's attributes has a corresponding property or event in the class. For example, the same red button could be produced with this C# code:
Button btn = new Button();
btn.Background = Brushes.Red;
btn.Content = "No";
If everything expressible in XAML is also expressible in code—and it is—what's the value of XAML? The answer is that building tools that generate and consume XML-based descriptions is much easier than doing the same thing with code. Because XAML offers a tool-friendly way to describe a user interface, it provides a better way for developers and designers to work together. The figure below illustrates the process.
Figure 4. XAML helps designers and developers work together.
A designer can specify how a user interface should look and interact using a tool such as Microsoft Expression Interactive Designer. Oriented entirely toward defining the look and feel of a WPF interface, this tool generates a description of that interface expressed in XAML. (While it might include a simple button like the example shown here, this description is in fact much more complex than the snippet above might suggest.) A developer then imports that XAML description into a development tool such as Visual Studio, working directly with the designer's output rather than recreating it, and adds the code the interface requires. It's also possible to create styles that can be globally applied to an application's interface, allowing it to be customized as needed for different situations.
Enabling designers and developers to work together like this reduces the translation errors that tend to occur when developers implement interfaces from designer-created images. It can also allow people in these two roles to work in parallel, with quicker iteration and better feedback. And because both environments use the same build system, a WPF application can be passed back and forth between the two development environments. More specialized tools for designing XAML-defined interfaces are also available, such as Electric Rain's ZAM 3D for creating three-dimensional interface elements.
Better user interfaces can increase productivity—they have measurable business value. Yet to create truly effective interfaces, especially in the multi-faceted world that WPF provides, designers must become first-class citizens in the process. A primary goal of XAML and the tools that support it is to make this possible.
Creating effective user interfaces for Windows applications is important. Yet creating effective interfaces for Web-based applications is at least as important. By definition, these interfaces are provided by a Web browser, and the simplest approach is just to let the browser passively display whatever HTML it receives. More responsive browser interfaces provide logic running in JavaScript, perhaps using asynchronous JavaScript and XML (AJAX). The interface may even support animations, video, and more using Adobe's Flash Player or some other technology. Sometimes known as rich Internet applications, Web software that provides this kind of full-featured interface can significantly improve the user's experience. It can also add substantial business value by making a Web application more attractive to users.
Building this kind of interface has traditionally required using a completely different set of technologies from those used for a native Windows interface. Accordingly, developers commonly focus on one of these approaches: either you're a Windows interface developer or you're a Web interface developer. Yet for rich Internet applications that will be accessed from Windows, why should this dichotomy exist? There's no inherent reason why the same technologies can't be used for both native Windows interfaces and Web browser interfaces.
WPF allows this. A developer can create a XAML Browser Application (XBAP) using WPF that runs in Internet Explorer. In fact, the same code can be used to create a standalone WPF application and an XBAP. The screen below, for example, shows a financial services application running as a standalone Windows application. Like the hospital application shown earlier, this one mixes text, images, and various kinds of graphics. (This screen also illustrates the Windows Vista desktop, including gadgets such as the clock and the Aero theme that provides the semi-transparent borders around the application's window.)
Figure 5. A financial services application can run as a standalone WPF application.
Here's how this interface looks running inside Internet Explorer as an XBAP:
Figure 6. The same application can potentially run as an XBAP.
The interface is now framed by the browser rather than running in its own window, yet its functionality remains the same. The same code can also used in both cases, which decreases the amount of work required to address both kinds of interfaces. Using the same code also means using the same developer skills. Rather than forcing developers into the disjoint boxes of Windows interface developer or Web interface developer, WPF can break down those divisions, allowing the same knowledge to be used in both cases.
Another advantage of using the same technology for both Windows and Web interfaces is that an application's creator isn't necessarily forced to decide in advance what kind of interface the application should have. As long as the target clients meet the requirements for running XBAPs, an application can provide either (or both) a Windows and a Web interface using largely the same code.
Because an XBAP is downloaded on demand from a Web server, it faces more stringent security requirements than a standalone Windows application. Accordingly, XBAPs run in a security sandbox provided by the .NET Framework's code access security. XBAPs also run only on Windows systems with WPF installed and only with Internet Explorer versions 6 and 7. For applications that meet these requirements, however, rich Internet applications can now use the same foundation as standalone Windows applications.
Knowing what problems WPF addresses is useful, but having some understanding of how it addresses those problems is also useful. This section surveys the WPF technology itself, then looks at the different ways it's applied in Windows desktop applications, XBAPs, and XPS documents.
Even though WPF offers a unified foundation for creating user interfaces, the technologies it contains can be examined in discrete, understandable parts. These parts include documents, images, graphics, animation, and more. All of them depend on WPF's basic application model, however, which is described next.
Like other parts of the .NET Framework, WPF organizes its functionality into a group of namespaces, all contained in the System.Windows namespace. Whatever parts of this functionality it uses, the basic structure of every WPF application is much the same. Whether it's a standalone Windows application or an XBAP, the application typically consists of a set of XAML pages and code associated with those pages.
At its root, every application inherits from WPF's standard Application class. This class provides common services that are useful to every application. These include holding state that needs to be available to the entire application and providing standard methods such as Run, which starts the application, and Shutdown, which terminates it.
An Application object can be created with either XAML, via the Application element, or code, using the Application class. (This is true for virtually everything in WPF, but for simplicity, this paper always uses the XAML option.) Here's a simple XAML illustration:
<Application xmlns=
""
xmlns:x=""
StartupUri="Page1.xaml"
x:
. . .
</Application>
This definition first specifies WPF's namespace as the default for this element, then defines a prefix for the XAML namespace. (XAML is used for more than WPF, so these two namespaces aren't synonymous.) It next uses the StartupUri attribute to indicate the name of the XAML page that should be loaded and displayed when the application is started. The final attribute, Class, is used to identify the class that contains the code associated with this Application. As mentioned earlier, WPF applications typically contain both XAML and code written in C# or Visual Basic, and so a code-behind file is used to contain code for this class. Following this opening Application tag appears the rest of the XAML used to define this application, all of which is omitted here, followed by the closing tag for the Application element.
Even though all WPF applications derive from the same root class, there are still plenty of choices that a developer needs to make. A big one is deciding whether an application should provide a traditional dialog-driven interface or a navigational interface. A dialog-driven interface provides the buttons and other elements that every Windows user is familiar with. A navigational interface, by contrast, acts much like a browser. Rather than opening a new window for a dialog, for instance, it commonly loads a new page. Interfaces like this are implemented as a group of pages, each consisting of a user interface defined in XAML together with logic expressed in a programming language. Like HTML-defined browser pages, XAML provides a Hyperlink element that can be used to link pages together. A user navigates through these pages much as she would through the pages of a Web-based application, relying on a History list to move back and forth. Don't be confused, however—this is still a Windows application. While XBAPs will typically use this kind of interface, it's also perfectly legal for a standalone Windows application to interact with its user through a navigational interface. The choice is made by the people who create the application.
Whatever interface style an application uses, it usually displays one or more windows. WPF provides a few choices for doing this. The simple Window class provides basic windowing functions, such as displaying, hiding, and closing a window, and it's typically used by a WPF application that's not using a navigational interface. NavigationWindow, used by applications that do have a navigational interface, extends the basic Window class with support for navigation. This support includes a Navigate method that allows the application to move to a new page, a journal that keeps track of the user's navigation history, and various navigation-related events.
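As a simple sketch of how navigation might be driven from code (the page name here is purely illustrative), an application could do something like this:
using System;
using System.Windows.Navigation;

public class NavigationExample
{
    public static void OpenPatientPage()
    {
        // Create a navigation window and send it to a page. The window's journal
        // then lets the user move back and forward through visited pages, and
        // code can also call GoBack or GoForward on the window itself.
        NavigationWindow window = new NavigationWindow();
        window.Show();
        window.Navigate(new Uri("PatientSummary.xaml", UriKind.Relative));
    }
}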
To organize the various parts of an interface, a WPF application uses panels for layout. Each panel can contain children, including controls such as buttons and text boxes, and other panels. Different kinds of panels provide different layout options. A DockPanel, for example, allows its child elements to be positioned along the edges of the panel, while a Grid allows positioning its children precisely on a grid, just as its name suggests. The developer defines the number of rows and columns in the grid, then specifies exactly where any children should be placed. A Canvas lets a developer position its children freely anywhere within the panel's boundaries.
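To make this concrete, here is a small sketch that builds a two-by-two Grid in code and places two children in its cells; the label text and button caption are illustrative only, and the same layout could just as easily be written in XAML:
using System.Windows.Controls;

public class LayoutExample
{
    public static Grid BuildGrid()
    {
        // A 2 x 2 grid: two rows and two columns.
        Grid grid = new Grid();
        grid.RowDefinitions.Add(new RowDefinition());
        grid.RowDefinitions.Add(new RowDefinition());
        grid.ColumnDefinitions.Add(new ColumnDefinition());
        grid.ColumnDefinitions.Add(new ColumnDefinition());

        // Put a label in the top-left cell and a button in the bottom-right cell.
        Label label = new Label();
        label.Content = "Patient name:";
        Grid.SetRow(label, 0);
        Grid.SetColumn(label, 0);

        Button button = new Button();
        button.Content = "Refresh";
        Grid.SetRow(button, 1);
        Grid.SetColumn(button, 1);

        grid.Children.Add(label);
        grid.Children.Add(button);
        return grid;
    }
}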
Like any user interface technology, WPF provides a large set of controls, and developers are free to create custom controls as well. The standard set includes Button, Label, TextBox, ListBox, Menu, Slider, and other traditional atoms of user interface design. More complex controls are also provided, such as SpellCheck, PasswordBox, controls for working with ink (as with a Tablet PC), and more.
As usual in a graphical interface, events generated by the user, such as mouse movements and key presses, can be caught and handled by the controls in a WPF application. While controls and other user interface elements can be fully specified using XAML, events must be handled in code. For example, here's a XAML definition of a simple Button on a Canvas:
<Canvas xmlns=
"http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
x:Class="Example.CodeForCanvas">
<Button Click="Button_Click">
Click Here
</Button>
</Canvas>
The opening Canvas tag starts by defining the usual WPF and XAML namespaces. It then specifies that the code associated with this XAML can be found in a class named CodeForCanvas, which is contained in the .NET Framework namespace Example. Next comes the definition of the Button itself, specifying "Click Here" as its on-screen text. The Click attribute on the opening Button tag indicates that this button relies on a method called Button_Click to handle the click event. The code for that method might look like this:
namespace Example {
public partial class CodeForCanvas : Canvas {
void Button_Click(object sender, RoutedEventArgs e) {
Button btn = e.Source as Button;
btn.Background = Brushes.Purple;
}
}
}
The namespace and class name match that specified in the Canvas tag just shown. The class CodeForCanvas inherits from the base Canvas class provided by WPF, and it's defined as a partial class. Partial classes were a new addition in version 2.0 of the .NET Framework, and they allow combining code defined separately into a single class. In this case, the XAML-defined Canvas generates a partial class that gets combined with the partial class shown here. The result is a complete class capable of both displaying a canvas with a button and handling its event.
The Button_Click method to handle that event is provided within the CodeForCanvas class. It follows the usual .NET Framework conventions for an event, although the event's arguments are conveyed using the WPF-defined RoutedEventArgs class. This class's Source property contains a reference to the Button that generated the event, which the method uses to change the button's color to purple.
As this simple example suggests, the elements in a WPF user interface are organized into a visual tree. Here, the tree consists of just a Canvas with a single child Button, but in a real WPF application, this tree is typically much more complex. To actually create the on-screen interface, this visual tree must be rendered. Whenever possible, WPF relies on hardware rendering, letting the graphics card installed on the application's machine handle the work. If the machine's graphics hardware isn't up to the job, however, WPF will render the interface using its own software. The decision is made at run time by WPF—developers don't need to do anything special.
Whether rendering is done in hardware or software, WPF always relies on an approach known as retained mode graphics. The creators of an application define what the visual tree looks like, typically using a combination of XAML and code. WPF itself then retains the information in this tree. Rather than requiring the application to repaint all or part of a window when the user uncovers it, for example, WPF handles this on its own. The elements that comprise the tree are stored as objects, not as pixels on the screen, and so WPF has enough information to handle this kind of rendering. Even if a window and the controls it contains are resized, WPF can re-render everything on its own. Because it understands the form of the graphics—lines, ellipses, and so on—and because it relies on vector graphics rather than maps of pixels, WPF has enough information to recreate the interface at the new size.
It's often useful to be able to define how some user interface element looks once, then apply that look over and over. Cascading Style Sheets (CSS) allow doing this in HTML pages, for example. WPF provides something similar with styles. The ability to define styles can be quite useful, as the popularity of CSS stylesheets suggests. They allow better separation between designers and developers, for instance, allowing a designer to create a uniform look for an interface while letting the developer ignore these details.
Using XAML's Style element, the creator of a WPF application can define one or more aspects of how something should look, then apply that style over and over. For example, a style named ButtonStyle might be defined like this:
<Style x:Key="ButtonStyle">
<Setter Property="Control.Background" Value="Red"/>
<Setter Property="Control.FontSize" Value="16"/>
</Style>
Any Button defined using this style would be given a red background and use a font size of 16. For example:
<Button Style="{StaticResource ButtonStyle}">
</Button>
As the appearance of "StaticResource" in this example suggests, WPF styles are typically defined as a resource, which is just data defined separately from an application's code.
Styles allow more than the simple example shown here might suggest. A style can be derived from another style, for instance, inheriting and perhaps overriding its settings. A style can also define triggers that specify common aspects of interactive behavior. For example, a style might specify that hovering the mouse over a Button should cause the button's background to turn yellow.
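A sketch of the hover behavior just described might look like this in code; it builds on the red ButtonStyle idea shown earlier:
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

public class TriggerExample
{
    public static Style BuildButtonStyle()
    {
        Style style = new Style(typeof(Button));
        style.Setters.Add(new Setter(Control.BackgroundProperty, Brushes.Red));

        // When the mouse is over the button, switch its background to yellow;
        // the trigger reverts the value automatically when the mouse leaves.
        Trigger hover = new Trigger();
        hover.Property = UIElement.IsMouseOverProperty;
        hover.Value = true;
        hover.Setters.Add(new Setter(Control.BackgroundProperty, Brushes.Yellow));

        style.Triggers.Add(hover);
        return style;
    }
}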
WPF also supports the use of templates. A template is similar to a style, and two different kinds are available:
Data templates, defined with XAML's DataTemplate element, which determine how a bound data object is displayed, such as how each item in a list appears on screen.
Control templates, defined with the ControlTemplate element, which determine how a control itself is rendered, allowing a control's entire visual appearance to be replaced without changing its behavior.
Providing a straightforward way for an application's creators to define the appearance of its interface makes sense. In WPF, styles and templates are primary mechanisms for doing this.
Most user interfaces display at least some text, and some display little else. Yet for most people, reading text on a screen can't compare with reading a printed page. We've become accustomed to the high-quality depictions of letters and the relationships between them typically found in books and magazines. When we read on-screen text, things just aren't the same—the text somehow doesn't feel as readable.
WPF aims at closing this gap, making on-screen text as readable as a printed page. Toward this end, WPF supports industry-standard OpenType fonts, allowing existing font libraries to be used. It also supports the more recently defined ClearType technology. Through sub-pixel positioning, a technique for individually lighting up the sub-elements that make up each pixel on modern display screens, ClearType allows text to look smoother to the human eye. WPF also provides low-level support for rendering text via the Glyphs class. As described later, this class is used by XPS documents to represent characters.
To further improve readability, WPF also allows extras such as ligatures, where a group of characters are replaced by a single connected image. For instance, the group "ffi" will typically be replaced in a printed page by a single connected ligature containing those three characters. Adding this to on-screen text makes the reader feel more at home, even if she doesn't consciously perceive the details that create that feeling.
Making text more readable is a good thing, since text appears in buttons and lists and many other places in a user interface. Yet we care most about text when we're reading longer chunks of it, such as in a document. Accordingly, improving on-screen readability also requires improving how documents are displayed. Toward this end, WPF supports two kinds of documents: fixed documents and flow documents.
Fixed documents look exactly the same whether they're rendered on a screen or a printer. Knowing that a document will always look the same is important for some forms, legal documents, and other kinds of publications, and so fixed-format documents are important in a number of areas. The fixed-format documents supported by WPF are defined by XPS, which is described later in this paper. A fixed document's contents can be specified using XAML's FixedDocument element. This simple element contains just a list of PageContent elements, each containing the name of a page in the fixed document. To display a fixed document, WPF provides the DocumentViewer control. This control provides read-only display of an XPS document, letting the reader move backward and forward in the document, search for specific text, and more.
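As a rough sketch of that read-only display path, the code below hands an existing XPS file to a DocumentViewer; the file path is a placeholder, and the viewer is assumed to be a DocumentViewer control already declared in the application's interface:
using System.IO;
using System.Windows.Controls;
using System.Windows.Xps.Packaging;   // requires a reference to the ReachFramework assembly

public class FixedDocumentExample
{
    public static void ShowXps(DocumentViewer viewer)
    {
        // Open an existing XPS file and give its fixed-page content to the viewer.
        XpsDocument xps = new XpsDocument(@"C:\Reports\PatientChart.xps", FileAccess.Read);
        viewer.Document = xps.GetFixedDocumentSequence();
    }
}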
While fixed documents are meant to be used both on a screen and on paper, flow documents are intended solely for on-screen display. To make its contents as readable as possible, a flow document can adjust how a document's text and graphics are displayed based on the window size and other factors. Unsurprisingly, flow documents are defined using a XAML element called FlowDocument. Here's a simple example:
<FlowDocument
ColumnWidth="300"
IsColumnWidthFlexible="True"
IsHyphenationEnabled="True">
<Paragraph FontSize="12">
<Bold>Describing WPF</Bold>
</Paragraph>
<Paragraph FontSize="10">
WPF is the user interface technology for the .NET
Framework 3.0. It provides a unified foundation for modern
user interfaces, including support for documents, two- and
three-dimensional graphics, media, and more. It also
allows using the same approach (and the same code) for
creating standalone Windows applications and applications
that run inside a browser.
</Paragraph>
</FlowDocument>
This document asks to be displayed in a column with a width no less than 300. (The width is measured in device-independent pixels, each of which is defined to be 1/96th of an inch.) In the very next line, however, the document's creator says that this width is flexible by setting the IsColumnWidthFlexible property to true. This authorizes WPF to change the width and number of columns that will be used to display this document. If the user changes the width of the window in which this document is displayed, for example, WPF can increase or decrease the number and the width of columns used to display the document's text.
Next, the document requests hyphenation by setting the IsHyphenationEnabled property to true. Following this are the two paragraphs this document contains. The text inside each one is contained within a Paragraph element, each setting a different font size. The text in the first paragraph also indicates that it should be displayed in bold.
WPF defines several more FlowDocument options for improved readability. For instance, if the IsOptimalParagraphEnabled property is set to true, WPF will distribute white space as evenly as possible throughout a paragraph. This can prevent the "rivers" of white space that hurt readability, something that's commonly done with printed documents. Flow documents also allow annotations, such as adding notes in ordinary text or, on Tablet PCs, in ink. Each annotation consists of an anchor that identifies what content in the document an annotation is associated with and cargo that contains the content of the annotation itself.
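Because anything expressible in XAML can also be expressed in code, a flow document much like the earlier example could be built programmatically; the sketch below also turns on the optimal-paragraph option just described:
using System.Windows.Documents;

public class FlowDocumentExample
{
    public static FlowDocument Build()
    {
        FlowDocument doc = new FlowDocument();
        doc.ColumnWidth = 300;
        doc.IsColumnWidthFlexible = true;
        doc.IsHyphenationEnabled = true;
        doc.IsOptimalParagraphEnabled = true;   // distribute white space evenly

        Paragraph heading = new Paragraph(new Bold(new Run("Describing WPF")));
        heading.FontSize = 12;
        doc.Blocks.Add(heading);

        Paragraph body = new Paragraph(new Run(
            "WPF provides a unified foundation for modern user interfaces."));
        body.FontSize = 10;
        doc.Blocks.Add(body);

        return doc;
    }
}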
To display a FlowDocument, WPF includes a few different controls. They are the following:
FlowDocumentPageViewer, which displays a document one page at a time.
FlowDocumentScrollViewer, which lets the user scroll through a document.
FlowDocumentReader, which combines both options, letting the reader switch between paged and scrolling views, and also provides search.
As more and more information is delivered digitally, the quality of the on-screen reading experience becomes more important. By providing adaptive display of information through flow documents, WPF attempts to improve this experience for Windows users.
Whether they represent company logos, pictures of sunsets, or something else, images are a fundamental part of many user interfaces. In WPF, images are typically displayed using the Image control. To show a JPEG file, for example, the following XAML could be used:
<Image
Width="200"
Source="C:\Documents and Settings\All Users\Documents\
My Pictures\Ava.jpg" />
The image's width is set to 200, and once again, the units here are device-independent pixels. The file that contains the image is identified using the Source attribute.
An image file can contain information about the image—metadata—such as keywords and ratings applied by users, and WPF applications can read and write this information. An image can also be used in more interesting ways, such as painting it onto one face of a revolving three-dimensional object. Although the simple example shown here illustrates a common case, WPF allows images to be used in a significantly broader way.
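The code equivalent of the XAML shown above is a straightforward sketch; the file path is again a placeholder:
using System;
using System.Windows.Controls;
using System.Windows.Media.Imaging;

public class ImageExample
{
    public static Image LoadPicture()
    {
        // Display an image file at a width of 200 device-independent pixels.
        Image image = new Image();
        image.Width = 200;
        image.Source = new BitmapImage(new Uri(@"C:\Pictures\Ava.jpg"));
        return image;
    }
}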
The WPF Image control can display images stored in various formats, including JPEG, BMP, TIFF, GIF, and PNG. It can also display images stored using Microsoft's Windows Media Photo (WMPhoto) format, new with Windows Vista. Whatever format is used, WPF relies on the Windows Imaging Component (WIC) to produce the image. Along with coder/decoders (commonly known as codecs) for all the image formats just listed, WIC also provides a framework for adding third-party codecs.
As both network and processor speeds have increased, video has become a larger part of how people interact with software. People also spend a good deal of time listening to music and other audio on their computers. Accordingly, WPF provides built-in support for both.
That support depends on the MediaElement control. Here's a simple XAML example of how this control might be used:
<MediaElement
    Source="C:\Documents and Settings\All Users\Documents\My Videos\Ruby.wmv" />
This control can play WMV, MPEG, and AVI video, along with various audio formats.
For the last twenty years, creators of two-dimensional graphics in Windows have relied on the Graphics Device Interface (GDI) and its successor, GDI+. Yet even Windows Forms applications must access this functionality through a distinctly different namespace—2D graphics aren't integrated into the user interface technology itself. The situation was even worse for three-dimensional graphics, since an entirely separate technology, Direct3D, was required. With WPF, this complexity goes away for a large share of applications. Both 2D and 3D graphics can be created directly in XAML or in procedural code using the WPF libraries. Like everything else in WPF, the elements they use are just another part of an application's visual tree.
For 2D graphics, WPF defines a group of shapes that applications can use to create images: Line, Rectangle, Ellipse, Polygon, Polyline, and Path.
Using these classes to create simple graphics is straightforward. For example, the following XAML draws a red ellipse:
<Ellipse Width="30" Height="10" Fill="Red" />
Filling a shape relies on a brush. The example above uses the default, which is a solid color brush, but WPF provides several other options. For example, a rectangle filled with a color gradient changing horizontally from red to yellow can be defined with:
<Rectangle Width="30" Height="10"
Fill="HorizontalGradient Red Yellow" />
Several other brushes are also available, including a vertical gradient, a radial gradient, and brushes that paint with images, bitmaps, and more. Although it's not shown here, shapes can also use pens to specify the color, width, and style of their outline.
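Here's a sketch of the element syntax for those richer brushes - the gradient stops and stroke values are arbitrary - showing an ellipse filled with a radial gradient and outlined using the pen-style Stroke properties:
<Ellipse Width="30" Height="10" Stroke="Black" StrokeThickness="2">
    <Ellipse.Fill>
        <RadialGradientBrush>
            <GradientStop Color="Red" Offset="0" />
            <GradientStop Color="Yellow" Offset="1" />
        </RadialGradientBrush>
    </Ellipse.Fill>
</Ellipse>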
A key thing to understand about WPF is that because everything is built on a common foundation, combining different aspects is straightforward. An application can display an Image inside a Rectangle, place an Ellipse within a Button, and much more. Because of this, combining 2D graphics with 3D graphics and other parts of an interface is straightforward.
Along with shapes, WPF also provides another group of classes for working with two-dimensional graphics. Known as geometries, these classes are similar in many ways to shapes. Like shapes, which include choices such as Line, Rectangle, Ellipse, and Path, geometries provide options such as LineGeometry, RectangleGeometry, EllipseGeometry, and PathGeometry. The most important difference between the two kinds of classes is that while shapes are typically used to draw visible images, geometries are more often used to define regions. If a square image needs to be cropped to fit inside a circle, for example, the EllipseGeometry class can be used to specify the circle's boundaries. Similarly, if an application wishes to define a hit-testing region, such as an area in which mouse clicks will be detected, it can do this by specifying a geometry for that region.
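A minimal sketch of the cropping case - the image path and ellipse dimensions are made up - might look like this:
<Image Source="C:\My Pictures\Ava.jpg" Width="200">
    <Image.Clip>
        <EllipseGeometry Center="100,100" RadiusX="100" RadiusY="100" />
    </Image.Clip>
</Image>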
Finally, it's worth mentioning that everything described in this section is actually implemented on top of a lower-level interface called the visual layer. It's possible to create graphics, images, and text using this layer directly. While doing this can be useful in some situations, such as for creating simple, high-performance graphics, the great majority of applications will use shapes and the other higher-level abstractions that WPF provides.
Two-dimensional graphics are a common part of Windows interfaces, and so WPF provides quite a bit of technology in this area. Three-dimensional graphics are less commonly used today, however, even though they can provide substantial value through better data visualization, 3D charts, product renderings, and more. Working in 3D has traditionally required a distinct skill set, one that's not commonly found outside of game developers and other specialized groups. By making support for 3D graphics part of the standard environment, WPF aims at changing this.
Without WPF, 3D development on Windows typically relies on the Direct3D API. Like everything else in WPF, its support for 3D graphics uses Direct3D under the covers, but developers are presented with a significantly simpler world. While there are still cases where it makes sense to use Direct3D rather than WPF, as described later in this paper, Microsoft's intent is that mainstream 3D development for Windows interfaces use WPF.
To display 3D graphics in WPF, an application uses the Viewport3D control. This control essentially provides a window into the three-dimensional world the application describes. A Viewport3D control can be used anywhere in a WPF interface, allowing 3D graphics to appear wherever they're needed.
To create a 3D scene, a developer describes one or more models, then specifies how those models should be lit and viewed. As usual, all of these things can be specified using XAML, code, or a mix of the two. To describe a model, WPF provides a GeometryModel3D class that allows defining the model's shape. Once a model is defined, its appearance can be controlled by applying different kinds of material. The SpecularMaterial class, for instance, makes a surface look shiny, while the DiffuseMaterial class does not.
Regardless of the materials it uses, a model can be lit in various ways. DirectionalLight provides light that comes from a specific direction, while AmbientLight provides uniform lighting for everything in a scene. Finally, to define how the model should be viewed, the developer specifies a camera. A PerspectiveCamera, for instance, allows specifying the distance and perspective from which a model is viewed, while an OrthographicCamera does the same thing, except without perspective: objects further from the camera don't appear smaller.
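Putting these pieces together, a very small scene - a single lit triangle mesh viewed through a perspective camera, with arbitrary coordinates and colors - might be sketched like this:
<Viewport3D>
    <Viewport3D.Camera>
        <PerspectiveCamera Position="0,0,4" LookDirection="0,0,-1" />
    </Viewport3D.Camera>
    <ModelVisual3D>
        <ModelVisual3D.Content>
            <Model3DGroup>
                <DirectionalLight Color="White" Direction="-1,-1,-3" />
                <GeometryModel3D>
                    <GeometryModel3D.Geometry>
                        <MeshGeometry3D
                            Positions="-1,-1,0  1,-1,0  -1,1,0  1,1,0"
                            TriangleIndices="0,1,2  1,3,2" />
                    </GeometryModel3D.Geometry>
                    <GeometryModel3D.Material>
                        <DiffuseMaterial Brush="Red" />
                    </GeometryModel3D.Material>
                </GeometryModel3D>
            </Model3DGroup>
        </ModelVisual3D.Content>
    </ModelVisual3D>
</Viewport3D>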
Creating complex 3D scenes directly in either XAML or code isn't simple. It's safe to assume that for the great majority of WPF applications that use 3D, developers will rely on graphical tools to generate the necessary definitions. However it's accomplished, the ability to use 3D graphics in a standard user interface has the potential to improve significantly the quality of what users see on their screens.
Along with providing a way to define shapes and other elements, WPF also offers developers the ability to transform these elements by rotating them, changing their size, and more. In XAML, elements such as RotateTransform and ScaleTransform are used to do this. These transformations can be applied to any user interface element. Here's a simple example:
<Button Content="Click Here">
    <Button.RenderTransform>
        <RotateTransform Angle="45" />
    </Button.RenderTransform>
</Button>
The RotateTransform element rotates the button by 45 degrees. While rotating a button like this isn't especially useful, the fact that it's possible indicates the generality of WPF's design. Because the various aspects of a user interface don't rely on different underlying technologies, they can be combined in diverse ways.
WPF also includes a few pre-defined effects. Like transformations, these effects can be applied to various aspects of a user interface, such as Buttons, ComboBoxes, and others. They include a blur effect that makes the interface element appear fuzzy, an outer glow effect that makes an element appear to glow, and a drop shadow effect that adds a shadow behind an interface element.
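In this release these effects are applied through an element's BitmapEffect property; for example, a drop shadow behind a button might be sketched as:
<Button Content="Click Here">
    <Button.BitmapEffect>
        <DropShadowBitmapEffect />
    </Button.BitmapEffect>
</Button>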
The ability to make the elements in an interface move—to animate them—can be very useful. Clicking on a button might cause the button to appear to move down, then up, for instance, giving better feedback to the user. More complex animations can help create interfaces that engage their users by directing their attention and telling stories. WPF's animation support makes this possible.
As with transformations, animations can be applied to many different aspects of the interface, including buttons, shapes, images, and more. Animation is accomplished by changing the value of one or more of an object's properties over time. For example, an Ellipse might appear to be slowly squashed by incrementally decreasing its Height property over a period of two seconds.
It's often useful to define a group of related animations. To allow this, WPF provides the Storyboard class. Each Storyboard can contain one or more timelines, and each of these can contain one or more animations. Various kinds of timelines are provided, allowing animations to run sequentially or in parallel. Here's a simple (although slightly incomplete) XAML example that illustrates squashing an Ellipse:
<Ellipse Width="100" Height="50" Fill="Blue"
Name="EllipseForSquashing">
. . .
<Storyboard>
<DoubleAnimation
Storyboard.TargetName="EllipseForSquashing"
Storyboard.
</Storyboard>
. . .
</Ellipse>
The example begins with the definition of an Ellipse, as seen earlier in this paper. Here, however, the Name property is also used, assigning an identifier that allows this Ellipse to be referenced later. Some details are omitted, but to define the animation in XAML, a Storyboard element must appear. Because Ellipse's Height property is of the type double, the Storyboard contains a DoubleAnimation element. This element specifies the name of the Ellipse being animated, the property that will be changed, and exactly what those changes should be. Here, the value of Height is being changed from 50 to 25 over a period of two seconds.
Animations can be much more complex than this. They can be triggered by events, such as mouse clicks, be paused and then resumed, be set to repeat some number of times (or forever), and more. The goal is to allow developers to create user interfaces that provide better feedback, offer more functionality, and are all-around easier to use than they otherwise might be.
Most user interfaces display some kind of data. To make life simpler for the developers who create those interfaces, data binding can be used to make displaying data easier. Data binding allows directly connecting what a WPF control displays with data that lives outside that control. For example, the value of the Text property in a WPF TextBox control might be bound to a property called Name in an Employee object that's part of this application's business logic. A change to either property could then be reflected in the other. If a user updated the value in the TextBox, the Employee object's Name property would also change, and vice-versa.
Creating this kind of connection between properties in two objects requires using the WPF Binding class. Here's a slightly simplified XAML illustration of how this might look:
<TextBox . . . >
<TextBox.Text>
<Binding Path="Name" />
</TextBox.Text>
</TextBox>
In this example, the Binding element's Path attribute is used to identify the property to which the TextBox's Text property should be bound. Path is used when the object this property is part of (which will be specified at runtime) is a Common Language Runtime (CLR) object, defined in a language such as C# or Visual Basic. Along with CLR objects, WPF's data binding can also connect to XML data directly using Binding's XPath property. With this option, an XPath query selects one or more nodes in an XML document, and the binding then references the data those nodes contain.
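For XML data the example changes only slightly; the XPath expression here is hypothetical:
<TextBox . . . >
    <TextBox.Text>
        <Binding XPath="Employee/Name" />
    </TextBox.Text>
</TextBox>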
More complex data binding options are also possible. For example, list bindings allow the contents of a ListBox control to be populated from any CLR object that implements the standard IEnumerable interface. If necessary, data can also be filtered or sorted before it's displayed. (While it is possible to bind to an ADO.NET DataSet, WPF has no direct support for binding to data in a relational database management system.) Whatever data binding option is used, the intent is to make a common requirement - displaying data in a user interface - as straightforward as possible.
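As a rough illustration of list binding, a ListBox can take its items from whatever collection is supplied as its data context and name the property each item should display (the property name here is an assumption):
<ListBox ItemsSource="{Binding}" DisplayMemberPath="Name" />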
The most common user of a WPF interface is, of course, a person. But there are times when a user interface needs to be driven not by a human being, but instead by other software. WPF's user interface (UI) automation makes this possible.
Suppose, for example, that a developer wishes to create automated test scripts for an interface. Using the programmatic access that UI automation provides, she can create scripts that drive the interface just as a human user would. UI automation is also useful for creating accessibility aids, such as a tool that reads aloud the various elements of the interface. Because it allows programmatically walking through the tree that contains those elements, UI automation makes building these kinds of tools possible.
To allow this, WPF creates a UI automation tree. This tree consists of AutomationElement objects, each representing something in the interface. The root of the tree is the Desktop, and each open application is a child of this root. The tree continues into each of these applications, with each WPF control represented as one (or sometimes more than one) AutomationElement object. To allow complete programmatic access to the interface, everything that a user can interact with is represented as a distinct AutomationElement. For example, a control with multiple buttons will have both the control itself and each button represented as a distinct AutomationElement object. Building this level of granularity into the UI automation tree allows a client application, whether it's a test script, an accessibility aid, or something else, to access each component of the interface just as a human user would.
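To give a feel for this, here's a minimal C# sketch - not a complete tool - that uses the System.Windows.Automation types to list the immediate children of the automation tree's root, which are the open applications:
using System;
using System.Windows.Automation;

class AutomationTreeWalk
{
    static void Main()
    {
        // The desktop is the root of the UI automation tree.
        AutomationElement root = AutomationElement.RootElement;

        // Each open application appears as a child of that root.
        AutomationElementCollection children =
            root.FindAll(TreeScope.Children, Condition.TrueCondition);

        foreach (AutomationElement child in children)
        {
            Console.WriteLine(child.Current.Name);
        }
    }
}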
UI automation isn't the most mainstream aspect of WPF. Most people will probably never use it. Yet those who need it, such as software testers and users with disabilities, really need it. Something doesn't have to be widely used to be important.
WPF contains a remarkable amount of technology. While all of it relates to interacting with people, the technology is applied today in three related ways: standalone WPF applications, XBAPs, and XPS documents. This section looks at each of these three.
The most general way to use WPF is in a standalone application. Standalone WPF applications run like any other Windows application—they don't require a Web browser. Accordingly, standalone applications can have full trust, so all of WPF's capabilities can be used. Full trust also means that standalone WPF applications can freely use other services available on the machine, such as Windows Communication Foundation (WCF).
Like other Windows applications, a standalone WPF application can be installed from a local disk or from a network server. It can also be installed using ClickOnce, a facility in the .NET Framework. ClickOnce provides a straightforward way for Internet Explorer users to download and install Windows applications, including WPF applications, and to have those applications automatically updated when they change.
While standalone WPF applications offer the most capability, they're not always the right choice. Plenty of situations make more sense with a client that runs in a Web browser rather than as a Windows application. To allow these clients to present modern user interfaces, WPF provides XBAPs.
As the figure below shows, an XBAP runs inside Internet Explorer. XBAPs can act as a client for Web applications built using ASP.NET, JavaServer Pages (JSP), or other Web technologies. To communicate back to this Web application, the XBAP can use HTTP or SOAP. Whatever server platform is used, an XBAP is always loaded via ClickOnce. It presents no dialogs or prompts to the user during this process, however; an XBAP loads just like a Web page. Because of this, XBAPs don't appear on the Start menu or in Add/Remove Programs.
Figure 7. An XBAP running inside Internet Explorer
While it's not strictly required, XBAPs typically present a navigational interface to the user. This lets the application behave like a Web client, which is probably what the user expects. In Internet Explorer 7, an XBAP uses the forward and back buttons of the browser itself, and the XAML pages a user accesses will appear in the browser's history list. In Internet Explorer 6, the XBAP displays its own forward and back buttons, along with maintaining its own history list. The XBAP can determine which environment it's running in and do the right thing; the developer need not create different versions for each browser.
Because it's loaded from the Web and runs inside a browser, an XBAP is given only limited trust by the .NET Framework's code access security. Because of this, there are a number of things that a standalone WPF application can do that an XBAP cannot. For example, an XBAP deployed from the Internet zone can't access the local file system (beyond isolated storage), create standalone windows or dialogs, or host Win32, MFC, or Windows Forms controls.
As mentioned earlier, it's possible to use the same code base for both a standalone WPF application and an XBAP. To allow this, a developer might use conditional compilation, wrapping any functionality that's not allowed in an XBAP inside ifdefs. The XBAP version can do most of what the standalone application can do, including displaying documents, using two-dimensional and three-dimensional graphics, playing video and audio, and more. It can also take advantage of whatever graphics hardware is available in the machine it's running on.
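A rough sketch of that pattern, using a hypothetical XBAP compilation symbol defined only in the browser-targeted build:
public partial class ReportPage
{
    void SaveReport(string reportText)
    {
#if !XBAP
        // Standalone build: full trust, so showing a save dialog is allowed.
        Microsoft.Win32.SaveFileDialog dialog = new Microsoft.Win32.SaveFileDialog();
        if (dialog.ShowDialog() == true)
        {
            System.IO.File.WriteAllText(dialog.FileName, reportText);
        }
#else
        // XBAP build: limited trust, so fall back to isolated storage or omit the feature.
#endif
    }
}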
Along with XBAPs, it's also possible to display pure XAML pages directly in Internet Explorer. Referred to as loose XAML, this can be useful for showing static pages in the browser. Handling events requires code, however, which means creating an XBAP.
XBAPs allow developers to use most of WPF's capabilities in a browser application. They also allow a common programming model, using mostly the same code, for standalone applications and browser applications. For Web applications whose clients target newer Windows platforms, XBAPs are likely to be an attractive choice.
Fixed-format documents, which in the WPF world means XPS documents, clearly have a role in user interfaces. As described earlier, WPF provides the DocumentViewer control for displaying XPS documents. Yet while it certainly makes sense to include this control in WPF, it's less obvious why XPS itself should be considered part of WPF. After all, the XPS specification provides a highly detailed way to define fixed-format documents, and the documents themselves can be used in different ways. Everything else in WPF is focused solely on creating a user interface. Given its broader purview, why include XPS under the WPF umbrella?
One big reason is that XPS documents are defined using XAML. Only a small subset of XAML is used, including the Canvas element for layout, the Glyphs element for representing text, and the Path element for creating two-dimensional graphics, but every XPS document is really a XAML document. Given this, viewing XPS as part of WPF is plausible.
Still, one of XPS's most important applications isn't about on-screen user interfaces. Beginning with Windows Vista, XPS becomes a native print format for Windows. XPS acts as a page description language, and so XPS documents can be rendered directly by XPS-aware printers. This allows using a single description format - XAML - all the way from the screen to the printer. It also improves on the existing GDI-based print mechanism in Windows, providing better print support for complex graphic effects such as transparency and gradients.
Along with XAML, an XPS document can contain binary data such as images in various formats (including JPEG, PNG, TIFF, and WMPhoto), font data, information about document structure, and more. If necessary, XPS documents can also be digitally signed using the W3C XML Signature definitions and X.509 certificates. Whatever it contains, every XPS document is stored in a format defined by the Open Packaging Conventions (OPC). OPC specifies how the various parts of an XML document (not just an XPS or XAML document) are related, how they're stored in a standard ZIP format, and more. Microsoft Office 2007 also uses OPC for its XML formats, providing some commonality between the two kinds of documents.
Users of a WPF application can view XPS documents via the WPF DocumentViewer control, as mentioned earlier. Microsoft also provides an XPS viewer application, built on the DocumentViewer control, as shown below. Like the control, this application lets users move through documents page by page, search for text, and more. XPS documents are not Windows-specific, and so Microsoft plans to provide XPS viewers for other platforms as well, such as the Apple Macintosh.
Figure 8. An XPS viewer allows reading an XPS document a page at a time.
To let developers work with XPS documents, WPF provides a set of APIs to create, load, and manipulate them. WPF applications can also work with documents at the OPC level, allowing generalized access to XPS documents, Office 2007 documents, and others. Applications built using Microsoft's Windows Workflow Foundation can also use these APIs to create workflows that use XPS documents.
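As a small illustration - the file name is made up, and "viewer" is assumed to be a DocumentViewer declared in the window's XAML - an application can open an existing XPS document and display it like this:
using System.IO;
using System.Windows.Documents;
using System.Windows.Xps.Packaging;

public partial class ReportWindow : System.Windows.Window
{
    void LoadReport()
    {
        // An .xps file is really an OPC package; open it for reading.
        XpsDocument xps = new XpsDocument(@"C:\Documents\Report.xps", FileAccess.Read);

        // Pull out the page sequence and hand it to the DocumentViewer control.
        FixedDocumentSequence sequence = xps.GetFixedDocumentSequence();
        viewer.Document = sequence;
    }
}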
By allowing applications to display and work with fixed format documents, WPF integrates this component of modern user interfaces into its consistent approach. By using this same format to print documents, Windows Vista allows a better match between what people see on the screen and what they see on paper. While this type of document probably isn't the first thing people expect from a user interface technology, the broad use of XPS illustrates the range that a technology like WPF can cover.
WPF provides lots of functionality for developers, which is a good thing. No matter how powerful it is, though, a technology can be made much more useful by good tools. For WPF, Microsoft provides one tool aimed specifically at developers and another aimed at designers. This section takes a brief look at both.
Visual Studio is Microsoft's flagship tool for software developers. When WPF is initially released, Microsoft will provide extensions for Visual Studio 2005 that let developers create WPF applications. The next Visual Studio release, code-named "Orcas", will add more, including a Visual Designer for WPF (which has a code name of its own: "Cider"). Using this visual tool, developers will be able to create WPF interfaces graphically, then have the underlying XAML generated automatically. Although no official release date has been announced, Orcas is scheduled to ship sometime in 2007.
As described earlier, a primary goal of WPF is to make designers first-class citizens in the creation of user interfaces. XAML makes this possible, but only if tools are provided that let designers work in this new world. Toward this end, Microsoft has created Expression Interactive Designer.
As the screen shot below suggests, Expression Interactive Designer provides some aspects of traditional design tools, allowing its user to work in familiar ways. Yet the Designer is exclusively focused on creating interfaces for WPF applications. (In fact, this tool's interface is itself built using WPF.) Notice, for example, the list of WPF controls on the upper right of the screen below, and the graphical timeline at the bottom. All of these correspond to WPF capabilities described earlier in this paper, and all are made available for an interface designer to use. Animations can be created graphically, as can transformations, effects, and more. The result of the designer's work is expressed in a XAML file generated by the tool, which can then be imported into Visual Studio.
Figure 9. Expression Interactive Designer lets designers create WPF interfaces.
Expression Interactive Designer is one of three members of Microsoft's Expression family. The others are Expression Web Designer, a tool for creating standards-based Web interfaces, and Expression Graphic Designer, a tool for creating vector and/or bitmap images. Of the three, only Expression Interactive Designer is focused exclusively on creating user interfaces for WPF applications. A designer might well use the others to create parts of a user interface—maybe the interface's GIF images are created with Expression Graphic Designer, for instance—but these tools aren't specific to WPF. And although dates haven't been announced, all of the Expression tools are scheduled to ship sometime after the release of WPF.
Like most new Microsoft technologies, WPF affects other parts of the Windows world. Before looking at these effects, though, it's important to understand that installing WPF on a system doesn't break any software that uses Windows Forms, MFC, or any other existing technology. While new applications written for systems that support the .NET Framework 3.0 will most likely build their interfaces using WPF, applications that use these older technologies will continue to run unchanged.
Even applications built using WPF might benefit from using some aspects of Windows Forms. For example, Windows Forms today has a larger set of controls available than does WPF. The DataGridView control introduced with version 2.0 of the .NET Framework has no analog in WPF, and third parties have created Windows Forms controls for many other uses. To make these available, WPF provides the WindowsFormsHost control. As its name suggests, this control is capable of hosting Windows Forms controls, allowing them to be used within the WPF application. It can also host ActiveX controls, giving WPF applications access to the large existing library created using this older technology.
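For instance - treat this as a sketch, since the namespace mappings are the usual ones but the surrounding window is made up - a Windows Forms DataGridView can be declared directly in WPF XAML:
<Window
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:wfi="clr-namespace:System.Windows.Forms.Integration;assembly=WindowsFormsIntegration"
    xmlns:wf="clr-namespace:System.Windows.Forms;assembly=System.Windows.Forms">
    <wfi:WindowsFormsHost>
        <wf:DataGridView />
    </wfi:WindowsFormsHost>
</Window>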
Before the release of the .NET Framework in 2002, Windows developers commonly built user interfaces using either direct calls to Win32 APIs or MFC, which provided C++ wrappers around those APIs. Accordingly, plenty of code exists today with interfaces created in this style. What happens to this code in a WPF world?
The answer is similar to the situation with Windows Forms. Once again, WPF controls can be hosted within existing Win32/MFC code, and existing Win32/MFC controls can be hosted within WPF. (In fact, the facilities for interoperation between WPF and Windows Forms are actually built on top of the Win32/MFC interoperability services.) WPF provides the HwndHost class to allow using Win32/MFC controls in WPF, and the HwndSource class to let WPF controls be used in a Win32/MFC application. Each class maps between the two technologies as required. HwndHost, for instance, makes the hWnd used to reference a Win32/MFC control look like a WPF control. HwndSource does the opposite, making a WPF control look like an hWnd.
As with Windows Forms, there are some limitations in mixing these two worlds. In fact, since the Windows Forms interoperability relies on HwndHost and HwndSource, all of the restrictions described earlier for Windows Forms controls, such as limitations on layering and transparency, apply here, too. Also, unlike Windows Forms, applications that mix WPF with Win32/MFC code face the added challenge of interoperating between the WPF managed code environment and the unmanaged world of Win32. For this and other reasons, WPF applications that use Win32/MFC code can't run as XBAPs. As before, however, the key point is that a Windows application can use WPF and Win32/MFC together. Using WPF doesn't require throwing away all of an application's existing user interface code.
Direct3D, part of Microsoft's DirectX family of APIs, is a mainstay for Windows developers who create three-dimensional graphics. The advent of WPF in no way obsoletes Direct3D. In fact, as described earlier, WPF relies entirely on Direct3D for rendering. Yet since WPF also allows developers to create 3D graphics, developers working in 3D must decide between the two.
The decision isn't especially difficult, however. Direct3D is still the best choice for intensive 3D development, such as games and 3D-centered technical applications, e.g., high-end scientific visualization. At least in its first release, WPF isn't designed to be a platform for these types of software.
WPF does make 3D graphics available to a much wider and less specialized audience, however. It also allows using 3D graphics on the Web in an XBAP, integrating them naturally with two-dimensional graphics, documents, and other aspects of an application's user interface, and more. It's also possible for a WPF application to host Direct3D code via the HwndHost class described earlier. Both WPF and Direct3D have distinct roles, and both have a good future as part of the Windows platform.
Using AJAX, developers can create browser clients that are more responsive to users. AJAX allows the user to interact with an application without requiring a page refresh (and thus a round trip to the Web server) for each request. To accomplish this, AJAX relies on a browser's support for the XMLHttpRequest object, an idea that first appeared in Internet Explorer 5.0 in the late 1990s. By the middle of the next decade, support for XMLHttpRequest had become widespread in browsers, and the AJAX phenomenon was born.
Creating AJAX clients isn't especially simple, however. To help in the process, Microsoft has created a set of technologies code-named "Atlas." Atlas is a set of libraries, controls, and more for creating AJAX applications. It consists of a client script library that can work in various browsers—not just Internet Explorer—and server-side extensions to ASP.NET. The goal is to make building Web applications with AJAX clients simpler.
The widespread browser support for AJAX makes it very attractive to developers. Yet even though AJAX allows creating much more interactive interfaces for Web users, it doesn't add support for a wider range of content types. Graphics, video, animations, and other more modern styles of interaction aren't supported by AJAX alone. For clients that support WPF, applications that need these will likely be written instead as XBAPs.
Using XBAPs, a Web application can provide its users with a large fraction of WPF's capabilities. Yet XBAPs require WPF to be installed on the client machine, limiting their applicability. What about Web applications that need to present modern interfaces, but must also be accessible from Macintoshes and other systems that don't support WPF?
A forthcoming technology, code-named "WPF/E", is intended to address this problem. WPF/E - the "E" stands for "Everywhere" - will provide a subset of WPF capabilities on a range of client platforms, including the Macintosh, smaller devices, and others, and on diverse Web browsers, including Internet Explorer, Firefox, and Netscape. This subset includes two-dimensional graphics, images, video, animation, and text. Some of what an XBAP can do is omitted from WPF/E, however, including support for three-dimensional graphics, documents, and hardware acceleration.
To create a WPF/E application, a developer can use JavaScript. WPF/E will also include a cross-platform subset of the .NET Framework, allowing development in C# and Visual Basic. WPF/E is not part of the .NET Framework 3.0, and so it's not scheduled to be released until sometime in 2007. Once it's available, creators of Web applications will have another option, one that provides a range of functions on a range of platforms, for building clients.
User interfaces are a fundamentally important part of most applications. Making those interfaces as effective as possible can have measurable benefits for the people and organizations that rely on them. The primary goal of WPF is to help developers provide these benefits, and so for anybody who creates or uses Windows applications, WPF is big news.
By providing a unified platform for modern user interfaces, helping make designers active participants in creating those interfaces, and allowing a common programming model for standalone and browser applications, WPF aims at significantly improving the Windows user experience. Some of the technologies it supplants had a twenty-year run as the foundation for Windows user interfaces. The intent of WPF is to lay the foundation for the next twenty years.
David Chappell is Principal of Chappell & Associates in San Francisco, California. Through his speaking, writing, and consulting, he helps technology professionals around the world understand, use, and make better decisions about enterprise software.
First, grab the Windows PowerShell Visual Studio templates. There are VB.NET and C# ones (yes, I did say VB.NET).
Now that's downloading, I'm going to walk through building, installing, and running a Windows PowerShell cmdlet. The steps are simple:
1. Install the Visual Studio templates.
2. Create a Windows PowerShell project and fill in the PSSnapIn properties.
3. Add a cmdlet class and implement its ProcessRecord method.
4. Build the solution and register the snap-in with InstallUtil.
5. Load the snap-in with Add-PSSnapin and run your cmdlet.
Sounds like a lot, but it's really very simple, and soon you will be building cmdlets for everything.
Now that the templates have downloaded, extract the zip and you should find two .vsi files - one for VB.NET and the other for C#. Double-click and install the one you want.
They are not signed, so expect a warning about that during installation.
Next, load Visual Studio and create a brand new project, selecting Windows PowerShell as the project template.
The project contains a PSSnapIn file and a reference to the System.Management.Automation assembly, which should be located in the C:\Program Files\Reference Assemblies\Microsoft\WindowsPowerShell\v1.0 folder. (You need to install the Windows SDK for Windows Vista to get these installed.)
Right-click the PSSnapIn file and choose View Code. (If you double-click the file, Visual Studio tries to open the designer window and fails - don't worry about this; PSSnapIn was never designed to work with the visual designer.)
The PSSnapIn class contains five properties which you need to complete. They should be self-explanatory - the key one is Name, which is how you will refer to the snap-in in PowerShell. Add sensible values for these properties.
The generated class inherits from PSSnapIn, the base class for creating snap-ins. Snap-ins are the deployment unit of Windows PowerShell: you add and remove them as a unit, and each snap-in contains one or more cmdlets and/or providers.
Next we'll add a new class for our cmdlet. We can add as many cmdlets to the snap-in as needed; in this walkthrough we'll just add one.
Right-click your project and choose Add, then New Item. From the list of items choose Windows PowerShell PSCmdlet, which should be the default choice for cmdlets. Name the class GetProc.
The new class should inherit from PSCmdlet, be tagged with a Cmdlet attribute and contain a parameter which is commented out and a ProcessRecord method.
In the attribute, change the name to "Proc". In PowerShell the name of the class is irrelevant; the cmdlet gets its name from the attribute. In our case we want our cmdlet to be called Get-Proc, so we use the common verb "Get" and the noun "Proc". The attribute should look like the following:
[Cmdlet(VerbsCommon.Get, "Proc", SupportsShouldProcess = true)]
Since our cmdlet does not need parameters, we don't need to do anything with it. As a side note, if you take a look at the parameter code you will see it is simply a tagged property. This means you no longer need to parse the command line to work out what parameters have been passed, nor deal with the pain around args[0]; the PowerShell runtime parses the command line and populates your properties before calling the ProcessRecord method.
The ProcessRecord method is typically where a cmdlet does its work. Our cmdlet (if you have not guessed already) will simply enumerate all the processes running on your machine. It's simple, but it illustrates the steps to building a PowerShell cmdlet very well.
To enumerate all the processes, we can use the System.Diagnostics.Process class and invoke the GetProcesses() method. To write these objects out we can use the WriteObject method. Specifying true at the end tells Windows PowerShell that the object we are writing out is a collection and that PowerShell should write out each object separately:
WriteObject(System.Diagnostics.Process.GetProcesses(), true);
So our class code looks like this:
using System;
using System.Collections.Generic;
using System.Text;
using System.Management.Automation;
using System.Collections;

namespace BlogSample
{
    [Cmdlet(VerbsCommon.Get, "Proc", SupportsShouldProcess = true)]
    public class GetProc : PSCmdlet
    {
        #region Parameters
        /*
        [Parameter(Position = 0,
            Mandatory = false,
            ValueFromPipelineByPropertyName = true,
            HelpMessage = "Help Text")]
        [ValidateNotNullOrEmpty]
        public string Name
        {
        }
        */
        #endregion

        protected override void ProcessRecord()
        {
            try
            {
                // Write the whole collection out, one Process object at a time.
                WriteObject(System.Diagnostics.Process.GetProcesses(), true);
            }
            catch (Exception ex)
            {
                // Report the failure to PowerShell rather than silently swallowing it.
                WriteError(new ErrorRecord(ex, "GetProcFailed", ErrorCategory.NotSpecified, null));
            }
        }
    }
}
Now that we have all our code, we can build the solution. Do that now.
Next we need to do the installation magic. You may have noticed that the PSSnapIn class was tagged with a RunInstaller attribute. The installer simply registers the snap-in with Windows PowerShell. To execute the installer we can use the InstallUtil tool. Open a Visual Studio command prompt, navigate to the bin\debug folder of your solution (where your assembly DLL is located) and run:
InstallUtil yourassemblyname.dll
Next load Windows PowerShell.
Type
Get-PSSnapIn -registered
which should list your snapin along with any other snapins currently registered.
Next enter
Add-PSSnapIn yoursnapinname
This will load your snap-in.
You should now be able to type
Get-Proc
and see a list of Processes running on your machine.
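Because the cmdlet writes real Process objects to the pipeline (that's what WriteObject with true does), its output composes with other cmdlets. For example, using standard System.Diagnostics.Process property names:
Get-Proc | Where-Object { $_.ProcessName -like "p*" } | Sort-Object ProcessName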
Hopefully this tutorial will give you a head start in building Windows PowerShell cmdlets. There are lots more things to learn and I've just skimmed the surface here.
Well, not really gruesome, but it's definitely interesting. I had a small demo of AJAX working with ASP.NET 2.0 and, of course, it works :).
AJAX (Asynchronous JavaScript and XML) is not really a technology in itself, but it is based on some really cool things that were quite tedious to implement in the good old ASP days. AJAX basically leverages the XML HTTP functionality that was used for some jazzy UIs in the past (albeit, back then, it was much more complex to implement). AJAX frameworks now offer an easy-to-use way to build really rich web applications.
What I did with it was a really simple demo that just queries the time from the server based on the value selected in a client-side combo box. The page fetches the values from the server WITHOUT REFRESHING. Of course, that's what it is supposed to do.
Well, let's get to know HOW.
First of all, you will need to download the AjaxPro assemblies. Once done, just create a small ASP.NET project, add a reference to AjaxPro.2.dll, and you are good to go. Be sure you use the HTML controls on the page (they load faster). For a method to be "callable" from the client side, it needs to be an "AjaxMethod". Include the AjaxPro namespace with: using AjaxPro;
And then define your method in the code-behind (.cs) file, tagged with the AjaxMethod attribute, as sketched below.
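Here is a rough sketch of what that server-side method could look like. The namespace and class name match the AJAX.Samples._Default proxy used in the JavaScript below, and sType matches the combo box values, but the formatting logic is just an assumption:
using System;
using AjaxPro;

namespace AJAX.Samples
{
    public partial class _Default : System.Web.UI.Page
    {
        [AjaxMethod]
        public string GetCurrentServerTime(string sType)
        {
            // Return the server time in the format the client asked for.
            DateTime now = DateTime.Now;
            switch (sType)
            {
                case "LONG":
                    return now.ToLongTimeString();
                case "UNIV":
                    return now.ToUniversalTime().ToString();
                default: // "SHORT"
                    return now.ToShortTimeString();
            }
        }
    }
}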
The AjaxMethod attribute is what makes the method callable from the client side. In addition to this, there are two small changes that you need to make on the server side. 1. Register the AjaxPro HTTP handler in your web.config file (under system.web).
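The entry typically looks like the following - double-check the type and assembly names against the AjaxPro version you downloaded:
<httpHandlers>
    <add verb="POST,GET" path="ajaxpro/*.ashx"
        type="AjaxPro.AjaxHandlerFactory, AjaxPro.2" />
</httpHandlers>
2. The second server-side change usually needed with AjaxPro is registering the page type for Ajax, typically in Page_Load, with AjaxPro.Utility.RegisterTypeForAjax(typeof(_Default)); this is what generates the AJAX.Samples._Default JavaScript proxy used below.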
That's it on the server side.
On the client side, I simply have a combo box that tells the format in which I want my time and a textbox that just displays the time.
<select id="TimeFormat" name="TimeFormat" onchange="javascript:getTime()"><option value="SHORT" selected="selected">Short Time</option><option value="LONG">Long Time</option><option value="UNIV">Universal Time</option></select><input type="text" id="timetext" />
In the JavaScript, I simply have two methods that perform all the async stuff for me:
function getTime(){
    // asynchronous call to the server-side method
    AJAX.Samples._Default.GetCurrentServerTime(document.all.TimeFormat.value, getTime_callback);
}

// This method will be called after the server method has executed and the result has been sent to the client.
function getTime_callback(res){
    document.all.timetext.value = res.value;
}
The first method "getTime()" actually gives a call to the server side method. It specifies two parameters - the first one is the sType parameter that your server side method expects. The second one is the callback method that will be invoked once the data is fetched from the server. In the callback method, we simply assign the res.value to the textbox. Although, I have tried to give you a starter here on the AJAX stuff, this link would provide a lot more samples for you to take a look. Also, there is another article on msdn that talks about AJAX development with ASP.Net.--Sanket