# YTPlaylistToCSV

Successor to YTPlaylistGrabber. Gets YouTube playlist data and generates a basic CSV file. Can be run locally. The zip archive includes a key for YouTube's API, added so that SCM won't index it. Tested on Firefox 35 and Chrome 40.

#### Usage

Just open index.html in a modern browser. Enter a URL or ID of a YouTube playlist and press GET. A download for the CSV will automatically be generated when finished.

#### CSV format

```
title, date added to playlist (if available), video url
```

#### Version history

0.1: Initial version

#### Notes

Uses jQuery, jQuery UI, and YouTube API v3.

#### License

MIT
Talk:Modding/Cutscene Commands/@comment-37385845-20190131223952 Can you guys help? I'm trying to change the time to 1:00 PM, but it doesn't work.
import { User } from '../../entities/User'
import { IUsersRepository } from '../../repositories/IUsersRepositories'

interface IUserRequest {
  name: string
  username: string
  email: string
}

class CreateUserService {
  constructor(private usersRepository: IUsersRepository) {}

  async execute({ email, username, name }: IUserRequest) {
    const userAlreadyExists = await this.usersRepository.exists(username)

    if (userAlreadyExists) {
      throw new Error('User already exists!')
    }

    const userCreate = User.create({ email, username, name })
    const user = await this.usersRepository.create(userCreate)

    return user
  }
}

export { CreateUserService }
---
---

# TL;DR: What is this all about?

[See me in action conducting a sample interview here!](sample) It may seem as though I am talking to myself, but I am actually playing the role of both the interviewer and the interviewee.

# Introduction

Welcome to Tyler Citrin's website for Indiana University's School of Informatics, Computing, and Engineering (SICE) Career Services. This website originally supported Indiana University's Peer-Led Team Learning (PLTL) program for CSCI-C 343 "Data Structures and Algorithms." This site was inspired by Suzanne Menzel, who was my instructor for this course.

# Breakdown

I have organized my site into a few different sections:

* [Interview Structure along with Tips and Tricks](structure)
* [Data-types, Data Structures, Algorithms, and Big-O TC!](materials)
* [Post archives of sessions](archives)
* [Information on all the different types of roles](roles)
* [Great Study Resources](resources)

# Mock Interviews

In addition to the C343 PLTL program, you can schedule mock interviews through SICE Careers. These interviews can be scheduled via the 'Appointments' tab on the [SICE Careers website](https://sice-indiana-csm.symplicity.com/students/index.php).

# Contact

If you have any questions and want to reach out, my email is [[email protected]](mailto:[email protected]).

## Website Credits

* Dante Razo, B.S. '20, computer science
Page:History of New South Wales from the records, Volume 2.djvu/63

THE GUARDIAN. 47

The loss of the Guardian was not a public misfortune only, it told severely on individuals in the community. Friends of the officers in England, knowing that they would be in want of many necessaries, sent out supplies by this vessel. Thinking that the gun-room was a safer place than the hold, these precious goods were stored in that part of the ship, but, as it happened, the choice was the very worst that could have been made. When the Guardian, after striking the iceberg, got clear off, she was found to be making water rapidly, and the first object of her commander was to lighten the ship. The live stock and Sir Joseph Banks's "plant-cabin" went overboard to begin with, and then the gun-room was swept. Some of the officers, Collins says, were "great losers." All sorts and conditions of people at the settlement, therefore, had good reason to remember the loss of the Guardian.* The moral as well as the material welfare of the colony suffered. Among the persons on board the Guardian was the Rev. John Crowther, who had been appointed at "a salary of eight shillings per diem" to be assistant chaplain of the settlement.† He was one of those who left the vessel in the long-boat, and was rescued with the master, Mr. Clements, and others by a French vessel, which took them to the Cape. Instead of waiting for an opportunity to continue the voyage to Port Jackson, Mr. Crowther made the best of his way back to England. The circumstances attending his appointment and his return to England are told by the Rev. John Newton, of Olney (the friend and confidant of the poet Cowper), in a series of letters written by him to the Rev. R. Johnson, chaplain at Sydney.
The correspondence forms part

* "Beside the common share which we all bore in this calamity, we had to lament that the efforts of our several friends, in amply supplying the wants that they concluded must have been occasioned by an absence of three years, were all rendered ineffectual, the private articles having been among the first things that were thrown overboard to lighten the ship." — Collins, vol. i, p. 117. Tench says that "there was scarcely an officer in the colony that had not his share of private property on board of this richly-freighted ship."

† Historical Records, vol. i, part 2, p. 290.
Linq2XML query problem (not accepting ':' in names)

I have an XML file with the following elements:

<rs:data> <z:row<EMAIL_ADDRESS>type='1'/>

I need to access them using Linq2Xml. My problem is that I get an exception telling me that the ':' sign cannot be used in names. My Linq2Xml query is:

var rowQuery = from Mail in whiteMails.Descendants("xml").Descendants("rs:data").Descendants("z:row") select Mail;

How can I handle this?

In XML, I think the ':' is used to point to a specific namespace; that's why it can't be used, I guess. In the root element, are there xmlns:rs or xmlns:z attributes? Try just searching for "data" or "row" and see if that works out. Otherwise, if you created the names like that, you will probably have to change them.

Yes, I found out that it is a namespace... I have definitions like xmlns:rs='urn:schemas-microsoft-com:rowset' and xmlns:z='#RowsetSchema', but I can't find out how to handle them :( If I just look for "data" or "row", it finds nothing.

I'm not sure myself; how about Descendants("data").Where(d => d.Name.NamespaceName == "thenamespace").Descend...?

rs:data is the name of an element which belongs to a namespace. The "rs" is a namespace prefix; the "data" is a local name. According to your comment above, the rs prefix is declared for the namespace URI "urn:schemas-microsoft-com:rowset". This means that your element is identified as an element with local name "data" and namespace URI "urn:schemas-microsoft-com:rowset". In LINQ to XML, all names need to be fully qualified by their namespace (it's also how XML works in general). In code this is done through the XNamespace and XName classes. So, for example:

XNamespace rsNamespace = XNamespace.Get("urn:schemas-microsoft-com:rowset");
XNamespace zNamespace = XNamespace.Get("#RowsetSchema");
var rowQuery = from Mail in whiteMails.Elements("xml")
                                      .Elements(rsNamespace + "data")
                                      .Elements(zNamespace + "row")
               select Mail;

Note that I've used Elements instead of Descendants (Descendants would work as well). Descendants returns all elements of the specified name in the entire subtree of the element you call it on, at any depth; Elements returns just the immediate children with that name. From your XML and query, it seems you want the immediate children. Elements is also much faster than Descendants, since it only needs to scan the immediate children rather than the entire subtree.

Wow, very, very nice description :) :) Thank you very much, it works!
Coat is a Gear Item. Effect: The holder takes 20% less damage when hit by techniques. Location: Reward from the Travel Writer side quest after visiting Tucma.
/*!
   Copyright 2019 Ron Buckton

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
*/

import { MatchingKeys } from '@esfx/type-model';
import { StructPrimitiveType, StructType, StructFieldDefinition, StructDefinition, Struct } from './index';
import { NumberType, sizeOf, getValueFromView, putValueInView, Alignment } from './numbers';

type StructTypeLike =
    | StructType
    | (new (buffer: ArrayBufferLike, byteOffset?: number) => Struct)
    | typeof Struct;

const typeInfos = new WeakMap<object, TypeInfo>();

/* @internal */
export abstract class TypeInfo {
    readonly size: number;
    readonly alignment: Alignment;

    constructor(size: number, alignment: Alignment) {
        this.size = size;
        this.alignment = alignment;
    }

    static get(type: StructTypeLike): StructTypeInfo;
    static get(type: StructPrimitiveType): PrimitiveTypeInfo;
    static get(type: StructPrimitiveType | StructTypeLike): StructTypeInfo | PrimitiveTypeInfo;
    static get(type: StructPrimitiveType | StructTypeLike) {
        const typeInfo = this.tryGet(type);
        if (typeInfo) {
            return typeInfo;
        }
        throw new TypeError("Invalid struct or primitive type.");
    }

    protected static tryGet(type: StructPrimitiveType | StructTypeLike) {
        let current: object | undefined = type;
        while (typeof current === "function") {
            const typeInfo = typeInfos.get(current);
            if (typeInfo) {
                return typeInfo;
            }
            current = Object.getPrototypeOf(current);
        }
    }

    abstract coerce(value: any): number | bigint | Struct;
    abstract readFrom(view: DataView, offset: number, isLittleEndian?: boolean): number | bigint | Struct;
    abstract writeTo(view: DataView, offset: number, value: number | bigint | Struct, isLittleEndian?: boolean): void;
}

/* @internal */
export interface ArrayBufferViewConstructor {
    new (size: number): ArrayBufferView & Record<number, number | bigint>;
    new (buffer: ArrayBufferLike, byteOffset?: number): ArrayBufferView & Record<number, number | bigint>;
    BYTES_PER_ELEMENT: number;
}

/* @internal */
export type DataViewReaders = MatchingKeys<DataView, (offset: number) => number | bigint>;

/* @internal */
export type DataViewWriters = MatchingKeys<DataView, ((offset: number, value: number) => void) | ((offset: number, value: bigint) => void)>;

/* @internal */
export class PrimitiveTypeInfo extends TypeInfo {
    private _primitiveType!: StructPrimitiveType;
    private _numberType: NumberType;

    constructor(numberType: NumberType) {
        super(sizeOf(numberType), sizeOf(numberType));
        this._numberType = numberType;
    }

    get primitiveType() { return this._primitiveType; }

    coerce(value: any) {
        return this._primitiveType(value);
    }

    readFrom(view: DataView, offset: number, isLittleEndian?: boolean) {
        return getValueFromView(view, this._numberType, offset, isLittleEndian);
    }

    writeTo(view: DataView, offset: number, value: number | bigint, isLittleEndian?: boolean) {
        putValueInView(view, this._numberType, offset, value, isLittleEndian);
    }

    static get(type: StructTypeLike): never;
    static get(type: StructPrimitiveType): PrimitiveTypeInfo;
    static get(type: StructPrimitiveType | StructTypeLike): PrimitiveTypeInfo;
    static get(type: StructPrimitiveType | StructTypeLike): PrimitiveTypeInfo {
        const typeInfo = typeInfos.get(type);
        if (!typeInfo || !(typeInfo instanceof PrimitiveTypeInfo)) {
            throw new TypeError("Invalid primitive type.");
        }
        return typeInfo;
    }

    finishType<T extends StructPrimitiveType>(primitiveType: T) {
        this._primitiveType = primitiveType;
        Object.freeze(this);
        typeInfos.set(primitiveType, this);
        return primitiveType;
    }
}

const weakFieldCache = new WeakMap<StructFieldInfo, WeakMap<Struct, Struct>>();

/* @internal */
export class StructFieldInfo {
    readonly containingType: StructTypeInfo;
    readonly field: StructFieldDefinition;
    readonly index: number;
    readonly byteOffset: number;
    readonly typeInfo: TypeInfo;

    constructor(type: StructTypeInfo, field: StructFieldDefinition, index: number, byteOffset: number) {
        this.containingType = type;
        this.field = { ...field };
        this.index = index;
        this.byteOffset = byteOffset;
        this.typeInfo = TypeInfo.get(this.field.type);
        Object.freeze(this.field);
        Object.freeze(this);
    }

    get name() { return this.field.name; }
    get type() { return this.field.type; }
    get size() { return this.field.type.SIZE; }

    coerce(value: any) {
        return this.typeInfo.coerce(value);
    }

    readFrom(owner: Struct, view: DataView, isLittleEndian?: boolean) {
        if (this.typeInfo instanceof StructTypeInfo) {
            let cache = weakFieldCache.get(this);
            if (!cache) weakFieldCache.set(this, cache = new WeakMap());
            let value = cache.get(owner);
            if (!value) cache.set(owner, value = this.typeInfo.readFrom(view, this.byteOffset, isLittleEndian));
            return value;
        }
        return this.typeInfo.readFrom(view, this.byteOffset, isLittleEndian);
    }

    writeTo(_owner: Struct, view: DataView, value: number | bigint | Struct, isLittleEndian?: boolean) {
        this.typeInfo.writeTo(view, this.byteOffset, value, isLittleEndian);
    }
}

/* @internal */
export class StructTypeInfo extends TypeInfo {
    readonly fields: readonly StructFieldInfo[];
    readonly ownFields: readonly StructFieldInfo[];
    readonly fieldsByName: ReadonlyMap<string | symbol, StructFieldInfo>;
    readonly fieldsByOffset: ReadonlyMap<number, StructFieldInfo>;
    readonly baseType: StructTypeInfo | undefined;
    private _structType!: StructType;

    constructor(fields: StructDefinition, baseType?: StructTypeInfo) {
        const fieldNames = new Set<string | symbol>();
        const fieldsArray: StructFieldInfo[] = [];
        const fieldsByName = new Map<string | symbol, StructFieldInfo>();
        const fieldsByOffset = new Map<number, StructFieldInfo>();
        if (baseType) {
            for (const field of baseType.fields) {
                fieldsArray.push(field);
                fieldNames.add(field.name);
                fieldsByName.set(field.name, field);
                fieldsByOffset.set(field.byteOffset, field);
            }
        }
        const fieldOffsets: number[] = [];
        let offset = baseType ? baseType.size : 0;
        let maxAlignment: Alignment = 1;
        for (const field of fields) {
            if (fieldNames.has(field.name)) {
                throw new TypeError(`Duplicate field: ${field.name.toString()}`);
            }
            const fieldTypeInfo = TypeInfo.get(field.type);
            const alignment = fieldTypeInfo.alignment;
            offset = align(offset, alignment);
            fieldOffsets.push(offset);
            fieldNames.add(field.name);
            if (maxAlignment < alignment) maxAlignment = alignment;
            offset += fieldTypeInfo.size;
        }
        super(align(offset, maxAlignment), maxAlignment);
        const baseLength = baseType ? baseType.fields.length : 0;
        const ownFieldsArray: StructFieldInfo[] = [];
        for (let i = 0; i < fields.length; i++) {
            const fieldInfo = new StructFieldInfo(this, fields[i], baseLength + i, fieldOffsets[i]);
            fieldsArray.push(fieldInfo);
            ownFieldsArray.push(fieldInfo);
            fieldsByName.set(fieldInfo.name, fieldInfo);
            fieldsByOffset.set(fieldInfo.byteOffset, fieldInfo);
        }
        this.ownFields = ownFieldsArray;
        this.fields = fieldsArray;
        this.fieldsByName = fieldsByName;
        this.fieldsByOffset = fieldsByOffset;
        this.baseType = baseType;
    }

    get structType() { return this._structType; }

    static get(type: StructTypeLike): StructTypeInfo;
    static get(type: StructPrimitiveType): never;
    static get(type: StructPrimitiveType | StructTypeLike): StructTypeInfo;
    static get(type: StructPrimitiveType | StructTypeLike): StructTypeInfo {
        const typeInfo = this.tryGet(type);
        if (!typeInfo || !(typeInfo instanceof StructTypeInfo)) {
            throw new TypeError("Invalid struct or primitive type.");
        }
        return typeInfo;
    }

    coerce(value: any) {
        return value instanceof this._structType ? value : new this._structType(value);
    }

    readFrom(view: DataView, offset: number, isLittleEndian?: boolean) {
        return new this._structType(view.buffer, view.byteOffset + offset);
    }

    writeTo(view: DataView, offset: number, value: number | bigint | Struct, isLittleEndian?: boolean) {
        if (!(value instanceof this._structType)) {
            throw new TypeError();
        }
        value.writeTo(view.buffer, view.byteOffset + offset);
    }

    finishType<T extends StructType>(structType: T): T;
    finishType(structType: typeof Struct): void;
    finishType<T extends StructType>(structType: T) {
        this._structType = structType;
        Object.freeze(this.ownFields);
        Object.freeze(this.fields);
        Object.freeze(this.fieldsByName);
        Object.freeze(this.fieldsByOffset);
        Object.freeze(this);
        typeInfos.set(structType, this);
        return structType;
    }
}

function align(offset: number, alignment: Alignment) {
    return (offset + (alignment - 1)) & -alignment;
}
Compatibility with older Python versions

Hi! The use of from __future__ import annotations, and the resulting postponed annotations, prevents xstate from being used with Python 3.6, even though that version is still officially supported. Can you ensure compatibility with that version?

Hi Aluriak, you are correct: 3.6 is still supported, and postponed annotations are 3.7+ (PEP 563). I'll work on a solution. Cheers.
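As a sketch of the kind of fix under discussion (the class and method names below are hypothetical, not taken from xstate): instead of relying on `from __future__ import annotations`, forward references can be written as string literals, which already parse on Python 3.6:

```python
# Works on Python 3.6+: quote forward references instead of relying on
# `from __future__ import annotations`, which requires Python 3.7+.
from typing import List, Optional


class State:
    # "State" is quoted because the class is still being defined at this
    # point; with the __future__ import, quoting would be unnecessary.
    def __init__(self, name: str, parent: "Optional[State]" = None) -> None:
        self.name = name
        self.parent = parent
        self.children: "List[State]" = []

    def add_child(self, child: "State") -> "State":
        # Attach a child state and return it, so calls can be chained.
        child.parent = self
        self.children.append(child)
        return child


root = State("root")
leaf = root.add_child(State("leaf"))
print(leaf.parent.name)  # -> root
```

The trade-off is purely cosmetic: quoted annotations behave identically under `typing.get_type_hints`, so dropping the `__future__` import restores 3.6 support without changing runtime behaviour.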
How to overlap elements along both width and height, taking on the minimum value for each?

I want to have elements that overlay each other, such that the parent takes on the minimum height and width needed to contain them. Inspired by this question, I've come up with the solution below. See this identical JSFiddle.

#merger {
  text-align: center;
  background-color: #f2e7e5;
  overflow: hidden;
  display: inline-block;
}

#merger > * {
  float: left;
  width: 100%;
}

#merger .active {
  visibility: visible;
}

#merger > *:not(:first-child) {
  margin-left: -100%;
}

<div id="merger">
  <div> 1 </div>
  <div> 222 </div>
  <div> 3 <br /> 3 </div>
  <div> 444 <br /> 444 </div>
</div>

However, the above solution only works for height. You can see that the parent merger actually takes on extra width. If you go into the JSFiddle and remove one of the elements that has less width than the rest (such as the "1"), the parent's width shrinks. The expected behavior is that the parent's width remains the same, since it should be equivalent to the width of its widest child. Removing the non-widest child should have no impact on width, just as removing the non-tallest child currently has no impact on height. How do I extend the behavior I already have for height to width as well?

I'd suggest using CSS Grid, as follows:

#merger {
  /* displaying the element as an inline grid, so that the element itself
     behaves as an inline element while still laying its own content out as
     a grid; instead of taking the full block-width of its parent, it takes
     only the space it needs: */
  display: inline-grid;
  /* defining one named area into which all content will be placed: */
  grid-template-areas: "allContent";
  background-color: #f2e7e5;
  overflow: hidden;
}

#merger > div {
  /* placing all the <div> children of the #merger element into the single
     named grid area: */
  grid-area: allContent;
}

<div id="merger">
  <div> 1 </div>
  <div> 222 </div>
  <div> 3 <br /> 3 </div>
  <div> 444 <br /> 444 </div>
</div>

References: display, grid-area, grid-template-areas, place-content.

Bibliography: "A Complete Guide to Grid," CSS-Tricks. "Basic Concepts of grid layout," MDN.

Thank you! This is concise and intuitive. I see, however, that place-content: center didn't place the elements in the center of allContent. Instead, I had to apply margin: auto to #merger > div to get vertical and horizontal centering of the children. Could you please provide an example where place-content: center results in a visual difference?

My mistake: while I recall seeing demonstrations of place-content: center working with CSS Grid, I think it's a habit I developed from working with flexbox layout (as a simple demo: https://jsfiddle.net/davidThomas/m6f2t3zr/). I'll edit that part out of the answer.
Clearing cookies: why is a specific value required?

I am reading up on clearing cookies. Say a cookie is set with setcookie("abc", "xyz", time()+3600); from what I've read, you unset it by using setcookie("abc", "xyz", time()-3600), which sets the cookie to expire in the past. All the examples I've seen use this format. My question is: why does the last parameter have to be specifically time()-3600? Why can't it be time()-1 or time()-9999999, for example?

Any negative offset will do, but this magic number 3600 just "feels right" for coders :)

@bsdnoobz I would think a number like 256 or 1024 or 65536 would "feel right" for a coder :)

It doesn't have to be time() - 3600. That's merely used in examples because it makes a nice tidy "one hour ago". It just has to be some time in the past, so time()-1 or time()-9999999 are acceptable as well, as is any value < time().

"My question is why does the last parameter have to be specifically time()-3600, why can't it be time()-1 or time()-9999999 for example?" It doesn't. 3600 works, but anything in the past will work too.

In setcookie("abc", "xyz", time()-3600), the catch is that this references the time on the server, while the cookie expiration depends on the time of the host running the browser. If there is a mismatch of time between the two hosts, it is possible that a cookie may not expire. However, using a time of 1 indicates an expiration time of 1 second after midnight, January 1st, 1970, which is effectively the earliest possible expiration time. The current timestamp won't be zero (well, not until the year 2038 problem kicks in on some devices).

It can be any number below the current UNIX timestamp. When the browser reads a past time, the cookie is deleted.

time() returns the current time measured in the number of seconds since the Unix Epoch (January 1 1970 00:00:00 GMT), so when you do time() + or - some number, you are adding or subtracting that many seconds.
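To make the timestamp arithmetic above concrete, here is a small sketch (Python is used for illustration only; the PHP behaviour is analogous): a cookie is discarded when its expiry timestamp is any value earlier than the current Unix time, so the magnitude of the offset is irrelevant.

```python
import time
from typing import Optional


def cookie_expired(expires_at: float, now: Optional[float] = None) -> bool:
    """A cookie whose expiry timestamp is in the past is discarded by
    the browser; any past value works, not just "one hour ago"."""
    if now is None:
        now = time.time()
    return expires_at < now


now = time.time()
# All of these offsets put the expiry in the past, so all of them
# would delete the cookie -- 3600 is not special:
for offset in (1, 3600, 9999999):
    assert cookie_expired(now - offset, now)
# A future expiry keeps the cookie alive:
assert not cookie_expired(now + 3600, now)
```

The one caveat mirrored from the answer above: `now` on the server and `now` in the browser can disagree, which is why very small offsets like `time()-1` are riskier in practice than they look.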
Extreme Negative Gamma

I see Zero Hedge talk about the extreme negative gamma position of dealers all the time, which it then ties back to market moves. I was wondering how you calculate such market positioning based on publicly available data. Has someone done it, or any ideas how to do it?

This is what flow derivatives desks call the "Gamma Hammer" (in the morning huddle) or "pin risk" (more formally). In the run-up to quarterly expiry, imagine that the dealers as a group have a net gamma position versus the market (i.e. their clients), who have to be running the opposite gamma position. Across the market, there is no net gamma position. There cannot be, by definition. But within the market, the difference is that the banks have to hedge their book, while the punters don't. So:

If the brokers are positive gamma and the market goes up, their delta will become more positive. The brokers have to sell the market.

If the brokers are positive gamma and the market goes down, their delta will become more negative. The brokers have to buy the market.

Brokers will buy the dip and sell the rally, helping to pin the market. And vice versa: if the brokers are negative gamma, market moves will create a delta position opposite to the direction of the market movement. To hedge that delta, the brokers have to buy the rally and sell the dip, which (potentially) amplifies market movements and volatility.

Dealer flow desks really believe in this stuff. They really do. My only caveats are (1) that most of them also believe in some fairly weird other stuff, and (2) this is a group of people who have run out of Greek letters to describe risk, for whom "vega" is a normal concept and "zanda" really exists...

Let's take it as a given that this argument is really true. Can you then track it? No, and the dealers themselves can only guess. They can look at the options outstanding at every strike to get a feel for the aggregate gamma risk out there (on listed product). But they cannot know how much of it is hedged by an investor owning call-spreads, put-spreads, etc., or, say, selling NASDAQ puts to buy S&P calls. And there is a positive and a negative side to every gamma risk: the data tells you nothing about the imbalance within the market, in terms of who is long vs. short when expiry happens. Nor does it tell you anything about OTC flow, which is never routed to the exchange and so never shows up in the reported positions. So it's a finger-in-the-air affair. At the very least, it's a good talking point for one week in 13, which the ZeroHedge crowd also seem to love ;-)

Hope this helps.
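The hedging arithmetic in the answer above can be sketched numerically (all positions here are invented for illustration): with net gamma Γ, a spot move ΔS changes the dealer's delta by roughly Γ·ΔS, and the re-hedge is the trade in the opposite direction of that delta change.

```python
def hedge_flow(net_gamma: float, spot_move: float) -> float:
    """Shares the dealer must trade to re-hedge after a spot move.

    The delta picked up from the move is approximately gamma * spot_move;
    the hedge is the opposite trade (negative = sell, positive = buy).
    """
    delta_change = net_gamma * spot_move
    return -delta_change


# Dealers long gamma: a rally makes their delta more positive,
# so they sell the rally (dampening the move).
assert hedge_flow(net_gamma=1000.0, spot_move=+2.0) == -2000.0

# Dealers short gamma: a rally makes their delta more negative,
# so they buy the rally (amplifying the move).
assert hedge_flow(net_gamma=-1000.0, spot_move=+2.0) == +2000.0
```

This first-order sketch is exactly why negative dealer gamma is associated with amplified moves: the hedge flow points the same way as the market move, whereas positive gamma produces contrarian, pinning flow.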
New treatment plans are being designed all the time, and a second or even third opinion may give the client more information about newly discovered, successful methods.
THEATRE OF ANATOMY, MEDICINE, &c.
No. 2, Blenheim-Street, Great Marlborough-Street.

The SUMMER COURSE of LECTURES at this School will begin on Monday, June 7.

Anatomy, Physiology, and Surgery, by Mr. Brookes, daily, at Seven in the Morning.—Dissections as usual.

Chemistry, Materia Medica, &c. daily, at Eight in the Morning; Theory and Practice of Physic at Nine; with Examinations, by Dr. Ager.

Three Courses are given every year, each occupying nearly four months. Farther particulars may be known of Mr. Brookes, at the Theatre; or of Dr. Ager, 69, Margaret-Street, Cavendish-Square.

Vol. XLIX. A Plate to illustrate Dr. Evans's Communication on Terrestrial Magnetism; a new Electro-atmospherical Instrument; and Mr. Andrew Horn's Paper on Vision.—A Quarto Plate to illustrate Mrs. Ibbetson's Physiology of Vegetables.—A Plate to illustrate the Solar Spots which appeared during the Year 1816;—and Mr. Bryan's Improvement on the Sliding-Rule.—A Plate descriptive of Mr. Emmett's Instrument for the Measurement of the Moon's Distance from the Sun, &c.; also a New Reflecting Goniometer.—A Plate to illustrate Chevalier Ader's Method of communicating Rotatory Motion; Lieut. Shuldham's improved Method of working a Capstan; and Steele's new Modification of Nooth's Apparatus, &c.

Vol. L. A Plate to illustrate Sir Humphry Davy's new Researches on Flame, and Sir George Cayley's Paper on Aërial Navigation.—A Plate representing a Section of the Pneumatic Cistern, with the compound Blow-pipe of Mr. Hare; and a Sketch of a Steam-Vessel intended to run between London and Exeter.—Representation of Apparatus for the Sublimation of Iodine.—Model of a Safety Furnace by Mr. Bakewell.—Apparatus for consuming Fire-damp in the Mine—and Apparatus for relighting the Miners' Davy.—A Plate illustrative of the New Patent Horizontal Water-Wheel of Mr. Adamson.—A Plate illustrative of Mrs. Ibbetson's Theory of the Physiology of Vegetables.—A Plate to illustrate Mr. Dickinson's new System of Beaconing.

Vol. LI. A Plate illustrative of Mr. Capel Lofft's Paper on the Probability of Meteorolites being projected from the Moon.—Two Plates: one, of Mr. H. Tritton's Improved Apparatus for Distillation; and another, of the Figures in Bradley's Gardening illustrative of the Article on the Kaleidoscope.—A Plate illustrative of Mrs. Ibbetson's Paper on the Anatomy of Vegetables; and Mr. Tredgold's on Revetements.

Vol. LII. A Plate illustrative of Mr. Udington's Electrical Increaser for the unerring Manifestation of small Portions of the Electric Fluid.—A Plate illustrative of Mrs. Ibbetson's Paper on the Fructification of Plants.—A Plate illustrative of the Rev. John Michell's Theory of the Formation of the Earth.—A Plate illustrative of Capt. Kater's Article on the Pendulum; and New Apparatus for impregnating Liquids with Gases.—A Plate illustrative of Sir H. Davy's Apparatus for the Volatilization of Phosphorus, and Mr. Smith's Essay on the Structure of the poisonous Fangs of Serpents.

Vol. LIII. A Plate illustrative of Dr. Ure's Experiments on Caloric, Mr. Luckcock's Paper on the Atomic Philosophy, and Mr. Bowron's on the Purification of Coal Gas.—A Plate representing Mr. Rennie's Apparatus employed in his Experiments on the Strength of Materials; and the Marquis Ridolfi's Improvement on the Gas Blow-pipe.—A Plate illustrative of Mr. Meikle's Paper on Calorific Radiation; Mr. Lowe's on the Purification of Coal Gas; and Mr. Hughes's on ascertaining Distances.
require 'rails_helper'

RSpec.describe SkillsController, type: :controller do
  let(:user) { create(:user) }

  before do
    sign_in user
  end

  describe 'create #POST' do
    let!(:skill) { create(:skill, name: 'rollerblade') }

    it 'creates a new skill if it does not exist yet' do
      expect {
        post :create, user_id: user.id, skill: { name: 'Bowling' }
      }.to change(Skill, :count).by 1
    end

    it 'does not create a new skill if it exists' do
      expect {
        post :create, user_id: user.id, skill: { name: 'Rollerblade' }
      }.to change(Skill, :count).by 0
    end

    it 'creates a new users_skill' do
      expect {
        post :create, user_id: user.id, skill: { name: 'Bowling' }
      }.to change(UsersSkill, :count).by 1
    end

    it 'downcases the skill name' do
      post :create, user_id: user.id, skill: { name: 'Hula-OOp' }
      expect(Skill.last.name).to eq 'hula-oop'
    end

    it 'redirects to user profile with flash notice' do
      post :create, user_id: user.id, skill: { name: 'Bowling' }
      expect(response).to redirect_to user_path(user)
    end
  end

  describe 'destroy #DELETE' do
    let(:skill) { create(:skill, name: 'rollerblade') }
    let!(:users_skill) { create(:users_skill, user: user, skill: skill) }

    it "destroys current user's users_skill" do
      expect {
        delete :destroy, user_id: user.id, id: skill.id
      }.to change { user.skills.count }.by -1
    end

    it 'redirects to user profile with flash notice' do
      delete :destroy, user_id: user.id, id: skill.id
      expect(flash[:notice]).to eq "Removed skill rollerblade"
      expect(response).to redirect_to user_path(user)
    end
  end
end
from being situated in Portland Ridge, Vere. The following is a section of part of this cave, in which two or three circumstances deserve attention, as they cannot fail to remind the reader of some of Prof. Buckland's cavern sections. A stalagmitic floor (A) rests upon a fine silty clay (B), the depth of which I could not ascertain; one or two large stalactite columns appear also to rest upon the clay; but of this I am not certain; the heat, in fact, was so oppressive (from being near the surface) during the time I visited it, that I was prevented from remaining long in the cavern. This cave is situated on the side of a hill, and is a short distance from the sea, but sufficiently elevated above it to prevent the possibility of the clay being derived from it at its present level. The crust of stalagmite is of sufficient thickness to show that it must have taken a long time to form. I did not observe any bones beneath it, and am now sorry that proper search was not made, as the depth of the silty clay has not been ascertained, and as it might contain bones. Portland Cave has been visited by hundreds of persons, most of whom have written their names on almost every accessible portion of it: the floor, therefore, cannot be expected to be in the condition in which it was first discovered, and it would be difficult to say how far the stalagmitic crust might have extended. The portion that I observed was not large, and is in itself of little importance; but it becomes interesting as connected with the sections of caverns, beneath the stalagmitic floors of which bones have been discovered.
const { eventValidationSchema } = require('common');
const validationMiddleware = require('../validationMiddleware');

const validate = validationMiddleware(eventValidationSchema);

describe('validationMiddleware', () => {
  const next = jest.fn();
  const send = jest.fn();
  const res = { status: jest.fn().mockImplementation(() => ({ send })) };
  const validData = {
    firstName: 'a',
    lastName: 'b',
    email: '[email protected]',
    date: '2018-10-14'
  };

  it('should correctly validate valid data', async () => {
    const req = { body: validData };
    await validate(req, res, next);
    expect(next).toHaveBeenCalled();
  });

  describe('should correctly validate invalid data', () => {
    const req = { body: { ...validData, firstName: '' } };

    it('and return 400', async () => {
      await validate(req, res, next);
      expect(res.status).toHaveBeenCalledWith(400);
    });

    it('and return valid error', async () => {
      const err = 'First name is required';
      await validate(req, res, next);
      const receivedErr = send.mock.calls[0][0].message;
      expect(receivedErr).toEqual(err);
    });
  });
});
How to manage a Garden extermination of these pests often requires a large expenditure in time, money, and plant energy. Even if you are already troubled with the same undesirable foes, there is surely no reason why you should increase the stock. This is one of the chief dangers of what we might term jumble sales. Only too often we find that such sales include a variety of stuff which could not be disposed of in the orthodox way, or the remains of a large stock from which the best has been scrupulously selected. True, the price is low, but a worthless plant is dear at any price. I do not wish to urge readers to keep away from sales altogether, but I certainly advise them to buy cautiously, and if they know nothing as to the value of a plant, to take some one with them who does. In buying stuff from established seedsmen there is much less fear of being "done," for it may be said with pride that by far the large majority of our horticultural firms carry on their business in an irreproachable manner. They usually try to give every satisfaction to their customers, for they are fully aware that their own interest is indissolubly bound up therein. Nor is the price paid for things always a safe criterion of value. By paying a large price it often happens that we are really paying for a careful method of selection, and the additional cost cannot then reasonably be begrudged. A low price is not always a criterion of cheapness, nor is a high price always one of quality. The best school in this case is perhaps the old one of experience.
[Congressional Record (Bound Edition), Volume 160 (2014), Part 11] [Senate] [Page 15316] TRIBUTE TO DONNA KUETHE Mrs. SHAHEEN. Mr. President, I wish to recognize the achievements of Donna Kuethe, Recreation Director for the town of Moultonborough, NH, who was recently named New England Woman of the Year by Every Child is Ours, ECIO, an organization dedicated to promoting universal educational opportunity. For 40 years Donna has been a tireless advocate for children's education, environmental stewardship and community recreation. On the national and international stages, Donna has volunteered in New Orleans and South Africa delivering emergency supplies to areas hard-hit by natural disasters. She has worked with Operation Recreation Relief, a group designed to help provide recreation services to areas impacted by disasters, in addition to focusing on assisting children in those areas through her work with the Save the Children foundation. Back home in New Hampshire, Donna has been active throughout the State through her work with the Children in Nature initiative, the New Hampshire State Parks Great Park Pursuit and other programs focused on encouraging active and healthy lifestyles. She has also advocated for outdoor initiatives through her service on the House/Senate Committee on Child Care Licensing and as Chair of the Legislative Committee for New Hampshire Parks and Recreation. Donna has also been a key leader at the Moultonborough Parks and Recreation Department, in addition to coaching New Hampshire high school students and establishing after-school programs, youth sports and summer day camps, and programs for seniors. Donna's lifelong devotion to her community and the many organizations she has served is truly admirable, and her recognition as ECIO's New England Woman of the Year is well deserved. On behalf of Granite Staters everywhere, I thank Donna Kuethe for her service. ____________________
Deviprasad R. D, Satyanarayana GNV, Asati A, Muralidharan K, Mudiam MKR. Development of a multi‐class method to quantify phthalates, pharmaceuticals and personal care products in river water using ultra‐high performance liquid chromatography coupled with quadrupole hybrid Orbitrap mass spectrometry. Anal Sci Adv. 2021;2:373--386. 10.1002/ansa.202000015

AGC: automatic gain control
GPS: global positioning system
HCD: higher energy collisional dissociation
HESI: heated electrospray ionization source
HLB: hydrophilic‐lipophilic balance
HRMS: high resolution mass spectrometry
LLE: liquid‐liquid extraction
ME: matrix effect
PPCPs: pharmaceuticals and personal care products
PPPCPs: phthalates, pharmaceuticals, and personal care products
PRM: parallel reaction monitoring
Q‐Orbitrap‐MS: quadrupole hybrid Orbitrap mass spectrometry
SPE: solid‐phase extraction
UHPLC: ultra high‐performance liquid chromatography

River water is an essential source of drinking water for both rural and urban communities. However, pollution of rivers is a global issue that has to be dealt with seriously by identifying the pollutants and ways to detect them. A large number of anthropogenic pollutants (pharmaceuticals and personal care products, PPCPs) are being discharged from industrial, commercial, domestic, and agricultural sources, leading to surface water contamination, which in turn not only threatens aquatic life but also affects human health.[@ansa202000015-bib-0001], [@ansa202000015-bib-0002], [@ansa202000015-bib-0003], [@ansa202000015-bib-0004], [@ansa202000015-bib-0005] Attempts have been made by several researchers to remove water contaminants, and the efforts to decontaminate water are still being explored.[@ansa202000015-bib-0006], [@ansa202000015-bib-0007] Although the majority of micropollutants are present at low levels in surface water, they can increase health hazards to both flora and fauna in the aquatic environment and also, indirectly, to the terrestrial habitat.
While their long‐standing effects on living beings are mostly unknown, their harmful impact cannot be ignored.[@ansa202000015-bib-0001], [@ansa202000015-bib-0008] The monitoring of river water for organic or inorganic micropollutants has drawn the attention of researchers to understand their contamination levels and design mitigation strategies.[@ansa202000015-bib-0007], [@ansa202000015-bib-0009], [@ansa202000015-bib-0010] The PPCPs, known as organic micropollutants, are being extensively used in daily life and are a cause of concern because they are being continuously discharged into the aquatic environment.[@ansa202000015-bib-0008], [@ansa202000015-bib-0009], [@ansa202000015-bib-0011], [@ansa202000015-bib-0012] PPCPs have been broadly classified into different groups based on their structural and physicochemical properties and one needs to understand their impact on human health.[@ansa202000015-bib-0013], [@ansa202000015-bib-0014], [@ansa202000015-bib-0015] Several methods are available for analysis of PPCPs in various environmental samples, but no method has been reported for the simultaneous high‐throughput quantitative analysis of 26 targeted pharmaceuticals and personal care products including phthalates (PPPCPs) by solid‐phase extraction‐ultra high performance liquid chromatography‐quadrupole hybrid orbitrap mass spectrometry (SPE‐UHPLC‐Q‐Orbitrap‐MS) in water samples in a single chromatographic run. The analysis of these different classes of chemicals requires an efficient analytical method that is able to identify and quantify them at low concentration with acceptable accuracy and precision. Recently, rapid advancements in analytical techniques have reduced the detection limits to nanogram levels with high precision. 
Several methods based on gas chromatography‐mass spectrometry (GC‐MS) are available for the analysis of PPCPs in river sediments,[@ansa202000015-bib-0016] sewage sludges,[@ansa202000015-bib-0017] groundwaters,[@ansa202000015-bib-0018] surface water,[@ansa202000015-bib-0019] and aquatic plants.[@ansa202000015-bib-0020] However, these methods have limitations such as low sensitivity, and they require an additional derivatization step to analyze analytes with polar functional groups.[@ansa202000015-bib-0021] Liquid chromatography‐mass spectrometry (LC‐MS), another hyphenated technique, has been applied to PPCPs analysis with different mass analyzers: triple quadrupole for sediment,[@ansa202000015-bib-0022] time of flight for water[@ansa202000015-bib-0023] and waste water,[@ansa202000015-bib-0024], [@ansa202000015-bib-0025] linear ion‐trap for water,[@ansa202000015-bib-0026] surface water,[@ansa202000015-bib-0027] and environmental waters,[@ansa202000015-bib-0028] and quadrupole‐Orbitrap for surface water,[@ansa202000015-bib-0029], [@ansa202000015-bib-0030] waste water,[@ansa202000015-bib-0031], [@ansa202000015-bib-0032] soil, and plants.[@ansa202000015-bib-0033] Although LC‐triple quadrupole‐MS has played an important role in the identification and quantification of targeted analytes,[@ansa202000015-bib-0009] its low mass resolution (unit mass) limits the identification of PPCPs in untargeted analysis.
Due to higher mass accuracy and precision, LC coupled with high‐resolution MS can be used for both targeted and untargeted analysis of PPCPs.[@ansa202000015-bib-0031] The monitoring of these organic micropollutants in environmental samples like water, soil, and sediment requires an effective extraction method to remove matrix interferences and enrich the low concentrations of analytes from a large volume of water sample.[@ansa202000015-bib-0034] Owing to the diverse chemical nature and polarity of the selected analytes, liquid‐liquid extraction (LLE) and solid‐phase extraction (SPE) are suitable for the extraction and clean‐up of multiclass analytes. The disadvantages of LLE are that it needs large volumes of both sample and solvent, has low accuracy, and is time‐consuming.[@ansa202000015-bib-0035] Compared to LLE, SPE offers (a) a wide range of sorbents for the extraction of various analytes, (b) an eco‐friendly, efficient, and cost‐effective procedure with a high recovery rate, and (c) lower solvent consumption. These advantages make SPE the better choice for the extraction and clean‐up of PPPCPs from environmental samples.[@ansa202000015-bib-0025] The main aim of this study was to develop, validate, and evaluate the performance of an SPE‐UHPLC‐Q‐Orbitrap‐MS method for the simultaneous determination of 26 PPPCPs, including phthalates, in river water samples.

MATERIALS AND METHODS {#ansa202000015-sec-0060}

Chemicals and reagents {#ansa202000015-sec-0070}

A total of 26 PPPCPs (7 phthalates, 12 pharmaceuticals, and 7 personal care products) were purchased from Sigma‐Aldrich (St. Louis, MO, USA) with purities in the range of 97‐99%. Mass spectrometric grade water, methanol, and acetonitrile (Optima grade) were purchased from Fisher Scientific (New Jersey, USA), and mass spectrometric grade formic acid was purchased from Merck (Darmstadt, Germany).
Oasis hydrophilic‐lipophilic balance (HLB) SPE cartridges (3 g, 6 mL) were purchased from Waters (Milford, MA, USA), and filter paper (0.22 μm) was purchased from Millipore.

Standard preparation {#ansa202000015-sec-0080}

Stock solutions of the individual standards were prepared at a concentration of 1.0 mg/mL in methanol. A mixed standard solution of 1 μg/mL was prepared by diluting the stock solutions of each analyte in water:methanol (50:50, v/v), and all standards were stored in glass volumetric flasks at ‐20°C until use.

Sample collection {#ansa202000015-sec-0090}

The water samples were collected from the River Ganga at nine different points in Allahabad and Varanasi, Uttar Pradesh, India, with global positioning system (GPS) coordinates as shown in Table [1](#ansa202000015-tbl-0001){ref-type="table"}. Figure [1A](#ansa202000015-fig-0001){ref-type="fig"} displays the GPS map of the Ganga River points (S1‐S5) in Allahabad, and Figure [1B](#ansa202000015-fig-0001){ref-type="fig"} shows the Ganga River points (S6‐S9) in Varanasi. The collected samples were brought to the laboratory in amber color glass bottles under ice‐cold conditions and filtered through Millipore filter paper. The pH of the samples was adjusted to 3.0 with formic acid to reduce microbial growth, and the samples were then stored at ‐20°C until analysis.
::: {#ansa202000015-tbl-0001 .table-wrap}
GPS coordinates of different sampling points in India

  Sample number   Sampling points                     Latitude    Longitude
  --------------- ----------------------------------- ----------- -----------
  S1              Kuresar ghat, Allahabad             25.498340   81.735181
  S2              Rasoolabad ghat, Allahabad          25.501465   81.853328
  S3              Daraganj ghat, Allahabad            25.449306   81.886168
  S4              Chitkana ghat, Allahabad            25.381157   81.908793
  S5              Sangam, Allahabad                   25.425045   81.888219
  S6              Sheetla ghat, Varanasi              25.307168   83.011086
  S7              Raj ghat, Varanasi                  25.325142   83.037073
  S8              Varuna River, Varanasi              25.330105   83.047934
  S9              Markendeya Mahadev ghat, Varanasi   25.500890   83.167124

John Wiley & Sons, Ltd.
:::

::: {#ansa202000015-fig-0001 .fig}
**A**, GPS map of Ganga river point at Allahabad (S1‐S5). **B**, GPS map of Ganga river point at Varanasi (S6‐S9)
:::

Sample preparation {#ansa202000015-sec-0100}

The SPE of PPPCPs from the river water samples was performed using a Waters Oasis HLB cartridge as the sorbent for maximum extraction efficiency, because it can extract a wide range of analytes at different pH levels.[@ansa202000015-bib-0019], [@ansa202000015-bib-0025] The SPE conditions were optimized using river water samples spiked with the analytes at a concentration of 50 ng/L. Before loading the samples, the cartridges were preconditioned with 5 mL of methanol and 5 mL of ultra‐pure Milli‐Q water. The water samples (3.0 L) were loaded onto the cartridges at a flow rate of 5 mL/min, and the cartridges were then air‐dried under vacuum. The sorbent was eluted with a 10 mL mixture of dichloromethane:methanol (1:1, v/v) to obtain maximum recovery of the PPPCPs. The extracted organic phase was dried in a nitrogen evaporator (TurboVap RV), and the dried aliquot was finally reconstituted with 2000 μL of water:methanol (50:50, v/v) for further analysis.

Blank sample {#ansa202000015-sec-0110}

A blank sample was used to detect possible contamination during analysis.
To avoid contamination, the following preventive procedure was followed: (a) plastic materials were not used in sample collection, preservation, preparation, or analysis; (b) standards and samples were prepared in amber color glassware to prevent possible contamination from air and degradation by temperature and light; (c) all glassware was cleaned with acetone and then dried at 250°C for 2 h in an oven before analysis; (d) only PTFE septa, filters, and metal tubing were used in the UHPLC‐Q‐Orbitrap‐MS; (e) cross‐contamination was monitored with solvent and sample blanks between sample analyses.[@ansa202000015-bib-0036]

Instrument conditions {#ansa202000015-sec-0120}

### Liquid chromatography {#ansa202000015-sec-0130}

The separation of PPPCPs was carried out using a UHPLC (Dionex Ultimate 3000, Thermo Scientific, MA, USA) hyphenated with a Q‐Exactive Orbitrap MS (Thermo Scientific, MA, USA). The UHPLC consists of a quaternary solvent manager, degasser, thermostated auto‐sampler, and column oven. The chromatographic separation of the PPPCPs was achieved using an Acquity BEH C~18~ column (100 × 2.1 mm, 1.7 μm; Waters, MA, USA). The column and auto‐sampler temperatures were maintained at 35°C and 10°C, respectively, with an injection volume of 10 μL. Mobile phase A consisted of 0.05% formic acid in water, and mobile phase B consisted of 0.05% formic acid in acetonitrile:methanol (50:50, v/v), at a flow rate of 0.3 mL/min. The linear gradient program was: an initial hold at 2% mobile phase B for 0.5 min, a linear increase to 98% B from 0.5 to 23 min, a hold at 98% B until 26 min, a decrease back to 2% B over 0.5 min, and column equilibration at 2% B for 3.5 min, giving a total run time of 30 min for the PPPCPs analysis.
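As a rough illustration, the gradient timetable above can be encoded as breakpoints with linear interpolation between them. This is an explanatory Python sketch, not instrument control software; the names `GRADIENT` and `percent_b` are ours.

```python
# The UHPLC gradient program described above, encoded as
# (time in min, percent mobile phase B) breakpoints.
GRADIENT = [
    (0.0, 2.0),    # initial hold at 2% B
    (0.5, 2.0),    # end of the 0.5 min hold
    (23.0, 98.0),  # linear ramp to 98% B
    (26.0, 98.0),  # hold at 98% B until 26 min
    (26.5, 2.0),   # return to 2% B over 0.5 min
    (30.0, 2.0),   # re-equilibration; total run time 30 min
]

def percent_b(t: float) -> float:
    """Mobile phase B fraction (%) at time t (min), by linear interpolation."""
    if t <= GRADIENT[0][0]:
        return GRADIENT[0][1]
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    return GRADIENT[-1][1]
```

Halfway up the ramp (t = 11.75 min) the program is at 50% B, which is a quick way to sanity-check a gradient table before loading it on the instrument.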
### High‐resolution mass spectrometry {#ansa202000015-sec-0140} The mass identification and quantification of PPPCPs were performed using UHPLC‐Q‐Orbitrap‐MS consisting of a heated electrospray ionization source (HESI), a quadrupole mass filter, higher‐energy collisional dissociation (HCD) cell for highest performance, MS/MS fragmentation, and high‐resolution Orbitrap mass analyzer with resolving power up to 140 000 at *m/z* 200. The HRMS parameters were: capillary temperature of 320°C, heater temperature of 350°C, electrospray voltage of 3.8 kV, S‐Lens RF level at 52 (arb), auxiliary gas (N~2~) at 9 (arb), sheath gas (N~2~) 37 (arb), and micro scans performed at 1 scan/s were used for the analysis. Nitrogen gas with high purity of 99.999% was used for the sheath and auxiliary gases in the ionization source, and also as collision gas in the HCD fragmentation cell. XCalibur 9890 Qual&Quan was used as data acquisition and quantification software. The HRMS full scan (MS1) was operated in both positive and negative modes in the scan range of 75‐1125 Da at a resolution of 70 000 with maximum injection time of 200 ms, and automatic gain control (AGC) set at 1.0e^5^. All acquisition methods in this study include a full‐scan (MS1) followed by targeted study with parallel reaction monitoring (PRM) MS2 data collection with predefined "inclusion list" that was used in the selection of precursor ions. Table [2](#ansa202000015-tbl-0002){ref-type="table"} displays the specific normalized collision energy (CE) used for each analyte in PRM mode at a resolution of 17 500 (FWHM at 200 Da) with a maximum injection time set at 100 ms. The AGC target was optimized to 2.0e^4^ with an isolation window of *m/z* 4 for analysis of PPPCPs. The PRM data were acquired in profile mode for full scan analysis and centroid mode for MS/MS analysis. 
::: {#ansa202000015-tbl-0002 .table-wrap}
Q‐Orbitrap‐MS instrument parameters for 26 PPPCPs

  Class   Analytes   Mass (*m/z*)   Formula   Mode   RT (min)   Error (ppm)   Collision energy (eV)   MS2 (Ion‐1) (*m/z*)   MS2 (Ion‐2) (*m/z*)
  ------- ---------- -------------- --------- ------ ---------- ------------- ----------------------- --------------------- ---------------------
  Antiepileptic   Carbamazepine   237.1022   C~15~H~12~N~2~O   +ve   11.3   ‐1.02   30   194.0962   192.0805
  β‐Blocker   Atenolol   267.1703   C~14~H~22~N~2~O~3~   +ve   4.9   ‐2.7   30   190.0859   208.0963
  β‐Blocker   Pindolol   249.1568   C~14~H~20~N~2~O~2~   +ve   6.68   ‐2.7   30   116.107   172.0752
  β‐Blocker   Metoprolol   268.1907   C~15~H~25~NO~3~   +ve   8   ‐3   30   159.0798   191.106
  β‐Blocker   Propranolol   260.1645   C~16~H~21~NO~2~   +ve   10.1   ‐0.91   30   116.1073   183.0802
  Opioid   Tramadol   264.1958   C~16~H~25~NO~2~   +ve   8.1   ‐3   18   246.1846   231.0436
  NSAIDs   Ketoprofen   255.1016   C~16~H~14~O~3~   +ve   13.4   2.3   15   209.0956   105.0336
  NSAIDs   Diclofenac   296.024   C~14~H~11~Cl~2~NO~2~   +ve   15.8   ‐1.05   20   250.0183   215.0494
  NSAIDs   Naproxen   231.1016   C~14~H~14~O~3~   +ve   13.5   ‐2.5   10   185.0955   149.023
  Steroid   β‐Estradiol   273.1849   C~18~H~24~O~2~   +ve   13   ‐1.8   10   107.0491   213.1271
  Steroid   Estrone   271.1693   C~18~H~22~O~2~   +ve   13.99   ‐1.33   10   253.1584   157.0648
  Steroid   Prednisolone   361.201   C~21~H~28~O~5~   +ve   10.4   ‐3.1   10   343.1896   325.179
  Phthalates   Bis (methyl glycol) phthalate   283.1176   C~14~H~18~O~6~   +ve   11.46   ‐3   10   207.0649   147.0801
  Phthalates   Dicyclohexyl phthalate   331.1904   C~20~H~26~O~4~   +ve   21.59   ‐3.2   10   167.0335   149.023
  Phthalates   Dimethyl phthalate   195.0652   C~10~H~10~O~4~   +ve   11.2   ‐2.2   10   163.0385   95.0858
  Phthalates   Dioctyl phthalate   391.2843   C~24~H~38~O~4~   +ve   25.8   ‐3   10   149.023   167.0335
  Phthalates   Dihexyl phthalate   335.2217   C~20~H~30~O~4~   +ve   23.3   ‐3.3   10   149.0231   205.0853
  Phthalates   Diethyl phthalate   223.0965   C~12~H~14~O~4~   +ve   14.1   ‐2.9   10   149.023   177.0542
  Phthalates   Dibutyl phthalate   279.1591   C~16~H~22~O~4~   +ve   19.3   ‐2.5   10   149.0231   205.0854
  Parabens   Methylparaben   153.0546   C~8~H~8~O~3~   +ve   9.2   ‐1.8   10   121.0284   113.9637
  Parabens   Ethylparaben   167.0703   C~9~H~10~O~3~   +ve   10.9   ‐2.4   10   139.0387   95.0494
  Parabens   Propylparaben   181.0859   C~10~H~12~O~3~   +ve   12.6   ‐2.7   10   139.0387   95.0494
  Parabens   Butylparaben   193.087   C~11~H~14~O~3~   ‐ve   14.2   0.8   10   137.023   93.033
  Personal Care Products   Diethanolamine   106.0863   C~4~H~11~NO~2~   +ve   0.78   1.6   30   88.076   70.0657
  Personal Care Products   Triethanolamine   150.1125   C~6~H~15~NO~3~   +ve   0.8   ‐1.9   30   132.1017   132.1017
  Personal Care Products   Triclosan   286.9439   C~12~H~7~Cl~3~O~2~   ‐ve   18.3   2.6   10   154.9899   167.0124

John Wiley & Sons, Ltd.
:::

Analytical method validation {#ansa202000015-sec-0150}

The developed method was validated with respect to linearity, limit of detection (LOD), limit of quantification (LOQ), recovery, and precision as per ICH and SANTE guidelines. The linearity plot was constructed with eight different concentrations (1, 2, 4, 8, 16, 32, 64, and 125 ng/L) for PPPCPs in river water samples (matrix‐matched calibration). The sensitivity of the method was assessed through the LOD and LOQ. A recovery study was performed at three different concentrations (2, 30, and 125 ng/L) in the river water samples to evaluate the accuracy of the developed method. Intra‐ and interday precisions were checked by carrying out six independent tests of the sample in a day, for six consecutive days. The matrix effect (ME) was calculated by the standard addition method (matrix‐matched calibration). Further, the specificity of the method was also assessed. All experiments were performed in triplicate.

RESULTS AND DISCUSSION {#ansa202000015-sec-0160}

HRMS optimization {#ansa202000015-sec-0170}

The analysis was performed in full‐scan (MS1) and targeted monitoring (MS2) modes for better sensitivity of the fragmented ions. The signal‐to‐noise (S/N) ratio was always kept higher than 10 in full scan.
The analyte confirmation was carried out using the criteria of an LC retention time (RT) tolerance of ±2.5% and a mass error of ≤5 ppm for the monoisotopic mass in HRMS.[@ansa202000015-bib-0028], [@ansa202000015-bib-0037] The specific fragment ion for each target analyte was determined in PRM mode (Table [2](#ansa202000015-tbl-0002){ref-type="table"}).

UHPLC optimization {#ansa202000015-sec-0180}

Different mobile phase modifiers were screened, of which formic acid gave the best peak separation, resolution, and sensitivity for the analysis of PPPCPs in river water samples. Milli‐Q water with formic acid (0.05%) was used as mobile phase A, and a mixture of acetonitrile and methanol (1:1, v/v) with formic acid (0.05%) was used as mobile phase B for optimum ionization and separation of the PPPCPs. The C~18~ column showed high‐quality chromatographic separation with symmetrical peaks and little peak tailing for neutral and basic analytes.

SPE sample cleanup {#ansa202000015-sec-0190}

The major part of the study was carried out using an SPE HLB cartridge as the sorbent for the analysis of PPPCPs.[@ansa202000015-bib-0038] The pH of the water sample was maintained at 7.0 to obtain maximum recoveries of the PPPCPs. In the SPE method, the elution solvent is also a significant parameter, as it must extract all the targeted analytes from the matrix. For maximum efficiency, the elution solvent should have the following characteristics: (a) a high dissolving capability for the targeted analytes, (b) high volatility, and (c) suitability for chromatographic analysis. Based on these criteria, 10 mL of a mixture of dichloromethane (DCM) and methanol (MeOH) (1:1, v/v) was used as the elution solvent for maximum extraction efficiency of the PPPCPs.
Method validation {#ansa202000015-sec-0200}

The developed method was validated as per ICH and SANTE guidelines.[@ansa202000015-bib-0039], [@ansa202000015-bib-0040]

### Linearity {#ansa202000015-sec-0210}

Analytical method linearity is the ability to produce results that are directly proportional to the analyte concentration in the samples. The method linearity was assessed with an 8‐point calibration curve of the PPPCPs in river water in the concentration range of 1‐125 ng/L using the linear least squares method. The coefficient of determination (*R^2^*) was found to be in the range of 0.995‐0.999 for all the selected PPPCPs (Table [3](#ansa202000015-tbl-0003){ref-type="table"}). The linear regression data for the linearity plot show an excellent linear relationship throughout the linearity range.

::: {#ansa202000015-tbl-0003 .table-wrap}
Method validation parameters for 26 PPPCPs

  Analytes   Linearity range (ng/L)   *R^2^*   LOD (ng/L)   LOQ (ng/L)   ME (%)   \% Relative Recovery ± %RSD (n = 6)
  ---------- ------------------------ -------- ------------ ------------ -------- -------------------------------------
  Carbamazepine   2--125   0.996   0.37   1.23   94.46 ± 2.9   91.93 ± 6.4   103.70 ± 5.2   97.4 ± 4.7   93.93 ± 10.5   107.38 ± 13.0   96.20 ± 8.6
  Atenolol   2--125   0.999   0.49   1.61   100.16 ± 8.9   100.48 ± 8.9   102.90 ± 5.7   99.16 ± 8.3   98.64 ± 11.1   97.69 ± 7.6   90.58 ± 11.6
  Pindolol   2--125   0.995   0.44   1.46   99.81 ± 8.3   82.49 ± 9.6   96.33 ± 4.6   97.02 ± 8.3   81.77 ± 12.7   93.24 ± 6.0   98.07 ± 13.8
  Metoprolol   2--125   0.999   0.47   1.55   90.17 ± 8.3   88.11 ± 8.8   106.13 ± 6.2   93.15 ± 6.1   96.69 ± 10.1   102.16 ± 10.1   95.52 ± 13.1
  Propranolol   2--125   0.999   0.45   1.49   108.92 ± 3.7   91.47 ± 8.6   100.79 ± 4.3   108.70 ± 2.6   84.51 ± 12.3   93.08 ± 8.3   102.08 ± 7.7
  Tramadol   2--125   0.998   0.52   1.71   91.38 ± 11.5   94.01 ± 9.6   99.20 ± 5.3   93.09 ± 7.5   82.80 ± 12.5   96.52 ± 6.8   104.64 ± 11.9
  Ketoprofen   2--125   0.998   0.39   1.3   103.56 ± 2.5   107.70 ± 6.5   97.19 ± 2.8   99.14 ± 4.9   102.02 ± 7.5   98.74 ± 4.5   99.81 ± 6.1
  Diclofenac   1--125   0.996   0.3   0.99   104.98 ± 5.8   107.99 ± 4.8   87.50 ± 7.4   101.50 ± 4.5   102.79 ± 5.5   89.98 ± 12.0   96.13 ± 9.1
  Naproxen   1--125   0.999   0.12   0.41   100.85 ± 3.6   92.02 ± 2.2   99.71 ± 1.9   101.82 ± 3.1   99.00 ± 5.9   102.64 ± 3.8   101.38 ± 4.5
  β‐Estradiol   1--125   0.999   0.15   0.51   100.70 ± 3.6   75.11 ± 3.6   104.87 ± 1.2   97.46 ± 4.6   79.78 ± 6.4   101.59 ± 2.0   98.92 ± 4.8
  Estrone   2--125   0.997   0.36   1.2   99.61 ± 1.1   97.99 ± 6.2   100.68 ± 4.4   104.47 ± 7.3   98.67 ± 6.9   98.63 ± 8.2   104.43 ± 8.0
  Prednisolone   2--125   0.997   0.43   1.45   104.71 ± 4.1   87.89 ± 9.3   98.87 ± 7.8   100.30 ± 5.7   89.27 ± 13.3   102.36 ± 11.2   99.75 ± 7.8
  Bis (methylglycol) phthalate   2--125   0.998   0.52   1.72   94.31 ± 3.8   101.79 ± 8.4   105.35 ± 3.3   96.98 ± 3.4   97.08 ± 8.8   103.44 ± 5.9   96.88 ± 7.2
  Dicyclohexyl phthalate   1--125   0.999   0.3   0.99   83.53 ± 2.1   106.73 ± 4.0   90.98 ± 5.7   86.18 ± 1.9   103.72 ± 5.1   93.52 ± 8.7   84.52 ± 3.8
  Dimethyl phthalate   2--125   0.996   0.47   1.56   90.65 ± 2.8   85.78 ± 8.6   101.77 ± 1.7   91.62 ± 4.0   85.61 ± 3.9   103.30 ± 2.2   90.33 ± 8.1
  Dioctyl phthalate   2--125   0.999   0.35   1.16   96.35 ± 2.9   89.30 ± 6.5   100.71 ± 3.8   98.38 ± 3.4   101.66 ± 8.1   99.40 ± 6.2   101.53 ± 8.2
  Dihexyl phthalate   1--125   0.995   0.31   1.04   101.40 ± 1.2   97.26 ± 5.5   96.60 ± 4.8   102.24 ± 2.7   93.93 ± 6.9   97.23 ± 8.1   104.74 ± 6.1
  Diethyl phthalate   2--125   0.999   0.48   1.59   109.79 ± 5.2   114.75 ± 7.5   99.71 ± 2.0   104.97 ± 7.2   91.68 ± 2.7   97.42 ± 3.8   101.99 ± 7.3
  Dibutyl phthalate   2--125   0.997   0.41   1.34   95.21 ± 6.2   104.22 ± 9.4   106.00 ± 4.2   94.76 ± 4.5   90.58 ± 7.4   102.94 ± 6.2   97.17 ± 9.1
  Methylparaben   1--125   0.999   0.2   0.67   94.31 ± 3.8   98.05 ± 3.8   101.65 ± 2.4   100.89 ± 8.0   98.71 ± 5.8   101.40 ± 3.7   104.86 ± 8.8
  Ethylparaben   2--125   0.999   0.38   1.24   100.26 ± 2.4   100.31 ± 6.4   102.99 ± 3.9   96.68 ± 4.7   99.33 ± 6.6   104.60 ± 6.1   101.13 ± 8.9
  Propylparaben   1--125   0.996   0.2   0.65   99.13 ± 2.2   101.90 ± 3.2   99.58 ± 2.6   102.15 ± 4.8   100.64 ± 5.5   98.81 ± 4.5   105.83 ± 6.6
  Butylparaben   1--125   0.999   0.26   0.85   103.80 ± 5.2   102.89 ± 4.2   103.29 ± 4.4   103.58 ± 5.0   99.73 ± 5.5   100.82 ± 5.4   105.22 ± 5.5
  Triethanolamine   1--125   0.999   0.21   0.7   102.87 ± 2.3   99.64 ± 3.7   99.07 ± 4.1   100.9 ± 4.1   101.73 ± 5.2   95.47 ± 6.0   98.89 ± 6.1
  Diethanolamine   1--125   0.998   0.26   0.86   106.46 ± 5.8   106.78 ± 4.5   93.82 ± 5.5   106.47 ± 4.6   102.32 ± 6.8   96.57 ± 5.8   105.02 ± 5.5
  Triclosan   1--125   0.999   0.25   0.81   100.87 ± 1.9   91.57 ± 4.6   101.50 ± 5.8   101.15 ± 2.3   97.59 ± 7.4   97.21 ± 7.0   100.58 ± 3.8

Concentration in ng/L; ME, matrix effect; LOD, limit of detection; LOQ, limit of quantification; RSD, relative standard deviation.

John Wiley & Sons, Ltd.
:::

### LOD and LOQ {#ansa202000015-sec-0220}

LOD is defined as the lowest concentration of the analyte with S/N \> 3. It is determined by taking three times the standard deviation of the peak area at the lowest level divided by the slope of the standard addition curve (Equation [1](#ansa202000015-disp-0001){ref-type="disp-formula"}). Ten times the standard deviation of the peak area at the lowest concentration divided by the slope of the standard addition curve gives the LOQ of the method (Equation [2](#ansa202000015-disp-0002){ref-type="disp-formula"}). Table [3](#ansa202000015-tbl-0003){ref-type="table"} lists the LOD and LOQ values of all the PPPCPs, calculated using the following equations.
$${LOD}\; = \;\frac{3\, \times \,{Standard}\,{Deviation}}{Slope}$$

$${LOQ}\; = \;\frac{10\, \times \,{Standard}\,{Deviation}}{Slope}$$

### Method specificity {#ansa202000015-sec-0230}

The method specificity was evaluated in real river water samples, with and without spiking of the analytes. The samples without spiking were designated as blank water samples, whereas those spiked with the PPPCPs at their LOD levels were designated as spiked water samples (n = 8). In the blank samples, no peaks were observed at the specific retention times of the targeted analytes, showing the absence of the analytes in the blank water samples. The spiked water samples displayed the expected peaks, indicating that the method is highly specific for the selected PPPCPs.

### Recovery {#ansa202000015-sec-0240}

For the recovery study, three concentrations (2, 30, and 125 ng/L), corresponding to the LOQ, middle, and highest levels of the linearity range, were spiked into the river water to assess the accuracy of the method. Recoveries were calculated using Equation [3](#ansa202000015-disp-0003){ref-type="disp-formula"} and found to be in the range of 75.1‐114.7% (Table [3](#ansa202000015-tbl-0003){ref-type="table"}).

$$Recovery\,\% = \frac{Spiked\, Conc. - Nonspiked\, Conc.}{Added\, Conc.} \times 100$$

### Precision {#ansa202000015-sec-0250}

Precision is the ability of the assay to reliably reproduce results when sub‐samples are taken from the same specimen. The precision of the measurement was determined by performing six replicates at each concentration (2, 30, and 125 ng/L) in the river water samples for intra‐ and interday repeatability, and is expressed as percent relative standard deviation (%RSD). The precision was found to be in the range of 1.2‐9.6% and 2.0‐13.8% for intra‐ and interday measurements, respectively. The values were within the acceptance criteria of the guidelines (\<15% RSD; Table [3](#ansa202000015-tbl-0003){ref-type="table"}).
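The LOD, LOQ, recovery, and %RSD definitions above reduce to a few lines of arithmetic. The following Python sketch restates Equations (1)-(3) and the precision metric; the function names are ours, and any numbers used with them are illustrative, not values from this study.

```python
import statistics

def lod(sd_lowest_level: float, slope: float) -> float:
    """Equation (1): LOD = 3 x SD of the lowest-level response / calibration slope."""
    return 3.0 * sd_lowest_level / slope

def loq(sd_lowest_level: float, slope: float) -> float:
    """Equation (2): LOQ = 10 x SD of the lowest-level response / calibration slope."""
    return 10.0 * sd_lowest_level / slope

def recovery_pct(spiked_conc: float, nonspiked_conc: float, added_conc: float) -> float:
    """Equation (3): recovery of the added amount, as a percentage."""
    return (spiked_conc - nonspiked_conc) / added_conc * 100.0

def rsd_pct(replicates: list) -> float:
    """Percent relative standard deviation (%RSD), as used for intra-/interday precision."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100.0
```

Note that LOQ is always 10/3 of LOD by construction, which is a useful consistency check when reading validation tables.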
### Matrix effect {#ansa202000015-sec-0260}

The matrix effect is a matrix‐dependent phenomenon that can affect the ionization efficiency of the analytes; it is evaluated to measure the impact of matrix interferences on the analysis of PPPCPs and to understand ion intensity enhancement or suppression. ME can affect the quantification of PPPCPs unless it is diminished or compensated for. Matrix‐matched calibration by the standard addition method was used to evaluate the ME. The percentage ME is the ratio of the matrix slope to the solvent slope, multiplied by 100 (Equation [4](#ansa202000015-disp-0004){ref-type="disp-formula"}). The matrix slopes obtained by matrix‐matched calibration and the solvent slopes obtained by solvent calibration were used for the matrix effect analysis.

$$ME\,\% = \frac{{Matrix}\,{slope}}{{Solvent}\,{slope}} \times 100$$

ME values \>100% indicate ion enhancement, values \<100% indicate ion suppression, and a value of 100% shows no matrix interference.[@ansa202000015-bib-0041] A signal enhancement or suppression effect is considered acceptable if the matrix effect values are in the range of 80‐120%; a matrix effect \>120% or \<80% indicates a strong matrix effect.[@ansa202000015-bib-0042] The results were found to be in the range of 83.5‐109.79% for the present method (Table [3](#ansa202000015-tbl-0003){ref-type="table"}). Among the pharmaceuticals, propranolol, ketoprofen, diclofenac, naproxen, β‐estradiol, and prednisolone showed ion enhancement, and carbamazepine, metoprolol, tramadol, and estrone indicated ion suppression, whereas atenolol and pindolol did not exhibit substantial matrix interference. Among the phthalates, only dihexyl phthalate and diethyl phthalate exhibited ion enhancement, while the other phthalates displayed ion suppression. Methylparaben and propylparaben demonstrated ion suppression, and butylparaben showed an ion enhancement effect, whereas ethylparaben showed no matrix interference.
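Equation (4) and the acceptance window described above can be sketched in a few lines. This is an illustrative Python fragment with our own function names; the 80-120% window is the acceptability criterion stated in the text.

```python
def matrix_effect_pct(matrix_slope: float, solvent_slope: float) -> float:
    """Equation (4): ME% = (matrix-matched slope / solvent slope) x 100."""
    return matrix_slope / solvent_slope * 100.0

def classify_me(me_pct: float) -> str:
    """>100%: ion enhancement; <100%: ion suppression; 100%: no interference."""
    if me_pct > 100.0:
        return "enhancement"
    if me_pct < 100.0:
        return "suppression"
    return "none"

def is_acceptable(me_pct: float) -> bool:
    """Enhancement or suppression within 80-120% is considered acceptable."""
    return 80.0 <= me_pct <= 120.0
```

For example, a matrix-matched slope 10% lower than the solvent slope classifies as suppression yet still falls inside the acceptable window.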
Thus, to compensate for ME suppression and enhancement in the analysis of PPPCPs, matrix‐matched calibration was used for analyte quantification and the recovery studies. Matrix‐matched calibration eliminates interferences related to the sample and the other analytes, and accounting for the matrix effect helps provide reliable, accurate, and precise results in the analysis of real samples.

## Application of method to real samples {#ansa202000015-sec-0270}

The present method was validated by checking its performance in real samples. It was applied for the analysis of PPPCPs in real water samples collected from nine sampling points of the River Ganga (Table [4](#ansa202000015-tbl-0004){ref-type="table"}). The method was able to identify and quantify 21 analytes in the concentration ranges of 0.76‐9.49 ng/L for pharmaceuticals, 1.49‐8.67 ng/L for phthalates, and 0.9‐7.58 ng/L for personal care products. Figure [2A](#ansa202000015-fig-0002){ref-type="fig"} represents the total ion chromatogram (TIC) of all the analytes in a standard mixture at 50 ng/L, and Figure [2B](#ansa202000015-fig-0002){ref-type="fig"} shows the identified analytes in the river water samples.
::: {#ansa202000015-tbl-0004 .table-wrap}
PPPCPs concentrations ± SD (ng/L) in river water samples by UHPLC‐Q‐Orbitrap‐MS

| Analytes | Sample 1 | Sample 2 | Sample 3 | Sample 4 | Sample 5 | Sample 6 | Sample 7 | Sample 8 | Sample 9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Carbamazepine | 3.53 ± 0.8 | 9.49 ± 6.1 | BQL | 3.86 ± 2.2 | 2.48 ± 0.3 | 2.41 ± 0.5 | 2.48 ± 0.4 | 3.72 ± 2.3 | 1.76 ± 1.0 |
| Atenolol | 1.98 ± 1.8 | BQL | 5.84 ± 3.5 | 2.33 ± 2.1 | BQL | BQL | BQL | BQL | BQL |
| Pindolol | BQL | BQL | BQL | BQL | BQL | BQL | BQL | BQL | ND |
| Metoprolol | 2.74 ± 0.7 | 5.94 ± 3.6 | 9.16 ± 2.9 | 2.63 ± 1.7 | 3.78 ± 0.4 | 4.28 ± 1.5 | 4.19 ± 3.5 | BQL | BQL |
| Propranolol | 1.61 ± 0.3 | 3.08 ± 2.2 | 2.90 ± 0.6 | BQL | BQL | 1.87 ± 1.3 | BQL | 2.78 ± 2.0 | 2.28 ± 0.7 |
| Tramadol | BQL | BQL | BQL | BQL | BQL | BQL | BQL | BQL | BQL |
| Ketoprofen | BQL | ND | ND | BQL | BQL | BQL | BQL | BQL | BQL |
| Diclofenac | ND | 5.12 ± 1.1 | 1.92 ± 1.2 | 1.95 ± 1.2 | ND | BQL | ND | 1.34 ± 1.0 | 4.45 ± 0.7 |
| Naproxen | BQL | BQL | BQL | ND | ND | ND | ND | ND | ND |
| β‐Estradiol | 0.81 ± 0.1 | BQL | 0.76 ± 0.3 | BQL | BQL | BQL | BQL | ND | ND |
| Estrone | 2.54 ± 0.3 | BQL | 3.42 ± 1.9 | BQL | BQL | BQL | BQL | ND | ND |
| Prednisolone | BQL | BQL | 2.72 ± 0.5 | 1.90 ± 0.9 | BQL | 1.90 ± 0.2 | 2.46 ± 0.2 | BQL | 1.98 ± 1.3 |
| Bis(methylglycol) phthalate | 3.75 ± 1.2 | 3.21 ± 2.3 | 6.51 ± 2.3 | 2.03 ± 1.3 | 2.18 ± 0.4 | 1.89 ± 0.2 | 2.95 ± 1.2 | 3.82 ± 2.5 | 2.75 ± 1.7 |
| Dicyclohexyl phthalate | 3.79 ± 0.9 | ND | ND | 4.06 ± 3.5 | 4.59 ± 1.4 | 2.72 ± 1.8 | 5.42 ± 0.6 | 2.84 ± 1.5 | 1.49 ± 0.8 |
| Dimethyl phthalate | ND | ND | ND | ND | ND | ND | ND | 5.12 ± 2.5 | ND |
| Dioctyl phthalate | ND | ND | ND | BQL | ND | ND | BQL | ND | ND |
| Dihexyl phthalate | BQL | 1.73 ± 2.0 | 8.67 ± 0.4 | BQL | BQL | BQL | BQL | BQL | BQL |
| Diethyl phthalate | BQL | BQL | 2.88 ± 0.1 | BQL | BQL | BQL | BQL | BQL | BQL |
| Dibutyl phthalate | 3.31 ± 0.4 | 7.72 ± 5.3 | 4.96 ± 1.5 | 6.22 ± 4.1 | 3.57 ± 0.2 | 5.11 ± 0.8 | 2.89 ± 0.6 | 2.81 ± 1.3 | 1.78 ± 0.8 |
| Methylparaben | BQL | BQL | BQL | 1.82 ± 0.8 | 2.01 ± 0.4 | 3.00 ± 2.0 | 1.75 ± 0.1 | 0.95 ± 1.1 | 0.97 ± 1.1 |
| Ethylparaben | 1.59 ± 1.3 | BQL | BQL | ND | ND | ND | ND | BQL | BQL |
| Propylparaben | 2.11 ± 0.5 | 1.50 ± 0.1 | 2.69 ± 0.9 | 1.69 ± 1.8 | 2.14 ± 0.3 | 2.08 ± 0.4 | 2.14 ± 0.4 | 1.71 ± 1.1 | 1.93 ± 1.0 |
| Butylparaben | 2.45 ± 0.5 | ND | 1.56 ± 0.6 | 0.92 ± 0.4 | 1.37 ± 0.5 | 3.04 ± 1.0 | 1.61 ± 0.1 | ND | ND |
| Triethanolamine | 2.17 ± 2.0 | ND | ND | BQL | BQL | ND | ND | ND | ND |
| Diethanolamine | BQL | ND | ND | 0.90 ± 0.6 | BQL | 1.73 ± 1.3 | BQL | ND | ND |
| Triclosan | 2.59 ± 0.9 | 7.58 ± 4.7 | 5.47 ± 2.4 | BQL | BQL | BQL | BQL | 3.45 ± 3.8 | 3.52 ± 4.0 |

ND, not detected; BQL, below quantitation limit.

John Wiley & Sons, Ltd.
:::

**A**, Total ion chromatogram of standard PPPCPs (50 ng/L) obtained from UHPLC‐Q‐Orbitrap‐MS analysis. **B**, Total ion chromatogram of the analysis of PPPCPs in the river water samples.

Among the pharmaceuticals, β‐blockers such as atenolol, metoprolol, and propranolol were found in water at concentrations of up to 5.84, 9.16, and 1.61‐3.08 ng/L, respectively, while pindolol was detected but could not be quantified at the LOQ level. Carbamazepine, an antiepileptic medication used to treat epilepsy and bipolar disorders and one of the most heavily consumed drugs in India, was observed in the concentration range of 1.76‐9.49 ng/L in some of the samples.[@ansa202000015-bib-0043] Diclofenac, a non‐steroidal anti‐inflammatory drug (NSAID), was found in the concentration range of 1.34‐5.12 ng/L; the values in the analyzed samples are low in comparison with those obtained in previous reports.[@ansa202000015-bib-0044] At most of the sampling points, ketoprofen and naproxen were either not detected or below the quantitation limit. Compounds such as β‐estradiol, estrone, and prednisolone were found in the range of 0.76‐3.42 ng/L. Phthalates are considered potential endocrine‐disrupting chemicals (EDCs) in humans and cause numerous health disorders.[@ansa202000015-bib-0045] Dihexyl phthalate (DHP) and dibutyl phthalate (DBP), used as common plasticizers, were found at concentrations of up to 8.67 and 7.72 ng/L, respectively, followed by bis(2‐methoxyethyl) phthalate (DMEP) (6.51 ng/L).
Dicyclohexyl phthalate (DCHP) and diethyl phthalate (DEP) were found in the range of 1.49‐5.42 ng/L, whereas dimethyl phthalate (DMP) was found in one sample at a concentration of 5.12 ng/L. Dioctyl phthalate (DOP) was found below the quantification level (BQL) at two sampling points, owing to its low solubility in water. The parabens (methyl, ethyl, propyl, and butyl), a class of preservatives present in most cosmetics and food commodities, were found at low concentrations in the range of 0.92‐3.04 ng/L.[@ansa202000015-bib-0046] The other personal care products, triclosan, triethanolamine, and diethanolamine, were found in the range of 0.90‐7.58 ng/L.[@ansa202000015-bib-0047] Out of the 26 analytes, 21 were detected in the river water samples at low concentrations. Nevertheless, this study emphasizes the need for continuous cleanup/remediation measures to effectively remove PPPCPs from river water.

## Comparison of present method with earlier reported methods {#ansa202000015-sec-0280}

The present method was found to be superior to earlier reported methods for the analysis of PPPCPs including phthalates with respect to linearity, LOD, LOQ, and recovery (Table [5](#ansa202000015-tbl-0005){ref-type="table"}). The linearity of the present method covers the range of 1‐125 ng/L, and its LOD and LOQ values are also lower, showing that the present method improves upon those reported earlier.
::: {#ansa202000015-tbl-0005 .table-wrap}
Comparison of the SPE method with earlier reported methods

| Serial number | Similar analytes | Matrix | Extraction method | Instrument | Linearity | LOD | LOQ | Recovery | References |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Estrone | 1.0 g of sewage sludge | Ultrasonic extraction | GC‐MS/MS | 2‐2000 ng/g | 1.6‐11 ng/g | 5.4‐39 ng/g | 75.3‐95.5% | [@ansa202000015-bib-0016] |
| 2 | Diclofenac, estrone | 5.0 g of sediment | SPE | UPLC‐MS/MS | 1‐200 ng/mL | 0.02‐0.81 ng/g | – | 78‐108% | [@ansa202000015-bib-0021] |
| 3 | Metoprolol | 250 mL of effluent and surface water | SPE | UHPLC‐Q‐Orbitrap‐MS | 1‐1500 ng/mL | 0.02‐1.21 ng/mL | 0.07‐4.05 ng/mL | 76‐104% | [@ansa202000015-bib-0030] |
| 4 | Carbamazepine, triclosan, including 7 phthalates | 3 L of river water | SPE | UHPLC‐Q‐Orbitrap‐MS | 1‐125 ng/L | 0.12‐0.52 ng/L | 0.41‐1.71 ng/L | 75‐115% | Present study |

John Wiley & Sons, Ltd.
:::

## CONCLUDING REMARKS {#ansa202000015-sec-0290}

A sensitive and efficient analytical method has been developed for the analysis of 26 PPPCPs including phthalates in Ganga River water using SPE‐UHPLC‐Q‐Orbitrap‐MS. The method validation results were: linearity (1‐125 ng/L), LOD (0.12‐0.52 ng/L), LOQ (0.41‐1.71 ng/L), recovery (75.1‐114.7%), precision (1.2‐9.6% intraday and 2.0‐13.8% interday), and matrix effect (83.5‐109.79%). The PPPCPs were found in the concentration ranges of 0.76‐9.49 ng/L, 0.9‐7.58 ng/L, and 1.49‐8.67 ng/L for pharmaceuticals, personal care products, and phthalates, respectively, in Ganga River water samples. The developed and validated method was able to identify and quantify multiclass PPPCPs in water samples using SPE‐UHPLC‐Q‐Orbitrap‐MS with acceptable precision and accuracy, and would be useful for routine environmental monitoring studies.

## CONFLICT OF INTEREST {#ansa202000015-sec-0310}

The authors declare no conflict of interest.

The authors gratefully acknowledge the Director, CSIR‐IICT, Hyderabad and the Director, CSIR‐IITR, Lucknow for academic and infrastructure support.
The authors also wish to express their gratitude to GVK Biosciences, Hyderabad for providing the necessary infrastructural facilities for carrying out this work at their facility. The authors are thankful to DST, India, and AISRF, Australia, for funding through the Indo‐Australia project. The present manuscript bears the CSIR‐IICT communication number IICT/Pubs./2020/030. The authors are thankful to Dr. G Lakshmi Deepa for her help in manuscript writing.

## DATA AVAILABILITY STATEMENT {#ansa202000015-sec-0330}

The data that support the findings of this study are available in the tables and figures of this article.
Board Thread:Suggestions/@comment-27020934-20160927194501/@comment-25414035-20160927202240
The M939 I made won't be added and I won't make the 2500.
[Crim. No. 8309. Third Dist. Feb. 27, 1979.]

THE PEOPLE, Plaintiff and Respondent, v. JOSEPH MICHAEL REMIRO et al., Defendants and Appellants.

Counsel

Richard K. Turner, James D. Garbolino, under appointments by the Court of Appeal, Alan V. Pineschi and Katherine Mader for Defendants and Appellants.

Evelle J. Younger, George Deukmejian, Attorneys General, Jack R. Winkler, Robert H. Philibosian, Chief Assistant Attorneys General, Arnold O. Overoye, Assistant Attorney General, Charles P. Just, Joel E. Carey and Eddie T. Keller, Deputy Attorneys General, D. Lowell Jensen, District Attorney, and John J. Meehan, Assistant District Attorney, for Plaintiff and Respondent.

Opinion

PUGLIA, P. J.

Defendants were convicted of the first degree murder of Oakland School Superintendent Marcus Foster and the attempted murder of Foster’s deputy superintendent Robert Blackburn. The trial was lengthy, the record is voluminous and the contentions are numerous. For reasons which will appear, we shall affirm the judgment as to defendant Remiro and reverse as to defendant Little. A brief summary of facts will provide overall context for the discussion. Additional facts will be set forth as necessary in conjunction with the separate treatment of the several contentions.

In the early evening of November 6, 1973, Superintendent Marcus Foster was shot and killed from ambush as he walked from his office to his automobile after a school board meeting. Deputy superintendent Robert Blackburn, who was with Foster at the time, was seriously wounded by a shotgun blast. Bullets removed from Foster’s body had been hollowed out in the tip and filled with cyanide. The authorities made no public mention of the use of cyanide bullets. On November 8, 1973, several bay area newspapers and a radio station received identical documents purporting to originate with the so-called Symbionese Liberation Army (SLA).
The document bore a drawing of a seven-headed cobra below which were the letters SLA; entitled “Communique No. 1,” the document purported to order the execution of Foster and Blackburn by cyanide bullets and inveighed at length against, among others, the Oakland School Board, the “fascist ruling class” and the “fascist government of Amerika [sic].” In the early morning of January 10, 1974, a Chevrolet van driven by defendant Little in which defendant Remiro was a passenger was stopped for investigation by a police officer in Concord. In the ensuing confrontation, Remiro fired several shots at the officer from a .380 automatic pistol. Remiro escaped immediate apprehension but Little was arrested and the van seized. Inside the van were SLA documents. Later that morning Remiro was arrested. He was armed with a Walther .380 automatic pistol. This pistol was established at trial as the gun which fired five of the bullets removed from Foster’s body and eight expended .380 shells found at the murder scene. One of the bullets removed from Foster’s body and another bullet and empty shell found at the murder scene were possibly fired from a .38 caliber Rossi revolver owned by Little. In the early evening of January 10, 1974, firemen extinguished a fire of incendiary origin at a house near Concord close to the site where Little and Remiro had been stopped earlier that morning in the Chevrolet van. Among the items discovered in the Concord house were firearms, ammunition, empty shells which had been fired from the .380 Walther and the .38 caliber Rossi, pipe bombs and Molotov cocktails, several SLA “communiques” (including the original of Communique No. 1 from which the copies received by the news media had been duplicated), a typewritten list containing the names and addresses of the five news organizations to which Communique No.
1 had been sent, a hand-drawn map of the Foster ambush-murder scene, Oakland School District publications from which Foster’s photograph had been torn out, a document containing the time and date of the school board meeting following which Foster had been killed, cyanide bullets similar to those which killed Foster and the materials with which they were made, and a shotgun shell which had been fired from or at least worked through the same shotgun which wounded Blackburn. Fingerprints of both defendants and of avowed SLA members Donald DeFreeze, Patricia “Mismoon” Soltysik, and Nancy Ling Perry were found on various documents seized in the house. A wallet containing personal identification papers of Little and other papers bearing his name were found in a bedroom of the Concord house. The Chevrolet van was registered to N. G. Ling, the maiden name of Nancy Ling Perry. Both defendants had been observed working on the van in the driveway of the Concord house. The house had been rented to Nancy Ling Perry and Little using the aliases Nancy and George DeVoto. Both defendants had been seen around the property during the past several months. Each of them carried keys to the house. Both defendants were shown to have been associated with other avowed SLA members during the past two years, including DeFreeze, Soltysik, Perry, Angela Atwood, Camilla Hall and Willie Wolfe. The latter six individuals perished in May 1974 when the house from which they were engaging Los Angeles police in a gun battle caught fire and was completely incinerated. Found in the ashes of the Los Angeles house, next to the outstretched hand of the dead Nancy Ling Perry, was Little’s .38 caliber Rossi revolver; also found were a number of sawed-off rifles, the separated barrels of which had earlier been recovered from the Concord house, a shotgun which Remiro had purchased in Oakland in 1973 and various other guns and ammunition.
Several of the weapons found in the Los Angeles ruins had fired some of the expended shells which had been found in the Concord house. The prosecutor sought to prove that even though defendants may not have been the trigger men, they were members of a criminal combination which planned and executed the shootings of Foster and Blackburn as part of a larger conspiracy to wage war, in the words of the SLA manifesto, on the “Fascist United States Government, The Facist [sic] Capitalist Class” and their supporters by means of terroristic acts including murder and kidnapping of officials and prominent members of the community, thereby fomenting violent upheaval and promoting revolutionary change. Neither defendant testified. Following a trial which consumed 46 court days spread over 71 calendar days in which 137 different witnesses were called and over 500 exhibits offered into evidence, defendants were each convicted of first degree murder and attempted murder.

I. Gainer Instruction

By far the most troublesome of the numerous contentions raised by defendants involves an instruction given to the deliberating jury and later disapproved by the Supreme Court in People v. Gainer (1977) 19 Cal.3d 835 [139 Cal.Rptr. 861, 566 P.2d 997], decided more than two years after the trial in this case. The instruction is frequently referred to as the “dynamite charge” or the “Allen” instruction after the case in which it was first approved, Allen v. United States (1896) 164 U.S. 492 [41 L.Ed. 528, 17 S.Ct. 154]. This case was submitted to the jury on May 30, 1975. After having deliberated 10 days, the jury reconvened in the courtroom at 9 a.m., on June 9, and the following proceedings took place: “The Court: The record may reflect the jurors are present and in their proper places, and the two accused persons are present in court. “Mr.
Foreman, you reported to me a figure yesterday that the jury had ballotted [sic]; and once again I—of course, it is my duty to admonish you that the Court, as well as counsel and anyone else other than the jury, are not entitled to know how the jury leans, whether for innocence or for guilt, but in this case the Court gave to you eight possible verdicts. “First, I’d like to know whether or not the jury has voted on more than one of those? “Foreman: Your Honor, the eight possible verdicts, the jury has rejected two and adopted two. “The Court: In other words, you have a unanimous opinion as to two, is that correct? “Foreman: That is correct, Your Honor. “The Court: And two you have rejected, is that true? “Foreman: That is correct. “The Court: Okay. Can you tell me this: Do these that you vote on pertain to one or both of the defendants? “Foreman: The verdicts which the jury has reached, Your Honor,— “The Court: I don’t want to know which ones. I just want to know whether— “Foreman: Both concern a single defendant. “The Court: I see. Those that you have resolved pertain to one of the defendants, is that correct? “Foreman: That is correct, Your Honor. “And, I take it, the other two [sic] pertain to the other defendant? “Foreman: That is correct, Your Honor. Your Honor, the most recent ballot which was taken yesterday reflected numerical count of three to nine. “I believe the jury is of the opinion that there may be a potential impasse, and we would appreciate any further guidance or instruction the Court might want to give. “The Court: Well, I’m going to read to you a statement that has been helpful to juries in other cases, and possibly may be of assistance to you, and it is as follows:...” Without consulting counsel, the court then read an instruction containing both the objectionable features expressly condemned in People v.
Gainer, i.e., an admonition to minority jurors to reevaluate their positions in light of the views of the majority and an assertion that the case at some time must be decided. Deliberations then resumed extending throughout the day. At 6:30 p.m. on June 9, the jury returned to the courtroom. After some discussion, the court reread two instructions, CALJIC Nos. 1.00 and 17.40. The foreman indicated that the instructions were sufficient and asked to resume deliberations. At 6:37 p.m., the jurors retired. They returned to the court at 7:04 p.m. at which time verdicts of guilty were rendered against each defendant for first degree murder and attempted murder. The verdicts against Remiro were dated June 8, 1975; those against Little were dated June 9, 1975. Each verdict was read in its entirety and the jurors were individually polled, each affirming the verdicts as his or her own. On August 31, 1977, the Supreme Court decided People v. Gainer, supra, holding “as a judicially declared rule of criminal procedure” (19 Cal.3d at p. 852) that the giving of an Allen-type instruction encouraging minority jurors to reexamine their independent views in light of the majority position constitutes error, and because of the difficulties inherent in attempting to ascertain from a given record whether prejudice did in fact occur, the error is deemed to be reversible per se. (People v. Gainer, supra, 19 Cal.3d at pp. 854-855.) The Gainer court also held that informing the jury that the case at some time must be decided misstates the law and therefore constitutes error (at p. 852), albeit not per se prejudicial (p. 855). Furthermore, the Gainer decision is made applicable to all cases not yet final as of its date, August 31, 1977 (p. 853). We are of course bound to follow the Gainer decision as we are all other decisions of our Supreme Court. (Auto Equity Sales, Inc. v. Superior Court (1962) 57 Cal.2d 450, 455 [20 Cal.Rptr. 321, 369 P.2d 937].) 
We have no difficulty in concluding that the Gainer decision is irrelevant to the appeal of Remiro, because it is evident that the jury had already concluded its deliberations with respect to Remiro before the giving of the erroneous instruction on June 9. By the foreman’s account, when the jury reconvened to deliberate at 9 a.m. on June 9th, it had already unanimously arrived at two verdicts concerning one of the defendants. The identity of that defendant was established circumstantially when the verdicts were returned in open court later on June 9, and it was revealed that those against Remiro were dated June 8, the date preceding the giving of the erroneous instruction. The verdicts against Little were dated June 9. That these dates were not inaccurate can be inferred from the court’s instruction that the verdicts “shall” be dated “as soon as all of you have agreed upon [them],” and from each juror’s individual affirmation in open court that the verdicts so dated were his or her own. Thus the Allen instruction did not skew the jury’s deliberations as to Remiro toward the result favored by the majority; those deliberations were concluded before the jurors heard the erroneous instruction. It is of course possible, as counsel for Remiro speculates, that the jury reopened deliberations on the Remiro verdicts after the Allen instruction was given; however, that possibility, theoretical only, finds no support in the record. If deliberations were reopened, it must of necessity have been on June 9th. Yet the Remiro verdicts were dated June 8th, and that date was affirmed in open court by each juror. The jury was instructed to date the verdicts as soon as agreed upon and we must presume that official duty has been performed (Evid. Code, § 664). We emphasize that our analysis of the impact vel non of Gainer on Remiro’s appeal does not constitute an attempt “to gauge the precise effect” (19 Cal.3d at p. 
854) on the jury of the erroneous admonition to minority jurors, an exercise forbidden to appellate courts by the Gainer decision (p. 855). Rather, the record demonstrates, and we so find, that the erroneous instruction could have had no effect, prejudicial or otherwise, on the Remiro appeal simply because by the time it was given the jury had completed its deliberations and arrived at verdicts of guilty as to Remiro. Defendant Little’s situation is sufficiently distinguishable from that of Remiro that he will fortuitously reap the benefit of the Gainer decision, not because justice requires it but because chance has ordained it. For many years, our system of criminal justice has been noted less for predictability than for instability. Judicial decisions often abruptly discard long-established procedures and replace them with new rules. Typically, these new rules are then applied to cases on appeal which were earlier tried in reliance upon the then existing but now discredited rule. In other words, the rules are changed to the benefit of the defendant and the detriment of the People after the game has been played. The inevitable consequence is the ex post facto creation of error where none otherwise would exist, and the unfortunate reversal of many convictions that would otherwise have been affirmed. These occurrences assuredly exact a considerable cost in loss of public confidence in the judicial system, not to mention the very tangible economic drain on public funds resulting from the retrial of these cases. And public confidence is even further eroded when intervening events render retrial impossible or futile. The instant case provides an all too common example of retroactive application of a judicially fashioned rule announced after trial of the case. Here, however, the trial was in 1975. The inordinate length of the trial and the consequent time required by the parties to brief the voluminous record on appeal have delayed finality of this judgment. 
It is more than a little ironic that if this case had involved a typical felony conviction, for example, a run-of-the-mill case of burglary, robbery or theft, the appeal, if any, would have been concluded and the judgment in all likelihood would have been final before Gainer was decided and therefore would have been insulated from its retroactive sweep. We do not challenge the wisdom of the Gainer decision substantively, or the right and duty of the courts to effect appropriate changes in the law by judicial decision. We do respectfully suggest, however, that natural law would not be affronted by the conviction and punishment of Little under this judgment, since he was fairly tried under then established, sanctioned procedures. Surely it must be possible to effect orderly change in judicial procedures without the attendant carnage that retroactivity has wreaked over the past two decades, a period in which the judicial landscape, resembling nothing so much as a giant junkyard, has been cluttered with the wreckage of convictions fairly won but sacrificed to the fetish of our highest courts for after-acquired wisdom. The tarnished image of the judiciary today is in significant part attributable to the socially destabilizing effect of wholesale reversals of criminal convictions for failure to comply with rules that did not even exist at the time of trial. Certainly the maintenance of public acceptance of a system largely responsible for the protection of individual rights is no less important than the rights themselves. Many others in California whose trials were infected with what we now know to be “Gainer error” have been and are still being punished under valid judgments which became final before Gainer was decided. Moreover, many other states and most parts of the federal judicial system even now approve of the Allen type instruction (People v. Gainer, supra, 19 Cal.3d at p. 860 (dis. opn. of Clark, J.)). 
That Little will escape punishment at least under this conviction and perhaps altogether therefore has less to do with justice than with random luck. Inasmuch as this case has understandably attracted great public interest and concern, we have taken pains to explain in some detail why the conviction of one of the defendants must be reversed. We now proceed to that distasteful task. We reject the Attorney General’s claim of invited error predicated upon the express consent of Little’s counsel that a written copy of the Allen instruction already given by the trial court be provided the jury to take into the jury room. Counsel’s consent came immediately after the trial judge, without consulting counsel, read the Allen instruction to the jury. Even if counsel had known that the instruction was error (see Gainer, at p. 846, and cases cited thereat), he had no opportunity to object before the instruction was read (p. 842, fn. 2). By the time counsel was asked if he objected to a written copy of the instruction for the jury it was too late to object; the action had already been taken. Moreover, the record does not affirmatively show or even suggest that counsel’s assent manifested a conscious choice of tactics. Respondent further contends that the error in giving the Allen instruction was rendered harmless by the court’s later admonition to the jurors not to surrender an independent opinion merely because it was not shared by a majority of the jurors. It is true that the trial judge instructed in terms of CALJIC No. 17.40 (see fn. 2, ante, p. 820) less than an hour before the verdicts were returned, just as he had earlier so instructed before the jury commenced its deliberations. Thus, the jury received an instruction essentially contradictory to the prejudicially erroneous portions of the Allen instruction both before and after the latter was given. 
We are urged to infer primarily from the close proximity between the return of the verdicts and the second curative instruction that the impermissible considerations introduced into the deliberations by the Allen instruction were neutralized. We decline to indulge the suggested inference, however, because as the Gainer court recognized, it is impossible on an appellate record “to gauge the precise effect” (People v. Gainer, supra, 19 Cal.3d at p. 854) of the erroneous instruction on the jury. Accordingly, we cannot assume the jury was not improperly influenced by the Allen instruction in its deliberations as to Little. For these reasons, the judgment against Little must be reversed. II. Motion to Suppress Evidence In a pretrial hearing lasting 10 days, defendants sought to suppress all evidence obtained as a result of their detention after they were accosted in the Chevrolet van, all evidence seized in the Concord house and certain evidence removed from their persons in a booking search. Their motions were denied. At about 1:30 a.m. on January 10, 1974, Concord Police Sergeant David Duge was driving his unmarked police car through a residential neighborhood; he observed a reddish-orange Chevrolet van slowly approach and stop at a stop sign, proceed through the intersection and continue slowly at about 10 to 15 miles per hour in a 25-mile per hour zone. Duge followed at a distance as the van traveled slowly in a circle throughout the neighborhood. Duge had worked the area for the past eight months and was familiar with local traffic patterns. He knew that on weeknights people on the street at this hour were usually purposefully headed home from a swing shift at work. Duge also knew that a large number of residential and automobile burglaries and thefts had been reported in the neighborhood. He knew vans were often used in burglaries. 
Duge was familiar with vehicles which frequented the area and this van was unfamiliar to him; its slow speed and circular route were suggestive of a “casing” operation in preparation for a burglary or theft. Accordingly, when the van had traveled full circle and reached the point at which he had first noticed it, Duge pulled the vehicle over. Duge, in uniform, approached the driver and asked for his operator’s license. The driver, Little, tendered a California driver’s license bearing the name “Robert James Scalise” and an Oakland address; he asked why he was being stopped. Duge replied he wanted to talk to Little about his suspicious driving; Little gave no reply. The officer then asked the passenger, Remiro, for some identification; Remiro produced a driver’s license bearing his own name and an Oakland address. Little told Sgt. Duge that he and Remiro were looking for a friend in the area. When asked for the friend’s identity, Little muttered something unintelligible; when asked again he replied they were looking for “DeVoto” on Sutherland Court. Defendants had just driven past Sutherland Court as Duge was following them. Although claiming to be lost, neither defendant asked the officer for assistance. Duge returned to his patrol car where he ran a warrant check on “Scalise” and Remiro, and had the dispatcher check the cross-directory for a DeVoto on Sutherland Court. Within minutes the dispatcher reported there were no outstanding warrants under either name and the directory showed no DeVoto on Sutherland Court. Duge knew that Concord experiences a high rate of theft and burglary committed by out-of-towners who “hit” in the suburbs and then flee back to large urban areas like Oakland. Confronted with out-of-town addresses, the apparent absence of a DeVoto on Sutherland Court, and the other information known to him, Duge decided to talk to the passenger. 
He approached the passenger side of the vehicle, tapped on the window and asked Remiro to step out so he could talk to him. Remiro stepped out. Duge feared for his safety because his cover unit had not yet arrived and he was outnumbered by at least one person with the possibility that more were concealed from his view in the van. Consequently, Duge asked Remiro if he had any weapons and advised him that he was going to frisk him. Remiro stepped back and pulled open his jacket, revealing on his right hip a bulge Duge recognized as the handle of an automatic pistol. As Duge ran for cover, Remiro fired two shots at him. Duge returned the fire and was shot at two more times. As Remiro fled on foot he fired one more shot at Duge. The van speeded away and disappeared from view. Duge transmitted an emergency radio call for help. As Officer Lee arrived shortly thereafter, the van reappeared a block or so away. The two officers stopped the van and arrested defendant Little at gunpoint. Little pointed out and Lee observed in plain view a handgun inside the van on the engine cover. Lee opened the cargo door on the passenger side, looked inside for passengers, and discovered none. Concord Police Officer Breuker next arrived on the scene. He looked into the van from the driver’s side and saw the pistol. Unaware that Officer Lee had already checked the van for occupants, he looked into the open cargo doors, and took a number of pictures showing the position of the pistol which he then seized as evidence. Breuker checked the van for additional evidence of the shooting. On the step of the driver’s door he found a wallet which contained a social security card of “Scalise.” In the rear of the engine compartment, he found a brown paper bag, crimped at the top. Inside the bag, Breuker observed a large stack of papers; the top sheet bore an unusual drawing and the legend, “Symbionese Liberation Army.” In a brown paper bag behind the driver’s seat was a fully loaded, break-apart rifle. 
Aware that a group calling itself the Symbionese Liberation Army had claimed credit for the assassination of an Oakland school official, Breuker reported his observations to his superior officer. The van was sealed and searched later that morning pursuant to a search warrant obtained on the strength of the foregoing information. Among the items seized under the warrant was a large amount of SLA literature. Little was taken to the police station and booked. Included in his property was a set of eight keys on a chain. Remiro was found hiding behind a parked vehicle on Sutherland Court and was apprehended by Concord police officers at about 5:30 a.m. on January 10, 1974. On his person was a .380 Walther automatic pistol. He was transported to the police station, booked, and a set of keys was removed from his person. At 6:26 p.m. on January 10, 1974, Contra Costa County firemen arrived at the house located at 1560 Sutherland Court to combat a structural fire. The fire was controlled at 6:29 p.m. Fire Captain Spomer arrived at 6:31 p.m. Spomer was informed that a five-gallon gasoline can had been found inside the house and that there had been a flash fire. He entered the house to look for victims and identification of occupants as well as to open up the house for ventilation due to the heavy concentration of smoke. Inside Spomer noted a strong odor of gasoline and observed that all the doors to the hallway had been removed, thus facilitating a draft and permitting the fire to spread more quickly. Spomer suspected arson. He observed Molotov cocktails, assorted ammunition and weapons, stacks of papers on the floors, in the bathtub and elsewhere, and revolutionary literature on the bedroom walls. Searching for identification, Spomer saw an open desk drawer; inside he observed a document entitled “Communique No. 3, Symbionese Liberation Army.” The communique concerned the execution of correctional officers, their wives, and others. 
Aware of the shootout involving Officer Duge and the discovery of SLA materials in the van, Spomer notified the Concord Police Department of his findings. Captain Tamborski of the Concord Police Department arrived at 1560 Sutherland Court around 7:30 p.m. after being advised of the suspected arson and the discovery of the SLA material. Tamborski entered the house to investigate the suspected arson. In one of the bedrooms, Tamborski seized a wallet and an envelope bearing the name of Russell Jack Little. Tamborski noticed a heavy aroma of gasoline in the house. Fearing for his safety, he exited a minute or two after entering. At that point Tamborski was informed the house was outside the Concord city limits. He therefore informed the Contra Costa Sheriff’s Department of the suspected arson. Between 7:30 and 8 p.m., Tamborski reentered the house accompanying a bomb squad and sheriff’s personnel. Tamborski’s stated purpose for reentering was to look for evidence that might relate to the Duge shootout because at that point the arson investigation had been taken over by the Contra Costa Sheriff’s Department. While inside the house, Tamborski observed a cardboard box containing two galvanized pipe bombs. He pointed them out to the bomb squad leader. Marion Horner was the owner of 1560 Sutherland Court. In October 1973, he had rented the house to Nancy Ling Perry and Little posing as Nancy and George DeVoto. “George DeVoto” paid the rent on November 1, 1973. At the time of the fire the rent had been paid through February of 1974. Horner arrived at the scene at approximately 7 p.m. He was informed that arson was suspected. From information provided by several neighbors at the scene, he determined that just prior to the fire a heavily laden automobile “bottomed out” on the driveway and sped from the premises after one of the passengers, a tenant, threw something into the garage; moments later an explosion blew out the dining room windows. 
“Nancy DeVoto” had not been seen or heard from since the automobile’s departure. Presuming the house abandoned, Horner took possession of the premises. Later that evening he gave oral and written consent for the Contra Costa Sheriff’s office and the Oakland Police Department to conduct a search of the premises. Sergeant John Agler of the Oakland Police Department was in charge of the investigation of the Marcus Foster murder. In the early hours of January 10, 1974, Agler learned of the shooting incident in Concord involving Sergeant Duge. Agler was informed of the fire at 1560 Sutherland Court at about 10 p.m., January 10; he was told that SLA information had been discovered inside the house. At the scene he was informed that arson was suspected and that explosive substances and a communique threatening the lives of prison officials and their families had been found inside the house. Agler was shown the consent to search executed by Horner. Although Agler believed that an emergency situation existed and that it was necessary to go inside the house, he decided nonetheless to delay the search until a warrant could be procured. After receiving the search warrant, Agler went through the house and seized a number of items, some of which have been described hereinbefore in the statement of facts. A. Detention of Defendants. Defendants challenge the stop of the van and the intrusiveness of the detention that followed. “[A] reasonable suspicion of involvement in criminal activity will justify a temporary stop or detention” even though the circumstances are also consistent with lawful activity. (In re Tony C. (1978) 21 Cal.3d 888, 894 [148 Cal.Rptr. 366, 582 P.2d 957].) Officer Duge stopped the van because he suspected the possible involvement of the occupants in criminal activity. His suspicions were objectively supported by the unusually slow speed of the vehicle (Williams v. Superior Court (1969) 274 Cal.App.2d 709, 712 [79 Cal.Rptr. 
489]), the lateness of the hour (People v. Rosenfeld (1971) 16 Cal.App.3d 619, 622 [94 Cal.Rptr. 380]), the frequency of burglaries and thefts from vehicles in the area (Rosenfeld, supra, at p. 622), and the circuitous route of the van through the neighborhood, suggesting to Duge the occupants were “casing” the area preparatory to some criminal activity (People v. Jackson (1968) 268 Cal.App.2d 306, 310 [74 Cal.Rptr. 40]). Strong weight must be given to the officer’s experience and familiarity with the area (People v. Cowman (1963) 223 Cal.App.2d 109, 117-118 [35 Cal.Rptr. 528]). Duge had patrolled the neighborhood for eight months; he was generally familiar with vehicles which frequented the area and did not recognize the van as one of them (see Bramlette v. Superior Court (1969) 273 Cal.App.2d 799, 804 [78 Cal.Rptr. 532]); he knew that vans were commonly used by professional burglars. The investigative stop of the van was legally justified. An investigative detention exceeds constitutional bounds when extended beyond what is reasonable under the circumstances that justified its initiation. (Willett v. Superior Court (1969) 2 Cal.App.3d 555, 559 [83 Cal.Rptr. 22].) “No hard and fast rule can be formulated for determining the reasonableness of the period of time elapsing during a detention. The dynamics of the detention-for-questioning situation may justify further detention, further investigation, search, or arrest.” (Fn. omitted; Pendergraft v. Superior Court (1971) 15 Cal.App.3d 237, 242 [93 Cal.Rptr. 155].) Duge suspected he was dealing with automobile burglars. His initial confrontation with defendants reasonably heightened his suspicion. 
Both defendants had Oakland addresses suggesting to Duge they fit the pattern of transient predator common to the area; Little’s identification of the “friend” for whom he was looking was equivocal; the fact he was looking for Sutherland Court, had driven past it but did not request the officer’s assistance provided further grounds for suspicion. At that point Duge was justified in seeking verification and additional information from his radio dispatcher. Although there were no outstanding warrants for Remiro or “Scalise,” the fact that the residence of a DeVoto on Sutherland Court could not be verified warranted further inquiry. At this point the detention had lasted no more than five minutes. As part of a lawful detention, a police officer may request a suspect to alight from a vehicle if the circumstances warrant it (People v. Mickelson (1963) 59 Cal.2d 448, 450 [30 Cal.Rptr. 18, 380 P.2d 658]). It is warranted where the officer’s safety is involved. (People v. Nickles (1970) 9 Cal.App.3d 986, 991-992 [88 Cal.Rptr. 763].) “[E]ven inchoate and unparticularized suspicion that it would be better for the officer’s safety for the passenger to alight is sufficient to justify such a request, because merely stepping out of a vehicle is a minimal intrusion upon privacy, . . .” (People v. Beal (1974) 44 Cal.App.3d 216, 221 [118 Cal.Rptr. 272].) In the presence of suspected felons, Duge was apprehensive for his safety. His concern was magnified by the fact that he was alone and did not know if there were other people concealed in the rear of the van. Under all the circumstances, his decision to frisk Remiro for weapons was reasonable. (People v. Mickelson, supra, 59 Cal.2d at pp. 450-451; People v. Beal, supra, 44 Cal.App.3d at p. 221; cf. People v. Superior Court (Simon) (1972) 7 Cal.3d 186, 208 [101 Cal.Rptr. 837, 496 P.2d 1205].) The investigative stop and detention of defendants was reasonable both in its initiation and its scope. B. Prewarrant Search of the Van. 
Police officers may search a vehicle without a warrant where they have reasonable cause to believe it has contraband or evidence of crime or is itself an instrumentality of the commission of a crime, because there is no distinction of constitutional significance between an immediate search and the immobilization of the vehicle until a warrant is obtained. (Chambers v. Maroney (1970) 399 U.S. 42, 48-52 [26 L.Ed.2d 419, 426-429, 90 S.Ct. 1975]; People v. Laursen (1972) 8 Cal.3d 192, 201 [104 Cal.Rptr. 425, 501 P.2d 1145].) When Breuker searched the van he was aware that one suspect was in custody at the scene, an armed suspect was at large possibly in the area, there was a pistol in the van and the van had been involved in the gun battle. It was therefore reasonable to believe, as Breuker did, that the van contained more evidence of the crime as well as evidence helpful in the apprehension of Remiro who was at large and known to be armed and dangerous (see People v. Laursen, supra, 8 Cal.3d at p. 201). The circumstances permitted a warrantless search of the vehicle. Moreover Little’s gun and the wallet and identification of “Scalise” were not the product of a search, both having been in plain view of the seizing officer. C. Search of the House at 1560 Sutherland Court. Within an hour and a half after the arrival of the fire department at the Sutherland Court house, Fire Captain Spomer and Police Captain Tamborski had each entered the house twice. The fire department had arrived at 6:26 p.m. Although the fire was brought under control minutes thereafter, a unit remained at the scene until 6:30 the following morning in case the fire had not been completely extinguished. The entries of Captains Spomer and Tamborski occurred between 6:30 and 8 p.m. on January 10. Officials may enter a structure without a warrant to extinguish a fire and may remain a reasonable time after the blaze has been extinguished to investigate the cause thereof. 
The warrantless seizure and observation of evidence while in the premises to suppress the fire or determine its cause are constitutional. (Michigan v. Tyler (1978) 436 U.S. 499, 509-510 [56 L.Ed.2d 486, 498-499, 98 S.Ct. 1942]; Romero v. Superior Court (1968) 266 Cal.App.2d 714, 719-720 [72 Cal.Rptr. 430].) The entries of Spomer and Tamborski into the house were related to the fire, its control and causes and the neutralization of conditions therein rendered even more dangerous by the fire, e.g., the presence of Molotov cocktails and other explosives. These entries without a warrant were proper; once inside the officers could lawfully seize items in plain view that constituted contraband or evidence. Little’s wallet and the envelope bearing his name constituted evidence of occupancy of the house; accordingly Tamborski was justified in seizing them. Tamborski’s second entry into the house accompanying the sheriff’s bomb squad was additionally justified by other exigent circumstances (see Warden v. Hayden (1967) 387 U.S. 294, 298 [18 L.Ed.2d 782, 787, 87 S.Ct. 1642]). The known presence in the house of bombs and volatile substances created an emergency situation that justified a warrantless entry (Romero v. Superior Court, supra, 266 Cal.App.2d at pp. 716-717; People v. Superior Court (Peebles) (1970) 6 Cal.App.3d 379, 382-383 [85 Cal.Rptr. 803]). Moreover Tamborski was aware of the Foster murder, Duge’s gun battle with Little and Remiro, probable SLA involvement in both violent incidents and the apparent association of Little and the SLA with the Sutherland Court premises; he had also been informed of the contents of Communique No. 3 threatening the murder of correctional employees and their families. Thus Tamborski had reason to link the house with an ongoing criminal conspiracy involving the gravest crimes, both past and prospective. The actual existence of such a conspiracy and its murderous objectives had already been demonstrated beyond dispute. 
The gravity of the offense is a relevant factor in determining whether an emergency exists. (People v. Sirhan (1972) 7 Cal.3d 710, 739 [102 Cal.Rptr. 385, 497 P.2d 1121].) Where circumstances reasonably justify the apprehension that a criminal conspiracy constitutes an imminent threat to life “it is essential that law enforcement officers be allowed to take fast action in their endeavors to combat such crimes.” (Ibid.) The suspected presence of evidence of such a conspiracy in the Sutherland Court house justified Tamborski’s entry on the premises and the seizure of such evidence. (Ibid.) So far as the record shows, the house was not again entered until around 3 a.m. on January 11, when Sergeant Agler of the Oakland Police Department executed the search warrant for the premises. Defendants’ sole objection to the warrant is that it is overbroad in purporting to authorize the seizure, inter alia, of “revolutionary material.” All that is required is reasonable particularity in the description in a search warrant of items to be seized. (People v. Barthel (1965) 231 Cal.App.2d 827, 832 [42 Cal.Rptr. 290].) Here the officers were confronted with an emergency such that inordinate delay might well precipitate mortal consequences. Under the circumstances a more precise description of some of the items subject to seizure could not reasonably be required. (United States v. Scharfman (2d Cir. 1971) 448 F.2d 1352, 1355, cert. den., 405 U.S. 919 [30 L.Ed.2d 789, 92 S.Ct. 944].) Moreover, it would be unreasonable in those circumstances to require the officer to delay to engage in more extended investigation where the situation demanded alacrity (Halpin v. Superior Court (1972) 6 Cal.3d 885, 901-902 [101 Cal.Rptr. 375, 495 P.2d 1295] (conc. opn. of Mosk, J.)). Hence the generic description of “revolutionary materials” was adequate. 
Under the circumstances the authorities cited by defendants imposing the requirement of more particularized description of items to be seized under a warrant where First Amendment rights are involved are inapposite. The revolutionary materials to be seized under this warrant, although writings, were sought not for the ideas they communicated nor in connection with any literary properties they may have had but for their evidentiary value as proof of a criminal conspiracy. (See Stanford v. Texas (1965) 379 U.S. 476, 485, fn. 16 [13 L.Ed.2d 431, 437, 85 S.Ct. 506]; United States v. Scharfman, supra, 448 F.2d at p. 1354.) Assuming arguendo that the search warrant was invalid in the respect urged by defendants, the items seized by Sgt. Agler including the so-called “revolutionary materials” were nevertheless the product of a lawful search. Putting aside the search warrant, there were other bases, i.e., emergency and consent, upon which Sgt. Agler’s search of the Sutherland Court premises was justified. 1. Emergency. The emergency created by the existence of a criminal conspiracy constituting an imminent threat to human life has already been discussed as justification for Captain Tamborski’s second entry into the premises. Sgt. Agler was in charge of the investigation into the Foster murder and was more intimately acquainted with the SLA and its activities than Tamborski. Knowledge of the details of the Foster killing and of the contents of SLA Communique No. 1 issued in its aftermath led him to believe the SLA was engaged in a conspiracy to assassinate public officials. By the time he arrived at Sutherland Court on January 10, Sgt. Agler knew of the shooting incident involving Sgt. 
Duge and the consequent arrest of two suspects, one of whom was armed with a Walther pistol; he knew that such a weapon had killed Foster; Agler had already compared cartridge casings ejected at the scene of the Duge shootout with those recovered from the scene of the Foster murder and determined the firing pin impressions on the primers were the same. Agler was also aware that SLA literature had been found in the Chevy van and in the Concord house; he had compared the literature found in the van with the Foster communique and observed the common use of identical seven-headed cobras. Agler was informed that explosive devices had been found at the Concord house; that the fire was of incendiary origin; that a tenant of the house had exited, thrown objects into the house and departed just before it caught fire; and that no one had since returned to the house. Agler also knew that at least three suspects were directly involved with the Foster killing but only two suspected of SLA involvement, Little and Remiro, were then in custody. Agler was aware that an SLA communique threatening the execution of correctional officials and their families with cyanide bullets had been found in the house. Considering the demonstrated capacity of the SLA to carry out its murderous designs, Agler’s belief that a continuing conspiracy in fact existed to perpetrate additional murders was reasonably grounded. In our view the exigencies confronting Agler and his fellow officers were even more compelling than those described in People v. Sirhan, supra, where “the mere possibility that there might be . . . evidence [of a conspiracy to assassinate prominent political leaders] in the [defendant’s] house fully warranted” a search of the house. (People v. Sirhan, supra, 7 Cal.3d at p. 739.) 2. Consent. After talking with firemen and observers at the scene, Marion Horner took possession of his property and consented to its search. Sgt. 
Agler had been shown Horner’s written consent before he conducted the warrant search. Abandoned property is subject to search and seizure without a warrant. (Abel v. United States (1960) 362 U.S. 217, 241 [4 L.Ed.2d 668, 687, 80 S.Ct. 683]; People v. Smith (1966) 63 Cal.2d 779, 801 [48 Cal.Rptr. 382, 409 P.2d 222].) A landlord may consent to a search of premises abandoned by a tenant (People v. Urfer (1969) 274 Cal.App.2d 307, 310 [79 Cal.Rptr. 60].) Here the owner of the premises, not himself a law enforcement official, determined that his tenants had abandoned the property. This determination was based on the belief that his tenants had themselves unlawfully fired the house and permanently absconded. This conclusion in turn was based on information from private citizens who had observed some of the events at the scene and from firemen who were lawfully in the house to extinguish the blaze. The situation here is thus unlike that in Michigan v. Tyler, supra, 436 U.S. 499 [56 L.Ed.2d 486], where it was held that law enforcement officials could not draw a conclusion of abandonment and act thereon based solely upon evidence of arson acquired from their own investigative efforts conducted on the premises. (436 U.S. at pp. 505-506 [56 L.Ed.2d at pp. 495-496].) Here the trial court expressly found an abandonment of the Sutherland Court premises. However, it is not necessary for present purposes that we pass upon the legal sufficiency of the evidence to support that finding, because we are of the view that the circumstances known both to Horner and the officers, whether or not they constituted an abandonment in contemplation of law, justified Sgt. Agler in the reasonable, good faith belief that Horner had authority to consent to search. (People v. Hill (1968) 69 Cal.2d 550, 554-555 [72 Cal.Rptr. 641, 446 P.2d 521]; People v. Smith, supra, 63 Cal.2d at p. 799.) 
Since it is only unreasonable searches that are unlawful, Horner’s consent provided discrete justification for the search. We conclude that the entry and search by Sgt. Agler was lawful (1) under the authority of the search warrant, (2) in reaction to a bona fide life endangering emergency, and (3) pursuant to the consent of the owner. The securing of a search warrant despite the known existence of and reliance upon two alternate grounds for the search, either one of which alone would justify the search, does not detract from the validity of either such alternate ground. (People v. Sirhan, supra, 7 Cal.3d at p. 739, fn. 17.) In this area the experienced officer knows that a misjudgment can never be retrieved and caution therefore is the better part of discretion. The fire had destroyed the electrical system in the house. Out of necessity the search was conducted by flashlight and a single light supplied by a portable generator. Only items believed to be of evidentiary value, many of which were in plain sight, were seized (see Skelton v. Superior Court (1969) 1 Cal.3d 144, 157 [81 Cal.Rptr. 613, 460 P.2d 485]). These included documents pertaining to the identity of the occupants of the house and the membership, structure and activities of the SLA. Entire stacks of papers were removed, necessarily in some cases because their partially burned condition made segregation impractical and might have damaged any fingerprint evidence thereon. Furthermore, the time required to inventory on the premises each individual document seized would have consumed in excess of one week. We conclude under the circumstances that the search was not unreasonable in scope. D. Seizure of Keys Removed From Defendants’ Persons. It will be recalled that a set of keys was removed from the persons of both defendants when they were booked by the Concord police. 
These keys were retained with each defendant’s personal property until relinquished later the same day to an officer of the Oakland Police Department who was able to match some of them with the locks of houses and buildings involved in the investigation. The officer did not have a search warrant authorizing seizure of the keys. The warrantless search of a person at booking and seizure of his property is reasonable. (United States v. Edwards (1974) 415 U.S. 800, 802-804 [39 L.Ed.2d 771, 774-776, 94 S.Ct. 1234]; People v. Ross (1967) 67 Cal.2d 64, 70 [60 Cal.Rptr. 254, 429 P.2d 606], revd. on other grounds sub nom., Ross v. California (1968) 391 U.S. 470 [20 L.Ed.2d 750, 88 S.Ct. 1850].) “Once articles have lawfully fallen into the hands of the police they may examine them to see if they have been stolen, test them to see if they have been used in the commission of a crime, return them to the prisoner on his release, or preserve them for use as evidence at the time of trial. [Citation.] During their period of police custody an arrested person’s personal effects, like his person itself, are subject to reasonable inspection, examination, and test.” (People v. Rogers (1966) 241 Cal.App.2d 384, 389-390 [50 Cal.Rptr. 559].) We are satisfied that the trial court ruled correctly in denying defendants’ motions to suppress evidence. III. Other Pretrial Motions A. Motion to Dismiss Indictment. Defendants cite as error the trial court’s denial of their pretrial motions to dismiss the indictment (Pen. Code, § 995.) In the trial court defendants set forth in writing over 90 specific “objections” to grand jury testimony. The same alleged errors are urged upon us. A number of the prosecutor’s questions of grand jury witnesses are condemned as leading or as assuming facts not in evidence. Preliminarily it must be said that many of defendants’ objections to the form of the question are simply not well taken. (See generally, Witkin, Cal. Evidence (2d ed. 
1966) Introduction of Evidence at Trial, §§ 1155-1162, pp. 1071-1077.) Furthermore, with respect to the second category of question the complaint is in the nature of an objection to the order of proof, since facts so assumed were ultimately proved. More to the point, the defendants’ criticism focuses on the form of the question rather than the substance of the answer. As to those objections which have some substance, each such question could have been recast to avoid the objection without calling for, or presumably eliciting, a different response. This category of objection thus does not relate either to the admissibility or competence of the testimony elicited by technically imperfect questions. The testimony in response to such questions deficient in form is not incompetent or inadmissible within the meaning of Penal Code section 939.6, subdivision (b). Nor in most of the cited instances can the answers of witnesses which are characterized by defendants as opinion and speculation be disregarded as incompetent or inadmissible. Indeed most such complaints of defendants amount to no more than a quibble with the witness’ choice of words. Granting that some questions were defective in form and further granting that some unqualified lay opinion crept into the record, defendants were not prejudiced thereby. The competent, admissible evidence received by the grand jury was sufficient to support the indictment. A single photograph of Little was shown to several grand jury witnesses who were asked if they could identify him. Citing Stovall v. Denno (1967) 388 U.S. 293 [18 L.Ed.2d 1199, 87 S.Ct. 1967], Little contends the use of a single photograph in this manner rendered the resulting identifications impermissibly suggestive. We reject the contention. Stovall has no application to an in-court identification during a formal judicial hearing. (See People v. Wheeler (1971) 23 Cal.App.3d 290 [100 Cal.Rptr. 198].) 
Moreover, the record demonstrates that the identifications by the witnesses Damstra and Mullin were based on previous personal contacts with Little and were not dependent on the display of his photograph. With respect to the witness Blackburn, his grand jury testimony identifying Little’s photograph was very tentative, indicating only that Little resembled one of two assailants whom he had scarcely noticed as he walked past them in the twilight. Thus Blackburn’s identification testimony is of such minimal probative value as to have a negligible effect on the overall weight of the evidence presented to the grand jury. The indictment is not vulnerable to the criticism that the prosecutor failed to inform the grand jury of a statement, claimed to be exonerating, made by Blackburn while in the hospital and heavily sedated. The statement was to the effect that one or both of his assailants may have been black. (Both defendants are Caucasian.) The rule of Johnson v. Superior Court (1975) 15 Cal.3d 248 [124 Cal.Rptr. 32, 539 P.2d 792] requiring disclosure to a grand jury of evidence favorable to the defendant is applicable only to post-Johnson indictments. (People v. McAlister (1976) 54 Cal.App.3d 918, 925-926 [126 Cal.Rptr. 881]; People v. Snow (1977) 72 Cal.App.3d 950, 957-958 [140 Cal.Rptr. 427].) The instant indictment preceded the Johnson decision by over one year. Furthermore, failure to disclose the Blackburn statement did not violate defendants’ rights to due process of law. Given the circumstances under which it was made, the probative value of the statement was extremely tenuous; moreover its substance was not inconsistent with the theory of defendants’ guilt as vicariously liable coconspirators. At trial defendants renewed the motion to dismiss the indictment after Blackburn testified and the prosecutor did not ask him to identify, nor did he identify, either defendant. The trial court properly denied the renewed motion as untimely (Pen. Code, § 997; People v. 
Waters (1975) 52 Cal.App.3d 323, 332 [125 Cal.Rptr. 46]). In any event, in ruling on a motion to dismiss the indictment, the trial court cannot evaluate the credibility of a grand jury witness and the weight of his grand jury testimony by reference to his later testimony at trial no matter the extent to which the two may appear inconsistent (People v. Manson (1976) 61 Cal.App.3d 102, 167-168 [132 Cal.Rptr. 265]). Defendants contend that the standard to be applied by a trial court in ruling on a motion to acquit is the standard that should apply to test the evidentiary sufficiency of an indictment on a motion to dismiss. A motion to acquit should be granted “if the evidence then before the court is insufficient to sustain a conviction of such offense or offenses on appeal.” (Pen. Code, § 1118.1.) The question raised by defendants should be addressed to the Legislature, not this court. Upon examination of the transcript, we conclude that there was before the grand jury sufficient competent, admissible evidence to constitute reasonable and probable cause to indict. (Pen. Code, § 995.)

B. Right to Trial in the Vicinage.

Defendants successfully moved venue of the trial from Alameda County. Four counties were available in which to try the case: Los Angeles, Santa Clara, Monterey and Sacramento. In the trial court, defendants sought transfer to Los Angeles, arguing that it was the only county of those available which approximated Alameda County demographically. Over defendants’ objection, the trial was moved to Sacramento County. Defendants contend that when it is necessary to change venue to insure an accused a fair trial, the right to a jury drawn from the vicinage, i.e., the place where the crime occurred, requires that the transferee jurisdiction approximate as closely as possible the racial and ethnic population mix of the county in which the crime was committed.
The right to trial by a jury selected from the residents of the vicinage is guaranteed by the Sixth and Fourteenth Amendments to the federal Constitution (People v. Jones (1973) 9 Cal.3d 546, 556 [108 Cal.Rptr. 345, 510 P.2d 705]). Since by definition the right may be exercised only in a fixed location, it does not follow an accused when he is tried outside the jurisdiction where the crime occurred. The right to trial by a jury selected from the vicinage may be waived; it has been held to have been waived by a successful motion to change venue. (United States v. Angiulo (1st Cir. 1974) 497 F.2d 440, 441, cert. den., 419 U.S. 896 [42 L.Ed.2d 140, 95 S.Ct. 175].) Under the circumstances here then, defendants’ “vicinage” right did not oblige the trial court to transfer the trial to the available county most demographically congruent with Alameda County. In any event, defendants fell far short of demonstrating significant demographic differences between Sacramento and Los Angeles Counties. They produced 1970 census figures which showed a nonwhite population in Alameda County of 21 percent, in Los Angeles County of 14 percent, and in Sacramento County of 10 percent. The 4 percent differential between Los Angeles and Sacramento Counties is not constitutionally significant. Moreover, the proof shows, as defendants concede, that Los Angeles County, defendants’ preferred place of trial, was significantly less heterogeneous than Alameda County.

C. Motion to Quash Jury Panel.

Defendants assert that their constitutional right to be tried by a jury drawn from a representative cross-section of the community was infringed because the exclusive use of voter registration lists to compile the venire from which their jury was selected resulted in underrepresentation on the venire of black, Spanish-speaking and surnamed, and poor people. The latter are deemed by defendants to include those with annual incomes of less than $6,000.
Defendants preserved the issue for appeal by a timely motion before trial to quash the jury panel. The motion was denied. “The use of voter registration lists as the sole source of jurors is not constitutionally invalid [citations], at least in the absence of a showing that the use of those lists resulted ‘in the systematic exclusion of a “cognizable group or class of qualified citizens” ’ [citations], or that there was ‘discrimination in the compiling of such voter registration lists.’ [Citations.]” (Fn. omitted; People v. Sirhan, supra, 7 Cal.3d at pp. 749-750.) Prospective jurors are obtained in Sacramento County by random selection from among all the registered voters. Defendants do not claim discrimination in the compiling of those lists. It is defendants’ burden to establish prima facie the systematic exclusion or underrepresentation of cognizable groups in the community and that a group alleged to be excluded or underrepresented is in fact cognizable, i.e., characterized by a “similarity of attitudes, ideas or experience among its members so that the exclusion prevents juries from reflecting a cross-section of the community.” (Adams v. Superior Court (1974) 12 Cal.3d 55, 60 [115 Cal.Rptr. 247, 524 P.2d 375].) Defendants failed to carry their burden. It was shown that in some areas of Sacramento County with proportionately higher concentrations of black, Spanish-speaking and surnamed, and poor people, a lower percentage of all those age 18 or over residing therein were registered to vote than in the areas of the county with less intense minority concentrations. However, it was not shown how many black, Spanish-speaking and surnamed, or poor people age 18 or over resided in those areas of higher minority concentration. Thus defendants’ proof did not exclude the possibility that an even higher percentage of minorities in the areas surveyed were registered to vote than the percentage of residents age 18 or over registered in nonminority areas of the county.
Furthermore, defendants’ thesis of underrepresentation is based on a comparison of 1970 population figures with 1975 voter registration statistics, an anomaly which led one of defendants’ own experts to opine that conclusions from such a comparison may be totally invalid because population statistics vary substantially within a five-year period. In summary, defendants failed to show that black, Spanish-speaking and surnamed, or poor people were registered to vote at a lower percentage rate than the remaining voters in the county; consequently, defendants have not proved underrepresentation of those minorities on the jury venire. Defendants argue, however, that all those who reside in the areas with high minority concentrations, all of which are located in the core city or within the urban area of Sacramento County, constitute a cognizable class; defendants claim to have proved underrepresentation of this “class” by virtue of the fact that a lower percentage of such residents age 18 or over register to vote than in other areas of the county. We do not agree that a cognizable class can be established merely on the basis of common geographical location (People v. McDowell (1972) 27 Cal.App.3d 864, 875 [104 Cal.Rptr. 181]). The only shared characteristic of this “class” is that, for whatever reason, the members do not choose to register to vote with the same incidence as residents of other areas of the county. That trait, however, does not identify a cognizable class which must be included in a jury venire if it is to be representative. (People v. Sirhan, supra, 7 Cal.3d at p. 750, fn. 26.) The trial court properly denied the defendants’ motion to quash the jury panel.

IV. Alleged Errors During Trial

A. Misconduct of Prosecutor in Opening Statement.

In his opening statement the prosecutor presaged proof that the engine in the Chevrolet van was obtained in an uncharged robbery committed by Remiro in Berkeley in October 1973 in which a white van was stolen.
Defendants’ objection thereto was overruled and a later motion for mistrial was denied. Still later in the trial, out of the presence of the jury, the prosecutor offered evidence of the robbery and disposition of the loot as proof of an overt act relevant to the conspiracy theory upon which the case was tried. The trial court sustained defendants’ objection to the admission of the evidence. Neither then nor thereafter did defendants move to strike the offending part of the opening statement or request a curative admonition. Without identifying the legal bases therefor, defendants complain they were prejudiced by the prosecutor’s opening statement. We observe, however, that the record will not support a finding of bad faith or resort to deceptive or reprehensible tactics on the part of the prosecutor. (See People v. Beivelman (1968) 70 Cal.2d 60, 75 [73 Cal.Rptr. 521, 447 P.2d 913].) In the context of this protracted trial, the remarks singled out for condemnation fade into insignificance in comparison with the quantum of competent evidence of defendants’ guilt. Thus failure to request a curative admonition constitutes a waiver of any claim of error on appeal. (People v. Beivelman, supra.)

B. Evidence of SLA Terrorist Conspiracy.

Defendants assert error in permitting the People over objection to introduce evidence of the broader scope of the SLA conspiracy of which the Foster-Blackburn murder-assault were but a single manifestation. In particular they complain of the introduction of several SLA communiques seized in the Concord house which reveal plans to attack officials of the Department of Corrections and their families, the General Tire and Rubber Company plant in Burlingame and the Avis Rent-A-Car Company, and to kidnap a member of the University of California Board of Regents and an executive of Kaiser Industries in Oakland.
Also introduced over defense objections were items of evidence tending to show preparations for carrying out the activities described in the communiques, including hand-drawn maps of relevant locations, personal data regarding projected kidnap victims and notes suggestive of preparatory surveillance activities. Defendants insist that evidence going beyond the narrow scope of the plans for and perpetration of the Foster murder is irrelevant and more prejudicial than probative. Although conspiracy was not charged, the case was tried on a conspiracy theory. Failure to charge conspiracy as a separate offense does not preclude the People from proving that those substantive offenses which are charged were committed in furtherance of a criminal conspiracy. (People v. Pike (1962) 58 Cal.2d 70, 88 [22 Cal.Rptr. 664, 372 P.2d 656]); nor, it follows, does it preclude the giving of jury instructions based on a conspiracy theory (People v. Washington (1969) 71 Cal.2d 1170, 1174 [81 Cal.Rptr. 5, 459 P.2d 259, 39 A.L.R.3d 541]; People v. Ditson (1962) 57 Cal.2d 415, 447 [20 Cal.Rptr. 165, 369 P.2d 714]). Furthermore, contrary to defendants’ claim, neither law, logic nor common experience dictates that a criminal conspiracy may embrace only one criminal act. The scope of a given conspiracy may be as limited or as expansive as the capacity of criminal minds to design unlawful combinations. In People v. Manson, supra, 61 Cal.App.3d 102, defendants were engaged in a conspiracy of dimensions far broader than the murders with which they were charged. 
The objective of the conspiracy was the realization of Manson’s fanatical dream of a racial war, a cataclysm he referred to as “Helter Skelter.” In upholding the admission under the coconspirator exception to the hearsay rule of a statement by Manson to a coconspirator after the charged killings had been committed, the Court of Appeal stated, “Boundaries of a conspiracy are not limited by the substantive crimes committed in furtherance of the agreement. [Par.] Here the conspiracy amounted to fulfillment of Manson’s prophecy. . . . The gist of the conspiracy was the comprehended common design, however bizarre and fanciful. It is not necessary that the object of the conspiracy be carried out or completed. [Citations.] The corollary of that proposition is that the conspiracy continues until it is accomplished or abandoned. It is obvious that Helter Skelter was never realized and the conspiracy remained pending. . . .” (Fn. omitted; People v. Manson, supra, 61 Cal.App.3d at pp. 155-156.) Similarly, in the case at bench, the slaying of Superintendent Foster was only a step in a planned series of terrorist activities designed to accomplish the SLA’s avowed goal of fomenting a violent upheaval within American society in order to effect revolutionary change. Here, as in Manson, the ultimate goals of the conspirators were never achieved. An SLA communique of February 4, 1974, claiming credit for the Patricia Hearst kidnapping, and the gun battle with Los Angeles police on May 17, 1974, in which six SLA members died both tended to prove that the conspiracy of which these defendants were members continued in existence at least until the date of the Los Angeles gun battle. The challenged evidence was relevant and admissible to show the nature and scope of the conspiracy which spawned the plans to murder Foster and Blackburn.
Evidence of conspiratorial activities both before and after the commission of the charged offenses was necessary to show the commitment of defendants and their coconspirators to the common design and their active participation in the means intended to further its implementation, all of which was relevant to and highly probative of defendants’ criminal culpability in the charged offenses which were themselves but one of several means calculated to achieve the common goal. (People v. Manson, supra, 61 Cal.App.3d at pp. 155-156; People v. Cowan (1940) 38 Cal.App.2d 231, 239 [101 P.2d 125]; People v. Wilson (1926) 76 Cal.App. 688, 694 [245 P. 781]; People v. Schmidt (1917) 33 Cal.App. 426, 446 [165 P. 555].) There is no merit in defendants’ claim that the probative value of the conspiracy evidence was outweighed by its prejudicial effect. (Evid. Code, § 352.) The record demonstrates that the trial court considered at length defendants’ objections on this basis at trial, and chose in its discretion to admit the evidence. The trial court decision admitting evidence over the objection that it is more prejudicial than probative will not be disturbed on appeal absent a showing of manifest abuse of discretion resulting in a miscarriage of justice. (People v. Wein (1977) 69 Cal.App.3d 79, 90 [137 Cal.Rptr. 814].) No such showing has been made here. Defendants claim there was insufficient evidence adduced to prove their participation in the SLA conspiracy. It is well-settled that the unlawful agreement among conspirators may be inferred from the conduct of defendants in mutually carrying out a common illegal purpose, the nature of the act done, the relationship of the parties, the interests of the alleged conspirators and other circumstances (People v. Cockrell (1965) 63 Cal.2d 659, 667 [47 Cal.Rptr. 788, 408 P.2d 116]; People v. Johnson (1969) 276 Cal.App.2d 232, 237-238 [80 Cal.Rptr. 683]; People v. Lynam (1968) 261 Cal.App.2d 490, 502 [68 Cal.Rptr. 202]).
Here, the very nature of the Foster murder—a planned, coordinated ambush carried out by three persons acting together—points ineluctably to a conspiracy. Remiro’s ownership and possession of one of the Foster murder weapons and Little’s ownership of another weapon possibly used in the killing, defendants’ relationships to the Sutherland Court house and its contents, their associations with Nancy Ling Perry, Donald DeFreeze, Willie Wolfe, Camilla Hall, Angela Atwood and Patricia Soltysik, their adherence to and performance of acts in furtherance of the principles set forth in the SLA manifesto, and their consciousness of guilt as evidenced by the flight from and shoot-out with Sgt. Duge and their attempted escape from jail (discussed infra) all furnished more than an adequate evidentiary basis tying them to the conspiracy. A stronger than prima facie showing of the existence of a conspiracy had been made before evidence of acts and declarations in furtherance thereof was admitted. (Cf. Evid. Code, § 1223; People v. Lawrence (1972) 25 Cal.App.3d 498, 510-511 [102 Cal.Rptr. 16].) The order of proof thereafter was a matter for the discretion of the trial judge. The record reveals no abuse.

C. Evidence of Escape Attempt.

On March 1, 1975, approximately one month before trial began, defendants attempted forcibly to escape from the Alameda County jail. Before they were subdued, Remiro had knocked a guard to the floor, gouged him in the eye, taken his keys and inserted them in the lock on a gun locker; Little had knocked another guard to the floor, stabbed him in the throat with a pencil and battered him repeatedly as the guard struggled to reach an alarm to summon help. At trial defendants sought to exclude all testimony regarding their escape attempt, arguing the prejudicial effect of such evidence outweighs its probative value on the issue of consciousness of guilt.
Defendants contend that since they were then being held in custody for other charges in addition to the Foster-Blackburn murder-assault (the other charges arose out of the shoot-out with Sgt. Duge in Contra Costa County), and since the escape attempt occurred almost 16 months after the Foster murder, an inference therefrom of consciousness of guilt could as well be ascribed to the Contra Costa County charges then pending as to the Foster-Blackburn case. Defendants also urge the evidence is unduly inflammatory due to the assaultive nature of the escape attempt. Evidence of escape from custody pending trial is admissible on the issue of consciousness of guilt (People v. Terry (1970) 2 Cal.3d 362, 395 [85 Cal.Rptr. 409, 466 P.2d 961]; People v. Burnett (1967) 251 Cal.App.2d 651, 654-655 [59 Cal.Rptr. 652]). Remoteness of the escape attempt from the crime as to which it is offered to prove consciousness of guilt goes to weight, not admissibility (People v. Terry, supra, 2 Cal.3d at p. 395); pendency of charges in addition to that as to which the escape attempt is offered to show consciousness of guilt likewise goes to the weight rather than the admissibility of the evidence (People v. Perry (1972) 7 Cal.3d 756, 772-774 [103 Cal.Rptr. 161, 499 P.2d 129]). Evidence of defendants’ assault on their jailers was essential to prove the escape attempt and to permit the jury to assess the effect and value of the evidence on the issue of consciousness of guilt. The evidence of attempted escape was properly admitted.

D. Other Evidence Claimed to Be Irrelevant and Prejudicial.

Gary Ling, Nancy Ling Perry’s brother, testified and identified his sister’s handwriting in a notebook which was seized at 1560 Sutherland Court. The notebook contained a hand-drawn map of the Foster ambush site. Ling was a competent witness to identify his sister’s handwriting (Evid. Code, § 1416; People v. Williams (1960) 187 Cal.App.2d 355, 367 [9 Cal.Rptr. 722]).
His testimony was relevant in establishing a foundation for the notebook’s admissibility. The notebook itself was relevant to proof of the general SLA conspiracy and the specific plan to assassinate Foster and Blackburn. Colton Westbrook testified about the Black Cultural Association, an outside group concerned with the rights of black prisoners at the California Medical Facility at Vacaville. Westbrook testified that Little, Willie Wolfe and others attended association meetings in Vacaville in the spring and summer of 1972; Donald DeFreeze, then a prison inmate at Vacaville, was an active participant in the association during the time of Little’s attendance. The trial court limited the jury’s consideration of Westbrook’s testimony to the case against Little. The testimony was relevant in showing an early association among Wolfe, Little, and DeFreeze who became coconspirators within a few months thereafter. Wilbur Taylor, president of the Chabot Gun Club in Oakland, laid an evidentiary foundation for sign-in sheets at the club shooting range for several dates from April 6, 1973, through November 4, 1973. Appearing on these sheets in various combinations on identical days were the names of defendants and Willie Wolfe, M. Soltysik, N. Perry, Angela Atwood, William Harris and Emily Harris. Taylor was not personally present when any of these sheets were signed. The records were admissible under Evidence Code section 1271 which authorizes the receipt under the business records exception to the hearsay rule of a writing made in the regular course of business at or near the time of the act, condition or event recorded, when the custodian of records or other qualified witness testifies to the document’s identity and its mode of preparation and when, in the opinion of the court, the sources of information, method, and time of preparation are such as to justify its admission. Admissibility of such records is within the broad discretion of the trial court (People v. 
Williams (1973) 36 Cal.App.3d 262, 275 [111 Cal.Rptr. 378]). The fact that the person who actually prepared the record is not called as a witness does not render the document inadmissible (People v. Williams, supra, 36 Cal.App.3d at p. 275). Taylor’s testimony provided an adequate foundation for admission of the sign-in sheets. The fact that the sign-in sheets do not show that any of the conspirators were necessarily present at the same time goes only to the weight, not the admissibility, of the evidence. The evidence itself tends to show association of the conspirators and preparations to achieve the objects of the conspiracy and thus is relevant. Christopher Thompson testified that he sold the .38 caliber Rossi revolver to Little in March 1973. A firearms expert identified the Rossi as a possible murder weapon and as the weapon which fired some of the shells found at 1560 Sutherland Court. Regardless of whether he ever personally used the weapon, Little’s purchase of it was evidence of conduct in furtherance of the SLA’s goals. The testimony was thus relevant. Witnesses Agler and Tamborski made brief reference to the presence of pipe bombs in the Sutherland Court house; witness Robert Manning referred to the arrival of the bomb squad. The relevance of the evidence is clear in conjunction with other evidence tending to associate defendants with the Concord house as it suggests that defendants’ involvement with the SLA was more than a mere devotion to the group’s fanatical rhetoric and in fact extended to participation in amassing arms as called for by the organization’s written credo.

V. Security Measures at Trial

Defendants charge their right to public trial was infringed by the implementation of a sheriff’s security order which for a time apparently permitted the photographing, fingerprinting and searching of members of the public seeking entry to the trial (the order is not part of the record on appeal).
The order was modified by the court on the 11th day of defendants’ 46-day trial to delete the fingerprinting requirement. Defendants assert the order discouraged some members of the public from attending the trial and thus the trial was not public. “Under normal conditions a public trial is one which is open to the general public at all times. This right of attendance may be curtailed under special circumstances without infringement of the constitutional right, but it cannot be denied altogether, nor can it be restricted except in cases of necessity. The most common of these is the necessity of preserving order and preventing interference with the proceedings.” (People v. Byrnes (1948) 84 Cal.App.2d 72, 73 [190 P.2d 290].) In one of the exchanges between court and counsel in this matter, the court indicated it felt the security measures other than fingerprinting remained appropriate due to a threat to bomb the Sacramento County courthouse received by the sheriff after defendants’ arrival in Sacramento for trial, and the background of the SLA and defendants’ alleged involvement with that group. It is also apparent from the record that the trial court felt the security measures were necessary in part for defendants’ own protection as it had been reported that certain persons in disagreement with their political philosophies might attempt to gain entrance to the trial. Furthermore, from the record in Jordan v. Lowe (see fn. 5, ante, p. 847), we note the court could well fear that defendants posed a security risk not only in terms of the possibility that some of their comrades still at large might attempt some sort of rescue maneuver, but also because jeweler’s saws were found in defendants’ effects when they were transferred from Alameda to Sacramento County. Also leaflets were being distributed at the courthouse urging protests and demonstrations to free “prisoners of war” Little and Remiro.
And known members of the Weather Underground, a group which had claimed responsibility for bombings and other acts of violence, attended defendants’ trial. In fact, during cross-examination of prosecution witness Christopher Thompson, Little physically attacked him on the witness stand to the verbal encouragement of a trial spectator shouting, “Kill him! Kill him!” In this atmosphere the security order was eminently reasonable as an aid in law enforcement’s endeavor to guard against possible harm to defendants, the officers of the court, and trial spectators themselves. Even so, despite these measures, there was no wholesale exclusion of the public from the trial. Defendants’ rights to a public trial were not improperly curtailed.

VI. Disposition

The judgment of conviction as to defendant Remiro is affirmed. Because the giving of the instruction condemned in People v. Gainer, supra, constituted prejudicial error per se, the judgment of conviction as to defendant Little must be and is reversed.

Paras, J., concurred.

KARLTON, J. I concur in the majority’s determination that under compulsion of People v. Gainer (1977) 19 Cal.3d 835 [139 Cal.Rptr. 861, 566 P.2d 997], the conviction of defendant Little must be reversed. I must respectfully dissent from the majority’s failure to reverse the Remiro conviction on the same ground. Under present law a retrial free of the error condemned in People v. Gainer, supra, is required in both the Little and Remiro cases. The issue presented to this intermediate tribunal is neither the guilt nor innocence of the defendants (that is a matter for the jury), nor whether “Gainer” error should be per se reversible (that having been determined by the Supreme Court). Our function in this case is to apply the law explicated by the Supreme Court to proceedings in the trial court; when we do so, reversal is required. In People v.
Gainer, supra, the Supreme Court held, in terms leaving no room for interpretation, “in criminal trials an Allen instruction ‘should never again be read in a California courtroom’. ” (People v. Gainer, supra, at p. 857.) An Allen-type instruction was read in this case. In Gainer, the Supreme Court determined its ruling would be applicable “to all cases not yet final as of the date of this decision.” (Id., at p. 853.) This case was not yet final as of the date of this decision. In Gainer, the court held that the portion of the Allen instruction admonishing minority jurors to reexamine their position is per se reversible error (id., at pp. 854-855). Such an admonition was given in this case. The three elements being present, the result seems inevitable. The majority argues that it does not violate the “per se” rule of Gainer “because it is evident that the jury had already concluded its deliberations with respect to Remiro before the giving of the erroneous instruction on June 9.” It must first be observed that the nature of a per se rule is that it is per se. This court is precluded from making the very determination which the majority undertakes. Moreover, because of the extremely limited circumstances under which a jury’s deliberations may be made of record (see, e.g., Silverhart v. Mount Zion Hospital (1971) 20 Cal.App.3d 1022 [98 Cal.Rptr. 187, 54 A.L.R.3d 250]) at best the majority may only infer that the Allen instruction had no effect. Indeed, an examination of the majority opinion demonstrates that very fact. For instance the majority observes, “The identity of that defendant was established circumstantially when the verdicts were returned in open court. . . . That these dates were not inaccurate can be inferred from the court’s instruction . . . .” (Italics added.) Moreover, I cannot concur in the majority’s denial that the giving of the Allen instruction affected the jury’s reconsidering its verdict. 
The majority argues that “that possibility, theoretical only, finds no support in the record.” This is a per se rule, and the appellant has no burden whatsoever to demonstrate support in the record. Moreover, the giving of the Allen instruction, in itself, might well preclude jurors with doubt from raising the doubt subsequent to the signing of the verdict but before that verdict was recorded and thus became final. Again, any such doubts cannot be the subject of an admissible affidavit and thus cannot be part of the record. (Silverhart v. Mount Zion Hospital, supra, 20 Cal.App.3d 1022.) The court’s determination “that the erroneous instruction could have had no effect” is simply untenable. We certainly may reasonably assume that it did not affect the jury’s determination, and undoubtedly the record would support such an assumption, but such determinations or assumptions are simply not open to us under Gainer. The majority makes a very persuasive case that, at least insofar as defendant Remiro, the law ought not to be per se reversal, since the Allen instruction did not affect the verdict. The majority is free to use the peculiar facts in this case to demonstrate what the law ought to be; however, with the greatest of respect, I suggest the majority is not free to substitute a prejudicial error doctrine for a per se rule (Auto Equity Sales, Inc. v. Superior Court (1962) 57 Cal.2d 450 [20 Cal.Rptr. 321, 369 P.2d 937]). To use Mr. Witkin’s felicitous phrase, “we are bound but not gagged.” (Witkin, Manual on Appellate Court Opinions (1977) pp. 168-169.) Aside from stare decisis, the majority opinion fails to address a critical basis for the per se rule—judicial economy. The Supreme Court recognized that the partial retroactivity of Gainer would have a significant impact on the courts. It chose that the impact would fall on trial rather than appellate courts. 
The Supreme Court, in explaining its ruling, noted “This conclusion also has the beneficial effect of removing a fertile source of criminal appeals. Were the giving of an Allen-type charge potentially proper, the appellate courts of this state would be required to sift the facts and circumstances of each case in which the charge was delivered to determine whether the charge placed undue pressure on the jury to agree. . . . Other courts which have banned Allen have also done so in the name of appellate economy. [Citations.]” (People v. Gainer, supra, 19 Cal.3d at pp. 852-853, fn. 17.) As I have indicated, the majority makes a good case that exception to the per se rule is appropriate—but as to the question of reversal, that argument is irrelevant. What Justice Jackson said concerning the United States Supreme Court is equally true of our Supreme Court: “[They] are not final because [they] are infallible, but [they] are infallible only because [they] are final.” (Brown v. Allen (1953) 344 U.S. 443, 540 [97 L.Ed. 469, 533, 73 S.Ct. 397] (conc. opn. of Jackson, J.).) Inasmuch as I concur in the judgment concerning the reversal of the Little conviction, extended discussion of that portion of the opinion is not required. The Supreme Court briefly discussed the issue of retroactivity in its decision and, noting that the purpose of the Gainer decision was to rectify “judicial error which significantly infects the fact-finding process at trial,” determined that the rule there announced would “apply to the instant matter and to all cases not yet final as of the date of this decision.” (People v. Gainer, supra, 19 Cal.3d at p. 853.) The issue of retroactivity is one of extreme complexity, and reasonable people might well disagree as to the application of the criteria to a given situation.
I am less certain than my brethren of the majority that the “tarnished reputation” of the judiciary is related to our high court’s efforts to ensure criminal trials consistent with values inherent in our Constitution and fundamental notions of a fair trial. However that may be, our sworn duty is to protect those principles and I cannot help but believe that the abandonment of such principles would ultimately only lead to the people’s justified contempt for the courts. We need not fear, however. I have confidence in both our courts and our people. I suspect that someday in the future, the majority’s hyperbole will be viewed with bemused wonder. A petition for a rehearing was denied March 23, 1979, and the petitions of appellant Remiro and respondent for a hearing by the Supreme Court were denied May 3, 1979. Clark, J., and Richardson, J., were of the opinion that the petitions should be granted. The complete text of the instruction as given follows: “In a large proportion of cases and, perhaps, strictly speaking, in all cases, absolute certainty cannot be attained or expected. “Although the verdict to which a juror agrees must, of course, be his own verdict, the result of his own convictions, and not a mere acquiescence in the conclusion of his or her fellows, yet in order to bring twelve minds to a unanimous result, you must examine the questions submitted to you with candor, and with a proper regard and deference to the opinions of each other. “You should consider that the case must at some time be decided, that you are selected in the same manner and from the same source from which any future jury must be selected, and there is no reason to suppose the case will ever be submitted to twelve men or women more intelligent, more impartial, or more competent to decide it, or that more or clearer evidence will be produced on the one side or the other. And with this view it is your duty to decide the case, if you can conscientiously do so. 
“In order to make a decision more practicable, the law imposes the burden of proof upon the State to establish every part of their case to a moral certainty and beyond a reasonable doubt, and the State of California—pardon me—and if in any part of it you are left in doubt, the defendant is entitled to the benefit of the doubt and must be acquitted. “But in conferring together, you ought to pay proper respect to each other’s opinions, and listen with a disposition to be convinced to each other’s arguments. “And, on the other hand, if much of the larger number of your panel are for conviction, a dissenting juror should consider whether a doubt in his or her own mind is a reasonable one, which makes no impression upon the minds of the many men or women equally honest, equally intelligent with himself or herself, and to have heard the same evidence with the same attention and equal desire to arrive at the truth and under the sanction of the same oath. “On the other hand, if a majority are for acquittal, the minority ought to seriously ask themselves whether they may not reasonably and ought not to doubt the correctness of the judgment which is not concurred in by most of those with whom they are associated, and distrust the weight or sufficiency of that evidence which fails to carry conviction in the minds of their fellows. “This is given to you as a suggestion of the theory and rationale behind jurors coming to a decision one way or the other. . . .” CALJIC No. 17.40 as given by the court provided: “Both the People and the defendants are entitled to the individual opinion of each juror. “It is the duty of each of you to consider the evidence for the purpose of arriving at a verdict, if you can do so. “Each of you must decide the case for yourself, but should do so only after a discussion of the evidence and the instructions with the other jurors. “And you will notice there it doesn’t say there the manner in which that discussion is had, just in open assembly. 
“You should not hesitate to change an opinion, if you are convinced it is erroneous. However, you should not be influenced to decide any question a particular way because a majority of the jurors, or any of them, favor such a decision.” The Supreme Court has urged the continued use of CALJIC No. 17.40, noting that it contains none of the objectionable features of the usual Allen charge. (People v. Gainer (1977) 19 Cal.3d 835, 856 [139 Cal.Rptr. 861, 566 P.2d 997].) Communique No. 1 “serve[d] notice on the fascist Board of Education and its fascist supporters that [the SLA has] issued a Death Warrant on All Members [of the school board] and Supporters” of an Oakland School District plan to increase security measures in the public schools to combat violence and vandalism. But for the murder of Superintendent Foster, Communique No. 1 might otherwise have been dismissed as puerile posturing and bluster by a group of pathetic social misfits. However, in the stark light of reality, it took on an especially ominous and sinister aspect, posing a concrete and imminent threat of violent death to every member of the Oakland School Board and others in the community who might be regarded as their supporters. It was a threat the police could not and the courts should not ignore. Remiro purchased the .380 Walther pistol used to kill Foster in July 1973 at a San Leandro sporting goods store. Little purchased the .38 caliber Rossi revolver in March 1973 from a private party. The bill of sale was made out to a fictitious purchaser. Among the items of evidence seized in the Concord house was a document entitled “The Seven Principles of the Symbionese Liberation Army” (sometimes referred to herein as the “SLA manifesto”). It spelled out the credo and goals of the SLA and the means of implementing them. Little’s fingerprints were found on several pages thereof. 
Little’s fingerprints were also found on two unnumbered SLA communiques—one concerning a plan to kidnap a named Kaiser Corporation executive for ransom and another concerning an attack on the General Tire and Rubber Company in Burlingame, on a list of Regents and officers of the University of California, on a piece of paper containing a name and address identified thereon as that of the “Treas. of Bd. of Regents,” on a list of gun stores and a book entitled Marksmanship and on many other documents admitted into evidence which were seized in the Concord house. Remiro’s fingerprints were also found on the Marksmanship book, and on a notebook containing handwritten notes which recited some of the seven principles of the Symbionese Liberation Army and which also contained a hand-drawn sketch of the area around the administrative offices of the Oakland Unified School District where Superintendent Foster was shot. Remiro’s fingerprints were found on several other pieces of documentary evidence seized in the Concord house. Fingerprints of Perry, Soltysik and DeFreeze were also found on numerous documents seized at that location. There was a plethora of evidence showing close association among defendants and the six named SLA members over a substantial period of time. This included evidence that Little had assisted DeFreeze to evade apprehension after his escape from Soledad Prison in March 1973 and evidence that various combinations of these individuals visited an East Bay firing range on several occasions in 1973 and practice-fired weapons. William and Emily Harris also participated in some of the latter activities. As examples of fealty to SLA goals, defendants acquired false identification papers and were involved in the rental of apartments and houses intended to serve as safe houses for SLA “combat units;” they also acquired numerous weapons and ammunition for stockpiling, all in accordance with the approved modus operandi set forth in the SLA manifesto. 
On April 17, 1975 (the 12th day of trial), a petition for extraordinary relief was filed in this court by the American Civil Liberties Union seeking on behalf of certain alleged would-be spectators to prohibit further implementation of the security order. We denied the petition on April 24, 1975 (Jordan, et al. v. Lowe, 3 Civ. 15181). Assigned by the Chairperson of the Judicial Council. Because I believe the conviction of both defendants must be reversed, and because the majority has decided otherwise, I will not discuss the various other issues raised by the defendants. Given the majority’s ruling, such a discussion would merely extend this opinion beyond tolerable limits for no perceptible purpose. I simply note that I do not concur in all of the court’s other rulings. As a trial judge, under the famous determinative doctrine of “whose ‘ox is being gored,’ ” I might have preferred a different choice—but like my brethren, no one asked us.
Structure and Development of the Nephridia. 345 consideration. Child (1900) has given an accurate account of the early development within the egg-membrane. My own descriptions will have reference to the metamorphosis and larval development up to a period at which most of the definitive external characteristics are complete. Very little care is needed in order to preserve the larvae for weeks or even months in a healthy condition, and capable of growth and development. The egg-strings are placed in clean sea water in large, flat, well-lighted dishes covered by sheets of glass to prevent excess of evaporation, and containing a few pieces of Ulva for aeration. It is advisable to change the sea-water at intervals of a week or ten days; otherwise the dishes require very little attention. The organic debris present, derived from minute Algae, particles of decaying Ulva, and the bodies of dead larvae, seems to furnish the larvae with sufficient food for the development of a fairly large proportion of their number. Under these conditions, larvae have been kept in the laboratory for periods of 14 to 15 weeks, apparently in a perfectly healthy condition and exhibiting all the normal activities. Development seems, however, to progress more slowly than under the natural conditions, for the largest specimens reared never exceeded a length of 15 mm., although exhibiting in other respects an almost perfect agreement with the adult in appearance, anatomical structure, and behavior. The food-supply is in all probability insufficient for rapid growth; this is indicated by the fact that if kept in sea-water to which Carmine powder has been added the larvae usually exhibit a greatly increased rate of growth, especially in the early stages. It would no doubt be possible, by the employment of suitable methods of feeding, to rear them to more advanced stages than the above. 
The larvae leave the egg-strings in from two to three days after oviposition, in the form of slightly elongated maggot-like free-swimming organisms (about 0.3 mm. in length), which exhibit a most pronounced positive phototaxis combined with negative geotaxis. As a result of these tendencies they swim rapidly to the light side of the dish and there gather in enormous numbers at the surface of the water. At this stage a larva possesses, in addition to the peristomium (which is without setae), three setigerous trunk somites, in each of which are two paired sets of setae corresponding to the notopodial and neuropodial setae of the adult. The notopodial setae of each side are generally two in number, spoon-shaped and spear-
using System.Threading.Tasks;
using Discord;
using Discord.Commands;

namespace AtlasBotNode.Modules.Smash
{
    [Group("SSBU")]
    public class SmashUltimateModule : ModuleBase
    {
        // NOTE: the embed below is hardcoded placeholder data for Pichu;
        // the characterName parameter is not yet used to look up a character.
        [Command("character")]
        public async Task GetCharacter(string characterName)
        {
            var embedBuilder = new EmbedBuilder();
            embedBuilder.WithImageUrl("https://www.ssbwiki.com/images/thumb/c/c1/Pichu_SSBU.png/250px-Pichu_SSBU.png");
            embedBuilder.WithThumbnailUrl(
                "https://www.ssbwiki.com/images/thumb/d/d6/PichuHeadSSBU.png/50px-PichuHeadSSBU.png");
            embedBuilder.WithCurrentTimestamp();
            embedBuilder.WithColor(Color.Blue);
            embedBuilder.WithTitle("Pichu");
            embedBuilder.AddField(
                "Description",
                "Pichu returns as a playable character in Super Smash Bros. Ultimate. This marks its first playable appearance in 17 years, a distinction shared with fellow Melee newcomer Young Link. Like Pikachu, Pichu now has a female variant to select from via alternate costumes; in Pichu's case, it is Spiky-eared Pichu. Unlike Pikachu Libre, Spiky-eared Pichu does not appear as a Spirit.");
            embedBuilder.AddField(
                "Updates",
                "**2.0.0:**\n- Pummel has less hitlag (15 frames → 14). This makes it faster and allows for an additional use at high percents before the opponent escapes.");
            embedBuilder.AddField("Notable Players", "Captain L, JaKaL, Nietono, VoiD, Yetey");

            await ReplyAsync(string.Empty, embed: embedBuilder.Build());
        }
    }
}
305 P.3d 543 STATE of Idaho, Plaintiff-Respondent, v. John DOE (2012-07), Defendant-Appellant. No. 39272. Court of Appeals of Idaho. May 14, 2013. Review Denied Aug. 29, 2013. Nevin, Benjamin, McKay & Bartlett, LLP, Boise, for appellant. Robyn A. Fyffe argued. Hon. Lawrence G. Wasden, Attorney General; Mark W. Olson, Deputy Attorney General, Boise, for respondent. Mark W. Olson argued. GRATTON, Judge. John Doe appeals from the district court’s order, on intermediate appeal, reversing the magistrate’s order granting Doe’s motion to expunge his record pursuant to Idaho Code § 20-525A. I. FACTUAL AND PROCEDURAL BACKGROUND In July 1997, when Doe was fourteen years old, he committed the crime of tobacco possession by a minor in violation of I.C. § 18-1502. In December 1997, the State accused him of coming within the purview of the Juvenile Corrections Act (JCA) for committing the felony offense of burglary and the misdemeanor offenses of petit theft, being a runaway, and being beyond the control of his parents. In January 1998, the State filed a JCA petition again accusing Doe of being a runaway. He admitted to committing the offenses and was sentenced to juvenile probation. In the instant case, in September 1998, Doe was cited for possession of marijuana. In May 1999, Doe pled guilty to possession of drug paraphernalia by a minor in violation of I.C. § 18-1502C and was sentenced to pay a fine and costs. In December 2000, at the age of seventeen, Doe was charged with possession of marijuana. However, on that occasion, the State asserted that Doe fell within the purview of the JCA for violating I.C. § 37-2732(c)(3), rather than charging him for a second time with a misdemeanor under I.C. § 18-1502C. In October 2001, Doe was charged with underage consumption of alcohol under I.C. § 23-949. Sometime after Doe’s eighteenth birthday, he began to turn his life around. Doe attempted to enlist in the Army, but was turned away because of his juvenile record. 
In October 2006, Doe petitioned the juvenile court for expungement of his record. In November 2006, the juvenile court expunged Doe’s JCA cases; however, his misdemeanors for: (1) possession of drug paraphernalia; (2) tobacco possession; and (3) underage consumption of alcohol were not expunged. The expungement of his JCA cases was sufficient to allow him to enlist in the Army. Doe was deployed to Iraq for fifteen months, returned in February 2009, and was honorably released from active duty in the spring of 2010. Doe sought work as a correctional officer while he attended the College of Western Idaho. Doe was unable to obtain employment as a correctional officer because of his misdemeanor conviction in this case. In September 2010, Doe filed a motion to expunge the records associated with the conviction of his misdemeanor. Doe brought the motion pursuant to I.C. § 20-525A, which permits individuals to petition for expungement of records from proceedings adjudicated under the purview of the JCA. In the alternative, Doe sought to have his record expunged pursuant to Idaho Court Administrative Rule 32(i) or the magistrate’s inherent authority. Doe also argued that if I.C. § 20-525A does not provide him with relief, then the statute violates the Equal Protection Clauses of the United States and Idaho Constitutions. The State objected to Doe’s motion, arguing the magistrate court did not have authority to expunge the records. The magistrate granted Doe’s motion pursuant to I.C. § 20-525A. While recognizing that I.C. § 20-525A only provided for the expungement of records of proceedings adjudicated under the purview of the JCA, the magistrate reasoned: In this case, the state had the option of pursuing the possession of marijuana charge under the JCA or Idaho Code § 18-1502C. As previously noted, the charge was ultimately resolved with fines and costs of $213.50, and no other punishment, classes or probation. 
Since the state contemporaneously dismissed a failure to appear citation, it appears likely that the resolution was part of a negotiated plea agreement. In choosing to pursue under section 18-1502C it appears the state did not feel it necessary to subject Defendant to the JCA and was pursuing a less onerous outcome. I have no reason to believe that the state[’]s seemingly benign decision not to proceed under the JCA was designed to deprive Defendant of his right to expungement under that Act. In fact, in providing the option of prosecuting cases involving possession of marijuana by minors under Idaho Code § 18-1502C, it is unlikely the legislature intended to create a loophole where this type of misdemeanor is ineligible for expungement while other more serious non-violent crimes remain eligible. The magistrate did not, however, expunge the records of the convictions for possession of tobacco or underage consumption of alcohol. The State appealed the magistrate’s determination. In its intermediate appellate capacity, the district court reversed the magistrate. The district court concluded that only the records of proceedings adjudicated under the JCA were eligible for expungement and the misdemeanor crime did not fall within the plain wording of the JCA. The district court also rejected Doe’s alternative arguments to affirm the magistrate’s order. The district court did, however, remand the case to the magistrate court for a determination of whether Doe may have his records sealed pursuant to I.C.A.R. 32(i). Doe timely appealed. II. ANALYSIS On review of a decision of the district court, rendered in its appellate capacity, we review the decision of the district court directly. Losser v. Bradstreet, 145 Idaho 670, 672, 183 P.3d 758, 760 (2008). We examine the magistrate record to determine whether there is substantial and competent evidence to support the magistrate’s findings of fact and whether the magistrate’s conclusions of law follow from those findings. Id. A. 
Idaho Code § 20-525A Doe claims the JCA’s statutory scheme and the legislature’s expressed intent “makes it apparent that the legislature intended juveniles charged with misdemeanor marijuana cases in magistrate court to be able to expunge their records.” This Court exercises free review over the application and construction of statutes. State v. Reyes, 139 Idaho 502, 505, 80 P.3d 1103, 1106 (Ct.App.2003). Where the language of a statute is plain and unambiguous, this Court must give effect to the statute as written, without engaging in statutory construction. State v. Burnight, 132 Idaho 654, 659, 978 P.2d 214, 219 (1999); State v. Escobar, 134 Idaho 387, 389, 3 P.3d 65, 67 (Ct.App.2000). The language of the statute is to be given its plain, obvious, and rational meaning. Burnight, 132 Idaho at 659, 978 P.2d at 219. If the language is clear and unambiguous, there is no occasion for the court to resort to legislative history, or rules of statutory interpretation. Escobar, 134 Idaho at 389, 3 P.3d at 67. When this Court must engage in statutory construction because an ambiguity exists, it has the duty to ascertain the legislative intent and give effect to that intent. State v. Beard, 135 Idaho 641, 646, 22 P.3d 116, 121 (Ct.App.2001). To ascertain such intent, not only must the literal words of the statute be examined, but also the context of those words, the public policy behind the statute and its legislative history. Id. It is incumbent upon a court to give an ambiguous statute an interpretation which will not render it a nullity. Id. Constructions of an ambiguous statute that would lead to an absurd result are disfavored. State v. Doe, 140 Idaho 271, 275, 92 P.3d 521, 525 (2004). Idaho Code § 20-525A, which is part of the JCA, allows for the expungement of certain records associated with the JCA. 
Specifically, with regard to misdemeanor offenses, the statute provides: “[a]ny person who has been adjudicated in a case under this act and found to be within the purview of the act for having committed misdemeanor” may petition the magistrate court for expungement of those records. I.C. § 20-525A(2). “Upon the entry of the order [granting expungement] the proceedings in the petitioner’s case shall be deemed never to have occurred and the petitioner may properly reply accordingly upon any inquiry in the matter.” I.C. § 20-525A(5). Moreover, once expunged, a record of the proceedings can be revealed only upon an order of a court of competent jurisdiction, and is not subject to inspection except “by the person who is the subject of the records.” I.C. § 20-525A(5). Idaho Code § 20-525A is unambiguous and plainly applies only to matters adjudicated under the JCA. The statute states, to qualify for expungement, the person must have been “adjudicated in a case under this act and found to be within the purview of the act for having committed misdemeanor or status offenses.” I.C. § 20-525A(2). The statute does not provide any court with the power to expunge the records for a conviction unassociated with the JCA. To hold otherwise would be contrary to the unambiguous and plain language of the statute. Doe argues, in the alternative, that the plain language of I.C. § 20-525A does not control because it would constitute a “patently absurd result.” Doe’s contention has some appeal, as a person can have a felony conviction expunged under the JCA while a lesser crime, as in this case, cannot be expunged if not within the purview of the JCA. However, challenging this result is a legislative, not judicial, matter. The Idaho Supreme Court recently addressed the absurd result argument in Verska v. Saint Alphonsus Regional Medical Center, 151 Idaho 889, 896, 265 P.3d 502, 509 (2011). 
There, the Court noted: Thus, we have never revised or voided an unambiguous statute on the ground that it is patently absurd or would produce absurd results when construed as written, and we do not have the authority to do so. “The public policy of legislative enactments cannot be questioned by the courts and avoided simply because the courts might not agree with the public policy so announced.” Indeed, the contention that we could revise an unambiguous statute because we believed it was absurd or would produce absurd results is itself illogical. “A statute is ambiguous where the language is capable of more than one reasonable construction.” An unambiguous statute would have only one reasonable interpretation. An alternative interpretation that is unreasonable would not make it ambiguous. If the only reasonable interpretation were determined to have an absurd result, what other interpretation would be adopted? It would have to be an unreasonable one. Verska, 151 Idaho at 896, 265 P.3d at 509 (internal citations omitted). Doe next claims that if I.C. § 20-525A does not grant the relief requested, then the statute violates the Equal Protection Clauses of both the United States and Idaho Constitutions. Doe specifically argues that I.C. § 20-525A “facially discriminates between juveniles petitioned into juvenile court ... and those who are cited and held to answer in magistrate court.” Whether an act of the legislature is reasonable, arbitrary, or discriminatory is a question of law for determination by this Court. Coghlan v. Beta Theta Pi Fraternity, 133 Idaho 388, 395, 987 P.2d 300, 307 (1999) (quoting Bon Appetit Gourmet Foods, Inc. v. Dep’t of Employment, 117 Idaho 1002, 1003, 793 P.2d 675, 676 (1989)). When this Court performs an equal protection analysis, it identifies the classification under attack, articulates the standard under which the classification will be tested, and then determines whether the standard has been satisfied. 
Coghlan, 133 Idaho at 395, 987 P.2d at 307. Legislative acts are presumed to be constitutional, with any doubt concerning interpretation of a statute being resolved in favor of finding the statute constitutional. Meisner v. Potlatch Corp., 131 Idaho 258, 261, 954 P.2d 676, 679 (1998). Therefore, Doe bears the burden of overcoming the presumption of the validity of I.C. § 20-525A. See Meisner, 131 Idaho at 261, 954 P.2d at 679. Different levels of scrutiny apply to equal protection challenges. When considering the Fourteenth Amendment, strict scrutiny applies to fundamental rights and suspect classes; intermediate scrutiny applies to classifications involving gender and illegitimacy; and rational basis scrutiny applies to all other challenges. Id. at 261-62, 954 P.2d at 679-80. For analyses made under the Idaho Constitution, slightly different levels of scrutiny apply. Strict scrutiny, as under federal law, applies to fundamental rights and suspect classes. Id. at 261, 954 P.2d at 679. Means-focus scrutiny, unlike the federal intermediate scrutiny, is employed “where the discriminatory character of a challenged statutory classification is apparent on its face and where there is also a patent indication of a lack of relationship between the classification and the declared purpose of the statute.” Coghlan, 133 Idaho at 395, 987 P.2d at 307 (quoting Jones v. State Bd. of Med., 97 Idaho 859, 871, 555 P.2d 399, 411 (1976)). Rational basis scrutiny applies to all other challenges. See Coghlan, 133 Idaho at 395, 987 P.2d at 307. Doe does not assert which standard this Court should apply to his equal protection claim. Rather, Doe merely suggests his argument meets even the lowest standard, rational basis. Doe admits his claim does not involve a fundamental right or a suspect class. Therefore, strict scrutiny is not appropriate. 
Furthermore, intermediate-level scrutiny under the Fourteenth Amendment is not appropriate because this case does not involve gender or illegitimacy. See Meisner, 131 Idaho at 261, 954 P.2d at 679. Analyzing Doe’s claim under Article I, Section 2 of the Idaho Constitution, it is clear that means-focus scrutiny is also inapplicable. Any discriminatory character between different juveniles is not apparent on the face of I.C. § 20-525A. Moreover, there is no patent indication of a lack of relationship between the classification and the legislative purpose of I.C. § 20-525A. The statute’s purpose is to “protect the community, hold the juvenile offender accountable for his actions, and assist the juvenile offender in developing skills to become a contributing member of a diverse community.” I.C. § 20-501. Because the statute’s classification directly relates to its declared purpose, means-focus scrutiny does not apply. Therefore, it is appropriate to apply rational basis scrutiny to Doe’s equal protection claim. Under both the United States and Idaho Constitutions, a classification will pass rational basis review if it is rationally related to a legitimate government purpose and “if there is any conceivable state of facts which will support it.” Meisner, 131 Idaho at 262, 954 P.2d at 680 (quoting Bint v. Creative Forest Prods., 108 Idaho 116, 120, 697 P.2d 818, 822 (1985)). Courts applying rational basis review do not judge the wisdom or fairness of the challenged legislation. Coghlan, 133 Idaho at 396, 987 P.2d at 308. Here, the legislature had a rational basis to assign exclusive procedures and expungement benefits to proceedings within the purview of the JCA. The express legislative intent of the JCA is different than that of regular criminal proceedings, regardless of whether those other proceedings concern juveniles or adults. 
It is a logical extended aim of such legislation to provide juvenile offenders an opportunity to expunge their juvenile records after fulfilling the express rehabilitative goals of the JCA and proceeding to become productive members of society. It is likewise logical for the legislature to decline to extend such a unique expungement opportunity to juvenile offenders outside of the purview of the JCA, who are not necessarily subject to the same types of supervision and rehabilitative programming. B. Idaho Court Administrative Rule 32(i) Doe next claims I.C.A.R. 32(i) allows a court to expunge the record of his criminal proceedings, rather than merely sealing the record. The Idaho Supreme Court addressed Rule 32(i) and its uses in State v. Turpen, 147 Idaho 869, 871, 216 P.3d 627, 629 (2009). There, the Court stated: However, requests for expungement are seen with some frequency by the trial and appellate courts of this state. See State ex rel. City of Sandpoint v. Whitt, 146 Idaho 292, 293, 192 P.3d 1116, 1117 (Ct.App.2008) (denial of request for expungement of record of conviction for injury to child not appealed); State v. Parkinson, 144 Idaho 825, 172 P.3d 1100 (2007) (affirming denial of request for expungement of record of dismissed conviction from NCIC database); State v. Knapp, 139 Idaho 381, 79 P.3d 740 (Ct.App.2003), overruled by State v. Kimball, 145 Idaho 542, 181 P.3d 468 (2008) (petitioner sought relief including expungement of the record of his case). As no reported Idaho decision addresses expungement of court records under I.C.A.R. 32(i), and as it is unnecessary to define standards relating to the exercise of the inherent powers of the courts when such standards are already prescribed by court rule, we find it appropriate to give guidance to the courts of this State in dealing with requests for expungement of court records. Idaho Court Administrative Rule 32 governs the records maintained by the judicial department. 
The rule recognizes the public’s “right to examine and copy the judicial department’s declarations of law and public policy and to examine and copy the records of all proceedings open to the public.” I.C.A.R. 32(a). However, I.C.A.R. 32(i) authorizes the trial court to seal or redact court records on a case-by-case basis. The rule requires the custodian judge to hold a hearing and make a factual finding as to whether the individual’s interest in privacy or whether the interest in public disclosure predominates. “If the court redacts or seals records to protect predominating privacy interests, it must fashion the least restrictive exception from disclosure consistent with privacy interests.” Id. Before entering an order redacting or sealing records, the court must make one or more of the following determinations in writing: (1) That the documents or materials contain highly intimate facts or statements, the publication of which would be highly objectionable to a reasonable person, or (2) That the documents or materials contain facts or statements that the court finds might be libelous, or (3) That the documents or materials contain facts or statements, the dissemination or publication of which would reasonably result in economic or financial loss or harm to a person having an interest in the documents or materials, or compromise the security of personnel, records or public property of or used by the judicial department, or (4) That the documents or materials contain facts or statements that might threaten or endanger the life or safety of individuals. In determining whether to grant a request to seal or redact records, trial courts are expected to apply “the traditional legal concepts in the law of invasion of privacy, defamation, and invasion of proprietary business records as well as common sense respect for shielding highly intimate material about persons.” Id. The decisions of the trial courts will be subject to review for abuse of discretion. 
This case will be remanded to permit the magistrate judge to conduct a hearing under I.C.A.R. 32(i) and make the factual determination whether Turpen’s privacy interest or the public interest in disclosure predominates. If the magistrate judge finds that Turpen’s privacy interest predominates, then the court must make written findings and may redact or seal Turpen’s court records to the least restrictive extent necessary to protect Turpen’s privacy interests. Turpen, 147 Idaho at 871-72, 216 P.3d at 629-30. In Turpen, the Supreme Court made clear that its use of the term “expunge” referred to the sealing or sequestration of records. Id. at 870-71, 216 P.3d at 628-29. As noted above, Doe’s motion to seal the records associated with his minor in possession of marijuana and paraphernalia conviction was granted. Doe now seeks not only to have his records sealed, but to also obtain, pursuant to Rule 32(i), the relief which would be available under I.C. § 20-525A, namely, the right to respond to inquiry of the matter as if the case had never occurred, and the more restrictive type of expungement where the records are available only upon order of a court of competent jurisdiction and inspection is limited to the person who is the subject of the record, not interested parties upon motion as provided in Rule 32(i). Once again, Doe is seeking relief that is simply not available to him. Rule 32(i) clearly does not provide a mechanism by which the proceedings to which the records relate are deemed never to have occurred. C. Inherent Power Lastly, Doe claims that “if neither I.C.A.R. 32(i) nor I.C. § 20-525A authorized expungement of this case, [the magistrate court] possessed the inherent authority to do so.” Doe notes that the Idaho Supreme Court has acknowledged the potential application of this inherent authority in Turpen: As no reported Idaho decision addresses expungement of court records under I.C.A.R. 
32(i), and as it is unnecessary to define standards relating to the exercise of the inherent powers of the courts when such standards are already prescribed by court rule, we find it appropriate to give guidance to the courts of this State in dealing with requests for expungement of court records. Turpen, 147 Idaho at 871, 216 P.3d at 629. Doe correctly notes that the Idaho Supreme Court has discussed the inherent authority of the court to expunge a record. However, Doe takes the power of that authority out of context. In Turpen, the Court, beyond the quoted material provided by Doe, also stated as indicated above: As a preliminary matter, we find it appropriate to define the manner in which we use the words “expunge” and “expungement.” Our use of these words is not literal. “Expunge” is defined as follows: “To destroy; blot out; obliterate; erase; efface designedly; strike out wholly. The act of physically destroying information—including criminal records—in files, computers, or other depositories.” BLACK’S LAW DICTIONARY 522 (5th ed. 1979). We do not contemplate the destruction of public records, whether such records are physical or stored electronically; rather, when we refer to expungement, we mean the issuance of a court order requiring physical or electronic sequestration of such records from public access or inspection. Thus, when we refer to “expungement” we do so in the narrower sense of “expungement of record” which is defined as the “[p]rocess by which [a] record of criminal conviction is destroyed or sealed . . . .” Id. Turpen, 147 Idaho at 870-71, 216 P.3d at 628-29. Moreover, the Court chose not to address the inherent ability to “expunge” a criminal record because a court rule already existed which granted the relief sought. Id. The Idaho Supreme Court discussed an inherent power of the court; however, the power contemplated only allowed a court to seal or redact a criminal record. Doe already received this remedy from the magistrate court. 
We are aware of no authority which would extend a court’s inherent powers to provide the relief requested. III. CONCLUSION Idaho Code § 20-525A only provides expungement of the records of proceedings that occurred within the purview of the JCA. Doe’s marijuana possession conviction was prosecuted as a misdemeanor under I.C. § 18-1502C, and thus did not fall within the purview of the JCA. Likewise, I.C.A.R. 32(i) only provides a court the authority to seal or redact the record of criminal proceedings. Idaho Code § 20-525A also does not violate equal protection because the statute is rationally related to a legitimate government purpose. Therefore, the district court’s order, on intermediate appeal, reversing the magistrate’s order granting Doe’s motion to expunge his record is affirmed. Chief Judge GUTIERREZ and Judge LANSING concur. 1. On appeal, Doe is not claiming these convictions should be expunged. Apparently, Doe is not pursuing the expungement of those records because only the possession of marijuana conviction is impeding him from receiving employment as a correctional officer. 2. Prior to this appeal, the magistrate granted Doe’s motion to seal the records in this case as to both the physical file and ISTARS. 3. Idaho Code § 20-525A(1) applies to felony offenses and increases the length of time the person must wait to qualify for expungement. 4. Doe conceded that he merely paid a fine for the complained-of offense, rather than being supervised and receiving rehabilitative programming through the JCA.
Refactor the Okta scanner to produce systems instead of datasets Is your feature request related to a specific problem? Currently the Okta scanner returns datasets Describe the solution you'd like The Okta scanner should output systems, as Okta gives information about the various systems that an organization is using. Describe alternatives you've considered, if any Leaving it as datasets Additional context Okta system scanning is a critical part of the new Config UI flow @earmenda can you note here what is possible to get from the Okta api? #751 will close this, created #752 for the question above.
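Not part of the issue itself, but as a sketch of the requested direction: Okta's Apps API (`GET /api/v1/apps`) returns one record per application the organization uses, and each record could be mapped to a system manifest rather than a dataset. The `System`-style dict shape and field names below are hypothetical illustrations, not the actual fides schema, and `okta_apps_to_systems` is an assumed helper name.

```python
from typing import Any, Dict, List

def okta_apps_to_systems(apps: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Map Okta app records (as returned by GET /api/v1/apps) to
    system-style manifests instead of datasets.

    The output shape here is illustrative only, not the real fides model.
    """
    return [
        {
            "fides_key": app["name"],
            "name": app["label"],
            "system_type": "Service",
            "description": f"Okta application {app['label']} ({app['status']})",
        }
        for app in apps
        if app.get("status") == "ACTIVE"  # skip deactivated apps
    ]

# Minimal sample of an Okta Apps API response (fields abridged).
sample_apps = [
    {"id": "0oa1", "name": "zendesk", "label": "Zendesk", "status": "ACTIVE"},
    {"id": "0oa2", "name": "old_crm", "label": "Old CRM", "status": "INACTIVE"},
]

systems = okta_apps_to_systems(sample_apps)
print([s["fides_key"] for s in systems])  # ['zendesk']
```

In the real scanner the `apps` list would come from an authenticated call to Okta (e.g. the Okta SDK or plain HTTP with an SSWS token); only the pure transform is shown here so the sketch stays runnable.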
Song at Eventide Song at Eventide is a 1934 British musical film directed by Harry Hughes and starring Fay Compton, Lester Matthews and Nancy Burne. The screenplay concerns a top cabaret singer who is blackmailed in a scandal that threatens to ruin her and her family. Partial cast * Fay Compton - Helen d'Alaste * Lester Matthews - Lord Belsize * Nancy Burne - Patricia Belsize * Leslie Perrins - Ricardo * Tom Helmore - Michael Law * Minnie Rayner - Blondie * O. B. Clarence - Registrar * Tully Comber - Jim * Barbara Gott - Anna * Charles Paton - Director
# First version (commented out): uses math.hypot from the standard library.
'''import math
co = float(input('Length of the opposite leg: '))
ca = float(input('Length of the adjacent leg: '))
hipotenusa = math.hypot(co, ca)
print('The length of the hypotenuse is: {:.2f}'.format(hipotenusa))'''

# Manual version: applies the Pythagorean theorem directly.
co = float(input('Length of the opposite leg: '))
ca = float(input('Length of the adjacent leg: '))
hi = (co ** 2 + ca ** 2) ** 0.5
print('The length of the hypotenuse is: {:.2f}'.format(hi))
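As a quick sanity check (not part of the original exercise), the manual Pythagorean formula can be compared against `math.hypot`, which computes the same quantity with better numerical behavior for extreme inputs:

```python
import math

def hypotenuse(opposite: float, adjacent: float) -> float:
    # Manual Pythagorean theorem, as in the exercise above.
    return (opposite ** 2 + adjacent ** 2) ** 0.5

# math.hypot should agree to within floating-point tolerance.
assert math.isclose(hypotenuse(3.0, 4.0), math.hypot(3.0, 4.0))
print('{:.2f}'.format(hypotenuse(3.0, 4.0)))  # 5.00
```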
/**************************************************************************\ Copyright Microsoft Corporation. All Rights Reserved. \**************************************************************************/ namespace Microsoft.Communications.Contacts { using System; using System.Collections.Generic; using System.Globalization; using System.IO; using System.Text; using Standard; internal static class VCard21 { /// <summary>"BASE64"</summary> private const string _EncodingBase64 = "BASE64"; /// <summary>"QUOTED-PRINTABLE"</summary> private const string _EncodingQuotedPrintable = "QUOTED-PRINTABLE"; /// <summary>"CHARSET="</summary> // private const string _DeclareCharset = "CHARSET="; /// <summary>"TYPE="</summary> private const string _DeclareType = "TYPE="; /// <summary>"ENCODING="</summary> private const string _DeclareEncoding = "ENCODING="; /// <summary>"VALUE="</summary> private const string _DeclareValue = "VALUE="; /// <summary>"BEGIN:VCARD"</summary> private const string _BeginVCardProperty = "BEGIN:VCARD"; /// <summary>"END:VCARD"</summary> private const string _EndVCardProperty = "END:VCARD"; /// <summary>"URL"</summary> private const string _UrlType = "URL"; private delegate void _ReadVCardProperty(_Property prop, Contact contact); private class _Property { public string Name; public readonly List<string> Types; public string ValueString; public byte[] ValueBinary; public _Property() { Types = new List<string>(); } // Append extra labels into the returned array. // Used for Photos where the labels are implied by the vcard type. public string[] GetLabels(string append1, string append2) { var labelList = new List<string>(); // Add the optional added strings. 
if (!string.IsNullOrEmpty(append1)) { labelList.Add(append1); } if (!string.IsNullOrEmpty(append2)) { labelList.Add(append2); } foreach (string label in Types) { int index = _labelMap.IndexOfValue(label); if (-1 != index) { labelList.Add(_labelMap.Keys[index]); } } if (labelList.Count == 0) { return null; } return labelList.ToArray(); } public string[] GetLabels() { return GetLabels(null, null); } } #region Static Data // Mapping of Contact labels to VCard equivalent types. private static readonly SortedList<string, string> _labelMap = new SortedList<string, string>(StringComparer.OrdinalIgnoreCase) { {PropertyLabels.Business, "WORK"}, {PropertyLabels.Personal, "HOME"}, {PropertyLabels.Preferred, "PREF"}, {AddressLabels.Domestic, "DOM"}, {AddressLabels.International, "INTL"}, {AddressLabels.Parcel, "PARCEL"}, {AddressLabels.Postal, "POSTAL"}, // No equivalent label for "MSG" {PhoneLabels.Bbs, "BBS"}, {PhoneLabels.Car, "CAR"}, {PhoneLabels.Cellular, "CELL"}, {PhoneLabels.Fax, "FAX"}, {PhoneLabels.Voice, "VOICE"}, {PhoneLabels.Video, "VIDEO"}, {PhoneLabels.Isdn, "ISDN"}, {PhoneLabels.Mobile, "CELL"}, {PhoneLabels.Modem, "MODEM"}, {PhoneLabels.Pager, "PAGER"}, // Labels that I want to include, but aren't strictly part of the VCARD format. // Readers shouldn't choke on these, but other than ours they probably won't // correctly pick up the values. 
{UrlLabels.Rss, "RSS"} }; private static readonly SortedList<string, _ReadVCardProperty> _writeVCardPropertiesMap = new SortedList<string, _ReadVCardProperty>(StringComparer.OrdinalIgnoreCase) { {"ADR", _ReadAddresses}, {"BDAY", _ReadBirthday}, {"COMMENT", _ReadNotes}, {"EMAIL", _ReadEmailAddress}, {"FN", _ReadFormattedName}, {"LABEL", _ReadLabel}, {"LOGO", _ReadLogo}, {"MAILER", _ReadMailer}, {"N", _ReadName}, {"PHOTO", _ReadPhoto}, {"ORG", _ReadOrganization}, {"ROLE", _ReadRole}, {"SOUND", _ReadPhonetic}, {"TEL", _ReadPhoneNumbers}, {"TITLE", _ReadTitle}, {"UID", _ReadUniqueIdentifier}, {"URL", _ReadUrl} }; private const string Crlf = "\r\n"; #endregion #region Encoding/Decoding Functions private static string[] _TokenizeEscapedMultipropString(string multiprop) { Assert.IsNotNull(multiprop); var list = new List<string>(); string[] split = multiprop.Split(';'); Assert.BoundedInteger(1, split.Length, int.MaxValue); list.Add(split[0]); for (int i = 1; i < split.Length; ++i) { // Check for escaped ';'s if (split[i - 1].EndsWith("\\", StringComparison.Ordinal)) { list[list.Count-1] += ";" + split[i]; } else { list.Add(split[i]); } } return list.ToArray(); } // In the presence of badly formed data this just returns null. private static string _DecodeQuotedPrintable(string encodedProperty) { Assert.IsNotNull(encodedProperty); var sb = new StringBuilder(); for (int i = 0; i < encodedProperty.Length; ++i) { // look for the escape char. if ('=' == encodedProperty[i]) { if (i + 2 >= encodedProperty.Length) { // badly formed data. Not enough space for an escape sequence. 
return null; } int iVal; if (int.TryParse(encodedProperty.Substring(i + 1, 2), NumberStyles.AllowHexSpecifier, null, out iVal)) { sb.Append((char)iVal); i += 2; } } else { sb.Append(encodedProperty[i]); } } return sb.ToString(); } private static string _EncodeQuotedPrintable(byte[] property) { Verify.IsNotNull(property, "property"); Verify.AreNotEqual(0, property.Length, "property", "Array can't be of length 0"); const int MaxLine = 76; // Only directly print characters between these two values (other than '=') const char LowPrintableChar = ' '; const char HighPrintableChar = '~'; var qpString = new StringBuilder(); // Current length of the line (including expanded characters) int lineLength = 0; for (int ix = 0; ix < property.Length; ++ix) { var ch = (char)property[ix]; if (ch == '\t' || (ch >= LowPrintableChar && ch <= HighPrintableChar && ch != '=')) { if (lineLength >= MaxLine) { // Wrap! Don't have space for another character. qpString.Append("=" + Crlf); lineLength = 0; } qpString.Append(ch); ++lineLength; } else { if (lineLength >= MaxLine - 3) { // Wrap! Don't have space for another three (=XX) characters on this line. qpString.Append("=" + Crlf); lineLength = 0; } qpString.Append(string.Format(CultureInfo.InvariantCulture, "={0:X2}", (uint)ch)); lineLength += 3; } } // If a '\t' or ' ' appear at the end of a line they need to be =XX escaped. int lastIndex = qpString.Length -1; char lastChar = qpString[lastIndex]; if ('\t' == lastChar || ' ' == lastChar) { qpString.Remove(lastIndex, 1); // Need three characters, so lineLength less the one just removed. if (lineLength >= MaxLine - 2) { // Wrap! Don't have space for another three (=XX) characters on this line. 
qpString.Append("=" + Crlf); } qpString.Append(string.Format(CultureInfo.InvariantCulture, "={0:X2}", (uint)lastChar)); } return qpString.ToString(); } private static bool _ShouldEscapeQuotedPrintable(string property) { return property.Contains("\n") || property.Contains("\r"); } #endregion #region Read VCard Property Functions private static void _ReadAddresses(_Property addressProp, Contact contact) { Assert.IsNotNull(addressProp); Assert.IsNotNull(contact); // Always create a new address node for any ADR property. var addr = new PhysicalAddressBuilder(); string[] elements = _TokenizeEscapedMultipropString(addressProp.ValueString); switch (elements.Length) { default: // too many. Ignore extras case 7: addr.Country = elements[6]; goto case 6; case 6: addr.ZipCode = elements[5]; goto case 5; case 5: addr.State = elements[4]; goto case 4; case 4: addr.City = elements[3]; goto case 3; case 3: addr.Street = elements[2]; goto case 2; case 2: addr.ExtendedAddress = elements[1]; goto case 1; case 1: addr.POBox = elements[0]; break; case 0: Assert.Fail("Tokenize shouldn't have yielded an empty array."); break; } contact.Addresses.Add(addr, addressProp.GetLabels()); } private static void _ReadBirthday(_Property bdayProp, Contact contact) { Assert.IsNotNull(bdayProp); Assert.IsNotNull(contact); DateTime bday; if (DateTime.TryParse(bdayProp.ValueString, out bday)) { contact.Dates[DateLabels.Birthday] = bday; } } private static void _ReadEmailAddress(_Property emailProp, Contact contact) { Assert.IsNotNull(emailProp); Assert.IsNotNull(contact); var email = new EmailAddressBuilder { Address = emailProp.ValueString }; // Try to determine a type from this if (emailProp.Types.Contains("INTERNET") || emailProp.Types.Contains("TYPE=INTERNET")) { email.AddressType = "SMTP"; } else { // Try to coerce a type. Otherwise leave it blank. Smtp is already implicitly default. 
foreach(string type in emailProp.Types) { if (type.StartsWith(_DeclareType, StringComparison.OrdinalIgnoreCase)) { email.AddressType = type.Substring(_DeclareType.Length); break; } } } contact.EmailAddresses.Add(email, emailProp.GetLabels()); } private static void _ReadLabel(_Property labelProp, Contact contact) { // This is directly related to the physical addresses. // Without supporting property grouping there's no good way to tell whether // this label corresponds to an existing address node (or vice versa). // I'm okay with skipping this property on read since the ADR should be present. // Better to not get this wrong in the 5% (probably more) case. } private static void _ReadFormattedName(_Property nameProp, Contact contact) { Assert.IsNotNull(nameProp); Assert.IsNotNull(contact); // Don't expect multiple names so just coalesce FN, N, and SOUND to the default Name. // Explicitly ignoring any labels set on the name. contact.Names.Default = new NameBuilder(contact.Names.Default) { FormattedName = nameProp.ValueString }; } private static void _ReadLogo(_Property logoProp, Contact contact) { Assert.IsNotNull(logoProp); Assert.IsNotNull(contact); Photo photo = _ReadPhotoProperty(logoProp); // Add any other labels on the Photo. Business and Logo are implied by the LOGO type. contact.Photos.Add(photo, logoProp.GetLabels(PhotoLabels.Logo, PropertyLabels.Business)); } private static void _ReadMailer(_Property mailerProp, Contact contact) { Assert.IsNotNull(mailerProp); Assert.IsNotNull(contact); contact.Mailer = mailerProp.ValueString; } private static void _ReadName(_Property nameProp, Contact contact) { Assert.IsNotNull(nameProp); Assert.IsNotNull(contact); var nb = new NameBuilder(contact.Names.Default); string[] names = _TokenizeEscapedMultipropString(nameProp.ValueString); switch (names.Length) { default: // too many. 
// Ignore extras
case 5: nb.Suffix = names[4]; goto case 4; case 4: nb.Prefix = names[3]; goto case 3; case 3: nb.MiddleName = names[2]; goto case 2; case 2: nb.GivenName = names[1]; goto case 1; case 1: nb.FamilyName = names[0]; break; case 0: Assert.Fail("Tokenize shouldn't have yielded an empty array."); break; } contact.Names.Default = nb; } private static void _ReadNotes(_Property notesProp, Contact contact) { Assert.IsNotNull(notesProp); Assert.IsNotNull(contact); contact.Notes = notesProp.ValueString; } private static void _ReadPhoneNumbers(_Property phoneProp, Contact contact) { Assert.IsNotNull(phoneProp); Assert.IsNotNull(contact); contact.PhoneNumbers.Add(new PhoneNumber(phoneProp.ValueString), phoneProp.GetLabels()); } private static void _ReadPhonetic(_Property nameProp, Contact contact) { Assert.IsNotNull(nameProp); Assert.IsNotNull(contact); // Don't expect multiple names so just coalesce FN, N, and SOUND to the default Name. // Explicitly ignoring any labels set on the name, except that phonetic must be a URL. if (nameProp.Types.Contains(_DeclareValue + _UrlType)) { contact.Names.Default = new NameBuilder(contact.Names.Default) { Phonetic = nameProp.ValueString }; } } private static void _ReadPhoto(_Property photoProp, Contact contact) { Assert.IsNotNull(photoProp); Assert.IsNotNull(contact); Photo photo = _ReadPhotoProperty(photoProp); // Add any other labels on the Photo. UserTile is implied by the PHOTO type. contact.Photos.Add(photo, photoProp.GetLabels(PhotoLabels.UserTile, null)); } private static Photo _ReadPhotoProperty(_Property photoProp) { var pb = new PhotoBuilder(); // support either URLs or inline streams. if (photoProp.Types.Contains(_DeclareValue + _UrlType)) { Uri uri; if (Uri.TryCreate(photoProp.ValueString, UriKind.RelativeOrAbsolute, out uri)) { pb.Url = uri; } } else if (null != photoProp.ValueBinary) { pb.Value = new MemoryStream(photoProp.ValueBinary); // look for a type to put into it also. 
pb.ValueType = "image"; foreach (string token in photoProp.Types) { if (token.StartsWith(_DeclareType, StringComparison.OrdinalIgnoreCase)) { pb.ValueType = "image/" + token.Substring(_DeclareType.Length); break; } } } return pb; } private static void _ReadOrganization(_Property orgProp, Contact contact) { // In VCF there are three related properties: ORG, ROLE, and TITLE. // There reasonably could be multiple organizations on a vcard but short // of property groupings there's no way to distinguish, and even then // no guarantee that the property groupings will be present in the case // of multiple sets of these properties. // So instead, treat this like name and assume only the default, but rather // than use .Default, use the PropertyLabels.Business indexer. Assert.IsNotNull(orgProp); Assert.IsNotNull(contact); var position = new PositionBuilder(contact.Positions[PropertyLabels.Business]); string[] elements = _TokenizeEscapedMultipropString(orgProp.ValueString); Assert.BoundedInteger(1, elements.Length, int.MaxValue); // ORG is weird in that it doesn't actually say what the tokens represent. // The first one can be safely assumed to be Company, but anything else it's probably // best to just put back the ';'s and stick the string somewhere visible. position.Company = elements[0]; if (elements.Length > 1) { position.Office = string.Join(";", elements, 1, elements.Length -1); } contact.Positions[PropertyLabels.Business] = position; } private static void _ReadRole(_Property roleProp, Contact contact) { // In VCF there are three related properties: ORG, ROLE, and TITLE. // There reasonably could be multiple organizations on a vcard but short // of property groupings there's no way to distinguish, and even then // no guarantee that the property groupings will be present in the case // of multiple sets of these properties. // So instead, treat this like name and assume only the default, but rather // than use .Default, use the PropertyLabels.Business indexer. 
Assert.IsNotNull(roleProp); Assert.IsNotNull(contact); var position = new PositionBuilder(contact.Positions[PropertyLabels.Business]) { Role = roleProp.ValueString }; contact.Positions[PropertyLabels.Business] = position; } private static void _ReadTitle(_Property titleProp, Contact contact) { // In VCF there are three related properties: ORG, ROLE, and TITLE. // There reasonably could be multiple organizations on a vcard but short // of property groupings there's no way to distinguish, and even then // no guarantee that the property groupings will be present in the case // of multiple sets of these properties. // So instead, treat this like name and assume only the default, but rather // than use .Default, use the PropertyLabels.Business indexer. Assert.IsNotNull(titleProp); Assert.IsNotNull(contact); var position = new PositionBuilder(contact.Positions[PropertyLabels.Business]) { JobTitle = titleProp.ValueString }; contact.Positions[PropertyLabels.Business] = position; } private static void _ReadUniqueIdentifier(_Property nameProp, Contact contact) { // Someone (maybe us) bothered to put a UID on the contact, so if it matches // our ContactId's GUID format then go ahead and use it. Guid id; if (Utility.GuidTryParse(nameProp.ValueString, out id)) { contact.ContactIds.Default = id; } } private static void _ReadUrl(_Property urlProp, Contact contact) { Assert.IsNotNull(urlProp); Assert.IsNotNull(contact); Uri uri; if (Uri.TryCreate(urlProp.ValueString, UriKind.RelativeOrAbsolute, out uri)) { contact.Urls.Add(uri, urlProp.GetLabels()); } } #endregion #region Write VCard Property Functions private static void _WriteAddresses(Contact contact, TextWriter sw) { for (int i = 0; i < contact.Addresses.Count; ++i) { PhysicalAddress address = contact.Addresses[i]; // ADR:Address;Structured // Escape ';' in multiprops with a '\' // Note that WAB doesn't actually escape the ; properly, so it may mess up on this read. 
var adrPropBuilder = new StringBuilder(); adrPropBuilder .Append(address.POBox.Replace(";", "\\;")).Append(";") .Append(address.ExtendedAddress.Replace(";", "\\;")).Append(";") .Append(address.Street.Replace(";", "\\;")).Append(";") .Append(address.City.Replace(";", "\\;")).Append(";") .Append(address.State.Replace(";", "\\;")).Append(";") .Append(address.ZipCode.Replace(";", "\\;")).Append(";") .Append(address.Country.Replace(";", "\\;")); string adrProp = adrPropBuilder.ToString(); // If there aren't any properties then don't write this. if (adrProp.Replace(";", null).Length > 0) { _WriteLabeledProperty(sw, "ADR", contact.Addresses.GetLabelsAt(i), null, adrProp); } if (!string.IsNullOrEmpty(address.AddressLabel)) { _WriteLabeledProperty(sw, "LABEL", contact.Addresses.GetLabelsAt(i), null, address.AddressLabel); } } } private static void _WriteBirthday(Contact contact, TextWriter sw) { DateTime? bday = contact.Dates[DateLabels.Birthday]; if (null != bday) { // ISO 8601 format _WriteStringProperty(sw, "BDAY", bday.Value.ToString("s", CultureInfo.InvariantCulture)); } } private static void _WriteEmailAddresses(Contact contact, TextWriter sw) { // EMAIL:E-mail addresses for (int i = 0; i < contact.EmailAddresses.Count; ++i) { EmailAddress email = contact.EmailAddresses[i]; if (!string.IsNullOrEmpty(email.Address)) { string addInternet = null; if (string.IsNullOrEmpty(email.AddressType) || string.Equals("SMTP", email.AddressType, StringComparison.OrdinalIgnoreCase)) { addInternet = "INTERNET"; } _WriteLabeledProperty(sw, "EMAIL", contact.EmailAddresses.GetLabelsAt(i), addInternet, email.Address); } } } private static void _WriteLabeledProperty(TextWriter sw, string propertyPrefix, IEnumerable<string> labels, string additionalLabel, string value) { Assert.IsFalse(propertyPrefix.EndsWith(":", StringComparison.Ordinal)); var propertyBuilder = new StringBuilder(); propertyBuilder.Append(propertyPrefix); if (!string.IsNullOrEmpty(additionalLabel)) { // This shouldn't be 
// added for any case of this.
Assert.IsFalse(additionalLabel.Contains(";")); propertyBuilder.Append(";").Append(additionalLabel); } foreach (string label in labels) { Assert.IsNeitherNullNorEmpty(label); propertyBuilder.Append(";"); // If the label's not mapped then don't write it. // Expect unmapped labels to contain ":", which are completely illegal string mapped; if (_labelMap.TryGetValue(label, out mapped)) { propertyBuilder.Append(mapped); } } _WriteStringProperty(sw, propertyBuilder.ToString(), value); } private static void _WriteMailer(Contact contact, TextWriter sw) { // MAILER:mailer string mailer = contact.Mailer; if (!string.IsNullOrEmpty(mailer)) { _WriteStringProperty(sw, "MAILER", mailer); } } private static void _WriteName(Contact contact, TextWriter sw) { // The VCF spec implies that name isn't multi-valued, so don't mess with enumerating them. // Just use the default. // FN:Formatted Name Name name = contact.Names.Default; if (!string.IsNullOrEmpty(name.FormattedName)) { _WriteStringProperty(sw, "FN", name.FormattedName); } // SOUND:Phonetic of FN. if (!string.IsNullOrEmpty(name.Phonetic)) { _WriteStringProperty(sw, "SOUND", name.Phonetic); } // N:Name;Structured // Escape ';' in multiprops with a '\' // Note that WAB doesn't actually escape the ; properly, so it may mess up on this read. var nPropBuilder = new StringBuilder(); nPropBuilder .Append(name.FamilyName.Replace(";", "\\;")).Append(";") .Append(name.GivenName.Replace(";", "\\;")).Append(";") .Append(name.MiddleName.Replace(";", "\\;")).Append(";") .Append(name.Prefix.Replace(";", "\\;")).Append(";") .Append(name.Suffix.Replace(";", "\\;")); string nProp = nPropBuilder.ToString(); // If there aren't any properties then don't write this. 
if (nProp.Replace(";", null).Length > 0) { _WriteStringProperty(sw, "N", nProp); } } private static void _WriteNotes(Contact contact, TextWriter sw) { // NOTE:Notes string note = contact.Notes; if (!string.IsNullOrEmpty(note)) { _WriteStringProperty(sw, "NOTE", note); } } private static void _WriteOrganization(Contact contact, TextWriter sw) { // Organizational properties Position position = contact.Positions[PropertyLabels.Business]; // Title: if (!string.IsNullOrEmpty(position.JobTitle)) { _WriteStringProperty(sw, "TITLE", position.JobTitle); } // ROLE:Business Category if (!string.IsNullOrEmpty(position.Role)) { _WriteStringProperty(sw, "ROLE", position.Role); } // LOGO: Company logo // Contact schema doesn't directly associate the logo with the business, but it's also // only kind of implied by vCards that they are also. Can use the [Business,Logo] labels here. Photo logo = contact.Photos[PropertyLabels.Business, PhotoLabels.Logo]; if (logo != default(Photo)) { _WritePhotoProperty(sw, "LOGO", logo); } // AGENT:Embedded vCard (Unsupported) // Contacts can contain links to other contacts, but not going to expose this through vCard export. // ORG: Structured organization description var oPropBuilder = new StringBuilder(); // Only the first field is actually defined (Name), the others are just kindof open-ended, // so don't write unnecessary properties. Unfortunately on read we also won't know what // these properties are actually supposed to represent. oPropBuilder.Append(position.Company.Replace(";", "\\;")); foreach (string orgInfo in new[] { position.Organization, position.Profession, position.Department, position.Office }) { if (!string.IsNullOrEmpty(orgInfo)) { oPropBuilder.Append(";") .Append(orgInfo.Replace(";", "\\;")); } } string oProp = oPropBuilder.ToString(); // If there aren't any properties then don't write this. 
if (oProp.Replace(";", null).Length > 0) { _WriteStringProperty(sw, "ORG", oProp); } } private static void _WritePhoneNumbers(Contact contact, TextWriter sw) { // TEL:Telephone numbers for (int i = 0; i < contact.PhoneNumbers.Count; ++i) { PhoneNumber number = contact.PhoneNumbers[i]; if (!string.IsNullOrEmpty(number.Number)) { _WriteLabeledProperty(sw, "TEL", contact.PhoneNumbers.GetLabelsAt(i), null, number.Number); } } } private static void _WritePhoto(Contact contact, TextWriter sw) { Photo userTile = contact.Photos[PhotoLabels.UserTile]; if (userTile != default(Photo)) { _WritePhotoProperty(sw, "PHOTO", userTile); } } private static void _WritePhotoProperty(TextWriter sw, string propertyPrefix, Photo photo) { // 76 is more correct. I'm trying to emulate Outlook 2007's formatting here. const int MaxLine = 72; // When writing the photo if it's inline it can be base64 encoded, or VALUE=URL:<url> if (null != photo.Value && 0 != photo.Value.Length) { var bytes = new byte[photo.Value.Length]; photo.Value.Position = 0; photo.Value.Read(bytes, 0, bytes.Length); // Base64FormattingOptions doesn't give me the option of shifting the newlines, // so need to do it manually. 
        string encoded = Convert.ToBase64String(bytes, Base64FormattingOptions.None);

        string photoType = null;
        if (!string.IsNullOrEmpty(photo.ValueType))
        {
            // Expecting mime-types, so just give the type as the value after the '/'
            photoType = ";" + _DeclareType + photo.ValueType.Substring(Math.Max(0, photo.ValueType.LastIndexOf('/')));
        }

        sw.Write(propertyPrefix + photoType + ";" + _DeclareEncoding + _EncodingBase64 + ":");
        for (int i = 0; i < encoded.Length; i += MaxLine)
        {
            sw.Write(Crlf + " ");
            sw.Write(encoded.Substring(i, Math.Min(encoded.Length - i, MaxLine)));
        }
        sw.Write(Crlf);
        sw.Write(Crlf);
    }
    else if (null != photo.Url && !string.IsNullOrEmpty(photo.Url.ToString()))
    {
        _WriteStringProperty(sw, propertyPrefix + ";" + _DeclareValue + _UrlType, photo.Url.ToString());
    }
}

private static void _WriteStringProperty(TextWriter sw, string propertyPrefix, string value)
{
    Assert.IsFalse(propertyPrefix.EndsWith(":", StringComparison.Ordinal));

    sw.Write(propertyPrefix);
    if (_ShouldEscapeQuotedPrintable(value))
    {
        sw.Write(";" + _DeclareEncoding + _EncodingQuotedPrintable + ":");
        sw.Write(_EncodeQuotedPrintable(Encoding.ASCII.GetBytes(value)));
    }
    else
    {
        sw.Write(":");
        sw.Write(value);
    }
    sw.Write(Crlf);
}

private static void _WriteUniqueIdentifier(Contact contact, TextWriter sw)
{
    // UID: Universal identifier
    Assert.IsTrue(contact.ContactIds.Default.HasValue);
    _WriteStringProperty(sw, "UID", contact.ContactIds.Default.Value.ToString());
}

private static void _WriteUrls(Contact contact, TextWriter sw)
{
    // URL: Webpages
    for (int i = 0; i < contact.Urls.Count; ++i)
    {
        Uri uri = contact.Urls[i];
        if (null != uri && !string.IsNullOrEmpty(uri.ToString()))
        {
            _WriteLabeledProperty(sw, "URL", contact.Urls.GetLabelsAt(i), null, uri.ToString());
        }
    }
}

#endregion

/// <summary>
/// Writes a single contact to a file as a vCard.
/// </summary>
/// <param name="contact">The contact to encode.</param>
/// <param name="filePath">The path of the .vcf file to create.</param>
/// <remarks>This implementation is based on the VersitCard 2.1 specification.</remarks>
public static void EncodeToVCard(Contact contact, string filePath)
{
    Verify.IsNotNull(contact, "contact");

    using (var sw = new StreamWriter(filePath, false, Encoding.ASCII))
    {
        _EncodeToVCardStream(contact, sw);
    }
}

// vCards can be chained together in a single file.
// This is useful as a transport mechanism.
public static void EncodeCollectionToVCard(IList<Contact> contacts, string filePath)
{
    Verify.IsNotNull(contacts, "contacts");

    using (var sw = new StreamWriter(filePath, false, Encoding.ASCII))
    {
        foreach (Contact c in contacts)
        {
            if (null != c)
            {
                _EncodeToVCardStream(c, sw);
                // A linebreak visually separates each vcard within the file.
                sw.WriteLine();
            }
        }
    }
}

// This function writes vCard properties as they're described in the vCard 2.1 specification.
// For several of the properties in the spec it lists valid type tags (Property Parameters)
// that can be added, the equivalent of the Contact Schema's labels. Not all properties
// have these types, even though most of them really should, e.g. e-mails and urls.
// It also supports groupings of properties, but it doesn't really say how they're supposed
// to be consumed. This doesn't exactly map to a concept in Contacts other than to
// ensure that the same node contains data disparate in the vcard, e.g. Org-Role-Title.
// That there's a grouping implies that it's OK with multiple values for any property, but
// I really don't think most applications are expecting to consume multiple name properties.
// The spec doesn't say that the property parameters are strictly limited to the enumerated
// properties. In fact, WAB writes HOME and WORK on the URLs. It also doesn't say whether
// the enumerated properties are the only ones that can be added. For properties where it's
// reasonable for VCFs to have multiple values I add the additional mapped Contact labels.
// For types, such as EmailType, I'm just going to embed the string as the label. The types
// of e-mail addresses in 1996 don't make sense in 2007, and any updated list isn't going
// to make sense in 2018, so it seems silly to try and guess (GMAIL as PRODIGY, anyone?)
//
// 2.1 is much more ubiquitous than 3.0, and 3.0 has more ambiguities regarding a lot of this
// than 2.1 does. Neither do a great job of addressing globalization issues. Since this
// is just a transport mechanism (vcard itself is just designed to be embedded in mime) I'm
// sticking with 2.1.
private static void _EncodeToVCardStream(Contact contact, StreamWriter sw)
{
    Verify.IsNotNull(contact, "contact");
    Assert.IsNotNull(sw);

    // VCard properties that are not written:
    // * TZ: TimeZone
    // * GEO: latitude/longitude coordinates
    // * AGENT: nor any other kind of embedded reference to other contacts
    // * CERT: Certificates.

    _WriteStringProperty(sw, "BEGIN", "VCARD");
    _WriteStringProperty(sw, "VERSION", "2.1");
    _WriteName(contact, sw);
    _WritePhoto(contact, sw);
    _WriteBirthday(contact, sw);
    _WriteAddresses(contact, sw);
    _WritePhoneNumbers(contact, sw);
    _WriteEmailAddresses(contact, sw);
    _WriteMailer(contact, sw);
    _WriteOrganization(contact, sw);
    _WriteNotes(contact, sw);
    _WriteUrls(contact, sw);
    _WriteUniqueIdentifier(contact, sw);
    _WriteStringProperty(sw, "REV", DateTime.UtcNow.ToString("s", CultureInfo.InvariantCulture));
    _WriteStringProperty(sw, "END", "VCARD");

    sw.Flush();
}

private static string _ReadVCardItem(TextReader tr)
{
    // BUGBUG:
    // VCard 2.1 Spec 2.1.3:
    // "Long lines of text can be split into a multiple-line
    // representation using the RFC 822 “folding” technique.
    // That is, wherever there may be linear white space
    // (NOT simply LWSP-chars), a CRLF immediately followed by
    // at least one LWSP-char may instead be inserted."
    // This is one of the parts of the VCard spec that hasn't aged well.
    // Support for this requires peeking ahead in the stream to see if
    // the next line follows the pattern. This isn't reasonable for
    // most stream implementations. It also only helps when the data is
    // split with white space. Most vCard readers I've seen also
    // ignore that part of the spec so I'm less concerned about it than
    // I am with other issues (e.g. CHARSET support).

    // Quoted-Printable encoded properties carry based on a trailing '='.
    // Base64 encoded properties carry until an empty line is read.
    bool isQuotedPrintable;
    bool isBase64;
    string line;

    do
    {
        line = tr.ReadLine();
        if (null == line)
        {
            return null;
        }

        // Every property in a vCard should have a ':'...
        // Is it reasonable to fail here if -1?
        // Lines between vCard objects might look like this.
        // At the very least we're not going to parse this for multiple lines.
        int delimiterIndex = Math.Max(line.IndexOf(':'), 0);
        string propertyDeclaration = line.Substring(0, delimiterIndex).ToUpperInvariant();

        // Do this calculation inside the loop.
        // If there's both QUOTED-PRINTABLE and BASE64 I'm not going to try and guess.
        isQuotedPrintable = propertyDeclaration.Contains(_EncodingQuotedPrintable);
        isBase64 = propertyDeclaration.Contains(_EncodingBase64);

        // Property names must come at the beginning of the line.
        // Per above comment a valid vcard may have a line start with a space,
        // so we'll just ignore it.
    }
    while (string.IsNullOrEmpty(line)
        || line.StartsWith(" ", StringComparison.Ordinal)
        || (isQuotedPrintable && isBase64));

    var sbProperty = new StringBuilder();

    if (isQuotedPrintable)
    {
        bool carry;
        do
        {
            carry = line.EndsWith("=", StringComparison.Ordinal);
            sbProperty.Append(line, 0, line.Length - (carry ? 1 : 0));
            // If there was a soft line break read the next line also.
        }
        while (carry && null != (line = tr.ReadLine()));
    }
    else if (isBase64)
    {
        do
        {
            sbProperty.Append(line);
            line = tr.ReadLine();
            // Keep reading until there's a blank line.
        }
        while (!string.IsNullOrEmpty(line));
    }
    else
    {
        // Not a multi-line property.
        sbProperty.Append(line);
    }

    return sbProperty.ToString();
}

public static ICollection<Contact> ReadVCard(TextReader tr)
{
    var retContacts = new List<Contact>();
    try
    {
        var vcardProperties = new Stack<List<_Property>>();
        string line;
        while (null != (line = _ReadVCardItem(tr)))
        {
            if (_BeginVCardProperty.Equals(line, StringComparison.OrdinalIgnoreCase))
            {
                vcardProperties.Push(new List<_Property>());
            }
            // If we're not currently reading a vcard then we don't care.
            else if (vcardProperties.Count > 0)
            {
                if (_EndVCardProperty.Equals(line, StringComparison.OrdinalIgnoreCase))
                {
                    List<_Property> vcard = vcardProperties.Pop();
                    retContacts.Add(_ParseVCard(vcard));
                }
                else
                {
                    _Property prop;
                    if (_TryParseVCardProperty(line, out prop))
                    {
                        vcardProperties.Peek().Add(prop);
                    }
                }
            }
        }
    }
    catch
    {
        // If there's an Exception then dispose of the pending contacts.
        foreach (Contact c in retContacts)
        {
            c.Dispose();
        }
        throw;
    }
    return retContacts;
}

private static bool _TryParseVCardProperty(string line, out _Property prop)
{
    Assert.IsNotNull(line);

    prop = null;
    string valueString;
    byte[] valueBinary = null;

    int colonIndex = line.IndexOf(':');
    // If this doesn't contain a colon it's not a property.
    if (colonIndex == -1)
    {
        return false;
    }

    prop = new _Property();
    // Note some properties, such as AGENT, may be empty.
    // AGENT is actually a weird case, because its property is the embedded vcard object
    // that follows on the next line.
    valueString = line.Substring(colonIndex + 1);

    string[] propTags = _TokenizeEscapedMultipropString(line.Substring(0, colonIndex));
    Assert.BoundedInteger(1, propTags.Length, int.MaxValue);

    // Ignoring group tags for now.
    // Don't know of any clients that write them with an expectation of their consumption.
    int dotIndex = propTags[0].IndexOf('.');
    if (-1 != dotIndex)
    {
        // Mildly concerned about property strings that look like "GROUP.;TYPE=..."
        // which isn't a property... Shouldn't need to special case it though.
        // Strip the group prefix, including the '.'.
        propTags[0] = propTags[0].Substring(dotIndex + 1);
    }

    prop.Name = propTags[0];

    for (int i = 1; i < propTags.Length; ++i)
    {
        // Look for encodings before putting it into the _Property.
        // Ignoring CHARSETs altogether here.
        if (propTags[i].StartsWith(_DeclareEncoding, StringComparison.OrdinalIgnoreCase))
        {
            propTags[i] = propTags[i].Substring(_DeclareEncoding.Length);
        }

        // These strings don't strictly require the ENCODING= prefix.
        // They're unambiguous even without it, so look for them directly as well.
        if (propTags[i].Equals(_EncodingBase64))
        {
            valueBinary = Convert.FromBase64String(valueString);
            valueString = null;
            // Skip assigning this as a type.
            continue;
        }
        if (propTags[i].Equals(_EncodingQuotedPrintable))
        {
            valueString = _DecodeQuotedPrintable(valueString);
            // Skip assigning this as a type.
            continue;
        }

        prop.Types.Add(propTags[i]);
    }

    prop.ValueString = valueString;
    prop.ValueBinary = valueBinary;

    return true;
}

private static Contact _ParseVCard(List<_Property> vcard)
{
    Assert.IsNotNull(vcard);

    var contact = new Contact();
    foreach (_Property prop in vcard)
    {
        _ReadVCardProperty mapFunc;
        if (_writeVCardPropertiesMap.TryGetValue(prop.Name, out mapFunc))
        {
            mapFunc(prop, contact);
        }
    }
    return contact;
}
}
}
import chai from 'chai';
import * as ParamValidation from './param-validation';

/* eslint-disable prefer-arrow-callback */
/* eslint-disable func-names */

const {expect} = chai;
const addEventSchema = ParamValidation.addEvent.body;
const updateEventSchema = ParamValidation.updateEvent.body;

describe('param-validation tests', function () {
  describe('# Add - Valid', function () {
    it('should not return an error', function (done) {
      const { error } = addEventSchema.validate({
        updateDate: '2018-04-12',
        titleText: 'title',
        mainText: 'test',
        redirectionUrl: 'https://github.com/dfshannon/'
      });
      expect(error).to.equal(undefined);
      done();
    });
  });

  describe('# Add - invalid updateDate', function () {
    it('should return error', function (done) {
      const { error } = addEventSchema.validate({
        updateDate: '12-04-2018',
        titleText: 'title',
        mainText: 'test',
        redirectionUrl: 'https://github.com/dfshannon/'
      });
      expect(error).to.not.equal(undefined);
      expect(error.details[0].message).to.equal('"updateDate" with value "12-04-2018" fails to match the required pattern: /^\\d{4}-\\d{2}-\\d{2}$/');
      done();
    });
  });

  describe('# Add - missing updateDate', function () {
    it('should return error', function (done) {
      const { error } = addEventSchema.validate({
        titleText: 'title',
        mainText: 'test',
        redirectionUrl: 'https://github.com/dfshannon/'
      });
      expect(error).to.not.equal(undefined);
      expect(error.details[0].message).to.equal('"updateDate" is required');
      done();
    });
  });

  describe('# Add - missing titleText', function () {
    it('should return error', function (done) {
      const { error } = addEventSchema.validate({
        updateDate: '2018-04-12',
        mainText: 'test',
        redirectionUrl: 'https://github.com/dfshannon/'
      });
      expect(error).to.not.equal(undefined);
      expect(error.details[0].message).to.equal('"titleText" is required');
      done();
    });
  });

  describe('# Add - missing mainText', function () {
    it('should return error', function (done) {
      const { error } = addEventSchema.validate({
        updateDate: '2018-04-12',
        titleText: 'title',
        redirectionUrl: 'https://github.com/dfshannon/'
      });
      expect(error).to.not.equal(undefined);
      expect(error.details[0].message).to.equal('"mainText" is required');
      done();
    });
  });

  describe('# Add - invalid redirectionUrl', function () {
    it('should return error', function (done) {
      const { error } = addEventSchema.validate({
        updateDate: '2018-04-12',
        titleText: 'title',
        mainText: 'test',
        redirectionUrl: 'https/github.com/dfshannon/'
      });
      expect(error).to.not.equal(undefined);
      expect(error.details[0].message).to.equal('"redirectionUrl" must be a valid uri');
      done();
    });
  });

  describe('# Add - missing redirectionUrl', function () {
    it('should return error', function (done) {
      const { error } = addEventSchema.validate({
        updateDate: '2018-04-12',
        titleText: 'title',
        mainText: 'test'
      });
      expect(error).to.not.equal(undefined);
      expect(error.details[0].message).to.equal('"redirectionUrl" is required');
      done();
    });
  });

  describe('# Update - Valid', function () {
    it('should not return an error', function (done) {
      const { error } = updateEventSchema.validate({
        updateDate: '2018-04-12',
        titleText: 'title',
        mainText: 'test',
        redirectionUrl: 'https://github.com/dfshannon/'
      });
      expect(error).to.equal(undefined);
      done();
    });
  });

  describe('# Update - invalid updateDate', function () {
    it('should return error', function (done) {
      const { error } = updateEventSchema.validate({
        updateDate: '12-04-2018',
        titleText: 'title',
        mainText: 'test',
        redirectionUrl: 'https://github.com/dfshannon/'
      });
      expect(error).to.not.equal(undefined);
      expect(error.details[0].message).to.equal('"updateDate" with value "12-04-2018" fails to match the required pattern: /^\\d{4}-\\d{2}-\\d{2}$/');
      done();
    });
  });

  describe('# Update - missing updateDate', function () {
    it('should not return error', function (done) {
      const { error } = updateEventSchema.validate({
        titleText: 'title',
        mainText: 'test',
        redirectionUrl: 'https://github.com/dfshannon/'
      });
      expect(error).to.equal(undefined);
      done();
    });
  });

  describe('# Update - missing titleText', function () {
    it('should not return error', function (done) {
      const { error } = updateEventSchema.validate({
        updateDate: '2018-04-12',
        mainText: 'test',
        redirectionUrl: 'https://github.com/dfshannon/'
      });
      expect(error).to.equal(undefined);
      done();
    });
  });

  describe('# Update - missing mainText', function () {
    it('should not return error', function (done) {
      const { error } = updateEventSchema.validate({
        updateDate: '2018-04-12',
        titleText: 'title',
        redirectionUrl: 'https://github.com/dfshannon/'
      });
      expect(error).to.equal(undefined);
      done();
    });
  });

  describe('# Update - invalid redirectionUrl', function () {
    it('should return error', function (done) {
      const { error } = updateEventSchema.validate({
        updateDate: '2018-04-12',
        titleText: 'title',
        mainText: 'test',
        redirectionUrl: 'https/github.com/dfshannon/'
      });
      expect(error).to.not.equal(undefined);
      expect(error.details[0].message).to.equal('"redirectionUrl" must be a valid uri');
      done();
    });
  });

  describe('# Update - missing redirectionUrl', function () {
    it('should not return error', function (done) {
      const { error } = updateEventSchema.validate({
        updateDate: '2018-04-12',
        titleText: 'title',
        mainText: 'test'
      });
      expect(error).to.equal(undefined);
      done();
    });
  });
});
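The Joi schemas under test live in `./param-validation` and are not shown here, but the expected error messages pin down the `updateDate` rule exactly. A dependency-free sketch of that check (`isValidUpdateDate` is a hypothetical helper, not part of the module):

```javascript
// Pattern quoted verbatim in the expected error messages above.
const updateDatePattern = /^\d{4}-\d{2}-\d{2}$/;

// Hypothetical helper mirroring what the Joi schema enforces for updateDate.
function isValidUpdateDate(value) {
  return typeof value === 'string' && updateDatePattern.test(value);
}

console.log(isValidUpdateDate('2018-04-12')); // true
console.log(isValidUpdateDate('12-04-2018')); // false
```

Note the pattern only checks shape, not calendar validity: `'2018-99-99'` would pass this regex, which is presumably acceptable for these tests.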
package abstractfactory

// nike represents the nike brand.
type nike struct{}

// GetShoe returns a nike shoe instance.
func (n *nike) GetShoe() iShoe {
	return &nikeShoe{}
}

// GetShort returns a nike short instance.
func (n *nike) GetShort() iShort {
	return &nikeshort{}
}
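The `iShoe`/`iShort` interfaces and the `nikeShoe`/`nikeshort` products referenced above live in other files of the package. A self-contained sketch of how the factory fits together and is consumed — the inlined type definitions and the `Logo` method here are illustrative assumptions, not part of this file:

```go
package main

import "fmt"

// Assumed stand-ins for the interfaces and products defined elsewhere
// in the abstractfactory package; Logo() is a hypothetical method.
type iShoe interface{ Logo() string }
type iShort interface{ Logo() string }

type nikeShoe struct{}

func (s *nikeShoe) Logo() string { return "nike" }

type nikeshort struct{}

func (s *nikeshort) Logo() string { return "nike" }

// nike is the concrete factory, as in the file above.
type nike struct{}

func (n *nike) GetShoe() iShoe   { return &nikeShoe{} }
func (n *nike) GetShort() iShort { return &nikeshort{} }

func main() {
	factory := &nike{}
	// Both products carry the brand of the factory that made them.
	fmt.Println(factory.GetShoe().Logo())
	fmt.Println(factory.GetShort().Logo())
}
```

Because callers depend only on `iShoe`/`iShort`, adding another brand means adding another factory type, not touching client code.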
The regenerative capacity of mesenchymal stem cells (MSCs) is contingent on their content of multipotent progenitors \[[@B1]\]. Despite its importance to the efficacy of MSC therapies, the clonal heterogeneity of MSCs remains poorly defined. To address this deficiency, the current study presents a novel high-capacity assay to quantify the clonal heterogeneity in MSC potency and demonstrates its utility to resolve regenerative properties as a function of potency. Human bone marrow was the source of MSCs in this study. The versatility and accessibility of marrow-derived MSCs make them a standard for many therapeutic applications.

Materials and methods

Primary MSCs were harvested from the iliac crest bone marrow of healthy adult volunteers and cultured as previously described \[[@B2]\]. The in vitro assay developed for this study utilizes a 96-well format to (1) clone fluorescent MSCs stained with CellTracker Green by limiting dilution, (2) generate matched clonal colonies, (3) differentiate 3 matched colonies per clone to quantify trilineage potential to exhibit adipo-, chondro- and osteogenesis as a measure of potency, and (4) cryopreserve the 4th matched colony of each clone in situ in an undifferentiated state for future use. Clones of known potency were evaluated for their colony-forming efficiency as a measure of proliferation potential \[[@B3]\]. Expression of the heterotypic cell adhesion molecule CD146 on the surface of MSC clones was measured with flow cytometry.

All eight categories of trilineage potential were detected in human marrow MSCs. Multipotent MSCs had a higher proliferation potential than lineage-committed MSCs. Tripotent clones formed colonies with a median efficiency of 50%, as compared with 14% and 1% for bi- and unipotent clones, respectively (*p* \< 0.01). Likewise, colonies that formed from tripotent clones had the largest median diameter. CD146 may be a biomarker of MSC potency.
Histograms of fluorescence intensity from pooled tripotent clones labeled with anti-CD146 antibody shifted to higher CD146 expression relative to the parent MSC preparation from which the clones were generated, whereas the histograms for parent MSCs and their unipotent progeny were similar. In particular, the mean fluorescence intensity of tripotent clones was nearly twice the value for the parent and unipotent MSCs (*p* \< 0.05). The research presented here addresses a basic deficiency in stem cell technology by developing a quantitative and high-capacity assay to characterize the clonal heterogeneity of MSC potency. The data suggest a complex hierarchy of lineage commitment in which proliferation potential and CD146 expression diminish with loss of potency. The capacity of multipotent MSCs for ex vivo expansion and their differential expression of a potential potency marker will facilitate rapid production of efficacious MSC therapies with consistent progenitor content. The assay has numerous basic research and clinical applications given the importance of heterogeneity to the therapeutic potential of MSCs.

We thank Alan Tucker for his assistance with flow cytometry, Dina Gaupp for preparing samples for histology, and Prof. Darwin Prockop for his helpful conversations and suggestions about this project. This research was sponsored by grants to Prof. O'Connor from the National Institutes of Health (R03EB007281) and National Science Foundation (BES0514242).
Thread:Valeth/@comment-28265227-20170610121742/@comment-28265227-20190720115631 @Penguin-san Yeah, and I feel this pics better than the CE one ! This season anime so great I very enjoy watch lot of them. But where my Konosuba ??? Okamoto Nobuhiko is a lolicon LOL ! Let see if anyone get what I mean  :D
# Introduction

## Try/Rescue/Else/After

Using `try..rescue` is a powerful construct for catching errors when they occur. Rescuing errors allows functions to return defined values when an error arises. The `try..rescue` construct also offers us two additional features we can make use of:

- the `else` block - When the `try` block succeeds, its result is matched against the patterns in this block.
- the `after` block - Whether the `try` block succeeds or raises, the code in the `after` block is always executed.
  - The result of the `after` block is not returned to the calling scope.

```elixir
try do
  :a
rescue
  _ -> :error
else
  :a -> :success
after
  :some_action
end

# => :success
```

## Dynamic Dispatch

When Elixir resolves the function to be invoked, it uses the Module's name to perform a lookup. The lookup can be done dynamically if the Module's name is bound to a variable.

```elixir
defmodule MyModule do
  def message(), do: "My message"
end

atom = MyModule
atom.message()
# => "My message"
```

Internally, a Module's name is an atom. All Elixir module atoms are automatically prefixed with `Elixir.`

```elixir
is_atom(Enum)
# => true

Enum == Elixir.Enum
# => true
```
Open access peer-reviewed chapter

The Role of an NFκB-STAT3 Signaling Axis in Regulating the Induction and Maintenance of the Pluripotent State

By Jasmin Roya Agarwal and Elias T. Zambidis

Submitted: September 13th 2013. Reviewed: January 8th 2014. Published: July 2nd 2014

DOI: 10.5772/57602

1. Introduction

Induced pluripotent stem cells (iPSC) are generated by reprogramming differentiated somatic cells to a pluripotent cell state that highly resembles embryonic stem cells (ESC) [1]. Fully reprogrammed iPSC can differentiate into any adult cell type [2-6]. Takahashi and Yamanaka generated the first iPSC in 2006 by transfecting fibroblasts with four defined factors: SOX2, OCT4, KLF4, c-MYC (SOKM; also referred to as Yamanaka factors) [7]. The clinical use of iPSC offers great potential for regenerative medicine as any cell type can be generated from true pluripotent cells [8-10]. However, human clinical iPSC applications are currently limited by inefficient methods of reprogramming that often generate incompletely reprogrammed pluripotent states that harbor potentially cancerous epigenetic signatures, and possess limited or skewed differentiation capacities [11-13]. Many standard iPSC lines do not fully resemble pluripotent ESC, and often retain an epigenetic memory of their cell of origin [14, 15]. Such incompletely reprogrammed iPSC also display limited differentiation potential to all three germ layers (e.g., endoderm, ectoderm, mesoderm) [16, 17]. To avoid integrating retroviral constructs that may carry mutagenic risks, many non-viral methods have been described for hiPSC derivation [18, 19]. For example, one successful approach is to transiently express reprogramming factors with EBNA1-based episomal vectors [20-22]. It was initially intuitive to reprogram skin fibroblasts due to their easy accessibility.
However, standard episomal reprogramming in fibroblasts occurs at even lower efficiencies (< 0.001-0.1%) than reprogramming with retroviral vectors (0.1%–1%) [23-25]. Subsequent studies revealed that various cell types possess differential receptiveness for being reprogrammed to pluripotency [26-30]. One highly accessible human donor source is blood, which has been demonstrated to reprogram with significantly greater efficiency than fibroblasts [4, 20, 31-33]. The innate immune system possesses highly flexible cell types that are able to adapt quickly to various pathogens by eliciting defense responses that protect the host [34-36]. Innate immune cells derived from the myeloid lineage (e.g., monocyte-macrophage, dendritic cells, neutrophils) are able to reactivate some unique features of pluripotent stem cells that may give them greater flexibility for being reprogrammed to a pluripotent cell state than other differentiated cells [37]. Additionally, the differentiation state of the cell seems to be of critical importance for its reprogramming efficiency [38]. Our group established a reprogramming method that solves many of the technical caveats cited above (Figure 1). We have generated high-fidelity human iPSC (hiPSC) from stromal-primed (sp) myeloid progenitors [20]. This system can reprogram >50% of episome-expressing myeloid cells to high-quality hiPSC characterized by minimal retention of hematopoietic-specific epigenetic memory and a molecular signature that is indistinguishable from bona fide human ESC (hESC). The use of bone marrow-, peripheral-, or cord blood (CB)-derived myeloid progenitor cells instead of fibroblasts, and a brief priming step on human bone marrow stromal cells / mesenchymal stem cells (MSC), appeared to be critical for this augmented reprogramming efficiency.
In this system, CD34+-enriched cord blood cells (CB) are expanded with the growth factors (GF) FLT3L (FMS-like tyrosine kinase 3 ligand), SCF (stem cell factor) and TPO (thrombopoietin) for 3 days, subsequently nucleofected with non-integrating episomes expressing the Yamanaka factors (4F, SOX2, OCT4, KLF4, c-MYC), and then co-cultured on irradiated MSC for an additional 3 days. Cells are then harvested, and passaged onto MEF (mouse embryonic fibroblasts), and hiPSC are generated via standard methods and culture medium. The initial population of enriched CD34+ CB progenitors quickly differentiates to myeloid and monocytic cells in this system, and reprogrammed cells arise from CD34- myeloid cells. The first iPSC colonies appear around day 10, and stable mature iPSC colonies can be established after ~21-25 days. The episomal constructs are partitioned after relatively few cell divisions (e.g., 2-9 passages) to generate high quality non-integrated hiPSC. A proteomics and bioinformatics analysis of this reprogramming system implicated significant activation of MSC-induced inflammatory TLR-NFκB and STAT3 signaling [20]. A combination of cell contact-dependent and soluble factors mediate these effects. A recent study similarly implicated inflammatory TLR3 signaling as a novel trigger for enhanced fibroblast reprogramming, albeit at much lower efficiencies than observed in our myeloid reprogramming system. TLR3 signaling leads to epigenetic modifications that favor an open chromatin state, which increases cell plasticity and the induction of pluripotency [39]. Lee et al. termed this novel link between inflammatory pathways and cell reprogramming ‘Transflammation’ [40]. In this chapter we will discuss hypotheses as to why inflammation-activated myeloid cells may be highly receptive to factor-mediated reprogramming.
Specifically, we will explore the role of the NFκB-STAT3 signaling axis in mediating the unique susceptibility of myeloid cells to high-quality human iPSC derivation.

2. Overview of the canonical and non-canonical NFκB pathway

Multipotent myeloid progenitors are derived from hematopoietic stem cells and differentiate to monocytes, macrophages, dendritic cells, and granulocytes, which elicit the initial innate immune response toward pathogens [41]. NFκB (nuclear factor kappa-light-chain-enhancer of activated B cells) is a central transcription factor that regulates these innate immune responses during microbial infections [42-44]. The NFκB system belongs to a group of early-acting transcription factors that are present in the cytoplasm in an inactive state but can be quickly activated by multiple inflammatory stimuli [45, 46].

2.1. The canonical NFκB signaling pathway

The NFκB family consists of 5 members; p65 (RelA), p50 and c-Rel are involved in canonical signaling, and p52 and RelB are involved in non-canonical signaling. Canonical NFκB signaling is characterized by activation of the IκB kinase complex (IKK), which contains two kinases, IKK1/α and IKK2/β, along with a non-catalytic subunit called IKKγ (NEMO) [47, 48]. Unstimulated NFκB is sequestered in the cytoplasm by IκBα protein. In contrast, activation of the IKK complex (e.g., by TLRs) leads to IKKβ-mediated serine phosphorylation of IκBα, triggering its proteasome-mediated degradation and its dissociation from NFκB [49, 50]. This activates the p65:p50 dimer through p65 phosphorylation and leads to NFκB translocation into the nucleus where it induces target gene expression. Subsequent acetylation keeps p65 in the nucleus [51]. This can be reverted by HDAC3 (histone deacetylase 3)-induced deacetylation of p65, which increases the affinity of NFκB proteins for IκBα and nuclear export [52, 53].
Canonical NFκB signaling is a fast and transient process that regulates complex inflammatory processes that include the initial pro-inflammatory phase, the induction of apoptosis, and even tumorigenesis [54]. It can be activated by toll-like receptors (TLR), which recognize characteristic pathogenic molecules to activate innate immune responses [55-57].

2.2. The non-canonical NFκB signaling pathway

Non-canonical NFκB signaling is stimulated via the NFκB-inducing kinase (NIK), which leads to phosphorylation of the p100 precursor protein and generation of the p52:RelB dimer that translocates to the nucleus to activate gene transcription. This pathway is uniquely dependent on steady state levels of NIK expression, which are controlled under normal conditions through TRAF3-directed ubiquitination and proteasomal degradation. Non-canonical NFκB signaling is slow but persistent and requires de novo NIK protein synthesis and NIK stabilization [58]. It is activated by receptors that belong to the TNFR (tumor necrosis factor receptor) superfamily like BAFF (B-cell-activating factor), CD40 or lymphotoxin β-receptor (LTβR) [59-62]. The common feature of these receptors is the possession of a TRAF-binding motif, which recruits TRAF members (e.g., TRAF2 and TRAF3) during ligand ligation [63, 64]. Receptor recruitment of TRAF members triggers their degradation, and leads to NIK activation and p100 processing [65]. Additionally, BAFF is an important component of pluripotency-supporting growth media for the culture of ESC and a regulator of B-cell maturation [66]. It predominantly activates non-canonical NFκB signaling due to its possession of an atypical TRAF-binding sequence, which interacts only with TRAF3 but not with TRAF2 [67]. TRAF3 degradation is sufficient to trigger non-canonical NFκB signaling, whereas activation of the canonical NFκB pathway requires TRAF2 recruitment [68].

2.3. CD40 stimulates both NFκB pathway components

Another receptor associated with NFκB signaling is CD40, which is expressed on various cell types including B cells and monocytes. The CD40 receptor interacts with its ligand CD40L, which is primarily expressed on activated T cells. This signaling is primarily involved in B-cell activation, dendritic cell maturation, and antigen presentation, and acts as a co-stimulatory pathway of T-cells [69]. Upon ligation by CD40L, CD40 targets both the canonical and non-canonical NFκB pathways via proteolysis of TRAF2 and TRAF3 [70-72]. Non-canonical NFκB signaling regulates hematopoietic stem cell self-renewal via regulating their interactions with the microenvironment [73]. The deregulation of non-canonical hematopoietic NFκB signaling is associated with auto-immunity, inflammation and lymphoid malignancies [58, 74].

2.4. NFκB subunit functions

A third NFκB signaling pathway is activated following response to DNA damage that results in IκB degradation independent of IKK. This results in dimerization of free NFκB subunits that are mobilized similarly to canonical NFκB signaling [47]. Unlike RelA, RelB, and c-Rel, the p50 and p52 NFκB subunits do not contain transactivation domains in their C-terminus. Nevertheless, the p50 and p52 NFκB members play critical roles in modulating the specificity of NFκB functions and form heterodimers with RelA, RelB, or c-Rel [75]. Cell contact-dependent signals are crucial during immune responses and can be mediated through NFκB signaling [76]. This can be augmented by co-stimulatory signals like CD40 or CD28 that directly bind to NFκB proteins like p65 [77-81].

3. Functional role of NFκB signaling in stem cells

3.1. Differential roles of canonical and non-canonical NFκB signaling in embryonic stem cells

TLR activation is not only important for mediating innate immune responses, but also for stem cell differentiation.
For example, hESC are characterized by the expression of pluripotency genes and markers such as OCT4, NANOG, alkaline phosphatase (AP) and telomerase [82-86]. NFκB signaling has been demonstrated to be crucial for maintaining ESC pluripotency and viability, and drives lineage-specific differentiation [87, 88]. A balance of canonical and non-canonical NFκB signaling regulates these opposing functions; non-canonical pathway signaling maintains hESC pluripotency, and canonical pathway signaling regulates hESC viability and differentiation [89, 90]. For example, non-canonical NFκB signaling has to be silenced during cell differentiation, which allows this pathway to act like a switch between hESC self-renewal and differentiation. RelB positively regulates several key pluripotency markers and represses lineage markers by direct binding to their regulatory units. RelB down-regulation reduces the expression of pluripotency genes like SOX2 and induces differentiation-associated genes like BRACHYURY (mesodermal marker), CDX2 (trophoectodermal marker) and GATA6 (endodermal marker) [89].

3.2. Canonical NFκB signaling in hematopoietic stem cells

RelB/p52 signaling also positively regulates hematopoietic stem-progenitor cell (HSPC) self-renewal in response to cytokines (e.g., TPO and SCF) and maintains osteoblast niches and the bone marrow stromal cell population. It negatively regulates HSPC lineage commitment through cytokine down-regulation in the bone marrow microenvironment, although it is able to direct early HSC commitment to the myeloid lineage [73, 91]. Canonical p65 signaling also regulates hematopoietic stem cell functions and lineage commitment by controlling key factors involved in hematopoietic cell fate [92-94]. Canonical NFκB signaling is positively regulated by Notch1, which facilitates nuclear retention of NFκB proteins and promotes self-renewal [95-98].
FGF2 (fibroblast growth factor 2) is important for hESC self-renewal and preserves the long-term repopulating ability of HSPC through NFκB activation [99-102]. Deletion of p65, p52 and RelB dramatically decreases HSC differentiation, function and leads to extramedullary hematopoiesis [103]. NFκB pathway components and FGF4 are highly expressed in CD34+HSPC from cord blood, where they regulate clonogenicity. Nuclear p65 can be detected in 90% CB-derived CD34+ cells but only in 50% BM-derived CD34+ cells [104]. The important role of NFκB in regulating myeloid cell lineage development has been most potently revealed via genetic deletion of IKKβ, IκBα, and RelB, which resulted in granulocytosis, splenomegaly and impaired immune responses [73, 103]. 3.3. Canonical NFκB signaling during ESC differentiation Canonical NFκB signaling is very low in the undifferentiated pluripotent state, where it maintains hESC viability. However, it strongly increases during lineage-specific differentiation of pluripotent stem cells. p65 binds to the regulatory regions of similar differentiation genes as RelB with opposing effects on their activation or silencing. It regulates cell proliferation by direct binding to the CYCLIN D1 promoter [89]. There are different levels of inhibiting canonical NFκB signaling: first, p65 translational repression by the microRNA cluster miR-290 to maintain low p65 protein amounts and second, the inhibition of translated p65 by physical interaction with NANOG. Similarly, OCT4 expression is reversely correlated with canonical NFκB signaling [105]. In contrast to most observations in mouse ESC, NFκB probably plays a more important role in the maintenance of human ESC pluripotency [106]. Finally, active TLRs are expressed on embryonic, hematopoietic and mesenchymal stem cells (MSC), thus implicating their roles in a variety of stem cell types [107-110]. 4. 
Role of NFκB signaling during reprogramming to pluripotency

Undifferentiated human iPSC have elevated NFκB activities, which play important roles in maintaining OCT4 and NANOG expression in pluripotent hiPSC [111]. Innate immune TLR signaling was recently shown to enhance nuclear reprogramming, probably through the induction of an open chromatin state and global changes in epigenetic modifiers [39]. This normally increases cell plasticity in response to a pathogen, but may also enhance the induction of pluripotency, transdifferentiation and even malignant transformation [112-116]. EBNA (Epstein-Barr virus nuclear antigen) is a virus-derived protein that is not only a critical component of episomal reprogramming vectors, where it mediates extra-chromosomal self-replication, but is also known to activate several TLRs [117-119]. These include TLR3, which is known to augment reprogramming efficiencies through the activation of inflammatory pathways [39, 120]. TLR3 recognizes double-stranded RNA from retroviruses and signals through TRAF6 and NFκB [121-123]. The TLR3 agonist poly I:C was shown to have the same effect as retroviral particles in enhancing Yamanaka factor-induced iPSC production. TLR3 causes widespread changes in the expression of epigenetic modifiers and facilitates nuclear reprogramming by inducing an open chromatin state through down-regulation of histone deacetylases (HDACs) and increased H3K4 (histone H3 at lysine 4) trimethylation [38, 39, 124]. These epigenetic modifications mark transcriptionally active genes, whereas the H3K9me3 (histone H3 at lysine 9) modification marks transcriptionally silenced genes [125, 126]. Histone deacetylation is generally associated with a closed chromatin state, and HDAC inhibitors were shown to enhance nuclear reprogramming [127, 128]. Histone acetylation favors an open chromatin state and is maintained by proteins containing histone acetyltransferase (HAT) domains, such as p300 and CBP [129, 130].
Interestingly, p300/CBP is able to interact with NFκB [131, 132]. RelB directly interacts with the methyltransferase G9a to mediate gene silencing of differentiation genes [133]. Epigenetic changes that allow an open chromatin state are crucial for giving the Yamanaka factors access to promoter regions necessary for the induction of pluripotency. Epigenetic chromatin modifications by TLRs are normally involved in the expression of host defense genes during infections [134-136]. This capability can be deployed to enable nuclear reprogramming, as TLR3 was shown to change the methylation status of the Oct4 and Sox2 promoters. Interestingly, changes in these methylation marks were not observed with TLR3 activation alone but only in the presence of the reprogramming factors. Although TLR3 by itself promotes an open chromatin configuration, the reprogramming proteins are likely necessary to direct the epigenetic modifiers to the appropriate promoter sequences [137]. Lee et al. described the potential of inflammatory pathways to facilitate the induction of pluripotency as ‘transflammation’ [40, 138].

5. Overview of the JAK/STAT pathway

The JAK/STAT pathway (Janus kinase/signal transducer and activator of transcription) integrates a complex network of extracellular signals into the cell and can be activated by a variety of ligands and their receptors [139]. These receptors are associated with a JAK tyrosine kinase at their cytoplasmic domain. The JAK family consists of four members: JAK1, JAK2, JAK3 and TYK2 [140, 141]. Many cytokines and growth factors signal through this pathway to regulate immune responses, cell proliferation, differentiation and apoptosis [142-146]. Ligand binding induces the multimerization of gp130 receptor subunits, which brings two JAKs into close proximity and induces trans-phosphorylation. The activated JAKs phosphorylate their receptor at the C-terminus and the transcription factor STAT at tyrosine residues.
This allows STAT dimerization and nuclear translocation to induce target gene transcription [147, 148]. STAT3 acetylation is critical for stable dimer formation and DNA binding [149]. Of the seven mammalian STATs, STAT3 and STAT5 are expressed in many cell types, are activated by a plethora of cytokines and growth factors, and integrate complex biological signals [150, 151]. The other STAT proteins mainly play specific roles in the immune response to bacterial and viral infections. STAT3 is an acute phase protein with important functions during immediate immune reactions [152-154]. STAT3 can be recruited by receptors that harbor a common STAT3 binding motif in their cytoplasmic domain (e.g., receptors for GCSF (granulocyte colony-stimulating factor), LIF (leukemia inhibitory factor), EGF (epidermal growth factor), PDGF (platelet-derived growth factor), interferons (IFNγ) and interleukins (IL-6, IL-10)) [155-158]. Many cytokines signal through IL-10/STAT3 to achieve an immunosuppressive or anti-apoptotic effect [159, 160]. IL-10 is also required during terminal differentiation of immunoglobulin-secreting cells [161]. STAT3 can be phosphorylated at tyrosine or serine residues, and the phosphorylation site can play distinct roles in the regulation of downstream gene transcription [162]. Stat3-deficient mice die during early embryogenesis, reflecting the requirement of Stat3 for the self-renewal of ESC [163]. Negative feedback regulation of the JAK/STAT circuitry is mediated by the SOCS family of target genes (suppressors of cytokine signaling), whereby activated STAT induces SOCS transcription [164, 165]. SOCS proteins can bind to phosphorylated JAKs as a pseudo-substrate to inhibit JAK kinase activity and turn off the pathway [166, 167]. SOCS are negative regulators of the immune response [168, 169]. A small peptide antagonist of SOCS1 was shown to bind to the activation loop of JAK2, leading to constitutive STAT activation and TLR3 induction.
This boosts the immune system to exert broad antiviral activities [170]. The JAK/STAT pathway also interacts with many other signaling pathways in a complex manner to regulate cell homeostasis and immune reactions [149, 171].

6. Functional role of the JAK/STAT pathway in stem cells

6.1. Stat3 maintains naïve pluripotency in mouse embryonic stem cells

ESC pluripotency is regulated by transcriptional networks that maintain self-renewal and inhibit differentiation [172-174]. Stat3 and Myc are necessary to maintain mouse ESC (mESC) self-renewal and bind to many ESC-enriched genes [175]. Their target genes include pluripotency-related transcription factors, polycomb group repressive proteins, and histone modifiers [176, 177]. The transcription factor Stat3 is a key pluripotency factor required for ESC self-renewal [178, 179]. Mouse ESC require LIF (leukemia inhibitory factor)-Stat3 and Bmp4 (bone morphogenic protein 4) signaling to remain pluripotent in in vitro cultures, whereas human ESC require FGF2/MAPK (fibroblast growth factor/mitogen-activated protein kinase) and TGFβ/Activin/Nodal (transforming growth factor β) signaling [180-183]. Nevertheless, the core circuitry of pluripotency is conserved among species and includes OCT4, SOX2 and NANOG [174].

6.2. The LIF-IL6-STAT3 circuitry

LIF belongs to the IL-6 family of cytokines and acts in parallel through the Jak/Stat3 and PI3K/Akt (phosphatidylinositol 3-kinase) pathways to maintain Oct4, Sox2 and Nanog expression via Kruppel-like factor 4 (Klf4) and T-box factor 3 [184, 185]. Lif and IL-6 are necessary for STAT3 phosphorylation mediated by Jak1 [186]. Stat3 phosphorylation positively regulates Klf4 and Nanog transcripts and facilitates Lif-dependent maintenance of pluripotency in a signaling loop [106]. Stat3 directly binds to genomic sites of Oct4 and Nanog, regulates the Oct4-Nanog circuitry and is necessary to maintain the self-renewal and pluripotency of mESC [187-189].
Overexpression of Stat3 maintains mESC self-renewal even in the absence of Lif [190]. Withdrawal of LIF up-regulates the NFκB pathway and results in ESC differentiation as well as STAT3 disruption [191-193]. The interleukin 6 (IL-6) response element (IRE) is activated by STAT3 and, vice versa, IL-6 stimulation leads to STAT3 phosphorylation and transactivation of IRE-containing promoters, forming a positive STAT3-IL6 feedback loop. STAT3 directly associates with c-Jun and c-Fos in response to IL-6 [194]. c-Jun and c-Fos are DNA binding proteins and components of the AP-1 (activation protein-1) transcription factor complex [195]. AP-1 can be activated by TLR2/4, IL-10 or STAT3 to regulate inflammatory responses or drive keratinocyte differentiation in interplay with STAT3 and c-MYC [196]. Tlr2 also plays an important role in the maintenance of mESC [107]. STAT3 is important for tuning the appropriate amounts of AP-1 proteins required for proper differentiation. DNA binding sites for both AP-1 and STAT3 have been found in many gene promoters [194, 197]. It is important to note that c-Jun is able to capture or release the NuRD (nucleosome remodeling and deacetylation) repressor complex, an important epigenetic modulator of gene silencing [198, 199]. STAT3 is able to bind to bivalent histone modifications, enabling a quick switch between the activation of pluripotency genes during ESC maintenance and their inhibition during cell differentiation [193].

6.3. STAT3 signaling in immune cells

STAT3 also has complex functions during hematopoietic development, immune regulation, cell growth, and leukemic transformation [200-202]. It is critically important for the survival and differentiation of lymphocytes and myeloid progenitors [171]. STAT3 signaling can be activated in a cell contact-dependent way, which is distinct from its cytokine activation.
Co-cultures of MSC (human mesenchymal stem cells) and APC (antigen-presenting cells) increase STAT3 signaling in both cell types in a cell contact-dependent way, which mediates the immune-modulatory effects of MSC to block APC maturation and induce T-cell tolerance [203]. MSC are highly proliferative non-hematopoietic stem cells with the ability to differentiate into multiple mesenchymal lineages [204-206]. They accumulate in tumor environments in response to NFκB signaling and produce cytokines [207]. MSC are FDA-approved for the treatment of severe acute GVHD (graft-versus-host disease), due to their immunomodulatory properties [208]. STAT3 phosphorylation is induced by cell-cell contacts and inhibited in postconfluent cells, which consequently become apoptotic. Therefore, STAT3 may represent a molecular junction that allows cell proliferation or growth arrest depending on the state of the cell. Increased STAT3 activity may promote cell survival during cell confluency [209].

6.4. Cell contact-dependent STAT3 signaling during cell transformation

Constitutive STAT3 activation can by itself result in cellular transformation [210-214]. For example, contact-dependent STAT3 activation is known to play a promoting role in the interactions between tumor cells and their environment [215-218]. Cell transformation and the induction of pluripotency may share very similar signaling processes, and it is possible that STAT3 may represent a common axis [219, 220]. During early tumor development, certain cells have to acquire stem cell-like features that allow them to self-renew (tumor-initiating cells) and to produce cell progeny (tumor bulk) [221-224]. These tumor-initiating cells are very difficult to eradicate during chemotherapy and often re-establish the tumor, which manifests as clinical relapse [225-227]. Tumor-initiating cells display strong inflammatory gene signatures with elevated IL6-STAT3-NFκB signaling to sustain their self-renewal [228-231].
A better understanding of the mechanisms by which STAT3 and NFκB regulate the acquisition of pluripotency and self-renewal might also give us crucial insight into tumor development, and may lead to novel future therapies [171, 232].

7. The role of STAT3 signaling during reprogramming

7.1. STAT3 is a master reprogramming factor

Activation of Stat3 is a limiting factor for the induction of pluripotency, and its over-expression eliminates the requirement for additional factors to establish pluripotency [233]. These key properties have positioned Stat3 signaling as one of the master reprogramming factors that dominantly instructs naïve pluripotency [175]. Elevated Stat3 activity overcomes the pre-iPSC reprogramming block and enhances the establishment of pluripotency induced by SOKM (SOX2, OCT4, KLF4 and c-MYC) [234]. Stat3 and Klf4 co-occupy genomic sites of Oct4, Sox2 and Nanog. Klf4 and c-Myc are downstream targets of Stat3 signaling and part of the transcriptional network governing pluripotency. The Stat3 effect is combinatorial with other reprogramming factors, which implies that additional targets of Stat3 play a pivotal role [235].

7.2. STAT3 is an epigenetic regulator

Stat3 activation regulates major epigenetic events that induce an open-chromatin state during late-stage reprogramming to establish pluripotency [236-238]. For example, Stat3 signaling stimulates DNA methylation to silence lineage commitment genes and facilitates DNA demethylation to activate pluripotency-related genes [106, 239, 240]. Other chromatin modifications include histone acetylation and deacetylation, which are catalyzed by enzymes with histone acetyltransferase (HAT) or histone deacetylase (HDAC) activities. Histone acetylation is associated with an open chromatin state that allows active gene transcription. HDAC inhibitors are known to significantly improve the efficiency of iPSC generation by allowing promoter accessibility [128, 241, 242].
STAT3 suppresses the expression of HDACs and repressive chromatin regulators to establish an open-chromatin structure giving full access to transcriptional machineries. The key pluripotency factor Nanog cooperates with Stat3 to maintain ESC pluripotency [173]. Interestingly, HDAC inhibitors, but not NANOG over-expression, rescue complete reprogramming in the presence of STAT3 inhibition. Finally, DNA demethylation is regulated in mammalian cells by Tet proteins (tet methylcytosine dioxygenases), which convert 5-methylcytosine (5mC) to 5-hydroxymethylcytosine (5hmC). Tet1 suppresses ESC differentiation, and Tet1 knockdown leads to defects in ESC self-renewal. Tet1 is positively up-regulated by Stat3 during the late-reprogramming stage [243-246].

8. Interactions between NFκB and STAT3 signaling

8.1. Synergistic NFκB and STAT3 signaling

The NFκB and STAT3 pathways are closely interconnected in regulating immune responses [247, 248]. STAT3 activation itself induces further STAT3 phosphorylation. Un-phosphorylated STAT3 that accumulates in the cell can bind to un-phosphorylated NFκB in competition with IκB. The resulting STAT3/NFκB dimer localizes to the nucleus to induce NFκB-dependent gene expression [249]. STAT3 associates with the p300/CBP (CREB-binding protein) co-activator, enabling its histone acetyltransferase activity to open chromatin structures, which allows chromatin-modifying proteins to bind the DNA and activate gene transcription [250, 251]. Tyrosine-phosphorylated and acetylated STAT3 additionally binds to the NFκB precursor protein p100 and induces its processing to p52 by activation of IKKα. STAT3 then binds to the DNA-binding p52 complex to assist in the activation of target genes [252]. Both the NFκB and STAT3 pathways synergize during terminal B-cell differentiation [253]. Phospho-p65/STAT3 dimers and phospho-STAT3/NFκB dimer complexes can bind to κB motifs. Phospho-STAT3 and phospho-p50 also interact with each other.
Soluble CD40L rapidly activates NFκB p65 and up-regulates IL10 receptors on the cell surface. This renders STAT3 more susceptible to IL-10-induced phosphorylation [161]. Macrophage activation is regulated by Toll-like receptors, JAK/STAT signaling and immunoreceptors that signal via ITAM motifs [254, 255]. These pathways have low activity levels under homeostatic conditions but are strongly activated during innate immune responses. ITAM-coupled receptors cooperate with TLRs in driving NFκB signaling and inflammation during infections, whereas extensive ITAM activation inhibits JAK/STAT signaling to limit the immune reaction [256, 257]. Pleiotropic cytokines like interferons and IL-6 regulate the balance of pro- and anti-inflammatory functions by activating variable levels of STAT1 and STAT3 [258].

8.2. NFκB and STAT3 synergies in stem cells

NFκB and STAT3 are also part of an important stem cell pathway axis [259, 260]. A functional link between NANOG, NFκB and LIF/STAT3 signaling was shown in the maintenance of pluripotency [228]. Non-canonical NFκB signaling is activated by STAT3 through activation of IKKα and p100 processing [58]. Conversely, STAT3 inhibits TLR-induced canonical NFκB activity, probably through up-regulated SOCS3. C-terminal binding of NANOG inhibits the pro-differentiation activities of canonical NFκB signaling and directly cooperates with STAT3 to maintain ESC pluripotency. NANOG and STAT3 bind to each other and synergistically activate STAT3-dependent promoters [106, 261]. The STAT3 pathway also interacts with many signaling pathways that are critically involved in the reprogramming process. For example, STAT3 signaling activates the MYC transcriptome and signals in a loop with LIN28 [229]. LIN28 is expressed in undifferentiated hESC and is able to enhance the reprogramming efficiency of fibroblasts. It is down-regulated upon ESC differentiation [262-265].
Proto-oncogene tyrosine-protein kinase Src activation triggers an inflammatory response mediated by NFκB that directly activates IL6 and Lin28B expression through a binding site in the first intron. IL6-mediated activation of STAT3 transcription is necessary for monocyte activation and tumorigenesis. IL6 itself further activates NFκB, thereby completing a positive NFκB-STAT3-IL6 feedback loop that links inflammation to cell transformation [229]. Constitutive STAT3 signaling maintains constitutive NFκB activity in tumors by inhibiting its nuclear export through p65 acetylation, although STAT3 signaling inhibits NFκB activation during normal immune responses [52].

9. The role of epigenetic regulators during the induction of pluripotency

9.1. The NuRD complex

A panoply of chromatin remodelers play active, regulatory roles during the reprogramming process [266, 267]. For example, the Mbd3/NuRD complex is an important epigenetic regulator that restricts the expression of key pluripotency genes [268]. MBD3 (methyl-CpG-binding domain protein 3) is part of the NuRD (nucleosome remodeling and deacetylation) repressor complex, which mediates chromatin remodeling through histone deacetylation via HDAC1/2 and ATPase activities [269-271]. The NuRD complex interacts with methylated DNA to mediate heterochromatin formation and transcriptional silencing of ESC-specific genes. Whereas MBD2 recruits NuRD to methylated DNA, MBD3 fails to bind methylated DNA, as it evolved from a methyl-CpG-binding domain into a protein–protein interaction module [272]. Mbd3 antagonizes the establishment of pluripotency and facilitates differentiation [273].

9.2. MBD3 suppression is a rate-limiting step in factor-mediated reprogramming

Recent evidence suggested that efficient reprogramming may require NuRD complex down-regulation [274]. The reprogramming factors OCT4, SOX2, KLF4 and MYC bind to MBD3, a critical component of the NuRD complex.
In the absence of MBD3, SOKM over-expression induces pluripotency with almost 100% efficiency [275]. Such reprogramming occurs within seven days in mouse cells. Once pluripotency is established, MBD3 does not appear to compromise its maintenance. The MBD3/NuRD repressor complex is probably the predominant molecular block that prevents the induction of ground-state pluripotency. Several reprogramming factors directly interact with the MBD3/NuRD complex to form a potent negative regulatory complex that restrains pluripotency gene reactivation. Thus, chromatin de-repression is of critical importance for the conversion of somatic cells into iPSC.

9.3. Bivalent histone modifications

Embryonic stem cells are not only able to maintain their undifferentiated state indefinitely, but also need to retain their ability to differentiate into various cell types [276]. The co-existence of these two features requires the combined action of signal transduction pathways, transcription factor networks, and epigenetic regulators [277]. Pluripotent gene expression has to be maintained in a way that allows it to be rapidly silenced upon receiving differentiation signals. The NuRD complex maintains this ESC flexibility by inducing variability in pluripotency factor expression, which results in a low-expressing subpopulation of ESCs primed for differentiation [268, 278]. The control of gene expression by juxtaposition of antagonistic chromatin regulators is a common regulatory strategy in ESC, termed bivalent histone modification [279, 280]. Individual promoters exhibit trimethylation of two different residues of histone H3: lysine 4 (H3K4me3) and lysine 27 (H3K27me3) [281, 282]. H3K27me3 is a repressive histone modification, whereas H3K4me3 is an activation-associated mark [283]. These opposing epigenetic marks allow quick adjustments between ESC self-renewal and differentiation. Bivalent genes are generally transcriptionally silent in ESCs but are poised for rapid activation.
MBD3 binding is enriched at bivalent genes characterized by 5hmC modifications. STAT3 binds to bivalent histone modifications and is able to switch between cellular pluripotency and differentiation [236, 284, 285].

9.4. MBD3 may prevent completion of the reprogramming process

MBD3 plays key roles in the biology of 5-hydroxymethylcytosine (5hmC) [286]. 5hmC is an oxidation product of 5-methylcytosine (5mC) [287, 288]. MBD3 silences pluripotency genes like Oct4 and Nanog through 5-hydroxymethylation of their promoters. MBD3 binds to 5hmC in cooperation with Tet1 to regulate 5hmC-marked genes, but does not interact with 5mC. Mbd3 interaction with 5hmC recruits NuRD to its targets, resulting in gene repression. Knockdown of the MBD3/NuRD complex affects the expression of 5hmC-marked genes [289]. Mbd3 acts upstream of Nanog and may block the transition from partially to fully reprogrammed iPSC by silencing Nanog. Nanog overexpression was dominant over Mbd3 knockdown in the induction of efficient reprogramming and is in general sufficient to maintain mESC pluripotency. Mbd3 depletion facilitates the transcription of Oct4 and Nanog and leads to the generation of iPSC and chimeric mice even in the absence of Sox2 or c-Myc [290]. The depletion of Mbd3/NuRD does not replace Oct4 during iPSC formation, as reprogramming did not occur with Klf4 and c-Myc alone. Mbd3-dependent silencing of pluripotency factors occurs during ESC differentiation. This involves NuRD-dependent deacetylation of H3K27, which is required for the binding of polycomb repressive complex 2 (PRC2). NuRD-dependent silencing of pluripotency genes prevents the de-differentiation of somatic cells. In the absence of Mbd3, NuRD disassembles, which lowers this epigenetic barrier and allows the activation of pluripotency genes. Drug-induced down-regulation of Mbd3/NuRD may greatly improve the efficiency and fidelity of reprogramming [291].

9.5.
STAT3-MBD3 counteractions

Stat3 promotes the expression of self-renewal transcription factors and opposes NuRD-mediated repression of several hundred target genes in ESCs. The opposing functions of Stat3 and NuRD maintain variability in the levels of key self-renewal transcription factors. Stat3, but not NuRD, is the rate-limiting factor for pluripotency gene expression. Self-renewing ESC face a barrier that prohibits differentiation. NuRD constrains this barrier within a range that can be overcome when self-renewal signals are withdrawn [268, 278, 292]. Mbd3/NuRD-mediated gene silencing is a critical determinant of lineage commitment in embryonic stem cells and allows cells to exhibit pluripotency and self-renewal. Mbd3-deficient ESC show persistent self-renewal even in the absence of Lif. They are able to undergo the initial steps of differentiation, but their ability for lineage commitment is severely compromised. They fail to down-regulate undifferentiated cell markers as well as to up-regulate differentiation markers [293]. Stat3 has many downstream effectors, such as the proto-oncogene c-Jun, which is part of the AP-1 complex [194]. The transactivation domain of un-phosphorylated c-Jun recruits Mbd3/NuRD to AP-1 target genes to mediate gene repression. This repression is relieved by c-Jun N-terminal phosphorylation or Mbd3 depletion. Upon JNK activation, NuRD dissociates from c-Jun, which results in de-repression of target gene transcription. Termination of the JNK signal induces Mbd3/NuRD re-binding to un-phosphorylated c-Jun and cessation of target gene expression (Figure 2) [199].

10. Conclusions

In this review, we have discussed a potentially novel link between inflammatory pathways and efficient cell reprogramming. In this context, our group reported that bone marrow stromal-primed human myeloid cell progenitors are significantly more receptive to reprogramming stimuli than other cell types [20].
Myeloid cells harbor a unique epigenetic plasticity that allows them to quickly respond to a plethora of pathogens. They are innately equipped to transcriptionally and epigenetically activate key inflammatory pathways via an interconnected NFκB and STAT3 signaling machinery [294]. Both pathways act as epigenetic modifiers during normal inflammatory stimulation, and both are also known to promote ESC pluripotency by inducing an open chromatin state that allows other transcription factors to regulate cell fates [236]. This epigenetic remodeling may prove crucial for efficient reprogramming, as well as for the generation of high-quality iPSC that resemble ESC without excessive epigenetic memory of their cell of origin [295]. Moreover, Stat3 is a master reprogramming factor that is able to dominantly instruct pluripotency, yet is also inherently interconnected with inflammatory signaling cascades (Figure 2). It binds to bivalent histone modifications and allows rapid transitions between pluripotency and differentiation [193]. The NFκB pathway acts in synergy with downstream STAT3 signaling, whereby non-canonical NFκB signaling maintains pluripotency through epigenetic silencing of differentiation genes and canonical NFκB signaling promotes cell differentiation [296]. Finally, recent evidence suggests that strong chromatin repression by the NuRD complex is a key rate-limiting factor during reprogramming to pluripotency. This important complex may normally function to ensure that differentiated cells do not reactivate pluripotency genes, which might otherwise enable tumorigenesis [268]. We propose the hypothesis that NuRD complex silencing might be more easily achieved through the activation of inflammatory pathways in receptive cells such as those from the myeloid lineage. It remains to be elucidated how all these processes are inter-regulated. It will be especially important to link reprogramming efficiency with the resulting quality of the pluripotent state achieved in hiPSC.
We hypothesize that epigenetic plasticity in inflammatory cells that normally allows chromatin accessibility to the transcriptional machinery, could be manipulated to facilitate a complete erasure of the donor epigenetic memory during factor-mediated reprogramming. Additionally, preventing cancerous epigenetic patterns in iPSC via more accurate high-fidelity reprogramming methods will be the foundation for future clinical applications [13]. Finally, the basic understanding of pluripotency induction may also give us a better understanding of how tumor-initiating cells arise and how they can be eradicated to prevent tumor relapse, thus potentially opening a new era of cancer treatments.

Acknowledgments

JRA was supported by a fellowship from the German Research Foundation (DFG, DJ 71/1-1). ETZ was supported by grants from the NIH/NHLBI U01HL099775 and the Maryland Stem Cell Research Fund (2011-MSCRF-II-0008-00; 2007-MSCRF-II-0379-00). We would like to thank Dr. Alan Friedman for assistance in reading and editing the manuscript.

© 2014 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Citation: Jasmin Roya Agarwal and Elias T. Zambidis (July 2nd 2014). The Role of an NFκB-STAT3 Signaling Axis in Regulating the Induction and Maintenance of the Pluripotent State, Pluripotent Stem Cell Biology - Advances in Mechanisms, Methods and Models, Craig S. Atwood and Sivan Vadakkadath Meethal, IntechOpen, DOI: 10.5772/57602.
LED Design Language for Visual Affordance of Voice User Interfaces

ABSTRACT

A method is implemented at an electronic device for visually indicating a voice processing state. The electronic device includes at least an array of full color LEDs, one or more microphones and a speaker. The electronic device collects via the one or more microphones audio inputs from an environment in proximity to the electronic device, and processes the audio inputs by identifying and/or responding to voice inputs from a user in the environment. A state of the processing is then determined from among a plurality of predefined voice processing states, and for each of the full color LEDs, a respective predetermined LED illumination specification is determined in association with the determined voice processing state. In accordance with the identified LED illumination specifications of the full color LEDs, the electronic device synchronizes illumination of the array of full color LEDs to provide a visual pattern indicating the determined voice processing state.

RELATED APPLICATIONS

This application claims priority to the following provisional applications, each of which is incorporated by reference in its entirety:

- U.S. Provisional Application No. 62/334,434, filed May 10, 2016, titled “Implementations for Voice Assistant on Devices”;
- U.S. Provisional Application No. 62/336,551, filed May 13, 2016, titled “Personalized and Contextualized Audio Briefing”;
- U.S. Provisional Application No. 62/336,566, filed May 13, 2016, titled “LED Design Language for Visual Affordance of Voice User Interfaces”;
- U.S. Provisional Application No. 62/336,569, filed May 13, 2016, titled “Voice-Controlled Closed Caption Display”; and
- U.S. Provisional Application No. 62/336,565, filed May 13, 2016, titled “Media Transfer among Media Output Devices.”

This application is also related to the following patent applications, each of which is incorporated by reference in its entirety:

- U.S.
patent application Ser. No. ______ (Attorney Docket No. <PHONE_NUMBER>-US), filed May _, 2017, titled “Voice-Controlled Closed Caption Display”;
- U.S. patent application Ser. No. ______ (Attorney Docket No. <PHONE_NUMBER>-US), filed May _, 2017, titled “Media Transfer among Media Output Devices”;
- U.S. patent application Ser. No. ______ (Attorney Docket No. <PHONE_NUMBER>-US), filed May _, 2017, titled “Personalized and Contextualized Audio Briefing”; and
- U.S. patent application Ser. No. ______ (Attorney Docket No. <PHONE_NUMBER>-US), filed May _, 2017, titled “Implementations for Voice Assistant on Devices.”

TECHNICAL FIELD

This application relates generally to computer technology, including but not limited to methods and systems for using an array of full color light emitting diodes (LEDs) to visualize a voice processing state associated with a voice-activated electronic device that is used as a user interface in a smart home or media environment.

BACKGROUND

Electronic devices integrated with microphones have been widely used to collect voice inputs from users and implement different voice-activated functions according to the voice inputs. For example, many state-of-the-art mobile devices include a voice assistant system (e.g., Siri and Google Assistant) that is configured to use voice inputs to initiate a phone call, conduct a restaurant search, start routing on a map, create calendar events, add a post to a social network, recognize a song and complete many other tasks. The mobile devices often include display screens that allow the users who provide the voice inputs to check the status of the tasks requested via the voice inputs. However, when an electronic device having a relatively simple structure and made at a low cost is applied to implement similar voice-activated functions as the mobile devices, use of a display screen would significantly increase the cost of the electronic device.
Thus, there is a need to use a simple and low-cost user interface to indicate a status of voice input processing in an electronic device that includes one or more microphones and functions as a voice interface. In addition, the voice activated functions currently implemented in mobile devices are limited to Internet-based functions that involve remote servers (e.g., a search engine, a social network server or a voice assistant server). The results of the voice activated functions are displayed on or used to control the mobile devices themselves, and do not impact any other remote or local electronic devices accessible to the user. Given that voice inputs are convenient for the user, it is beneficial to allow the user to use voice inputs to control the other electronic devices accessible to the user in addition to requesting the Internet-based functions limited between the remote servers and the mobile devices. SUMMARY Accordingly, there is a need to create a smart media environment or a smart home environment where an electronic device provides an eyes-free and hands-free voice interface to activate voice-activated functions on other media play devices or smart home devices coupled within the smart media or home environment. In some implementations of this application, a smart media environment includes one or more voice-activated electronic devices and multiple media display devices each disposed at a distinct location and coupled to a cast device (e.g., a set top box). Each voice-activated electronic device is configured to record a voice message from which a cloud cast service server determines a user voice request (e.g., a media play request, a media transfer request or a closed caption initiation request). The cloud cast service server then directs the user voice request to a destination cast device as indicated by the voice message. 
The voice-activated electronic device is also configured to display a visual pattern via an array of full color LEDs indicating a corresponding voice processing state. A similar arrangement could be used to control smart home devices to implement voice-activated functions in a smart home environment. Such methods optionally complement or replace conventional methods of requiring a user to use a remote control or a client device to control the media devices or the smart home devices in a smart media or home environment. In accordance with one aspect of this application, a method is implemented at an electronic device for visually indicating a voice processing state. The electronic device includes an array of full color LEDs, one or more microphones, a speaker, a processor and memory storing at least one program for execution by the processor. The method includes collecting via the one or more microphones audio inputs from an environment in proximity to the electronic device, and processing the audio inputs. The processing includes one or more of identifying and responding to voice inputs from a user in the environment. The method further includes determining a state of the processing from among a plurality of predefined voice processing states, and for each of the full color LEDs, identifying a respective predetermined LED illumination specification associated with the determined voice processing state. The illumination specification includes one or more of an LED illumination duration, pulse rate, duty cycle, color sequence and brightness. The method further includes in accordance with the identified LED illumination specifications of the full color LEDs, synchronizing illumination of the array of full color LEDs to provide a visual pattern indicating the determined voice processing state. 
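To make the state-to-pattern mapping concrete, the determination step described above can be sketched as a lookup table from voice processing state to a per-LED illumination specification, applied uniformly across the array. The state names and specification fields come from this description; the class names, function name, and all concrete values below are illustrative assumptions rather than the claimed implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto

class VoiceState(Enum):
    """Predefined voice processing states named in this description."""
    HOT_WORD_DETECTED = auto()
    LISTENING = auto()
    THINKING = auto()
    SPEAKING = auto()

@dataclass(frozen=True)
class LedSpec:
    """One LED's illumination specification: duration, pulse rate,
    duty cycle, color sequence and brightness (fields per this description)."""
    duration_ms: int
    pulse_rate_hz: float
    duty_cycle: float
    colors: tuple          # sequence of (R, G, B) tuples
    brightness: float      # 0.0 .. 1.0

# Illustrative spec values -- the description does not fix concrete numbers.
SPECS = {
    VoiceState.HOT_WORD_DETECTED: LedSpec(500, 2.0, 0.5, ((255, 255, 255),), 1.0),
    VoiceState.LISTENING:         LedSpec(0, 0.0, 1.0, ((255, 255, 255),), 1.0),
    VoiceState.THINKING:          LedSpec(250, 4.0, 0.5, ((66, 133, 244), (219, 68, 55)), 0.8),
    VoiceState.SPEAKING:          LedSpec(0, 1.0, 0.7, ((66, 133, 244),), 0.9),
}

def visual_pattern(state: VoiceState, num_leds: int) -> list:
    """Return a synchronized frame: the spec for `state` applied to every
    LED in the array, yielding one visual pattern per processing state."""
    spec = SPECS[state]
    return [spec] * num_leds
```

Synchronizing the array then amounts to driving every LED from the same frame, which is what yields a single coherent visual pattern per state.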
In accordance with one aspect of this application, a method is executed at a server system including a processor and memory storing at least one program for execution by the processor for playing media content on a media output device. The media content play method includes receiving a voice message recorded by an electronic device, and determining that the voice message includes a first media play request. The first media play request includes a user voice command to play media content on a destination media output device and a user voice designation of the media output device, and the user voice command includes at least information of a first media play application and the media content that needs to be played. The media content play method further includes in accordance with the voice designation of the media output device, identifying (e.g., in a device registry) a cast device associated in a user domain with the electronic device and coupled to the media output device. The cast device is configured to execute one or more media play applications for controlling the media output device to play media content received from one or more media content hosts. The media content play method further includes sending to the cast device a second media play request including the information of the first media play application and the media content that needs to be played, thereby enabling the cast device to execute the first media play application that controls the media output device to play the media content. In accordance with another aspect of this application, a method is executed at a server system including a processor and memory storing at least one program for execution by the processor for initiating by voice display of closed captions (CC) for media content. The CC display method includes receiving a voice message recorded by an electronic device, and determining that the voice message is a first closed caption initiation request. 
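The server-side routing in the media play method can be sketched as follows, with the device registry reduced to a dictionary keyed by user account and spoken device designation. All names and data shapes here are assumptions for illustration; the description specifies only the information carried, not a wire format.

```python
# Hypothetical sketch of the cloud cast service routing a media play request.
# The registry shape, field names and request format are illustrative
# assumptions, not the actual service interface.

DEVICE_REGISTRY = {
    # (user account, spoken designation of the output device) -> cast device id
    ("alice", "living room tv"): "cast-device-1",
    ("alice", "kitchen tv"): "cast-device-2",
}

def handle_media_play_request(user: str, first_request: dict) -> dict:
    """Resolve the voice designation to a cast device associated with the
    user domain, then build the second media play request that is sent on
    to that cast device."""
    cast_id = DEVICE_REGISTRY[(user, first_request["output_device"])]
    return {
        "target_cast_device": cast_id,
        "media_play_app": first_request["media_play_app"],
        "media_content": first_request["media_content"],
    }
```

The cast device, on receipt, would launch the named media play application to control its coupled output device.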
The first closed caption initiation request includes a user voice command to initiate closed captions and a user voice designation of a display device playing the media content for which closed captions are to be activated. The CC display method further includes in accordance with the designation of the display device, identifying (e.g., in a device registry) a cast device associated in a user domain with the electronic device and coupled to the designated display device. The cast device is configured to execute a media play application for controlling the designated display device to display media content received from a media content host. The CC display method further includes sending a second closed caption initiation request to the cast device coupled to the designated display device, thereby enabling the cast device to execute the media play application that controls the designated display device to turn on the closed captions of media content that is currently displayed on the designated display device and display the closed captions according to the second closed caption initiation request. In accordance with another aspect of this application, a method is executed at a server system including a processor and memory storing at least one program for execution by the processor for moving media content display from a source media output device to a destination media output device. The media transfer method includes receiving a voice message recorded by an electronic device, and determining that the voice message includes a media transfer request. The media transfer request includes a user voice command to transfer media content that is being played to a destination media output device and a user voice designation of the destination media output device. The media transfer method further includes obtaining from a source cast device instant media play information of the media content that is currently being played. 
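The closed caption path follows the same resolve-and-forward shape as the media play path; this sketch assumes a toy registry keyed by user and spoken designation, and the field and action names are invented for illustration.

```python
# Hypothetical sketch of the closed caption initiation path: the server
# resolves the designated display device to its coupled cast device and
# forwards a second initiation request. Names and shapes are assumptions.

def handle_cc_initiation(registry: dict, user: str, display_designation: str) -> dict:
    """Build the second closed caption initiation request for the cast
    device coupled to the display device the user designated by voice."""
    cast_id = registry[(user, display_designation)]
    return {"target_cast_device": cast_id,
            "action": "turn_on_closed_captions"}
```

The cast device would then instruct its media play application to turn on captions for whatever content is currently displayed.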
The instant play information includes at least information of a first media play application, the media content that is currently being played, and a temporal position related to playing of the media content. The media transfer method further includes in accordance with the voice designation of the destination media output device, identifying (e.g., in a device registry) a destination cast device associated in a user domain with the electronic device and coupled to the destination media output device, and the destination cast device is configured to execute one or more media play applications for controlling the destination media output device to play media content received from one or more media content hosts. The media transfer method further includes sending to the destination cast device a media play request including the instant media play information, thereby enabling the destination cast device to execute the first media play application that controls the destination media output device to play the media content from the temporal location. In accordance with some implementations, a cast device includes means for performing the operations of any of the methods described above. BRIEF DESCRIPTION OF THE DRAWINGS For a better understanding of the various described implementations, reference should be made to the Description of Implementations below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures. FIG. 1 is an example smart media environment in accordance with some implementations. FIG. 2A is an example operating environment in which a voice-activated electronic device interacts with a cast device, a client device or a server system of a smart media environment in accordance with some implementations. FIG. 2B is an example flow chart of a media play control process that controls the cast device and its associated media play activities according to control path B shown in FIG. 2A. 
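The media transfer method differs from a plain play request mainly in carrying the temporal position, so that play resumes on the destination at the exact point reached on the source. A minimal sketch, with all field names assumed for illustration:

```python
# Hypothetical sketch of a media transfer: the instant media play
# information obtained from the source cast device (application, content,
# temporal position) is repackaged into a media play request for the
# destination cast device. Field names are illustrative assumptions.

def build_transfer_request(instant_play_info: dict, destination_cast_id: str) -> dict:
    """The destination resumes the same application and content from the
    temporal position at which play stopped on the source output device."""
    return {
        "target_cast_device": destination_cast_id,
        "media_play_app": instant_play_info["media_play_app"],
        "media_content": instant_play_info["media_content"],
        "resume_position_s": instant_play_info["position_s"],
    }
```

Because the position rides along in the request, the destination cast device needs no direct connection to, or knowledge of, the source cast device.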
FIG. 3 is another example operating environment in which cast devices interact with a client device, voice-activated electronic devices or a server system of the smart media environment in accordance with some implementations. FIGS. 4A and 4B are a front view and a rear view of a voice-activated electronic device in accordance with some implementations. FIG. 4C is a perspective view of a voice-activated electronic device 190 that shows speakers contained in a base of the electronic device 190 in an open configuration in accordance with some implementations. FIGS. 4D and 4E are a side view and an expanded view of a voice-activated electronic device that shows electronic components contained therein in accordance with some implementations, respectively. FIGS. 4F(1)-4F(4) show four touch events detected on a touch sense array of a voice-activated electronic device in accordance with some implementations. FIG. 4F(5) shows a user press on a button on a rear side of the voice-activated electronic device in accordance with some implementations. FIG. 4G is a top view of a voice-activated electronic device in accordance with some implementations, and FIG. 4H shows six example visual patterns displayed by an array of full color LEDs for indicating voice processing states in accordance with some implementations. FIG. 5 is a block diagram illustrating an example electronic device that is applied as a voice interface to collect user voice commands in a smart media environment in accordance with some implementations. FIG. 6 is a block diagram illustrating an example cast device that is applied for automatic control of display of media content in a smart media environment in accordance with some implementations. FIG. 7 is a block diagram illustrating an example server in the server system 140 of a smart media environment in accordance with some implementations. An example server is one of a cloud cast service server. FIG. 
8 is a block diagram illustrating an example client device that is applied for automatic control of media display in a smart media environment in accordance with some implementations. FIG. 9 is a block diagram illustrating an example smart home device in a smart media environment in accordance with some implementations. FIG. 10 is a flow diagram illustrating a method of visually indicating a voice processing state in accordance with some implementations. FIG. 11 is a flow diagram illustrating a method of initiating display of closed captions for media content by voice in accordance with some implementations. FIG. 12 is a flow diagram illustrating a method of initiating by voice play of media content on a media output device in accordance with some implementations. FIG. 13 is a flow diagram illustrating a method of moving play of media content from a source media output device to a destination media output device in accordance with some implementations. Like reference numerals refer to corresponding parts throughout the several views of the drawings. DESCRIPTION OF IMPLEMENTATIONS While the digital revolution has provided many benefits ranging from openly sharing information to a sense of global community, emerging new technology often induces confusion, skepticism and fear among consumers, preventing consumers from benefiting from the technology. Electronic devices are conveniently used as voice interfaces to receive voice inputs from users and initiate voice-activated functions, and thereby offer eyes-free and hands-free solutions to approach both existing and emerging technology. Specifically, the voice inputs received at an electronic device can carry instructions and information even if a user's line of sight is obscured and his hands are full. To enable a hands-free and eyes-free experience, the voice-activated electronic device listens to the ambient (i.e., processes audio signals collected from the ambient) constantly or only when triggered. 
On the other hand, user identities are linked with a user's voice and a language used by the user. To protect the user identities, voice-activated electronic devices are normally used in non-public places that are protected, controlled and intimate spaces (e.g., home and car). In accordance with some implementations of the invention, a voice-activated electronic device includes an array of full color light emitting diodes (LEDs). While the electronic device processes audio inputs collected from one or more microphones, the array of full color LEDs is illuminated to provide a visual pattern according to LED illumination specifications determined according to a state of the processing. The array of full color LEDs is configured to provide a plurality of visual patterns each corresponding to a voice processing state (e.g., hot word detection, listening, thinking and speaking). This LED design language used to create the visual patterns is applied to at least partially resolve the problem of user confusion, apprehension, and uneasiness and promote understanding, adoption and enjoyment of the corresponding voice interface experience. Further, in accordance with some implementations of the invention, a voice-activated electronic device uses voice inputs to initiate and control video playback on display devices. Specifically, a server system (e.g., a cloud cast service server) receives a voice message recorded by the voice-activated electronic device, and determines that the voice message includes a media play request further including a user voice command to play media content on a media output device (optionally including the voice-activated electronic device itself) and a user voice designation of the media output device. The user voice command includes at least information of a first media play application and the media content that needs to be played. 
In accordance with the voice designation of the media output device, the server system identifies a cast device associated in a user domain with the electronic device and coupled to the media output device, and the cast device is configured to execute one or more media play applications for controlling the media output device to play media content received from one or more media content hosts. The server system then sends to the cast device the information of the first media play application and the media content that needs to be played, thereby enabling the cast device to execute the first media play application that controls the media output device to play the media content. In some implementations, while the media content is displayed on a media output device, the voice-activated electronic device allows a user to use their voice to turn on and off captions on the TV without involving any user interaction with a remote control or a second screen device (e.g., a mobile phone, a tablet computer and a laptop computer). Specifically, a server system is configured to determine from a voice message a first closed caption initiation request including a user voice command to initiate closed captions and a user voice designation of a display device playing the media content for which closed captions are to be activated. After identifying a cast device associated in a user domain with the electronic device and coupled to the designated display device, the server system sends a second closed caption initiation request to the cast device, thereby enabling the cast device to execute the media play application that controls the designated display device to turn on the closed captions of media content that is currently displayed on the designated display device and display the closed captions according to the second closed caption initiation request. 
Further, in accordance with some implementations of the invention, while the media content is displayed on a first media output device, the voice-activated electronic device allows a user to use their voice to initiate a media transfer of the media content from the first media output device to a second media output device. The transfer maintains the corresponding media play state at least by resuming the media content on the second media output device at an exact point of the media content that has been played on the first media output device. Specifically, a server system is configured to determine from a voice message a media transfer request including a user voice command to transfer media content that is being played to a destination media output device and a user voice designation of the destination media output device. The server system then obtains from a source cast device instant media play information of the media content that is currently being played, and the instant play information includes at least information of a first media play application, the media content that is currently being played, and a temporal position related to playing of the media content. After identifying a destination cast device associated in a user domain with the electronic device and coupled to the destination media output device, the server system sends to the destination cast device a media play request including the instant media play information, thereby enabling the destination cast device to execute the first media play application that controls the destination media output device to play the media content from the temporal location. In some implementations, the destination cast device is identified in a device registry. Reference will now be made in detail to implementations, examples of which are illustrated in the accompanying drawings. 
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the various described implementations. However, it will be apparent to one of ordinary skill in the art that the various described implementations may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the implementations. Smart Media/Home Environment FIG. 1 is an example smart media environment 100 in accordance with some implementations. The smart media environment 100 includes a structure 150 (e.g., a house, office building, garage, or mobile home) with various integrated devices. It will be appreciated that devices may also be integrated into a smart media environment 100 that does not include an entire structure 150, such as an apartment, condominium, or office space. The depicted structure 150 includes a plurality of rooms 152, separated at least partly from each other via walls 154. The walls 154 may include interior walls or exterior walls. Each room may further include a floor 156 and a ceiling 158. One or more media devices are disposed in the smart media environment 100 to provide media content that is stored at a local content source or streamed from a remote content source (e.g., content host(s) 114). The media devices can be classified into two categories: media output devices 106 that directly output the media content to audience, and cast devices 108 that are networked to stream media content to the media output devices 106. Examples of the media output devices 106 include, but are not limited to, television (TV) display devices and music players. Examples of the cast devices 108 include, but are not limited to, set-top boxes (STBs), DVD players and TV boxes. 
In the example smart media environment 100, the media output devices 106 are disposed in more than one location, and each media output device 106 is coupled to a respective cast device 108 or includes an embedded casting unit. The media output device 106-1 includes a TV display that is hard wired to a DVD player or a set top box 108-1. The media output device 106-2 includes a smart TV device that integrates an embedded casting unit to stream media content for display to its audience. The media output device 106-3 includes a regular TV display that is coupled to a TV box 108-3 (e.g., Google TV or Apple TV products), and such a TV box 108-3 streams media content received from a media content host server 114 and provides access to the Internet for displaying Internet-based content on the media output device 106-3. In addition to the media devices 106 and 108, one or more electronic devices 190 are disposed in the smart media environment 100 to collect audio inputs for initiating various media play functions of the media devices. In some implementations, these voice-activated electronic devices 190 (e.g., devices 190-1, 190-2 and 190-3) are disposed in proximity to a media device, for example, in the same room with the cast devices 108 and the media output devices 106. Alternatively, in some implementations, a voice-activated electronic device 190-4 is disposed in a room having one or more smart home devices but not any media device. Alternatively, in some implementations, a voice-activated electronic device 190 is disposed in a location having no networked electronic device. The electronic device 190 includes at least one or more microphones, a speaker, a processor and memory storing at least one program for execution by the processor. 
The speaker is configured to allow the electronic device 190 to deliver voice messages to a location where the electronic device 190 is located in the smart media environment 100, thereby broadcasting music, reporting a state of audio input processing, having a conversation with or giving instructions to a user of the electronic device 190. As an alternative to the voice messages, visual signals could also be used to provide feedback to the user of the electronic device 190 concerning the state of audio input processing. When the electronic device 190 is a conventional mobile device (e.g., a mobile phone or a tablet computer), its display screen is configured to display a notification concerning the state of audio input processing. In accordance with some implementations, the electronic device 190 is a voice interface device that is network-connected to provide voice recognition functions with the aid of a cloud cast service server 116 and/or a voice assistance server 112. For example, the electronic device 190 includes a smart speaker that provides music to a user and allows eyes-free and hands-free access to voice assistant service (e.g., Google Assistant). Optionally, the electronic device 190 is one of a desktop or laptop computer, a tablet and a mobile phone that includes a microphone. Optionally, the electronic device 190 is a simple and low cost voice interface device. Given simplicity and low cost of the electronic device 190, the electronic device 190 includes an array of full color light emitting diodes (LEDs) rather than a full display screen, and displays a visual pattern on the full color LEDs to indicate the state of audio input processing. When voice inputs from the electronic device 190 are used to control the media output devices 106 via the cast devices 108, the electronic device 190 effectively enables a new level of control of cast-enabled media devices. 
In a specific example, the electronic device 190 includes a casual enjoyment speaker with far-field voice access and functions as a voice interface device for Google Assistant. The electronic device 190 could be disposed in any room in the smart media environment 100. When multiple electronic devices 190 are distributed in multiple rooms, they become cast audio receivers that are synchronized to provide voice inputs from all these rooms. Specifically, in some implementations, the electronic device 190 includes a WiFi speaker with a microphone that is connected to a voice-activated personal assistant service (e.g., Google Assistant). A user could issue a media play request via the microphone of electronic device 190, and ask the personal assistant service to play media content on the electronic device 190 itself or on another connected media output device 106. For example, the user could issue a media play request by saying to the WiFi speaker “OK Google, Play cat videos on my Living room TV.” The personal assistant service then fulfils the media play request by playing the requested media content on the requested device using a default or designated media application. A user could also make a voice request via the microphone of the electronic device 190 concerning the media content that has already been played on a display device. In some implementations, closed captions of the currently displayed media content are initiated or deactivated on the display device by voice when no remote control or second screen device is available to the user. Thus, the user can turn on the closed captions on a display device via an eyes-free and hands-free voice-activated electronic device 190 without involving any other device having a physical user interface, and such a voice-activated electronic device 190 satisfies federal accessibility requirements for users having hearing disability. 
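Turning an utterance such as “OK Google, Play cat videos on my Living room TV” into a structured request involves hot word detection followed by natural language understanding on the assistant service; the toy grammar below only illustrates the fields that must come out of that step (the content and the spoken device designation), and is in no way the service's actual parser.

```python
import re

# Illustrative toy parser -- real hot word detection runs on audio, and
# real intent parsing uses a natural language understanding service.
HOTWORD = re.compile(r"^ok google,\s*", re.IGNORECASE)
PLAY = re.compile(r"play (?P<content>.+?) on (?:my )?(?P<device>.+)$",
                  re.IGNORECASE)

def parse_media_play(utterance: str):
    """Extract the media content and the voice designation of the output
    device from a media play utterance, or return None if it doesn't fit
    this toy grammar."""
    text = HOTWORD.sub("", utterance.strip())
    m = PLAY.match(text)
    if not m:
        return None
    return {"content": m.group("content"),
            "device": m.group("device").lower()}
```

The lowercased device string would then serve as the voice designation looked up against the user's registered devices.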
In some implementations, a user may want to take a current media session with them as they move through the house. This requires the personal assistant service to transfer the current media session from a first cast device to a second cast device that is not directly connected to the first cast device or has no knowledge of the existence of the first cast device. Subsequent to the media content transfer, a second output device 106 coupled to the second cast device 108 continues to play the media content previously played on a first output device 106 coupled to the first cast device 108 from the exact point within a music track or a video clip where play of the media content was forgone on the first output device 106. In some implementations, in addition to the media devices (e.g., the output devices 106 and the cast devices 108) and the voice-activated electronic devices 190, smart home devices could also be mounted on, integrated with and/or supported by a wall 154, floor 156 or ceiling 158 of the smart media environment 100 (which is also broadly called a smart home environment in view of the existence of the smart home devices). The integrated smart home devices include intelligent, multi-sensing, network-connected devices that integrate seamlessly with each other in a smart home network and/or with a central server or a cloud-computing system to provide a variety of useful smart home functions. In some implementations, a smart home device is disposed at the same location of the smart home environment 100 as a cast device 108 and/or an output device 106, and therefore, is located in proximity to or with a known distance with respect to the cast device 108 and the output device 106. 
The smart home devices in the smart media environment 100 may include, but are not limited to, one or more intelligent, multi-sensing, network-connected thermostats 122, one or more intelligent, network-connected, multi-sensing hazard detectors 124, one or more intelligent, multi-sensing, network-connected entryway interface devices 126 and 128 (hereinafter referred to as “smart doorbells 126” and “smart door locks 128”), one or more intelligent, multi-sensing, network-connected alarm systems 130, one or more intelligent, multi-sensing, network-connected camera systems 132, and one or more intelligent, multi-sensing, network-connected wall switches 136. In some implementations, the smart home devices in the smart media environment 100 of FIG. 1 include a plurality of intelligent, multi-sensing, network-connected appliances 138 (hereinafter referred to as “smart appliances 138”), such as refrigerators, stoves, ovens, televisions, washers, dryers, lights, stereos, intercom systems, garage-door openers, floor fans, ceiling fans, wall air conditioners, pool heaters, irrigation systems, security systems, space heaters, window AC units, motorized duct vents, and so forth. The smart home devices in the smart media environment 100 may additionally or alternatively include one or more other occupancy sensors (e.g., touch screens, IR sensors, ambient light sensors and motion detectors). In some implementations, the smart home devices in the smart media environment 100 include radio-frequency identification (RFID) readers (e.g., in each room 152 or a portion thereof) that determine occupancy based on RFID tags located on or embedded in occupants. For example, RFID readers may be integrated into the smart hazard detectors 124. 
In some implementations, in addition to containing sensing capabilities, devices 122, 124, 126, 128, 130, 132, 136 and 138 (which are collectively referred to as “the smart home devices” or “the smart home devices 120”) are capable of data communications and information sharing with other smart home devices, a central server or cloud-computing system, and/or other devices (e.g., the client device 104, the cast devices 108 and the voice-activated electronic devices 190) that are network-connected. Similarly, each of the cast devices 108 and the voice-activated electronic devices 190 is also capable of data communications and information sharing with other cast devices 108, voice-activated electronic devices 190, smart home devices, a central server or cloud-computing system 140, and/or other devices (e.g., the client device 104) that are network-connected. Data communications may be carried out using any of a variety of custom or standard wireless protocols (e.g., IEEE 802.15.4, Wi-Fi, ZigBee, 6LoWPAN, Thread, Z-Wave, Bluetooth Smart, ISA100.11a, Wireless HART, MiWi, etc.) and/or any of a variety of custom or standard wired protocols (e.g., Ethernet, HomePlug, etc.), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document. In some implementations, the cast devices 108, the electronic devices 190 and the smart home devices serve as wireless or wired repeaters. In some implementations, a first one of the cast devices 108 communicates with a second one of the cast devices 108 or with the smart home devices via a wireless router. The cast devices 108, the electronic devices 190 and the smart home devices may further communicate with each other via a connection (e.g., network interface 160) to a network, such as the Internet 110. 
Through the Internet 110, the cast devices 108, the electronic devices 190 and the smart home devices may communicate with a smart server system 140 (also called a central server system and/or a cloud-computing system herein). Optionally, the smart server system 140 may be associated with a manufacturer, support entity, or service provider associated with the cast devices 108 and the media content displayed to the user. Accordingly, the smart server system 140 may include a voice assistance server 112 that processes audio inputs collected by voice-activated electronic devices, one or more content hosts 114 that provide the displayed media content, a cloud cast service server 116 creating a virtual user domain based on distributed device terminals, and a device registry 118 that keeps a record of the distributed device terminals in the virtual user environment. Examples of the distributed device terminals include, but are not limited to the cast devices 108, the media output devices 106, the electronic devices 190 and the smart home devices. In some implementations, these distributed device terminals are linked to a user account (e.g., a Google user account) in the virtual user domain. In some implementations, the network interface 160 includes a conventional network device (e.g., a router). The smart media environment 100 of FIG. 1 further includes a hub device 180 that is communicatively coupled to the network(s) 110 directly or via the network interface 160. The hub device 180 is further communicatively coupled to one or more of the above intelligent, multi-sensing, network-connected devices (e.g., the cast devices 108, the electronic devices 190, the smart home devices and the client device 104). 
Each of these network-connected devices optionally communicates with the hub device 180 using one or more radio communication networks available at least in the smart media environment 100 (e.g., ZigBee, Z-Wave, Insteon, Bluetooth, Wi-Fi and other radio communication networks). In some implementations, the hub device 180 and devices coupled with/to the hub device can be controlled and/or interacted with via an application running on a smart phone, household controller, laptop, tablet computer, game console or similar electronic device. In some implementations, a user of such controller application can view status of the hub device or coupled network-connected devices, configure the hub device to interoperate with devices newly introduced to the home network, commission new devices, and adjust or view settings of connected devices, etc. FIG. 2A is an example operating environment in which a voice-activated electronic device 190 interacts with a cast device 108, a client device 104 or a server system 140 of a smart media environment 100 in accordance with some implementations. The voice-activated electronic device 190 is configured to receive audio inputs from an environment in proximity to the voice-activated electronic device 190. Optionally, the electronic device 190 stores the audio inputs and at least partially processes the audio inputs locally. Optionally, the electronic device 190 transmits the received audio inputs or the partially processed audio inputs to a voice assistance server 112 via the communication networks 110 for further processing. The cast device 108 is configured to obtain media content or Internet content from one or more content hosts 114 for display on an output device 106 coupled to the cast device 108. As explained above, the cast device 108 and the voice-activated electronic device 190 are linked to each other in a user domain, and more specifically, associated with each other via a user account in the user domain. 
Information of the cast device 108 and information of the electronic device 190 are stored in the device registry 118 in association with the user account. In some implementations, the cast device 108 and the voice-activated electronic device 190 do not include any display screen, and have to rely on the client device 104 to provide a user interface during a commissioning process. Specifically, the client device 104 is installed with an application that enables a user interface to facilitate commissioning of a new cast device 108 or a new voice-activated electronic device 190 disposed in proximity to the client device 104. A user may send a request on the user interface of the client device 104 to initiate a commissioning process for the new cast device 108 or electronic device 190 that needs to be commissioned. After receiving the commissioning request, the client device 104 establishes a short range communication link with the new cast device 108 or electronic device 190 that needs to be commissioned. Optionally, the short range communication link is established based on near field communication (NFC), Bluetooth, Bluetooth Low Energy (BLE) and the like. The client device 104 then conveys wireless configuration data associated with a wireless local area network (WLAN) to the new cast device 108 or electronic device 190. The wireless configuration data includes at least a WLAN security code (i.e., a service set identifier (SSID) password), and optionally includes an SSID, an Internet protocol (IP) address, proxy configuration and gateway configuration. After receiving the wireless configuration data via the short range communication link, the new cast device 108 or electronic device 190 decodes and recovers the wireless configuration data, and joins the WLAN based on the wireless configuration data. 
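The wireless configuration handoff described above can be sketched as follows. This is an illustrative encoding only; the field names (e.g., `ip_address`, `gateway`) and the use of JSON are assumptions for the sketch, not the actual format conveyed over the short range link.

```python
# Hypothetical sketch of the wireless configuration data conveyed from the
# client device to a new cast device or electronic device during commissioning.
import json

def build_wireless_config(ssid, password, ip=None, proxy=None, gateway=None):
    """Bundle WLAN join parameters; only the security code is mandatory."""
    config = {"password": password}  # WLAN security code (required per the text)
    if ssid is not None:
        config["ssid"] = ssid        # optional fields per the text
    if ip is not None:
        config["ip_address"] = ip
    if proxy is not None:
        config["proxy"] = proxy
    if gateway is not None:
        config["gateway"] = gateway
    return json.dumps(config).encode("utf-8")

def recover_wireless_config(payload):
    """Decode the payload on the new device before it joins the WLAN."""
    return json.loads(payload.decode("utf-8"))
```

A device receiving this payload would decode it and use the recovered SSID and password to join the WLAN.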
Additional user domain information is entered on the user interface displayed on the client device 104, and used to link the new cast device 108 or electronic device 190 to an account in a user domain. Optionally, the additional user domain information is conveyed to the new cast device 108 or electronic device 190 in conjunction with the wireless communication data via the short range communication link. Optionally, the additional user domain information is conveyed to the new cast device 108 or electronic device 190 via the WLAN after the new device has joined the WLAN. Once the cast device 108 and the electronic device 190 have been commissioned into the user domain, the cast device 108, the output device 106 and their associated media play activities could be controlled via two control paths (control path A and control path B). In accordance with control path A, a cast device application or one or more media play applications installed on the client device 104 are used to control the cast device 108 and its associated media play activities. Alternatively, in accordance with control path B, the electronic device 190 is used to enable eyes-free and hands-free control of the cast device 108 and its associated media play activities (e.g., playback of media content play on the output device 106, and activation of closed captions of media content currently displayed on the output device 106). FIG. 2B is an example flow chart of a media play control process 250 that controls the cast device 108 and its associated media play activities according to control path B shown in FIG. 2A. An assistant server (e.g., a voice assistance server 112) is configured to support the voice activated electronic device 190, control interactions with a search stack and resolve which media action needs to be executed according to raw voice inputs collected by the electronic device 190. 
The assistant server sends (202) a request to the cloud cast service server 116 which converts the media action into an Action Script that can then be executed by the target cast device 108. There are two possible execution paths for the Action Script. In accordance with a first execution path A, the Action Script is returned in the response to the assistant server. This is a “local path.” If the target cast device 108 is the voice-activated electronic device 190 itself, then the Action Script is readily available from the assistant server. Alternatively, in accordance with a second execution path B, the cloud cast service server 116 dispatches the Action Script to the device via a Cloud Messaging service. This is a remote execution path. In some implementations, both execution paths are taken in parallel, and the target cast device 108 ignores the Action Script that arrives second. A unique_command_id is associated with every ExecuteCloudCastCommand. In some implementations, a voice assistant server makes a remote procedure call (RPC) of executeCastCommand with a CloudCastCommand as follows:

message CloudCastCommand {
  optional string unique_command_id = 1;
  optional string source_device_id = 2;
  optional string target_device_id = 3;
  optional string app_id = 4;
  optional string content_id = 5;
  optional string content_auth_token = 6;
}

message ExecuteCastCommandRequest {
  optional CloudCastCommand cast_command = 1;
}

message ExecuteCastCommandResponse {
  optional CloudCastCommand cast_command = 1;
  optional string cast_action_script = 2;
}

Once the command is obtained, the cloud cast service server 116 maintains this CloudCastCommand in a consistent storage keyed by a unique_command_id and target_device_id. The CloudCastCommand will be replaced or removed when another command is issued for the same target cast device 108 or the electronic device 190, or when the /executionReport endpoint receives either a SUCCESS or an ERROR status. 
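The command lifecycle described above (one pending command per target, replaced on re-issue, removed on a terminal report) can be sketched as follows. This is a minimal illustration of the storage behavior, not the actual server implementation; the class and method names are assumptions.

```python
# Illustrative sketch of the consistent storage keyed by unique_command_id
# and target_device_id described in the text.
import time

class CommandStore:
    def __init__(self):
        # target_device_id -> (unique_command_id, command, issued_at)
        self._by_target = {}

    def put(self, unique_command_id, target_device_id, command):
        # A new command for the same target replaces the previous one.
        self._by_target[target_device_id] = (unique_command_id, command, time.time())

    def report(self, target_device_id, unique_command_id, status):
        # Only a terminal SUCCESS/ERROR report removes the stored command.
        entry = self._by_target.get(target_device_id)
        if entry and entry[0] == unique_command_id and status in ("SUCCESS", "ERROR"):
            del self._by_target[target_device_id]

    def pending(self, target_device_id):
        entry = self._by_target.get(target_device_id)
        return entry[1] if entry else None
```

Keying on the target device is what lets a later command for the same cast device supersede a stale one, while the issued_at timestamp supports the stale-command cleanup mentioned next.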
The cloud cast service server 116 then cleans up commands that are stale (i.e., have not finished within a certain time period), and generates the Cast Action Script. Once the Cast Action Script is generated, the cloud cast service server 116 returns the script in the RPC response, and sends the response using Google Cloud Messaging Service if (source_device_id != target_device_id). In some implementations, the cast device 108 reports (204) its status during and after executing the Cast Action Script as follows:

message ReportExecutionStatusRequest {
  enum StatusCode {
    UNKNOWN = 0;
    SUCCESS = 1;
    ERROR = 2;
    QUEUED = 3;
    IN_PROGRESS = 4;
  }
  optional string device_id = 1;
  optional string unique_command_id = 2;
  optional StatusCode status_code = 3;
  // A single action in the action script that is being reported in this
  // request.
  optional string last_action = 4;
  // Contains custom device status data based on status_code or error code,
  // e.g., for the “CAST::EINJECTWRAPPED” error_code, a custom error string
  // will be set in this field.
  optional string custom_data = 5;
  // Error code is a string which is defined in go/cast actionscript.
  optional string error_code = 6;
}

message ExecutionReportResponse {
  // TBD
}

In some implementations, the cast device 108 updates its status with a status message whenever its status changes. In some implementations, the cast device 108 periodically sends a heartbeat to inform the cloud cast service server 116 of its presence, and the cloud cast service server 116 updates a last_action_time field to the time since epoch in seconds. The cloud cast service server 116 sends the execution status message to the source device (e.g., the voice-activated electronic device 190), optionally via a Cloud Messaging service. The voice-activated electronic device 190 will then call S3 for TTS and playback.

Voice Activated Media Play on a Media Output Device

Referring to FIG. 
2A, after the cast device 108 and the voice-activated electronic device 190 are both commissioned and linked to a common user domain, the voice-activated electronic device 190 can be used as a voice user interface to enable eyes-free and hands-free control of media content streaming to the cast device 108 involving no remote control, client device 104 or other second screen device. For example, the user may give voice commands such as “Play Lady Gaga on Living Room speakers.” A Lady Gaga music track or video clip is streamed to a cast device 108 associated with the “Living Room speakers.” The client device 104 is not involved, nor is any cast device application or media play application loaded on the client device 104. The cloud cast service 116 is the proxy service that communicatively links the voice-activated electronic device to the cast device 108 and makes casting to the cast device 108 possible without involving any applications on the client device 104. Specifically, a voice message is recorded by an electronic device 190, and the voice message is configured to request media play on a media output device 106. Optionally, the electronic device 190 partially processes the voice message locally. Optionally, the electronic device 190 transmits the voice message or the partially processed voice message to a voice assistance server 112 via the communication networks 110 for further processing. A cloud cast service server 116 determines that the voice message includes a first media play request, and that the first media play request includes a user voice command to play media content on a media output device 106 and a user voice designation of the media output device 106. The user voice command further includes at least information of a first media play application (e.g., YouTube and Netflix) and the media content (e.g., Lady Gaga music) that needs to be played. 
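A toy sketch of how a media play request like “Play Lady Gaga on Living Room speakers” might be split into the media content and the voice designation of the output device is shown below. The real parsing is performed by the voice assistance server 112; the regular expression and the returned field names here are illustrative assumptions only, and resolution of the media play application is omitted.

```python
# Hypothetical parse of a media play voice command into content and device
# designation; the actual parsing is done server-side by the voice
# assistance server 112.
import re

def parse_media_play_request(utterance):
    m = re.match(r"[Pp]lay (?P<content>.+?) on (?P<device>.+)", utterance)
    if not m:
        return None  # not a media play request
    return {
        "content": m.group("content"),
        "device": m.group("device").rstrip("."),
    }
```

The extracted device string is what the cloud cast service server would then match against the device registry, as described below.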
In accordance with the voice designation of the media output device, the cloud cast service server 116 identifies in a device registry 118 a cast device associated in the user domain with the electronic device 190 and coupled to the media output device 106. The cast device 108 is configured to execute one or more media play applications for controlling the media output device 106 to play media content received from one or more media content hosts 114. Then, the cloud cast service server 116 sends to the cast device 108 a second media play request including the information of the first media play application and the media content that needs to be played. Upon receiving the information sent by the cloud cast service server 116, the cast device 108 executes the first media play application and controls the media output device 106 to play the requested media content. In some implementations, the user voice designation of the media output device 106 includes a description of the destination media output device. The cloud cast service server 116 identifies in the registry the destination media output device among a plurality of media output devices according to the description of the destination media output device. In some implementations, the description of the destination media output device includes at least a brand (“Samsung TV”) or a location of the media output device 106 (“my Living Room TV”).

Voice Activated Closed Caption Display

U.S. Federal Accessibility Laws require that electronic communications and information technologies, such as websites, email, or web documents, be accessible, and that video content must be presented with an option of closed captions for users who are deaf or hard of hearing. Referring to FIG. 
2A, after the cast device 108 and the voice-activated electronic device 190 are both commissioned and linked to a common user domain, the voice-activated electronic device 190 can be used as a voice user interface to enable eyes-free and hands-free control of closed caption display with media content that is currently being displayed on the media output device 106. Specifically, a voice recognition system translates a voice command to turn captions on into a recognizable message sent to the cloud cast service. The cloud cast service interprets this message and sends a command to a media play application (e.g., YouTube) installed on a cast device. The media play application receives that command and renders a caption track based on the message. As such, the user can then use voice to toggle captions on and off on the media output devices. This control of closed caption display does not involve any remote control, client device 104 or other second screen device, nor does it invoke any cast device application or media play application loaded on the client device 104. Therefore, the voice-activated control of closed caption display meets the federal accessibility requirements particularly for users who are deaf or hard of hearing. When a user intends to initiate display of closed captions for currently displayed media content, the user sends a voice message (e.g., “Turn on closed captioning.”) recorded by an electronic device 190. Optionally, the electronic device 190 partially processes the voice message locally. Optionally, the electronic device 190 transmits the voice message or the partially processed voice message to a voice assistance server 112 for further processing. 
A cloud cast service server 116 determines that the voice message is a first closed caption initiation request, and that the first closed caption initiation request includes a user voice command to initiate closed captions and a user voice designation of a display device 106 playing the media content for which closed captions are to be activated. In some implementations, the electronic device 190 transmits the recorded voice message to the cloud cast service server 116 directly. The cloud cast service server 116 determines that the voice message is the first closed caption initiation request by forwarding the voice message to the voice assistance server 112 to parse the voice message and identify the user voice command and the user voice designation of the destination media device, and receiving from the voice assistance server 112 the user voice command and the user voice designation of the destination media device. In accordance with the designation of the display device, the cloud cast service server 116 identifies in a device registry 118 a cast device 108 associated in the user domain with the electronic device 190 and coupled to the designated display device 106. The cast device 108 is configured to execute a media play application for controlling the designated display device to display media content received from a media content host. In some implementations, both the electronic device 190 and the cast device 108 are associated with a user account of the user domain. The user account could be a Google user account. Then, the cloud cast service server 116 sends a second closed caption initiation request to the cast device coupled to the designated display device. 
Upon receiving the information sent by the cloud cast service server 116, the cast device 108 executes the media play application to control the designated display device 106 to turn on the closed captions of media content that is currently displayed on the designated display device 106 and display the closed captions according to the second closed caption initiation request. In some implementations, the closed captions are displayed on the designated display device according to a default closed caption display specification. In some implementations, in accordance with the first closed caption initiation request, the cloud cast service server 116 determines a display specification of the closed captions. The second closed caption initiation request includes the display specification of the closed caption, and the cast device is configured to execute the media play application to control the display device to display the closed captions according to the display specification. Further, in some implementations, the display specification of the closed captions includes at least one of a font (e.g., Arial), a font size (e.g., 12), a font color (e.g., white) and a background color (e.g., Black). Further, in some implementations, sending the display specification of the closed captions via the cloud cast service server 116 allows users to adjust the format of their closed captions by translating custom voice commands (such as “larger captions” or “change the background color to blue”) to update the closed caption initiation request sent to the cast device 108. Additionally, such voice-activated control of closed caption display allows any electronic device with a microphone (e.g., a mobile phone) to initiate playback of media content and adjust closed captions on the media display device 106. 
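The translation of custom caption voice commands (such as “larger captions” or “change the background color to blue”) into an updated display specification can be sketched as follows. The default values mirror the examples in the text (Arial, 12, white on black); the command phrasing handled and the size increment are assumptions for illustration.

```python
# Illustrative default closed caption display specification, using the
# example values from the text.
DEFAULT_CAPTION_SPEC = {"font": "Arial", "font_size": 12,
                        "font_color": "white", "background_color": "black"}

def apply_caption_command(spec, command):
    """Translate a custom caption voice command into an updated display
    specification, leaving the input specification unchanged."""
    spec = dict(spec)
    command = command.lower()
    if "larger captions" in command:
        spec["font_size"] += 4
    elif "smaller captions" in command:
        spec["font_size"] = max(6, spec["font_size"] - 4)
    else:
        # e.g., "change the background color to blue"
        words = command.split()
        if "background" in words and "to" in words:
            spec["background_color"] = words[words.index("to") + 1]
    return spec
```

The updated specification would then be carried in the next closed caption initiation request sent to the cast device 108.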
In some implementations, the electronic device, the cast device and the designated display device are disposed in proximity to each other, but are located remotely from the cloud cast service system 116, the voice assistance server 112 and the device registry 118. In some implementations, two or more of the cloud cast service system 116, the voice assistance server 112 and the device registry 118 are integrated in a single server. In some implementations, the cloud cast service system 116, the voice assistance server 112 and the device registry 118 are distinct from a content host 114 that provides the media content to the cast device 108 for display on the designated display device 106. In some implementations, the user voice designation of the media output device 106 includes a description of the destination media output device. The cloud cast service server 116 identifies in the registry the destination media output device among a plurality of media output devices according to the description of the destination media output device. In some implementations, the description of the destination media output device includes at least a brand (“Samsung TV”) or a location of the media output device 106 (“my Living Room TV”).

Voice Activated Media Transfer Among Media Output Devices

FIG. 3 is another example operating environment in which cast devices 108 interact with a client device 104, voice-activated electronic devices 190 or a server system of the smart media environment 100 in accordance with some implementations. The smart media environment 100 includes a first cast device 108-1 and a first output device 106-1 coupled to the first cast device. The smart media environment 100 also includes a second cast device 108-2 and a second output device 106-2 coupled to the second cast device. The cast devices 108-1 and 108-2 are optionally located in the same location (e.g., the living room) or two distinct locations (e.g., two rooms) in the smart media environment 100. 
Each of the cast devices 108-1 and 108-2 is configured to obtain media content or Internet content from media hosts 114 for display on the output device 106 coupled to the respective cast device 108-1 or 108-2. Both the first and second cast devices are communicatively coupled to the cloud cast service server 116 and the content hosts 114. The smart media environment 100 further includes one or more voice-activated electronic devices 190 that are communicatively coupled to the cloud cast service server 116 and the voice assistance server 112. In some implementations, the voice-activated electronic devices 190 are disposed independently of the cast devices 108 and the output devices 106. For example, as shown in FIG. 1, the electronic device 190-4 is disposed in a room where no cast device 108 or output device 106 is located. In some implementations, the first electronic device 190-1 is disposed in proximity to the first cast device 108-1 and the first output device 106-1, e.g., the first electronic device 190-1, the first cast device 108-1 and the first output device 106-1 are located in the same room. Optionally, the second electronic device 190-2 is disposed independently of or in proximity to the second cast device 108-2 and the second output device 106-2. When media content is being played on the first output device 106-1, a user may send a voice command to any of the electronic devices 190 to request play of the media content to be transferred to the second output device 106-2. The voice command includes a media play transfer request. In one situation, the user could issue the voice command to the electronic device 190-1 disposed in proximity to the first cast device 108-1 before the user moves to a destination location. Alternatively, in another situation, the user could issue the voice command to the electronic device 190-2 disposed in proximity to the second cast device 108-2 after the user reaches the destination location. 
The voice command is transmitted to the cloud cast service server 116. The cloud cast service server 116 sends a media display information request to the first cast device 108-1 to request instant media play information of the media content that is currently being played on the first output device 106-1 coupled to the first cast device 108-1. The first cast device 108-1 then returns to the cloud cast service server 116 the requested instant play information including at least information of a first media play application (e.g., YouTube), the media content that is currently being played (e.g., “Lady Gaga—National Anthem—Super Bowl 2016”), and a temporal position related to playing of the media content. The second cast device 108-2 then receives a media display request including the instant play information from the cloud cast service server 116, and in accordance with the instant play information, executes the first media play application that controls the second output device 106-2 to play the media content from the temporal location. In a specific example, when a music playlist is played on the first output device 106-1, the user says “Play on my living room speakers.” The first output device 106-1 stops playing the currently played song, and the stopped song resumes on the living room speakers. When the song is completed, the living room speakers continue to play the next song on the music playlist previously played on the first output device 106-1. As such, when the user is moving around in the smart home environment 100, the play of the media content would seamlessly follow the user while only involving limited user intervention (i.e., giving the voice command). 
Such seamless transfer of media content is accomplished according to one or more of the following operations:
- A voice assistant service (e.g., a voice assistance server 112) recognizes that it is a user voice command to transfer media from one output device (source) to another output device (destination);
- The assistant service passes a message including the user voice command to the cloud cast service server 116;
- The cloud cast service server 116 then asks the source output device 106-1 to provide a blob of data that is needed for transferring the media stream;
- The content of the blob of data is partner dependent, but it typically contains the current media content being played, the position within the current media content and the stream volume of the current media content;
- Optionally, the content of the blob of data includes information of a container for the current media content (e.g., the playlist to which the media content belongs), and a position of the current media content within the playlist;
- The cloud cast service server 116 tells the source device to stop playing the media content;
- The cloud cast service server 116 then loads the appropriate receiver application (e.g., media play application) on the destination (i.e., the same receiver application that is running on the source output device);
- The cloud cast service server 116 sends this blob of data to the destination cast device 108-2 along with an instruction to the receiver application to resume transfer of the media content; and
- The receiver application interprets the data blob to resume the media content accordingly.

Specifically, on a server side, a method is implemented by the cloud cast service server 116 for moving play of media content display from a source media output device to a destination media output device. 
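The partner-dependent blob of data described above can be sketched as follows. The field names (`app_id`, `position_sec`, `container`, etc.) are illustrative assumptions; the text deliberately leaves the blob loosely defined so each content partner can choose its own contents.

```python
# Hypothetical shape of the transfer blob a source output device returns
# when the cloud cast service asks for instant media play information.
def make_transfer_blob(app_id, content_id, position_sec, volume,
                       playlist=None, index=None):
    blob = {
        "app_id": app_id,            # receiver application to load on the destination
        "content_id": content_id,    # current media content being played
        "position_sec": position_sec,  # position within the current media content
        "volume": volume,            # stream volume of the current media content
    }
    if playlist is not None:
        # Optional container info: the playlist and the position within it.
        blob["container"] = {"playlist_id": playlist, "index": index}
    return blob

def resume_from_blob(blob):
    """Receiver-side interpretation: resume the same content at the saved
    temporal position."""
    return (blob["content_id"], blob["position_sec"])
```

Because the cloud cast service treats the blob as opaque, only the sending and receiving receiver applications need to agree on its contents, which matches the flexibility and data-security points made below.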
The cloud cast service server 116 receives a voice message recorded by an electronic device 190-1 or 190-2, and determines that the voice message includes a media transfer request. As explained above, the electronic device could be disposed in proximity to the source cast device 108-1 located at a first location, in proximity to the destination cast device 108-2 located at a second location, or independently of both the source and destination cast devices. In some implementations, the electronic devices 190, the source cast device 108-1 and the destination cast device 108-2 are associated with a user account in a user domain managed by the cloud cast service server 116. The user account could be a Google user account. The media transfer request in the user voice command includes a user voice command to transfer media content that is being played to a destination media output device 106-2 and a user voice designation of the destination media output device 106-2. In some implementations, after receiving the voice message recorded by an electronic device 190-1 or 190-2, the cloud cast service server 116 forwards the voice message to a voice assistance server 112 that parses the voice message and identifies the user voice command and the voice designation of the destination media output device, and receives from the voice assistance server 112 the user voice command and the voice designation of the destination media output device 106-2. The cloud cast service server 116 obtains from a source cast device 108-1 instant media play information of the media content that is currently being played. The instant play information includes at least information of a first media play application, the media content that is currently being played, and a temporal position related to playing of the media content. The temporal position could be recorded when the user requests the move of the media content to the destination output device 106-2. 
In some implementations, the cloud cast service server 116 identifies that the media content is currently being played at the source media output device 106-1. The cloud cast service server 116 identifies in the device registry 118 the source cast device 108-1 associated in the user domain with the electronic device 190 and coupled to the source media output device 106-1. Then, the cloud cast service server 116 sends a media information request to the source cast device 108-1, and thereby receives the instant media play information from the source cast device 108-1. In accordance with the voice designation of the destination media output device, the cloud cast service server 116 identifies in a device registry 118 a destination cast device 108-2 associated in a user domain with the electronic device and coupled to the destination media output device 106-2. The destination cast device 108-2 is configured to execute one or more media play applications for controlling the destination media output device 106-2 to play media content received from one or more media content hosts 114. In some implementations, the user voice designation of the destination media output device 106-2 includes a description of the destination media output device 106-2 (e.g., a brand and a location of the output device 106-2). The cloud cast service server 116 identifies in the registry 118 the destination media output device 106-2 among a plurality of media output devices according to the description of the destination media output device 106-2. Thus, the user does not have to provide an accurate device identification that matches the record in the device registry 118, and the cloud cast service server 116 can determine the destination media output device 106-2 based on the description of the destination media output device 106-2. 
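Matching a spoken description (a brand and/or a location such as “my Living Room TV”) against registry records, without requiring an exact device identifier, can be sketched with a simple word-overlap score. This is an illustrative heuristic only; the registry field names and the scoring are assumptions, not the actual matching logic of the device registry 118.

```python
# Hypothetical description-to-registry matching: pick the registry entry
# whose brand/location words best overlap the spoken description.
def find_destination_device(registry, description):
    words = set(description.lower().split())
    best, best_score = None, 0
    for device in registry:
        fields = (device.get("brand", "") + " " +
                  device.get("location", "")).lower().split()
        score = len(words & set(fields))  # count of shared words
        if score > best_score:
            best, best_score = device, score
    return best  # None if nothing matches at all
```

With a registry containing a Samsung TV in the living room and an LG TV in the bedroom, the description “my Living Room TV” selects the former even though the user never gave its exact identifier.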
After obtaining the instant play information and identifying the destination cast device 108-2, the cloud cast service server 116 sends to the destination cast device 108-2 a media play request including the instant media play information, thereby enabling the destination cast device 108-2 to execute the first media play application that controls the destination media output device 106-2 to play the media content from the temporal location. In some implementations, in accordance with the user voice command, the cloud cast service server 116 also sends a media stop request to the source cast device 108-1, thereby enabling the source cast device 108-1 to execute the first media play application that controls the source media output device 106-1 coupled thereto to forgo the play of the media content on the source media output device 106-1. This media transfer method abstracts the data needed to transfer a media stream away from the service and places it directly with the streaming service provider so they can define the parameters (e.g., a Google cast protocol) needed to transfer the stream currently playing. This keeps the design of this invention very flexible to accommodate any type of media partner or media stream. Additionally, it leverages cloud infrastructure (via the cloud cast service) to transfer messages and coordinate playback between the source and destination devices. This allows this transfer to occur without these cast devices having any knowledge of each other or being on the same wireless local area network. Media transfer via the cloud cast service server 116 also enables scalability, flexibility and data security. The blob of data needed to transfer media is specifically loosely defined to accommodate the number of content provider partners and the number of stream types. Streams may be individual tracks, playlists, live streams, advertisements, auto playing videos and many other content formats. 
Keeping the data blob flexible and partner dependent allows a single method to work for all types of media streams. Further, by having the cloud cast service independently connect with the source and destination cast devices, there is no requirement for these devices to be connected to each other, be on the same WLAN or have knowledge of each other. In addition, there is no disintermediation by the CCS. The data being sent between the receiver applications on the source and the destination is opaque to the cloud cast service server 116. This allows confidential details about the transferred media session to stay with the partner who employs the cloud cast service.

Physical Features of a Voice-Activated Electronic Device

FIGS. 4A and 4B are a front view 400 and a rear view 420 of a voice-activated electronic device 190 in accordance with some implementations. The electronic device 190 is designed to be warm and inviting, and fits naturally in many areas of a home. The electronic device 190 includes one or more microphones 402 and an array of full color LEDs 404. The full color LEDs 404 could be hidden under a top surface of the electronic device 190 and invisible to the user when they are not lit. In some implementations, the array of full color LEDs 404 is physically arranged in a ring. Further, the rear side of the electronic device 190 optionally includes a power supply connector 408 configured to couple to a power supply. In some implementations, the electronic device 190 presents a clean look having no visible button, and the interaction with the electronic device 190 is based on voice and touch gestures. Alternatively, in some implementations, the electronic device 190 includes a limited number of physical buttons (e.g., a button 406 on its rear side), and the interaction with the electronic device 190 is further based on a press of the button in addition to the voice and touch gestures. One or more speakers are disposed in the electronic device 190. FIG. 
4C is a perspective view 440 of a voice-activated electronic device 190 that shows speakers 422 contained in a base 410 of the electronic device 190 in an open configuration in accordance with some implementations. FIGS. 4D and 4E are a side view 450 and an expanded view 460 of a voice-activated electronic device 190 that shows electronic components contained therein in accordance with some implementations, respectively. The electronic device 190 includes an array of full color LEDs 404, one or more microphones 402, a speaker 422, dual-band WiFi 802.11ac radio(s), a Bluetooth LE radio, an ambient light sensor, a USB port, a processor and memory storing at least one program for execution by the processor. Further, in some implementations, the electronic device 190 includes a touch sense array 424 configured to detect touch events on the top surface of the electronic device 190. The touch sense array 424 is disposed and concealed under the top surface of the electronic device 190. In some implementations, the touch sense array 424 is arranged on a top surface of a circuit board including an array of via holes, and the full color LEDs are disposed within the via holes of the circuit board. When the circuit board is positioned immediately under the top surface of the electronic device 190, both the full color LEDs 404 and the touch sense array 424 are disposed immediately under the top surface of the electronic device 190 as well. FIGS. 4F(1)-4F(4) show four touch events detected on a touch sense array 424 of a voice-activated electronic device 190 in accordance with some implementations. Referring to FIGS. 4F(1) and 4F(2), the touch sense array 424 detects a rotational swipe on a top surface of the voice-activated electronic device 190. 
In response to detection of a clockwise swipe, the voice-activated electronic device 190 increases a volume of its audio outputs, and in response to detection of a counterclockwise swipe, the voice-activated electronic device 190 decreases the volume of its audio outputs. Referring to FIG. 4F(3), the touch sense array 424 detects a single tap touch on the top surface of the voice-activated electronic device 190. In response to detection of a first tap touch, the voice-activated electronic device 190 implements a first media control operation (e.g., plays specific media content), and in response to detection of a second tap touch, the voice-activated electronic device 190 implements a second media control operation (e.g., pauses the specific media content that is currently being played). Referring to FIG. 4F(4), the touch sense array 424 detects a double tap touch (e.g., two consecutive touches) on the top surface of the voice-activated electronic device 190. The two consecutive touches are separated by a duration of time less than a predetermined length. However, when they are separated by a duration of time greater than the predetermined length, the two consecutive touches are regarded as two single tap touches. In response to detection of the double tap touch, the voice-activated electronic device 190 initiates a hot word detection state in which the electronic device 190 listens to and recognizes one or more hot words (e.g., predefined key words). Until the electronic device 190 recognizes the hot words, the electronic device 190 does not send any audio inputs to the voice assistance server 112 or the cloud cast service server 116. In some implementations, the array of full color LEDs 404 is configured to display a set of visual patterns in accordance with an LED design language, indicating detection of a clockwise swipe, a counterclockwise swipe, a single tap or a double tap on the top surface of the voice-activated electronic device 190. 
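The tap-classification rule above (two touches closer together than a predetermined length form a double tap; otherwise each counts as a single tap) can be sketched as follows. The 0.5 s threshold and all names are assumptions, not values from the specification.

```python
# Illustrative tap classifier for the touch sense array behaviour described
# above. The threshold is an assumed value for the "predetermined length".

DOUBLE_TAP_WINDOW_S = 0.5

def classify_taps(timestamps, window=DOUBLE_TAP_WINDOW_S):
    """Group touch timestamps (seconds, ascending) into tap events."""
    events = []
    i = 0
    while i < len(timestamps):
        if i + 1 < len(timestamps) and timestamps[i + 1] - timestamps[i] < window:
            events.append("double_tap")   # e.g. enter hot word detection state
            i += 2
        else:
            events.append("single_tap")   # e.g. play/pause media control
            i += 1
    return events
```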
For example, the array of full color LEDs 404 may light up sequentially to track the clockwise or counterclockwise swipe as shown in FIGS. 4F(1) and 4F(2), respectively. More details on visual patterns associated with voice processing states of the electronic device 190 are explained below with reference to FIGS. 4G and 4H(1)-4H(8). FIG. 4F(5) shows an example user touch or press on a button 406 on a rear side of the voice-activated electronic device 190 in accordance with some implementations. In response to a first user touch or press on the button 406, microphones of the electronic device 190 are muted, and in response to a second user touch or press on the button 406, the microphones of the electronic device 190 are activated.
LED Design Language for Visual Affordance of Voice User Interface
In some implementations, given simplicity and low cost of the electronic device 190, the electronic device 190 includes an array of full color light emitting diodes (LEDs) rather than a full display screen. An LED design language is adopted to configure illumination of the array of full color LEDs and enable different visual patterns indicating different voice processing states of the electronic device 190. The LED design language consists of a grammar of colors, patterns, and specific motion applied to a fixed set of full color LEDs. The elements in the language are combined to visually indicate specific device states during the use of the electronic device 190. In some implementations, illumination of the full color LEDs aims to clearly delineate the passive listening and active listening states of the electronic device 190 among other important states. Placement of the full color LEDs complies with physical constraints of the electronic device 190, and the array of full color LEDs could be used in a speaker that is made by a third party original equipment manufacturer (OEM) based on specific technology (e.g., Google Assistant). 
When the array of full color LEDs is used in a speaker that is made by a third party OEM based on specific technology, the full color LEDs and the LED design language are configured to fit a corresponding physical user interface of the OEM speaker. In this situation, device states of the OEM speaker remain the same, while specific visual patterns representing the device states could be varied (for example, the colors of the full color LEDs could be different but are displayed with similar animation effects). In a voice-activated electronic device 190, passive listening occurs when the electronic device 190 processes audio inputs collected from its surrounding environment but does not store the audio inputs or transmit the audio inputs to any remote server. In contrast, active listening occurs when the electronic device 190 stores the audio inputs collected from its surrounding environment and/or shares the audio inputs with a remote server. In accordance with some implementations of this application, the electronic device 190 only passively listens to the audio inputs in its surrounding environment without breaching privacy of users of the electronic device 190. FIG. 4G is a top view of a voice-activated electronic device 190 in accordance with some implementations, and FIG. 4H shows six example visual patterns displayed by an array of full color LEDs for indicating voice processing states in accordance with some implementations. In some implementations, the electronic device 190 does not include any display screen, and the full color LEDs 404 provide a simple and low cost visual user interface compared with a full display screen. The full color LEDs could be hidden under a top surface of the electronic device and invisible to the user when they are not lit. Referring to FIGS. 4G and 4H, in some implementations, the array of full color LEDs 404 is physically arranged in a ring. For example, as shown in FIG. 
4H(6), the array of full color LEDs 404 may light up sequentially to track the clockwise or counterclockwise swipe as shown in FIGS. 4F(1) and 4F(2), respectively. A method is implemented at the electronic device 190 for visually indicating a voice processing state. The electronic device 190 collects via the one or more microphones 402 audio inputs from an environment in proximity to the electronic device, and processes the audio inputs. The processing includes one or more of identifying and responding to voice inputs from a user in the environment. The electronic device 190 determines a state of the processing from among a plurality of predefined voice processing states. For each of the full color LEDs 404, the electronic device 190 identifies a respective predetermined LED illumination specification associated with the determined voice processing state. The illumination specification includes one or more of an LED illumination duration, pulse rate, duty cycle, color sequence and brightness. In some implementations, the electronic device 190 determines that the voice processing state is associated with one of a plurality of users, and identifies the predetermined LED illumination specifications of the full color LEDs 404 by customizing at least one of the predetermined LED illumination specifications (e.g., the color sequence) of the full color LEDs 404 according to an identity of the one of the plurality of users. Further, in some implementations, in accordance with the determined voice processing state, the colors of the full color LEDs include a predetermined set of colors. For example, referring to FIGS. 4H(2), 4H(4) and 4H(7)-(10), the predetermined set of colors includes Google brand colors including blue, green, yellow and red, and the array of full color LEDs is divided into four quadrants each associated with one of the Google brand colors. 
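The per-state specification lookup and per-user customization described above might be organized as below. The state names, spec values, and user-to-color mapping are all illustrative assumptions; only the attribute names (duration, pulse rate, duty cycle, color sequence, brightness) come from the text.

```python
# Sketch of the per-state LED illumination lookup described above.
# All concrete values are placeholders, not taken from the specification.

BASE_SPECS = {
    "listening": {"duration_ms": 2000, "pulse_hz": 0.5, "duty_cycle": 0.8,
                  "color_sequence": ["white"], "brightness": 0.6},
    "thinking":  {"duration_ms": 1000, "pulse_hz": 2.0, "duty_cycle": 0.5,
                  "color_sequence": ["white"], "brightness": 1.0},
}

# Hypothetical per-user customization: the color sequence is overridden
# according to the identity of the recognized user.
USER_COLORS = {"alice": ["blue", "green"], "bob": ["yellow", "red"]}

def illumination_spec(state, user=None):
    """Return the LED illumination specification for a voice processing state,
    customized for the identified user when one is associated with the state."""
    spec = dict(BASE_SPECS[state])
    if user in USER_COLORS:
        spec["color_sequence"] = USER_COLORS[user]
    return spec
```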
In accordance with the identified LED illumination specifications of the full color LEDs, the electronic device 190 synchronizes illumination of the array of full color LEDs to provide a visual pattern indicating the determined voice processing state. In some implementations, the visual pattern indicating the voice processing state includes a plurality of discrete LED illumination pixels. In some implementations, the visual pattern includes a start segment, a loop segment and a termination segment. The loop segment lasts for a length of time associated with the LED illumination durations of the full color LEDs and is configured to match a length of the voice processing state. In some implementations, the electronic device 190 has more than twenty different device states (including the plurality of predefined voice processing states) that are represented by the LED design language. Optionally, the plurality of predefined voice processing states includes one or more of a hot word detection state, a listening state, a thinking state and a responding state.
1. Hot Word Detection State and Listening State
In some implementations, the electronic device 190 listens to and recognizes one or more hot words (e.g., predefined key words) in the hot word detection state. Until the electronic device 190 recognizes the hot words, the electronic device 190 does not send any audio inputs to the voice assistance server 112 or the cloud cast service server 116. Upon the detection of the hot words, the electronic device 190 starts to operate in the listening state when the microphones record audio inputs that are further transmitted to the cloud for further processing. In the listening mode, the audio inputs starting from a predetermined temporal position (e.g., two seconds before detection of the hot word) are transmitted to the voice assistance server 112 or the cloud cast service server 116, thereby facilitating seamless queries for a more natural conversation-like flow. 
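The pre-roll behaviour just described — keeping a short rolling buffer while passively listening so that transmission can start about two seconds before the hot word was detected — can be sketched with a bounded deque. Frame size and class names are assumptions.

```python
from collections import deque

# Sketch of the pre-roll buffer described above. The device retains the most
# recent ~2 s of audio frames; on hot-word detection, those buffered frames
# are sent upstream ahead of the live audio. Frame length is an assumption.

FRAME_MS = 100
PREROLL_MS = 2000  # assumed "predetermined temporal position" before the hot word

class PrerollBuffer:
    def __init__(self, preroll_ms=PREROLL_MS, frame_ms=FRAME_MS):
        # A bounded deque automatically discards frames older than the window.
        self._frames = deque(maxlen=preroll_ms // frame_ms)

    def push(self, frame):
        """Called for every captured frame during passive listening."""
        self._frames.append(frame)

    def flush_on_hotword(self):
        """On hot-word detection, return the buffered frames for transmission."""
        frames = list(self._frames)
        self._frames.clear()
        return frames
```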
Accordingly, in some implementations, in accordance with a determination that the determined voice processing state is a hot word detection state that occurs when one or more predefined hot words are detected, the array of full color LEDs is divided into a plurality of diode groups that are alternately arranged and configured to be lit sequentially, and diodes in each of the plurality of diode groups are lit with different colors. Further, in some implementations, in accordance with a determination that the determined voice processing state is a listening state that occurs when the electronic device is actively receiving the voice inputs from the environment and providing received voice inputs to a remote server, all full color LEDs are lit up with a single color, and each full color LED illuminates with different and varying brightness. As shown in FIGS. 4H(1), (3) and (5), the visual pattern could be configured to be consistent with human reactions (e.g., breathing, flickering, blinking, and swiping) associated with the voice processing state. For example, in one of the most impactful places to use the Google brand colors, the attentive wake-up spin followed by the gentle breathing animation signals patient, eager, yet respectful listening. The colors themselves conjure a sense of brand and embodiment of the Google voice assistant. These elements contrast with the dead front of the device to show very clear recording and not-recording states.
2. Thinking Mode or Working Mode
Specifically, in some implementations, in accordance with a determination that the voice processing state is a thinking state that occurs when the electronic device is processing the voice inputs received from the user, an increasing number of RGB diodes are lit up during a first illumination cycle of the LED illumination duration, and a decreasing number of RGB diodes are lit up during a second illumination cycle following the first illumination cycle. 
Such a visual pattern is consistent with a human reaction that a person is thinking. Optionally, the microphones 402 are turned off in the thinking mode. Referring to FIGS. 4H(3), 4H(5) and 4H(6), motion most similar to progress bars and other types of digital waiting signals is used in the visual pattern to indicate the thinking mode. In some implementations, white is used with the chasing animation. Brand colors are intentionally not used here to provide better contrast and distinction with respect to the other voice processing states.
3. Responding Mode or Speaking Mode
Alternatively, in some implementations, in accordance with a determination that the voice processing state is a responding state that occurs when the electronic device broadcasts a voice message in response to the voice inputs received from the user, a subset of the full color LEDs are lit up with a single color of distinct and varying brightness, and variation of the brightness of each of the subset of the full color LEDs is consistent with a voice speed associated with the voice inputs from the user. In some implementations, the speaking mode is where the voice assistant shows its chops. A set of colors (e.g., the Google brand colors) are used in the visual pattern, such that the full color LEDs visually signify closure to the voice query, i.e., that the question has been answered.
Individual Devices Involved in the Smart Media Environment
FIG. 5 is a block diagram illustrating an example electronic device 190 that is applied as a voice interface to collect user voice commands in a smart media environment 100 in accordance with some implementations. The electronic device 190, typically, includes one or more processing units (CPUs) 502, one or more network interfaces 504, memory 506, and one or more communication buses 508 for interconnecting these components (sometimes called a chipset). 
The electronic device 190 includes one or more input devices 510 that facilitate user input, such as the button 406, the touch sense array and the one or more microphones 402 shown in FIGS. 4A-4H. The electronic device 190 also includes one or more output devices 512, including one or more speakers 422 and the array of full color LEDs 404. Memory 506 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and, optionally, includes non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid state storage devices. Memory 506, optionally, includes one or more storage devices remotely located from one or more processing units 502. Memory 506, or alternatively the non-volatile memory within memory 506, includes a non-transitory computer readable storage medium. In some implementations, memory 506, or the non-transitory computer readable storage medium of memory 506, stores the following programs, modules, and data structures, or a subset or superset thereof: - - Operating system 516 including procedures for handling various basic system services and for performing hardware dependent tasks; - Network communication module 518 for connecting the electronic device 190 to other devices (e.g., the server system 140, the cast device 108, the client device 104, the smart home devices 120 and the other electronic device(s) 190) via one or more network interfaces 504 (wired or wireless) and one or more networks 110, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on; - Input/output control module for receiving inputs via one or more input devices 510 and enabling presentation of information at the electronic device 190 via one or more output devices 512, including: - Voice processing module 522 for processing audio inputs or voice 
messages collected in an environment surrounding the electronic device 190, or preparing the collected audio inputs or voice messages for processing at a voice assistance server 112 or a cloud cast service server 116; - LED control module 524 for generating visual patterns on the full color LEDs 404 according to device states of the electronic device 190; and - Touch sense module 526 for sensing touch events on a top surface of the electronic device 190; and - Voice activated device data 530 storing at least data associated with the electronic device 190, including: - Voice device settings 532 for storing information associated with the electronic device 190 itself, including common device settings (e.g., service tier, device model, storage capacity, processing capabilities, communication capabilities, etc.), information of a user account in a user domain, and display specifications 536 associated with one or more visual patterns displayed by the full color LEDs; and - Voice control data 534 for storing audio signals, voice messages, response messages and other data related to voice interface functions of the electronic device 190. Specifically, the display specifications 536 associated with one or more visual patterns displayed by the full color LEDs include predetermined LED illumination specifications associated with each of the one or more visual patterns. For each of the full color LEDs, the illumination specifications include one or more of an LED illumination duration, pulse rate, duty cycle, color sequence and brightness associated with the respective visual pattern. Each visual pattern corresponds to at least one voice processing state. Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. 
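One possible in-memory shape for the display specifications 536 just described is sketched below; the field names mirror the listed attributes, but the concrete types and the grouping into two classes are assumptions.

```python
from dataclasses import dataclass

# Hypothetical data layout for the display specifications 536 described
# above; structure and field types are assumptions, not from the text.

@dataclass
class LedIlluminationSpec:
    """Per-LED illumination attributes named in the specification."""
    duration_ms: int
    pulse_rate_hz: float
    duty_cycle: float
    color_sequence: list
    brightness: float

@dataclass
class DisplaySpecification:
    """One visual pattern, mapped to the voice processing state(s) it indicates."""
    pattern_name: str
    states: list
    per_led_specs: list  # one LedIlluminationSpec per full color LED
```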
The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, modules or data structures, and thus various subsets of these modules may be combined or otherwise re-arranged in various implementations. In some implementations, memory 506, optionally, stores a subset of the modules and data structures identified above. Furthermore, memory 506, optionally, stores additional modules and data structures not described above. FIG. 6 is a block diagram illustrating an example cast device 108 that is applied for automatic control of display of media content in a smart media environment 100 in accordance with some implementations. The cast device 108, typically, includes one or more processing units (CPUs) 602, one or more network interfaces 604, memory 606, and one or more communication buses 608 for interconnecting these components (sometimes called a chipset). Memory 606 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and, optionally, includes non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid state storage devices. Memory 606, optionally, includes one or more storage devices remotely located from one or more processing units 602. Memory 606, or alternatively the non-volatile memory within memory 606, includes a non-transitory computer readable storage medium. 
In some implementations, memory 606, or the non-transitory computer readable storage medium of memory 606, stores the following programs, modules, and data structures, or a subset or superset thereof: - - Operating system 616 including procedures for handling various basic system services and for performing hardware dependent tasks; - Network communication module 618 for connecting the cast device 108 to other computers or systems (e.g., the server system 140, the smart home devices 120 and the client device 104) via one or more network interfaces 604 (wired or wireless) and one or more networks 110, such as the Internet, other wide area networks, local area networks, metropolitan area networks, cable television systems, satellite television systems, IPTV systems, and so on; - Content decoding module 620 for decoding content signals received from one or more content sources 114 and outputting the content in the decoded signals to an output display device 106 coupled to the cast device 108; - Automatic media display module 624 including one or more media play applications 624 for controlling media display, e.g., causing media output to the output device 106 according to instant media play information received from a cloud cast service server 116; and - cast device data 626 storing at least data associated with automatic control of media display (e.g., in an automatic media output mode and a follow-up mode), including: - Cast device settings 628 for storing information associated with user accounts of a cast device application, including one or more of account access information, information for device settings (e.g., service tier, device model, storage capacity, processing capabilities, communication capabilities, etc.), and information for automatic media display control; and - Media player application settings 630 for storing information associated with user accounts of one or more media player applications, including one or more of account access information, 
user preferences of media content types, review history data, and information for automatic media display control. Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, modules or data structures, and thus various subsets of these modules may be combined or otherwise re-arranged in various implementations. In some implementations, memory 606, optionally, stores a subset of the modules and data structures identified above. Furthermore, memory 606, optionally, stores additional modules and data structures not described above. FIG. 7 is a block diagram illustrating an example server in the server system 140 of a smart media environment 100 in accordance with some implementations. An example server is one of a cloud cast service server 116. The server 140, typically, includes one or more processing units (CPUs) 702, one or more network interfaces 704, memory 706, and one or more communication buses 708 for interconnecting these components (sometimes called a chipset). The server 140 could include one or more input devices 710 that facilitate user input, such as a keyboard, a mouse, a voice-command input unit or microphone, a touch screen display, a touch-sensitive input pad, a gesture capturing camera, or other input buttons or controls. Furthermore, the server 140 could use a microphone and voice recognition or a camera and gesture recognition to supplement or replace the keyboard. In some implementations, the server 140 includes one or more cameras, scanners, or photo sensor units for capturing images, for example, of graphic series codes printed on the electronic devices. 
The server 140 could also include one or more output devices 712 that enable presentation of user interfaces and display content, including one or more speakers and/or one or more visual displays. Memory 706 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and, optionally, includes non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid state storage devices. Memory 706, optionally, includes one or more storage devices remotely located from one or more processing units 702. Memory 706, or alternatively the non-volatile memory within memory 706, includes a non-transitory computer readable storage medium. In some implementations, memory 706, or the non-transitory computer readable storage medium of memory 706, stores the following programs, modules, and data structures, or a subset or superset thereof: - - Operating system 716 including procedures for handling various basic system services and for performing hardware dependent tasks; - Network communication module 718 for connecting the server system 140 to other devices (e.g., various servers in the server system 140, the client device 104, the cast device 108, and the smart home devices 120) via one or more network interfaces 704 (wired or wireless) and one or more networks 110, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on; - User interface module 720 for enabling presentation of information (e.g., a graphical user interface for presenting application(s) 826-830, widgets, websites and web pages thereof, and/or games, audio and/or video content, text, etc.) 
at the client device 104; - Command execution module 721 for execution on the server side (e.g., games, social network applications, smart home applications, and/or other web or non-web based applications for controlling the client device 104, the cast devices 108, the electronic device 190 and the smart home devices 120 and reviewing data captured by such devices), including one or more of: - a cast device application 722 that is executed to provide server-side functionalities for device provisioning, device control, and user account management associated with cast device(s) 108; - one or more media player applications 724 that are executed to provide server-side functionalities for media display and user account management associated with corresponding media sources; - one or more smart home device applications 726 that are executed to provide server-side functionalities for device provisioning, device control, data processing and data review of corresponding smart home devices 120; and - a voice assistance application 728 that is executed to arrange voice processing of a voice message received from the electronic device 190 or directly process the voice message to extract a user voice command and a designation of a cast device 108 or another electronic device 190; and - Server system data 730 storing at least data associated with automatic control of media display (e.g., in an automatic media output mode and a follow-up mode), including one or more of: - Client device settings 732 for storing information associated with the client device 104, including common device settings (e.g., service tier, device model, storage capacity, processing capabilities, communication capabilities, etc.), and information for automatic media display control; - Cast device settings 734 for storing information associated with user accounts of the cast device application 722, including one or more of account access information, information for device settings (e.g., service tier, device 
model, storage capacity, processing capabilities, communication capabilities, etc.), and information for automatic media display control; - Media player application settings 736 for storing information associated with user accounts of one or more media player applications 724, including one or more of account access information, user preferences of media content types, review history data, and information for automatic media display control; - Smart home device settings 738 for storing information associated with user accounts of the smart home device applications 726, including one or more of account access information, information for one or more smart home devices 120 (e.g., service tier, device model, storage capacity, processing capabilities, communication capabilities, etc.); and - Voice assistance data 740 for storing information associated with user accounts of the voice assistance application 728, including one or more of account access information, information for one or more electronic devices 190 (e.g., service tier, device model, storage capacity, processing capabilities, communication capabilities, etc.). When the server 140 includes a cloud cast service server 116, memory 706, or the non-transitory computer readable storage medium of memory 706, stores the following programs, modules, and data structures, or a subset or superset thereof: - - Device registration module 750 for managing the device registry 118 coupled to the cloud cast service server 116; - Cloud cast application 760 for relaying a user voice command identified in a voice message to one or more of the cast device(s) 108, the electronic device(s) 190 and the smart home device(s) 120 that are coupled in a cloud cast user domain; and - Status reporting module 770 for maintaining the states of the cast device(s) 108, the electronic device(s) 190 and the smart home device(s) 120 that are coupled in a cloud cast user domain. 
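The device registration and status reporting roles just listed might be organized as below. This is a minimal sketch under assumed names; the actual device registry 118 schema is not given in the text.

```python
# Hypothetical minimal shape for the device registry and status reporting
# behaviour described above; all identifiers are illustrative assumptions.

class DeviceRegistry:
    """Tracks devices coupled in a cloud cast user domain and their states."""

    def __init__(self):
        self._devices = {}  # device_id -> {"user_domain": ..., "state": ...}

    def register(self, device_id, user_domain):
        """Device registration: couple a device to a cloud cast user domain."""
        self._devices[device_id] = {"user_domain": user_domain, "state": "idle"}

    def report_state(self, device_id, state):
        """Status reporting: record the latest state of a registered device."""
        self._devices[device_id]["state"] = state

    def devices_in_domain(self, user_domain):
        """Devices to which a voice command in this user domain may be relayed."""
        return sorted(d for d, info in self._devices.items()
                      if info["user_domain"] == user_domain)
```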
Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, modules or data structures, and thus various subsets of these modules may be combined or otherwise re-arranged in various implementations. In some implementations, memory 706, optionally, stores a subset of the modules and data structures identified above. Furthermore, memory 706, optionally, stores additional modules and data structures not described above. FIG. 8 is a block diagram illustrating an example client device 104 that is applied for automatic control of media display in a smart media environment 100 in accordance with some implementations. Examples of the client device include, but are not limited to, a mobile phone, a tablet computer and a wearable personal device. The client device 104, typically, includes one or more processing units (CPUs) 802, one or more network interfaces 804, memory 806, and one or more communication buses 808 for interconnecting these components (sometimes called a chipset). The client device 104 includes one or more input devices 810 that facilitate user input, such as a keyboard, a mouse, a voice-command input unit or microphone, a touch screen display, a touch-sensitive input pad, a gesture capturing camera, or other input buttons or controls. Furthermore, some of the client devices 104 use a microphone and voice recognition or a camera and gesture recognition to supplement or replace the keyboard. In some implementations, the client device 104 includes one or more cameras, scanners, or photo sensor units for capturing images, for example, of graphic series codes printed on the electronic devices. 
The client device 104 also includes one or more output devices 812 that enable presentation of user interfaces and display content, including one or more speakers and/or one or more visual displays. Optionally, the client device 104 includes a location detection device 814, such as a GPS (global positioning satellite) or other geo-location receiver, for determining the location of the client device 104. Memory 806 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and, optionally, includes non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid state storage devices. Memory 806, optionally, includes one or more storage devices remotely located from one or more processing units 802. Memory 806, or alternatively the non-volatile memory within memory 806, includes a non-transitory computer readable storage medium. 
In some implementations, memory 806, or the non-transitory computer readable storage medium of memory 806, stores the following programs, modules, and data structures, or a subset or superset thereof: - - Operating system 816 including procedures for handling various basic system services and for performing hardware dependent tasks; - Network communication module 818 for connecting the client device 104 to other devices (e.g., the server system 140, the cast device 108, the electronic device 190, the smart home devices 120 and the other client devices 104) via one or more network interfaces 804 (wired or wireless) and one or more networks 110, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on; - User interface module 820 for enabling presentation of information (e.g., a graphical user interface for presenting application(s) 826-830, widgets, websites and web pages thereof, and/or games, audio and/or video content, text, etc.) at the client device 104 via one or more output devices 812 (e.g., displays, speakers, etc.); - Input processing module 822 for detecting one or more user inputs or interactions from one of the one or more input devices 810 and interpreting the detected input or interaction; - Web browser module 824 for navigating, requesting (e.g., via HTTP), and displaying websites and web pages thereof, including a web interface for logging into a user account associated with a cast device 108, an electronic device 190, a media application or a smart home device 120, controlling the cast device 108, the electronic device 190 or the smart home device 120 if associated with the user account, and editing and reviewing settings and data that are associated with the user account; - One or more applications for execution by the client device (e.g., games, social network applications, smart home applications, and/or other web or non-web based applications for controlling the cast devices 108, the electronic 
device 190 and/or the smart home devices 120 and reviewing data captured by such devices), including one or more of: - a cast device application 826 that is executed to provide client-side functionalities for device provisioning, device control, and user account management associated with cast device(s) 108; - a voice activated device application 827 that is executed to provide client-side functionalities for device provisioning, device control, and user account management associated with electronic device 190; - one or more media player applications 828 that is executed to provide client-side functionalities for media display and user account management associated with corresponding media sources; and - one or more smart home device applications 830 that is executed to provide client-side functionalities for device provisioning, device control, data processing and data review of corresponding smart home devices 120; and - client data 832 storing at least data associated with automatic control of media display (e.g., in an automatic media output mode or a follow-up mode), including: - Client device settings 834 for storing information associated with the client device 104 itself, including common device settings (e.g., service tier, device model, storage capacity, processing capabilities, communication capabilities, etc.), and information for automatic media display control; - Cast device settings 836 for storing information associated with user accounts of the cast device application 826, including one or more of account access information, information for device settings (e.g., service tier, device model, storage capacity, processing capabilities, communication capabilities, etc.), and information for automatic media display control; - Media player application settings 838 for storing information associated with user accounts of one or more media player applications 828, including one or more of account access information, user preferences of media content types, 
review history data, and information for automatic media display control; - Smart home device settings 840 for storing information associated with user accounts of the smart home applications 830, including one or more of account access information, information for smart home device settings (e.g., service tier, device model, storage capacity, processing capabilities, communication capabilities, etc.); and - Voice activated device settings 842 for storing information associated with user accounts of the voice activated device application 827, including one or more of account access information, information for electronic device settings (e.g., service tier, device model, storage capacity, processing capabilities, communication capabilities, etc.). In some implementations, each of the cast device application 826, the voice activated device application 827, the media player applications 828 and the smart home device applications 830 causes display of a respective user interface on the output device 812 of the client device 104. In some implementations, user accounts of a user associated with the cast device application 826, the voice activated device application 827, the media player applications 828 and the smart home device applications 830 are linked to a single cloud cast service account. The user may use the cloud cast service account information to log onto all of the cast device application 826, the voice activated device application 827, the media player applications 828 and the smart home device applications 830. In some implementations, the memory 806, or the non-transitory computer readable storage medium of memory 806, stores a cloud cast application 844 that is executed to provide client-side functionalities for function control and user account management associated with the cast device 108, the smart home device 120 and the electronic device 190 that are linked to the same cloud cast service account (e.g., a Google user account). 
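The nested client data 832 described above, with its per-application settings maps (834-842), can be pictured with a short sketch. This is a minimal illustration only; the dataclass names and field choices below are assumptions made for clarity, since the specification does not prescribe any concrete layout.

```python
# Illustrative sketch of the nested client data structure (832) described
# above. All class and field names are assumptions, not part of the
# specification.
from dataclasses import dataclass, field

@dataclass
class DeviceSettings:
    """Common device settings shared by entries such as 834/836/840/842."""
    service_tier: str = "free"
    device_model: str = ""
    storage_capacity_gb: int = 0

@dataclass
class MediaPlayerSettings:
    """Per-application settings, in the spirit of entry 838."""
    account_access_token: str = ""
    preferred_content_types: list = field(default_factory=list)
    review_history: list = field(default_factory=list)

@dataclass
class ClientData:
    """Top-level container corresponding to client data 832."""
    client_device: DeviceSettings = field(default_factory=DeviceSettings)
    cast_device: DeviceSettings = field(default_factory=DeviceSettings)
    # media player application name -> MediaPlayerSettings
    media_players: dict = field(default_factory=dict)

data = ClientData()
data.media_players["example_player"] = MediaPlayerSettings(
    preferred_content_types=["music", "news"])
print(sorted(data.media_players))  # -> ['example_player']
```

The point of the sketch is only that each application keeps its own settings map inside one per-client container, mirroring how the specification groups settings 834 through 842 under client data 832.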
Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, modules or data structures, and thus various subsets of these modules may be combined or otherwise re-arranged in various implementations. In some implementations, memory 806, optionally, stores a subset of the modules and data structures identified above. Furthermore, memory 806, optionally, stores additional modules and data structures not described above. FIG. 9 is a block diagram illustrating an example smart home device 120 in a smart media environment 100 in accordance with some implementations. The smart home device 120, typically, includes one or more processing units (CPUs) 902, one or more network interfaces 904, memory 906, and one or more communication buses 908 for interconnecting these components (sometimes called a chipset). Memory 906 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and, optionally, includes non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid state storage devices. Memory 906, optionally, includes one or more storage devices remotely located from one or more processing units 902. Memory 906, or alternatively the non-volatile memory within memory 906, includes a non-transitory computer readable storage medium. 
In some implementations, memory 906, or the non-transitory computer readable storage medium of memory 906, stores the following programs, modules, and data structures, or a subset or superset thereof: - - Operating system 916 including procedures for handling various basic system services and for performing hardware dependent tasks for the smart home device 120; - Network communication module 918 for connecting the smart home device 120 to other computers or systems (e.g., the server system 140, the client device 104, the cast device 108, the electronic device 190 and other smart home devices 120) via one or more network interfaces 904 (wired or wireless) and one or more networks 110, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on; - Smart home device module 922 for enabling the smart home device 120 to implement its designated functions (e.g., for capturing and generating multimedia data streams and sending the multimedia data stream to the client device 104 or the server system 140 as a continuous feed or in short bursts, when the smart home device 120 includes a video camera 132); - Smart home device data 924 storing at least data associated with device settings 926. In some implementations, the smart home device 120 is controlled by voice. Specifically, the cloud cast service server 116 receives a voice message recorded by an electronic device 190, and determines that the voice message includes a smart device control request (e.g., zoom in or out of a video camera, turning off a false alarm and an inquiry of the temperature measured from a smart thermostat). The smart device control request includes a user voice command to control a smart home device 120 and a user voice designation of the smart home device. 
In accordance with the voice designation of the smart home device, the cloud cast service server 116 identifies in a device registry 118 a smart home device 120 associated in a user domain with the electronic device. The cloud cast service server 116 then sends to the smart home device 120 another device control request, thereby enabling the smart home device module 922 of the smart home device 120 to control the smart home device 120 according to the user voice command. Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, modules or data structures, and thus various subsets of these modules may be combined or otherwise re-arranged in various implementations. In some implementations, memory 906, optionally, stores a subset of the modules and data structures identified above. Furthermore, memory 906, optionally, stores additional modules and data structures not described above. Voice Based LED Display and Media Control Methods in the Smart Media Environment FIG. 10 is a flow diagram illustrating a method 1000 of visually indicating a voice processing state in accordance with some implementations. The method 1000 is implemented at an electronic device 190 with an array of full color LEDs, one or more microphones, a speaker, a processor and memory storing at least one program for execution by the processor. The electronic device 190 collects (1002) via the one or more microphones 402 audio inputs from an environment in proximity to the electronic device 190, and processes (1004) the audio inputs. The processing is implemented at voice processing module 522, and includes one or more of identifying and responding to voice inputs from a user in the environment. 
The electronic device 190 then determines (1006) a state of the processing from among a plurality of predefined voice processing states. For each of the full color LEDs, the electronic device 190 identifies (1008) a respective predetermined LED illumination specification associated with the determined voice processing state, and the respective illumination specification includes (1010) one or more of an LED illumination duration, pulse rate, duty cycle, color sequence and brightness. In accordance with the identified LED illumination specifications of the full color LEDs, the electronic device 190 (specifically, LED control module 524) synchronizes illumination of the array of full color LEDs to provide a visual pattern indicating the determined voice processing state. More details on the method 1000 have been explained above with reference to FIGS. 4A-4H and 5. Method 1000 is, optionally, governed by instructions that are stored in a non-transitory computer readable storage medium and that are executed by one or more processors of a voice-activated electronic device 190. Each of the operations shown in FIG. 10 may correspond to instructions stored in the computer memory or computer readable storage medium (e.g., memory 506 of the electronic device 190 in FIG. 5). The computer readable storage medium may include a magnetic or optical disk storage device, solid state storage devices such as Flash memory, or other non-volatile memory device or devices. The computer readable instructions stored on the computer readable storage medium may include one or more of: source code, assembly language code, object code, or other instruction format that is interpreted by one or more processors. Some operations in the method 1000 may be combined and/or the order of some operations may be changed. FIG. 11 is a flow diagram illustrating a method 1100 of initiating display of closed captions for media content by voice in accordance with some implementations. 
The method 1100 is implemented at a server system (e.g., a cloud cast service server 116) including a processor and memory storing at least one program (e.g., the cloud cast application 760) for execution by the processor. The server system receives (1102) a voice message recorded by an electronic device 190, and determines (1104) that the voice message is a first closed caption initiation request. The first closed caption initiation request includes (1106) a user voice command to initiate closed captions and a user voice designation of a display device 106 playing the media content for which closed captions are to be activated. In accordance with the designation of the display device, the server system identifies (1108) in a device registry 118 a cast device 108 associated in a user domain with the electronic device 190 and coupled to the designated display device 106. The cast device 108 is configured (1110) to execute a media play application for controlling the designated display device to display media content received from a media content host. The server system (specifically, the cloud cast application 760) then sends (1112) a second closed caption initiation request to the cast device coupled to the designated display device, thereby enabling the cast device to execute the media play application that controls the designated display device to turn on the closed caption of media content that is currently displayed on the designated display device and display the closed caption according to the second closed caption initiation request. More details on the method 1100 have been explained above with reference to FIGS. 2A, 2B and 5-7. FIG. 12 is a flow diagram illustrating a method 1200 of initiating by voice play of media content on a media output device in accordance with some implementations. 
The method 1200 is implemented at a server system (e.g., a cloud cast service server 116) including a processor and memory storing at least one program for execution by the processor. The server system receives (1202) a voice message recorded by an electronic device, and determines (1204) that the voice message includes a first media play request. The first media play request includes (1206) a user voice command to play media content on a media output device and a user voice designation of the media output device 106, and the user voice command includes at least information of a first media play application and the media content that needs to be played. In accordance with the voice designation of the media output device, the server system identifies (1208) in a device registry 118 a cast device 108 associated in a user domain with the electronic device 190 and coupled to the media output device 106. The cast device 108 is configured to (1210) execute one or more media play applications for controlling the media output device 106 to play media content received from one or more media content hosts. The server system (specifically, the cloud cast application 760) then sends (1212) to the cast device 108 a second media play request including the information of the first media play application and the media content that needs to be played, thereby enabling the cast device 108 to execute the first media play application that controls the media output device 106 to play the media content. More details on the method 1200 have been explained above with reference to FIGS. 2A, 2B and 5-7. FIG. 13 is a flow diagram illustrating a method 1300 of moving play of media content from a source media output device to a destination media output device in accordance with some implementations. The method 1300 is implemented at a server system (e.g., a cloud cast service server 116) including a processor and memory storing at least one program for execution by the processor. 
The server system receives (1302) a voice message recorded by an electronic device 190, and determines (1304) that the voice message includes a media transfer request. The media transfer request includes (1306) a user voice command to transfer media content that is being played to a destination media output device and a user voice designation of the destination media output device. The server system obtains (1308) from a source cast device (e.g., the cast device 108-1 of FIG. 3) instant media play information of the media content that is currently being played. The instant play information includes (1310) at least information of a first media play application, the media content that is currently being played, and a temporal position related to playing of the media content. In accordance with the voice designation of the destination media output device, the server system identifies (1312) in a device registry 118 a destination cast device (e.g., the cast device 108-2 of FIG. 3) associated in a user domain with the electronic device 190 and coupled to the destination media output device (e.g., the output device 106-2 of FIG. 3). The destination cast device is configured to (1314) execute one or more media play applications for controlling the destination media output device to play media content received from one or more media content hosts. The server system (specifically, the cloud cast application 760) then sends (1316) to the destination cast device a media play request including the instant media play information, thereby enabling the destination cast device to execute the first media play application that controls the destination media output device to play the media content from the temporal location. More details on the method 1300 have been explained above with reference to FIGS. 3 and 5-7. 
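The transfer flow of method 1300 can be sketched roughly as follows. The classes and registry shape here are hypothetical stand-ins for the cast devices 108-1/108-2 and the device registry 118; the sketch illustrates steps (1308), (1312) and (1316), and is not an actual implementation.

```python
# Illustrative sketch of the media transfer flow (method 1300): obtain the
# instant play information from the source cast device, resolve the
# destination cast device via the registry, and forward a play request
# carrying the temporal position. All names are assumptions.

class CastDevice:
    def __init__(self, name):
        self.name = name
        self.now_playing = None  # (media play app, content id, position in seconds)

    def instant_play_info(self):
        return self.now_playing

    def play(self, app, content, position):
        self.now_playing = (app, content, position)

def transfer_media(registry, user_domain, source, destination_designation):
    # (1308) obtain instant media play information from the source cast device
    app, content, position = source.instant_play_info()
    # (1312) identify the destination cast device in the device registry
    dest = registry[(user_domain, destination_designation)]
    # (1316) send a media play request including the instant play information
    dest.play(app, content, position)
    return dest

living_room = CastDevice("108-1")
bedroom = CastDevice("108-2")
living_room.play("example_player", "movie-42", 1305)  # 21m45s into playback
registry = {("user-domain-1", "bedroom tv"): bedroom}
dest = transfer_media(registry, "user-domain-1", living_room, "bedroom tv")
print(dest.now_playing)  # -> ('example_player', 'movie-42', 1305)
```

The key design point the method describes is that the temporal position travels with the request, so the destination device resumes from where the source left off rather than restarting the content.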
Methods 1100, 1200 and 1300 are, optionally, governed by instructions that are stored in a non-transitory computer readable storage medium and that are executed by one or more processors of a cloud cast service server 116. Each of the operations shown in FIGS. 11-13 may correspond to instructions stored in the computer memory or computer readable storage medium (e.g., memory 706 of the server system in FIG. 7). The computer readable storage medium may include a magnetic or optical disk storage device, solid state storage devices such as Flash memory, or other non-volatile memory device or devices. The computer readable instructions stored on the computer readable storage medium may include one or more of: source code, assembly language code, object code, or other instruction format that is interpreted by one or more processors. Some operations in each of the methods 1100, 1200 and 1300 may be combined and/or the order of some operations may be changed.
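Methods 1100, 1200 and 1300 share a common dispatch shape: classify the voice message, resolve the designated device through the device registry in the user domain, and relay a derived second request to it. A minimal sketch of that shared pattern follows; the toy keyword classifier and all names are assumptions for illustration only.

```python
# Hedged sketch of the dispatch pattern common to methods 1100, 1200 and
# 1300: classify the voice message, look up the cast device associated in
# the user domain (device registry 118), and forward a second request.
# The request types and classifier are illustrative assumptions.

def classify(voice_message):
    """Toy classifier standing in for server-side voice processing."""
    text = voice_message.lower()
    if "caption" in text:
        return "closed_caption_request"   # method 1100
    if "transfer" in text or "move" in text:
        return "media_transfer_request"   # method 1300
    return "media_play_request"           # method 1200

def handle_voice_message(voice_message, designation, registry, user_domain):
    request_type = classify(voice_message)
    # Registry lookup keyed by (user domain, voice designation of the device).
    cast_device = registry[(user_domain, designation)]
    # Relay a derived second request to the resolved cast device.
    return {"target": cast_device, "type": request_type}

registry = {("user-domain-1", "kitchen display"): "cast-device-108"}
result = handle_voice_message("turn on captions", "kitchen display",
                              registry, "user-domain-1")
print(result["type"])  # -> closed_caption_request
```

The registry lookup is what scopes a voice command to the speaker's own devices: only cast devices registered in the same user domain as the electronic device can be designated.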
uploadify doesn't display properly in IE I've been using uploadify for about an hour now and think it's great, but I've just checked it in IE and the queue doesn't display. The file uploader still works and does everything it's supposed to, but the queue just doesn't display, so it appears as if it's doing nothing. I have the latest Flash player so it can't be that. Thanks Sorted it. Must have been some conflicting styles cos I deleted some stuff from my stylesheet and it works now getting this exact issue but cannot determine what it could be.
package es.ubu.lsi.ubumonitor.webservice.core; import es.ubu.lsi.ubumonitor.webservice.ParametersUserid; import es.ubu.lsi.ubumonitor.webservice.WSFunctions; /** * Moodle function that returns information about the courses the user is enrolled in. * @author Yi Peng Ji * */ public class CoreEnrolGetUsersCourses extends ParametersUserid { /** * Constructor with the user id. * @param userid user id */ public CoreEnrolGetUsersCourses(int userid) { super(userid); } /** * {@inheritDoc} */ @Override public WSFunctions getWSFunction() { return WSFunctions.CORE_ENROL_GET_USERS_COURSES; } }
Fannie Lee SMITH, Plaintiff-Appellant, v. UNITED STATES of America, Defendant-Appellee. No. 14795. United States Court of Appeals Sixth Circuit. June 15, 1962. Louis E. Peiser, Memphis, Tenn., on the brief, William K. Moody, Memphis, Tenn., for appellant. Thomas L. Robinson, U. S. Atty., Edward N. Vaden, Asst. U. S. Atty., Memphis, Tenn., on the brief. William H. Orrick, Jr., Asst. Atty. Gen., Civil Division, Department of Justice, Washington, D. C., for appellee. Before MILLER, Chief Judge, and McALLISTER and WEICK, Circuit Judges. ORDER. This action was filed against the Government under the Federal Tort Claims Act by the mother of Oliver Smith, Jr., a six-year-old boy who was run over and killed by a mail truck on a street in Memphis, Tennessee. The action is based upon the alleged negligence of the driver of the mail truck. 28 United States Code, §§ 1346(b), 2674. The District Judge, hearing the case without a jury, held that the plaintiff had not met the burden of proving negligence on the part of the operator of the truck and dismissed the complaint. We have reviewed the evidence and are of the opinion that this finding of fact is not clearly erroneous and must be accepted on this appeal. Rule 52(a), Rules of Civil Procedure, 28 U.S.C.; Beit v. United States, 260 F.2d 386, C.A., 6th; Valente v. United States, 264 F.2d 800, C.A. 6th; Gillen v. United States, 281 F.2d 425, 427, C.A. 9th. It is ordered that the judgment be affirmed.
import java.util.Scanner; public class P2084_进制转换 { public static void main(String[] args) { Scanner sc = new Scanner(System.in); int m = sc.nextInt(); // base String str = sc.next(); // digits of the number in base m int len = str.length(); StringBuilder sb = new StringBuilder(); for (int i = 0; i < len; i++) { int digit = str.charAt(i) - '0'; if (digit == 0) { continue; // skip zero terms } sb.append(digit).append("*").append(m).append("^").append(len - 1 - i).append("+"); } if (sb.length() > 0) { sb.deleteCharAt(sb.length() - 1); // drop the trailing '+' } System.out.println(sb); } }
# GitLab CAF Module This submodule is part of [Cloud Adoption Framework](https://github.com/aztfmod/terraform-azurerm-caf) landing zones for [GitLab on Terraform](https://github.com/gitlabhq/terraform-provider-gitlab). You can instantiate this submodule directly using the following parameters: ```terraform module "gitlab_projects" { source = "aztfmod/caf/azurerm/modules/devops/providers/gitlab" version = "3.5.0" } ``` <!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK --> ## Requirements | Name | Version | |------|---------| | terraform | >= 0.13 | ## Providers | Name | Version | |------|---------| | gitlabhq/gitlab | >=3.5.0 | ## Inputs | Name | Description | Type | Default | Required | |------|-------------|------|---------|:--------:| | project | The project configuration map | `map` | n/a | yes | ## Outputs | Name | Description | |------|-------------| | id | The Project ID. | | name | The Project name. | <!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK --> ## Example Usage (Guideline) - Clone desired CAF configuration, for demo purposes, you can use [Starter Configuration](https://github.com/Azure/caf-terraform-landingzones-starter) ```bash git clone https://github.com/Azure/caf-terraform-landingzones-starter ``` - Export required environment variables ```bash export ENVIRONMENT=<name_of_the_environment> # such as, demo, staging, production, etc. 
export GITLAB_TOKEN=<token_created_at_gitlab.com_or_gitlab_server> # created on gitlab.com or gitlab server # run the following command if you're using a gitlab server export GITLAB_BASE_URL=<url_of_the_gitlab_server> ``` - Provision Cloud Adoption Framework (CAF) Launchpad resources by executing the following script ```bash rover -lz /tf/caf/public/landingzones/caf_launchpad -launchpad -var-folder /tf/caf/configuration/${ENVIRONMENT}/level0/launchpad -parallelism 30 -level level0 -env ${ENVIRONMENT} -a apply ``` - Add a GitLab project and a couple of project variables into Level 1 (_Foundation Level_) ```bash rover -lz /tf/modules/examples/ -var-folder /tf/modules/examples/devops/providers/gitlab/new_project/ -level level1 -env ${ENVIRONMENT} -a apply ``` If you want to change the GitLab project configuration or project variables, see the example configuration in the [examples/gitlab/new_project/gitlab_project.tfvars](./examples/gitlab/new_project/gitlab_project.tfvars) file: ```terraform gitlab_projects = { test_project = { name = "test_project" description = "test project description" visibility = "private" variables = { var1 = { value = "testvalue1" protected = true masked = false } var2 = { value = "testvalue2" protected = false masked = true } var3 = { value = "testvalue3" protected = true } var4 = { value = "testvalue4" masked = true } } } } ``` > After it finishes running (it may take a couple of minutes), you'll see a project created in the GitLab instance, with a couple of project variables
Talk:Ness vs Silver/<EMAIL_ADDRESS> Giygas is attacking him in game from various different points in time as well man. He deals with foes with super speed, teleportation, time control, flight, and more. His main foe is an entire psychic alien army that is attacking from the past and future. Face it, your "research" on Ness was shoddy.
Improvement in argand gas-burners M. F. GALE. ARGAND GAS-BURNER. Patented July 25, 1876. N. PETERS, PHOTO-LITHOGRAPHER, WASHINGTON, D. C. UNITED STATES PATENT OFFICE. MOSES F. GALE, OF BROOKLYN, E. D., NEW YORK. IMPROVEMENT IN ARGAND GAS-BURNERS. Specification forming part of Letters Patent No. 180,336, dated July 25, 1876; application filed May 5, 1876. To all whom it may concern: Be it known that I, MOSES F. GALE, of Brooklyn, E. D., county of Kings, and State of New York, have invented certain new and useful Improvements in Argand Gas-Burners, of which the following is a full and exact description, in connection with the accompanying drawings. Figure 1 is a side elevation and partial section of the burner complete. Fig. 2 is a plan or top view. Fig. 3 is a side elevation of the valve shown in Fig. 1. Fig. 4 is a plan of the valve-seat in Fig. 1. Fig. 5 is a top view of the valve on its seat. Figs. 6, 7, 8, 9, 10, and 11 are views of various modifications of valves and seats, as will appear hereinafter in the specification. This invention pertains to the devices known as the Argand gas-burner, and has for its object chiefly the proper regulation of the flow of the gas; and therefore it consists, chiefly, in the combination of the parts that constitute the regulating-valve of the burner and its attachments. The valve itself (shown at A in the several figures) is known as the rotating disk-valve, which, by a partial rotation upon its seat, uncovers holes or a port for the escape of the gas to the burner proper, as shown at B, and which may be of any of the well-known or other suitable forms. Said valve is made in the form of a disk, as shown in Figs. 3, 5, 6, and 7, and is furnished on its back with a forked stem, as at C, the slit in which is flat on both sides, to receive a secondary stem, for turning it like a key. Said secondary stem is shown at D, Figs. 1 and 9, and upon its outer end is shown the handle, as at E, for turning it, and thereby turning the valve. To prevent the gas from escaping around the outer end of the stem D, it is made with a conical projection or plug, as at F, which projects upward against a seat on the inner face of the valve-chamber, as at G; and between the back of said plug and the back of the valve there is interposed a spiral spring, as at H, which presses the two in opposite directions, and against their seats, as their stems are in separate pieces, and thereby keeps both gas-tight when desired. The valve shown in Figs. 1 and 3 is a plain disk, with a flat rib on its face wide enough to cover the two holes or ports shown in Fig. 4, which serves as its seat. The object of the disk is to spread the gas as it flows to the burner through the valve, to prevent a very common humming noise in such burners. A modification of such a valve is shown in Figs. 6, 7, and 8, where the part that fits on the seat, as at Fig. 4, is perforated with holes to correspond to the holes in the seat when so turned, and the spreading-disk is placed on the stem a little above the valve-face, with a space between them, as at K, Figs. 6 and 8. Another form of the disk-valve is shown at Figs. 9, 10, and 11, where the face of the valve is slightly convex, and the seat concave, and the two surfaces fit together to cover the inlet, as at L in Fig. 9; but when turned one-quarter round the valve will be fully open, as in Fig. -. MOSES F. GALE. Attest: BOYD ELIOT, JOHN W. RIPLEY.
Refactor: install.conf.yaml shell commands configuration Describe the suggestion Currently, we repeat dotbot's shell configs in every instruction. https://github.com/eieioxyz/dotfiles_macos/blob/b8d9db12f505f617a9b6e9d5ea0628313e1b9c77/install.conf.yaml#L21-L33 Describe the improvement We can simply add them to dotbot's default configuration to prevent duplication, like the following: - defaults: shell: stdout: true stderr: true What if one of them is different? If we want to override these configurations at some point, say, to turn off stderr for a specific shell command (let's say the setup_node script), we can simply add the same configuration as we used to do. When dotbot sees this, it will turn off stderr when running setup_node. - command: ./setup_node.zsh stderr: false I like it, Yasin. I'll accept the PR after I've had a chance to think about what kind of lesson I can make from this.
import { UserContext } from '../user-context.js'; import { AccessRequires } from './access.js'; import { BaseDao } from './dao-base.js'; export interface Prlink { id: number; userId: number; cid: number; ctime: string; code: string; clickFirst: string; // first time click time clickLast: string; // last time click time } export class PrlinkDao extends BaseDao<Prlink, number>{ constructor() { super({ table: 'rplink', stamped: false }); } @AccessRequires('a_pwd_reset') async create(utx: UserContext, data: Pick<Prlink, 'id'>) { return super.create(utx, data); } }
Talk:670 AM External links modified Hello fellow Wikipedians, I have just modified one external link on 670 AM. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes: * Added archive https://web.archive.org/web/20120430090244/http://www.fcc.gov/encyclopedia/am-broadcast-station-classes-clear-regional-and-local-channels to http://www.fcc.gov/encyclopedia/am-broadcast-station-classes-clear-regional-and-local-channels Cheers.— InternetArchiveBot (Report bug) 09:10, 30 September 2016 (UTC)
Board Thread:Fun and Games/@comment-27458210-20160317180649/@comment-25252415-20160416163612 I don't even know what level that is.
Merpeople can breathe above the waves for a time, but it is unclear if they can ever truly leave their habitat. Sub-species Appearances * Harry Potter and the Goblet of Fire * Harry Potter and the Goblet of Fire (film) * Harry Potter and the Half-Blood Prince * Fantastic Beasts and Where to Find Them
#ifndef MARNAV_UNITS_BASIC_UNIT_CMP_HPP #define MARNAV_UNITS_BASIC_UNIT_CMP_HPP #include "basic_unit.hpp" #include <marnav/math/floatingpoint.hpp> #include <marnav/math/floatingpoint_ulps.hpp> #include <limits> #include <type_traits> namespace marnav::units { // basic_unit operator== // // Implementations for // - non-floating point // - floating point IEEE754 // - floating point non-IEEE754 template <class U, class R, typename std::enable_if<!std::is_floating_point<typename U::value_type>::value, int>::type = 0> constexpr bool operator==(const basic_unit<U, R> & v1, const basic_unit<U, R> & v2) noexcept { return v1.value() == v2.value(); } template <class U, class R, typename std::enable_if<std::is_floating_point<typename U::value_type>::value && !std::numeric_limits<typename U::value_type>::is_iec559, int>::type = 0> constexpr bool operator==(const basic_unit<U, R> & v1, const basic_unit<U, R> & v2) noexcept { return math::is_same(v1.value(), v2.value()); } template <class U, class R, typename std::enable_if<std::is_floating_point<typename U::value_type>::value && std::numeric_limits<typename U::value_type>::is_iec559, int>::type = 0> constexpr bool operator==(const basic_unit<U, R> & v1, const basic_unit<U, R> & v2) noexcept { return math::nearly_equal(v1.value(), v2.value()); } template <class U, class R> constexpr bool operator!=(const basic_unit<U, R> & v1, const basic_unit<U, R> & v2) noexcept { return !(v1 == v2); } template <class U, class R> constexpr bool operator<(const basic_unit<U, R> & v1, const basic_unit<U, R> & v2) noexcept { return v1.value() < v2.value(); } template <class U, class R> constexpr bool operator<=(const basic_unit<U, R> & v1, const basic_unit<U, R> & v2) noexcept { return (v1 < v2) || (v1 == v2); } template <class U, class R> constexpr bool operator>(const basic_unit<U, R> & v1, const basic_unit<U, R> & v2) noexcept { return !(v1 <= v2); } template <class U, class R> constexpr bool operator>=(const basic_unit<U, R> & v1, const 
basic_unit<U, R> & v2) noexcept { return !(v1 < v2); } template <class U1, class R1, class U2, class R2, typename = typename std::enable_if<!std::is_same<U1, U2>::value>::type> constexpr bool operator==(const basic_unit<U1, R1> & v1, const basic_unit<U2, R2> & v2) noexcept { return v1 == basic_unit<U1, R1>(v2); } template <class U1, class R1, class U2, class R2, typename = typename std::enable_if<!std::is_same<U1, U2>::value>::type> constexpr bool operator!=(const basic_unit<U1, R1> & v1, const basic_unit<U2, R2> & v2) noexcept { return !(v1 == v2); } template <class U1, class R1, class U2, class R2, typename = typename std::enable_if<!std::is_same<U1, U2>::value>::type> constexpr bool operator<(const basic_unit<U1, R1> & v1, const basic_unit<U2, R2> & v2) noexcept { return v1 < basic_unit<U1, R1>(v2); } template <class U1, class R1, class U2, class R2, typename = typename std::enable_if<!std::is_same<U1, U2>::value>::type> constexpr bool operator<=(const basic_unit<U1, R1> & v1, const basic_unit<U2, R2> & v2) noexcept { return (v1 < v2) || (v1 == v2); } template <class U1, class R1, class U2, class R2, typename = typename std::enable_if<!std::is_same<U1, U2>::value>::type> constexpr bool operator>(const basic_unit<U1, R1> & v1, const basic_unit<U2, R2> & v2) noexcept { return !(v1 <= v2); } template <class U1, class R1, class U2, class R2, typename = typename std::enable_if<!std::is_same<U1, U2>::value>::type> constexpr bool operator>=(const basic_unit<U1, R1> & v1, const basic_unit<U2, R2> & v2) noexcept { return !(v1 < v2); } } #endif
Ayrshire, Iowa: Energy Resources From Open Energy Information <metadesc> Ayrshire, Iowa: energy resources, incentives, companies, news, and more. </metadesc> Ayrshire is a city in Palo Alto County, Iowa. It falls under Iowa's 4th congressional district.[1][2] References 1. US Census Bureau Incorporated place and minor civil division population dataset (All States, all geography) 2. US Census Bureau Congressional Districts by Places.
Board Thread:Fun and Games/@comment-30076428-20200108170107/@comment-31867538-20200113190003 AwesomeEthan48 wrote: 1pizza877 wrote: AwesomeEthan48 wrote: 1pizza877 wrote: JordanLovesLizards wrote: AwesomeEthan48 wrote: 1pizza877 wrote: JordanLovesLizards wrote: AwesomeEthan48 wrote: AwesomeEthan48 wrote: RedHunter12 wrote: AwesomeEthan48 wrote: RedHunter12 wrote: Ichigo: What's in there is none of your buisness! Ichigo then launches a Getsuga Tenshou at Zelda, who reflects it back at him using Nayru's Love. Ichigo: Agh! What? How did you do that?! We then cut to Shovel Knight and Hercule Satan landing punches on each other. Hercule Satan: Is that...the best you have to offer? Shovel Knight: As if! Ichigo: Got it. Bankai! Ichigo: NANI?! Azure then travels forwards in time at leavs Ichigo on a barren planet Ichigo: It's that the best you can do? Flash: Hah! You're no match for- Flash: Oh, that hurts... Kirby: You know, I wonder what the others are doing. We then cut back to Ethan, Kat, Sub-Zero, and Jordan. There, he sees Venom and Zero hiding in the trees.''' Jordan lunges towards the trees and cuts them all down, revealing the combatants hiding in them. Venom: HAHAHAHAHA! You're mine! Zero: This'll be over in a nanosecond! Jordan: Are you willing to bet your life on that? Zero: No match. 1pizza877: Lets see if you like this. Zero reacts as he stabs him in the chest as 1pizza877 throws a plasma gernade at him. Sub-Zero: Interesting company you keep. Kat: Tell me about it. Kat: So that's basically it in a nutshell. What happened to you guys? Luigi: Phew! I think we-a lost him. Zelda: Hey guys! Jordan: I see you finally decided to join us.
#pragma once

#include <vector>

namespace base {

class ContainerUtil {
public:
    // Partitions `elements` into groups of mutually "equal" items, as decided
    // by `comparer`. Each element is compared against the first member of every
    // existing group; if no group matches, a new group is started.
    // Costs O(n * g) comparisons, where g is the number of resulting groups.
    template <class T>
    static std::vector<std::vector<T>> group(std::vector<T> elements,
                                             bool (*comparer)(T, T))
    {
        std::vector<std::vector<T>> groups;
        for (unsigned int i = 0; i < elements.size(); i++) {
            bool found = false;
            for (unsigned int j = 0; j < groups.size(); j++) {
                if (comparer(elements[i], (groups[j])[0])) {
                    groups[j].push_back(elements[i]);
                    found = true;
                    break;
                }
            }
            if (!found) {
                std::vector<T> newGroup;
                newGroup.push_back(elements[i]);
                groups.push_back(newGroup);
            }
        }
        return groups;
    }
};

}
[SPARK-42681] Relax ordering constraint for ALTER TABLE ADD|REPLACE column options What changes were proposed in this pull request? Currently the grammar for ALTER TABLE ADD|REPLACE column is: qualifiedColTypeWithPosition : name=multipartIdentifier dataType (NOT NULL)? defaultExpression? commentSpec? colPosition? ; This enforces a constraint on the order of: (NOT NULL, DEFAULT value, COMMENT value FIRST|AFTER value). We can update the grammar to allow these options in any order instead, to improve usability. Why are the changes needed? This helps make the SQL syntax more usable. Does this PR introduce any user-facing change? Yes, the SQL syntax updates slightly. How was this patch tested? Existing and new unit tests build timed out but succeeded on rerun: https://github.com/vitaliili-db/spark/actions/runs/4346311324/jobs/7598960402 @gengliangwang can you review this please? @vitaliili-db Thanks for the work! LGTM overall. I got only one comment: the wording "column options" sounds weird. Shall we call it "column descriptor"? @gengliangwang great catch, yes, we should follow standard. Renamed. Thanks, merging to master
"grayer upper parts and thicker bill." We have tested the material at hand as regards these two characters and are absolutely unable to distinguish our birds from the San Jacinto Mountains and elsewhere in southern California, from perfectly comparable material as regards age and stage of plumage from various parts of the Sierra Nevada, central as well as southern. The Museum's series includes good specimens from extreme southern San Diego County: Campo, Mountain Spring, Cuyamaca and Volcan mountains. These, also, are in no appreciable way different from plumifera.

Valley Quail

An abundant species in the San Jacinto Mountains, found at all suitable points from the Lower Sonoran valleys surrounding the range, up into the lower edge of Transition. Twenty-two specimens were collected, as follows: Snow Creek, three (nos. 2160-2162); Cabezon, five (nos. 1657-1661); Banning, four (nos. 2018-2021); Vallevista, three (nos. 3094-3096); Dos Palmos, two (nos. 2491, 2492); and Palm Canon, five (nos. 3046-3050). Other points of record are Vandeventer Flat, Kenworthy, Hemet Lake, Thomas Mountain, Strawberry Valley, and Schain's Ranch. Most of these localities are in Upper Sonoran, the highest points only — Strawberry Valley (6000 feet) and Thomas Mountain (6800 feet) — being at the lower edge of Transition.

This quail breeds in greatest abundance on the sage-brush covered floor of the upper Hemet Valley, the region from Hemet Lake to Vandeventer Flat being peculiarly adapted to the species. At Kenworthy, in this valley, they were numerous, and nearly all in pairs at the time of our arrival, May 19. A nest was found here on May 23 (no. 72), very imperfectly concealed at the base of a scanty clump of sage brush. The slight depression in the
<?php namespace App\Http\Requests; use Illuminate\Foundation\Http\FormRequest; class CancelRequest extends FormRequest { /** * Determine if the user is authorized to make this request. * * @return bool */ public function authorize() { return TRUE; } /** * Get the validation rules that apply to the request. * * @return array */ public function rules() { return [ 'power' => 'required', 'water' => 'required', ]; } public function messages() { return [ 'power.required' => 'กรุณาระบุไฟฟ้าหน่วยสุดท้าย', 'water.required' => 'กรุณาระบุน้ำปะปาหน่วยสุดท้าย', ]; } }
#include <cybozu/ip_address.hpp> #include <cybozu/test.hpp> AUTOTEST(ipv4) { cybozu_test_no_exception( cybozu::ip_address("123.123.123.0") ); cybozu_test_exception( cybozu::ip_address("123.456.789.0"), cybozu::ip_address::bad_address ); cybozu_test_exception( cybozu::ip_address("localhost"), cybozu::ip_address::bad_address ); cybozu::ip_address ipv4("123.123.123.0"); cybozu_assert( ipv4.is_v4() ); cybozu_assert( ipv4.str() == "123.123.123.0" ); } AUTOTEST(ipv6) { cybozu_test_no_exception( cybozu::ip_address("::1") ); // cybozu_test_no_exception( // cybozu::ip_address("fe80::dead:beaf%lo") // ); cybozu_test_exception( cybozu::ip_address("fg::1"), cybozu::ip_address::bad_address ); cybozu_test_exception( cybozu::ip_address("::1%lo"), cybozu::ip_address::bad_address ); cybozu::ip_address ipv6("fd00:1234::dead:beaf"); cybozu_assert( ipv6.is_v6() ); cybozu_assert( ipv6.str() == "fd00:1234::dead:beaf" ); } AUTOTEST(compare) { using cybozu::ip_address; cybozu_assert( ip_address("127.0.0.1") == ip_address("127.0.0.1") ); cybozu_assert( ip_address("127.0.0.1") != ip_address("162.193.0.1") ); cybozu_assert( ip_address("127.0.0.1") != ip_address("::1") ); cybozu_assert( ip_address("::1") == ip_address("::1") ); // cybozu_assert( ip_address("fe80::1%lo") == ip_address("fe80::1%lo") ); // cybozu_assert( ip_address("fe80::1%lo") != ip_address("fe80::1%eth0") ); } AUTOTEST(has_ip_address) { for( int i = 0; i < 1000; ++i ) cybozu::has_ip_address(cybozu::ip_address("11.11.11.11")); }
module Metanorma
  class Compile
    def relaton_export(isodoc, options)
      return unless options[:relaton]

      xml = Nokogiri::XML(isodoc) { |config| config.huge }
      bibdata = xml.at("//bibdata") || xml.at("//xmlns:bibdata")
      # docid = bibdata&.at("./xmlns:docidentifier")&.text || options[:filename]
      # outname = docid.sub(/^\s+/, "").sub(/\s+$/, "").gsub(/\s+/, "-") + ".xml"
      File.open(options[:relaton], "w:UTF-8") { |f| f.write bibdata.to_xml }
    end

    def clean_sourcecode(xml)
      xml.xpath(".//callout | .//annotation | .//xmlns:callout | "\
                ".//xmlns:annotation").each(&:remove)
      xml.xpath(".//br | .//xmlns:br").each { |x| x.replace("\n") }
      HTMLEntities.new.decode(xml.children.to_xml)
    end

    def extract(isodoc, dirname, extract_types)
      return unless dirname

      extract_types.nil? || extract_types.empty? and
        extract_types = %i[sourcecode image requirement]
      FileUtils.rm_rf dirname
      FileUtils.mkdir_p dirname
      xml = Nokogiri::XML(isodoc) { |config| config.huge }
      sourcecode_export(xml, dirname) if extract_types.include? :sourcecode
      image_export(xml, dirname) if extract_types.include? :image
      extract_types.include?(:requirement) and
        requirement_export(xml, dirname)
    end

    def sourcecode_export(xml, dirname)
      xml.at("//sourcecode | //xmlns:sourcecode") or return

      FileUtils.mkdir_p "#{dirname}/sourcecode"
      xml.xpath("//sourcecode | //xmlns:sourcecode").each_with_index do |s, i|
        filename = s["filename"] || sprintf("sourcecode-%04d.txt", i)
        export_output("#{dirname}/sourcecode/#{filename}", clean_sourcecode(s.dup))
      end
    end

    def image_export(xml, dirname)
      xml.at("//image | //xmlns:image") or return

      FileUtils.mkdir_p "#{dirname}/image"
      xml.xpath("//image | //xmlns:image").each_with_index do |s, i|
        next unless /^data:image/.match? s["src"]

        %r{^data:image/(?<imgtype>[^;]+);base64,(?<imgdata>.+)$} =~ s["src"]
        fn = s["filename"] || sprintf("image-%<num>04d.%<name>s", num: i, name: imgtype)
        export_output("#{dirname}/image/#{fn}", Base64.strict_decode64(imgdata),
                      binary: true)
      end
    end

    REQUIREMENT_XPATH = "//requirement | //xmlns:requirement | //recommendation | "\
                        "//xmlns:recommendation | //permission | //xmlns:permission".freeze

    def requirement_export(xml, dirname)
      xml.at(REQUIREMENT_XPATH) or return

      FileUtils.mkdir_p "#{dirname}/requirement"
      xml.xpath(REQUIREMENT_XPATH).each_with_index do |s, i|
        fn = s["filename"] || sprintf("%<name>s-%<num>04d.xml", name: s.name, num: i)
        export_output("#{dirname}/requirement/#{fn}", s)
      end
    end
  end
end
User:Cjsellers2000 Fags www.Youtube.com Era 2008/ECW Era End Of An Era | 2008 Return 2008 - 2009 In late 2008, Wrestling Franchise Federation returned! Beginning of 2009 Early 2009 - Ending Already? Nintendo64 Era 2009 and Beyond - Nintendo64 Generation Extreme Caw Wrestling 2009- Wrestling Franchise Federation and World Wrestling Federation/Entertainment 2009 - New Caw Show vs Wrestling Franchise Federation 1 Brand? VKM Owner? CJSellersProductions Return 2009 - WWE Smackdown vs Raw 2009? No More Nintendo 64 Generation WFF Official Roster *may change* CJSellers Breanna Styles Shannon X Antonio Brandon Amber Crystal Christina Jason Styles Raw WFF Champion: Shannon X World Tag Team Champions: The Miz and John Morrison Intercontinental Champion: Masked Man Women's Champion: Breanna Styles WFF Smackdown! WFF World Heavyweight Champion: Chris Jericho Tag Team Champions: Carlito and Chavo Guerrero United States Champion: Santino Marella WCW World Classic Champion: Vince McMahon Extreme Circuit Wrestling ECW Champion: Antonio Cruiserweight Champion: JTG ECW Hardcore Champion: Tommy Dreamer 2009 Backlash 2009 Judgment Day 2009 ECW One Night Stand 2009 Vengeance 2009 Great American Bash 09' SummerSlam Unforgiven Cyber Sunday Survivor Series 2009 Royal Rumble No Way Out WrestleMania 2009
# Docker, Django, PostgreSQL starter (Incomplete, work-in-progress)

This is a simple starter for a [Django](https://www.djangoproject.com/) app with a [PostgreSQL](https://www.postgresql.org/) backend running inside [Docker](https://www.docker.com/) containers.

This starter is based on the following resources, which I used to make some of the design decisions:

- [Quickstart: Compose and Django](https://docs.docker.com/compose/django/)
- [How to Dockerize a Django web app elegantly](https://medium.com/faun/tech-edition-how-to-dockerize-a-django-web-app-elegantly-924c0b83575d)
- [Dockerizing Django with Postgres, Gunicorn, and Nginx](https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/)
- [lukin0110/docker-django-boilerplate](https://github.com/lukin0110/docker-django-boilerplate)

You might want to look at those resources.

## Getting started

First, you'll need the following variables in your environment. It is best to keep these in a file called `.env` that is ignored by git.

```
ADMIN_USERNAME=CHANGE-THIS-VALUE
ADMIN_PASSWORD=CHANGE-THIS-VALUE
ADMIN_EMAIL=CHANGE-THIS-VALUE
SECRET_KEY=CHANGE-THIS-VALUE
POSTGRES_USER=CHANGE-THIS-VALUE
POSTGRES_PASSWORD=CHANGE-THIS-VALUE
POSTGRES_DB=CHANGE-THIS-VALUE
POSTGRES_PORT=CHANGE-THIS-VALUE
```

The `POSTGRES_` values are used to configure the PostgreSQL instance running in a Docker container when in development. In production, you might be using Heroku Postgres, in which case the app will use the `DATABASE_URL` environment variable instead.

You'll need [docker](https://docs.docker.com/install/) to run this app on your computer. I assume you'll run production on [Heroku](https://www.heroku.com) and have included the files necessary to do so using [Heroku Docker deploys](https://devcenter.heroku.com/articles/build-docker-images-heroku-yml).

To start the app, run `docker-compose up`. To bring the app down, run `docker-compose down`.
The first time you start up in development, or after you make model changes, you'll need to run database migrations. With the app running, run `docker-compose exec web python manage.py migrate` in order to run the migrations.

## Changing the models

If you change the models you'll need new migrations. Run

```
docker-compose exec web python manage.py makemigrations -n SOMENAMEHERE
```

You'll see a new migration file that you can add to version control. See [Django migrations](https://docs.djangoproject.com/en/2.2/topics/migrations/).

## Notes

- Whitenoise is used for static assets. See [here](https://devcenter.heroku.com/articles/django-assets).
- [Gunicorn](https://docs.djangoproject.com/en/2.2/howto/deployment/wsgi/gunicorn/) is used in production on Heroku.
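For reference, a minimal `docker-compose.yml` matching the commands above might look like the sketch below. This is an assumption about the repo's layout, not a copy of its actual file: the `web` service name matches the `docker-compose exec web` commands, but the Postgres image tag, ports, and dev server command are illustrative.

```yaml
version: "3"

services:
  db:
    image: postgres:11          # image tag is an assumption
    env_file: .env              # supplies POSTGRES_USER/PASSWORD/DB
    ports:
      - "${POSTGRES_PORT}:5432"
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    env_file: .env
    ports:
      - "8000:8000"
    depends_on:
      - db
```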
import brica1 import numpy as np import pygazebo.msg.poses_stamped_pb2 class VisualAreaComponent(brica1.Component): def __init__(self): super(VisualAreaComponent, self).__init__() self.last_position = np.array((0, 0)) def __position_to_area_id(self, pos2d): x = pos2d[0] y = pos2d[1] radius = 1 maze_width = 1 if x*x + y*y < radius*radius: return (0, 0) areaIdX = 0 if x < maze_width*0.5: areaIdX = -1 if x > maze_width*0.5: areaIdX = 1 areaIdY = 0 if y < maze_width*0.5: areaIdY = -1 if y > maze_width*0.5: areaIdY = 1 return (areaIdX, areaIdY) def callback(self, data): pose = pygazebo.msg.poses_stamped_pb2.PosesStamped() message = pose.FromString(data) turtlebot_id = 0 if message.pose[turtlebot_id].name != "turtlebot": raise Exception("message.pose[0].name is not turtlbot") position = np.array(( message.pose[turtlebot_id].position.x, message.pose[turtlebot_id].position.y)) orientation = np.array(( message.pose[turtlebot_id].orientation.x, message.pose[turtlebot_id].orientation.y, message.pose[turtlebot_id].orientation.z, message.pose[turtlebot_id].orientation.w)) vel = self.last_position - position self.last_position = position self.set_state("out_body_velocity", np.array((vel[0], vel[1])).astype(np.float32)) self.set_state("out_body_position", position.astype(np.float32)) self.set_state("out_body_orientation", orientation.astype(np.float32)) def fire(self): for key in self.states.keys(): self.results[key] = self.states[key]
Use @Embeddable Without Nested JSON

I'm trying to use the @Embedded and @Embeddable javax annotations to keep my Java classes cleaner, but I want the resulting JSON to be flattened.

THE DESIRED BEHAVIOR:

[
  {
    "id": "6edbced5-2d27-4257-a140-2925291daaf6",
    "name": "Online Maria DB",
    "address": "Syble Forks",
    "city": "Dallas",
    "state": "Texas",
    "country": "United States",
    "phoneNumber": "(789) 740-5789",
    "orgUserName": "online-maria"
  }
]

THE ACTUAL BEHAVIOR:

[
  {
    "id": "6edbced5-2d27-4257-a140-2925291daaf6",
    "name": "Online Maria DB",
    "addressDetails": {
      "address": "Syble Forks",
      "city": "Dallas",
      "state": "Texas",
      "country": "United States"
    },
    "phoneNumber": "(789) 740-5789",
    "orgUserName": "online-maria"
  }
]

Is this possible using these annotations?

What I have so far:

Organization.java

@Embedded
private Address address;

Address.java

@Embeddable
public class Address { ... }

Please share what you have done so far.

Made an edit and added it to my question.

I assume you use Jackson to serialize these objects, right? You can use Jackson's @JsonUnwrapped annotation if you do use Jackson. Also, you can write a custom serializer as well.

public class Organization {

    @JsonUnwrapped
    @Embedded
    private Address address;

    // other code
}
Touch sensor unit ABSTRACT The disclosure improves the flexibility of arrangement of a separator provided at an end of a touch sensor unit to ensure electrical insulation. A sensor body included in a touch sensor unit includes a tubular insulator that is elastically deformed when an external force is applied; linear electrodes that are provided inside the tubular insulator and come into contact with each other as the tubular insulator is elastically deformed; a resistor disposed on an outer side of an end of the tubular insulator; connection wires connecting the linear electrodes and the resistor; a separator interposed between the connection wires and preventing contact between the connection wires; a mold part including at least the connection wires, the resistor, and the separator; and a cover member covering at least a part of the connection wires, the resistor, and the separator via the mold part. CROSS REFERENCE TO RELATED APPLICATIONS This application claims the priority benefit of Japanese Patent Application No. 2019-125796, filed on Jul. 5, 2019. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification. BACKGROUND Technical Field The disclosure relates to a touch sensor unit used for detecting contact with an obstacle. Description of Related Art A vehicle such as an automobile may be provided with an opening/closing body (for example, a sliding door or a tailgate) for opening and closing an opening of the vehicle, and an opening/closing device for driving the opening/closing body. The opening/closing device includes an electric motor which is a drive source, and an operation switch for turning the electric motor on/off. The electric motor included in the opening/closing device operates based on the operation of the operation switch, and drives the opening/closing body to open or close. 
Among opening/closing devices, there are automatic opening/closing devices for driving the opening/closing body to open or close regardless of whether the operation switch is operated. One of the conventional automatic opening/closing devices includes a touch sensor unit for detecting an obstacle caught between the opening and the opening/closing body, and drives the opening/closing body based on the detection result of the touch sensor unit. For example, when an obstacle is detected by the touch sensor unit, the automatic opening/closing device drives open the opening/closing body which has been driven to close, or stops it there. An example of the touch sensor unit as described above is described in Patent Document 1 (Japanese Patent Application Laid-Open No. 2017-204361 ([0066] to [0072], FIG. 11, and FIG. 12)). The touch sensor unit described in Patent Document 1 includes a sensor body, and a sensor holder holding the sensor body. The sensor body includes an insulating tube and two linear electrodes provided in the insulating tube. Each of the linear electrodes includes a core wire (stranded wire) composed of a plurality of bundled copper wires, and a sheath composed of conductive rubber or the like and covering the core wire. The two linear electrodes constituting the sensor body are provided spirally in the insulating tube and intersect each other in a non-contact state. The two linear electrodes provided in the insulating tube are connected in series via a resistor. Specifically, on one end side of the insulating tube, the sheath of each linear electrode is removed and a part of the core wire is exposed. Then, the exposed part of the core wire of one linear electrode is connected to one end of the resistor, and the exposed part of the core wire of the other linear electrode is connected to the other end of the resistor. In the following description, the exposed part of the core wire of each linear electrode may be referred to as a “connection wire”. 
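Though the text above does not spell it out, the "two electrodes connected in series via a resistor" arrangement implies a simple electrical detection principle: the controller measures the resistance of the loop, which sits near the termination resistance when idle, drops when a touch shorts the electrodes together upstream of the resistor, and goes open when a wire breaks. The sketch below illustrates that idea; all names and threshold values are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: classify a loop-resistance measurement from a
# two-wire touch sensor terminated by a series resistor R_END.
R_END = 10_000.0   # termination resistor at the far end, ohms (assumed)

def classify(measured_ohms: float) -> str:
    """Map a loop-resistance measurement to a sensor state."""
    if measured_ohms >= R_END * 10:    # far above termination: broken loop
        return "fault: open circuit"
    if measured_ohms >= R_END * 0.5:   # near R_END: intact, untouched
        return "idle"
    return "touch detected"            # well below R_END: electrodes shorted

print(classify(R_END))           # idle
print(classify(120.0))           # touch detected
print(classify(float("inf")))    # fault: open circuit
```

A nice side effect of this scheme, consistent with the "connection wire" discussion above, is that the same measurement that detects a touch also self-diagnoses a severed sensor cable.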
A separator is disposed between the connection wires of the linear electrodes for preventing contact (short circuit) between the connection wires. The separator is formed of an insulating material, and includes a separator body and a covering part that covers the separator body. An insertion protrusion protrudes on an end of the separator body to be inserted between the two linear electrodes in the insulating tube from an end of the insulating tube. When the insertion protrusion is inserted between the two linear electrodes, the separator body is interposed between the connection wires of the linear electrodes. At the same time, at least the connection wires and the resistor are covered by the covering part. Furthermore, a formation mold part is provided on the inner side of the covering part, and the connection wires and the resistor are covered by the mold part. In other words, the connection wires and the resistor covered by the covering part are embedded in the mold part formed on the inner side of the covering part. The sensor body that constitutes the touch sensor unit described in Patent Document 1 includes an insulating tube and two linear electrodes provided in the insulating tube. Moreover, the two linear electrodes are provided spirally in the insulating tube. Therefore, when the insulating tube and the linear electrodes are cut at any position in the longitudinal direction thereof, the positions of the linear electrodes in the circumferential direction of the insulating tube differ depending on the cutting position. That is, when the insulating tube and the linear electrodes are cut at any two or more positions in the longitudinal direction thereof, the arrangement of the two linear electrodes at each cross-section is not uniform. However, the resistor to which the connection wire of each linear electrode is connected needs to be disposed at a predetermined position in the circumferential direction of the insulating tube. 
Thus, the covering part of the separator, which covers the connection wires and the resistor, also needs to be disposed at a predetermined position in the circumferential direction of the insulating tube. Therefore, Patent Document 1 describes that, after the insertion protrusion of the separator is inserted between the two linear electrodes from the end of the insulating tube, the position of the covering part is adjusted by rotating the separator in the circumferential direction of the insulating tube. However, if the separator with the insertion protrusion inserted between the two linear electrodes is rotated in the circumferential direction of the insulating tube, a force may be applied to the linear electrodes. Then, due to the elasticity of the linear electrodes, a force is applied to the separator to rotate the separator in the reverse direction. As a result, the position of the separator may be shifted and an unexpected gap may be generated between the separator and the linear electrodes. Furthermore, if a large gap is generated between the separator and the linear electrodes, during formation of the mold part, molten resin may flow into the insulating tube from the gap and be cured in the insulating tube. The disclosure addresses issues of the flexibility of arrangement of the separator provided at an end of the touch sensor unit to ensure electrical insulation. SUMMARY In one embodiment of the disclosure, a touch sensor unit is provided, including a sensor body and a sensor holder holding the sensor body. 
The sensor body includes: a tubular insulator housed in the sensor holder and elastically deformed when an external force is applied; a plurality of electrodes provided inside the tubular insulator and coming into contact with each other as the tubular insulator is elastically deformed; an electrical component disposed on an outer side of an end of the tubular insulator; a plurality of connection wires connecting each of the electrodes and the electrical component; an insulating member interposed between the plurality of connection wires and preventing contact between the connection wires; a mold part including at least the connection wires, the electrical component, and the insulating member; and a cover member covering at least a part of the connection wires, the electrical component, and the insulating member via the mold part. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a front view showing a tailgate of a vehicle on which a touch sensor unit is mounted. FIG. 2 is a side view showing the tailgate of the vehicle on which the touch sensor unit is mounted. FIG. 3 is a perspective view showing a configuration of the touch sensor unit. FIG. 4 is an enlarged cross-sectional view showing a structure of a sensor body and a sensor holder. FIG. 5 is an explanatory view showing the structure of the sensor body. FIG. 6 is another explanatory view showing the structure of the sensor body. FIG. 7 is another explanatory view showing the structure of the sensor body. FIG. 8 is a perspective view showing a separator. FIG. 9 is another explanatory view showing the structure of the sensor body. FIG. 10 is a perspective view showing a mold part and a cover member. FIG. 11 is an explanatory view showing a molding process of the mold part. FIG. 12 is another explanatory view showing the molding process of the mold part. FIG. 13A is a side view showing a modified example of the mold part. FIG. 13B is a bottom view showing the modified example of the mold part. FIG. 
14 is a perspective view showing another modified example of the mold part. DESCRIPTION OF THE EMBODIMENTS Hereinafter, an example of a touch sensor unit to which the disclosure is applied will be described in detail with reference to the drawings. As shown in FIG. 1 and FIG. 2, the touch sensor unit 20 according to the present embodiment is mounted on a vehicle 10. The vehicle 10 as shown is a so-called hatchback vehicle. The rear portion of the vehicle 10 is provided with an opening (rear opening 11) through which large luggage can be taken in and out of the vehicle interior. The rear opening 11 is opened or closed by an opening/closing body 12 that is rotatably supported by a hinge (not shown) provided on the rear side of the vehicle 10. The opening/closing body 12 is called a “tailgate”, a “rear gate”, a “bag door”, or the like, but is referred to as “tailgate” in this specification. The vehicle 10 is equipped with a power tailgate device 13 that rotates (opens or closes) the tailgate 12 in the directions indicated by the solid and broken arrows in FIG. 2. The power tailgate device 13 includes an actuator 13 a with a speed reducer that opens or closes the tailgate 12, a controller 13 b controlling the actuator 13 a based on an operation of a switch (not shown), and a pair of touch sensor units 20 for detecting an obstacle BL. That is, the touch sensor unit 20 according to the present embodiment is one of the components of the power tailgate device 13 mounted on the vehicle 10. As shown in FIG. 1, the touch sensor units 20 are provided on the outer peripheral surface of the tailgate 12. Specifically, the touch sensor units 20 are respectively provided on two side surfaces of the tailgate 12 in the vehicle width direction. More specifically, the touch sensor units 20 are provided on two curved side surfaces (edges) of the tailgate 12 along the shapes of the side surfaces. 
Thus, when the obstacle BL is caught between the rear opening 11 and the tailgate 12, the obstacle BL is detected by the touch sensor unit 20. The touch sensor unit 20 outputs a detection signal when detecting the obstacle BL. The detection signal output from the touch sensor unit 20 is input to the controller 13 b. Upon receiving the detection signal, the controller 13 b drives the tailgate 12, which is being driven to close, to open instead, or stops the tailgate 12 at that position, regardless of the operation state of the operation switch. As shown in FIG. 3, the touch sensor unit 20 includes a sensor body 30, a sensor holder 31, and a bracket 32. The sensor body 30, the sensor holder 31, and the bracket 32 are integrated. The bracket 32 shown in FIG. 3 is formed of a resin material such as plastic, has substantially the same length as the side surface (edge) of the tailgate 12 (FIG. 1 and FIG. 2), and presents a plate-shaped appearance as a whole. As shown in FIG. 3, a part of the sensor body 30 in the longitudinal direction is fixed to the sensor holder 31 while the remaining part is not fixed to the sensor holder 31. Then, the sensor holder 31, to which a part of the sensor body 30 is fixed, is itself fixed (joined) to the bracket 32. In the following description, the part of the sensor body 30 in the longitudinal direction that is not fixed to the sensor holder 31 may be referred to as a “lead-out part” to distinguish it from other parts. However, such distinction is merely for convenience of explanation. The touch sensor unit 20 having the basic structure as described above is attached to the vehicle 10 by fixing (joining) the bracket 32 to the edge of the tailgate 12 (FIG. 1 and FIG. 2). At this time, the lead-out part of the sensor body 30 is drawn to the inner side of the tailgate 12 from a lead-in hole provided in the tailgate 12.
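The controller's reaction to the detection signal described above can be sketched as a small state rule. This is an illustrative sketch only, not the patent's implementation; the function name and state strings are hypothetical:

```python
def tailgate_response(drive_state: str, detection: bool,
                      reverse_on_detect: bool = True) -> str:
    """Illustrative reaction of a tailgate controller to a detection signal.

    While the tailgate is being driven to close, a detection either reverses
    the drive (open) or stops it, regardless of the operation switch.
    All names and state strings here are hypothetical, not from the patent.
    """
    if detection and drive_state == "closing":
        return "opening" if reverse_on_detect else "stopped"
    # No detection, or not currently closing: keep the current drive state.
    return drive_state

# Obstacle detected while closing: the drive reverses (or stops, per config).
print(tailgate_response("closing", True))                           # opening
print(tailgate_response("closing", True, reverse_on_detect=False))  # stopped
```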
Further, the lead-in hole, with the lead-out part drawn thereinto, is closed by a grommet GM attached to the lead-out part. Hereinafter, the touch sensor unit 20 will be described in more detail. As shown in FIG. 3, the sensor body 30 constituting the touch sensor unit 20 has a tubular insulator 40, a plurality of electrodes 41 and 42 which are provided inside the tubular insulator 40 and come into contact with each other as the tubular insulator 40 is elastically deformed, a connector 43, and a mold part 44. A part of the tubular insulator 40 in the longitudinal direction, which includes the electrodes 41 and 42 therein, is embedded in the sensor holder 31. The sensor holder 31 is formed of insulating rubber and has elasticity. That is, the sensor holder 31 is elastically deformed when an external force is applied, and returns to the original shape when the external force is removed. In addition, the connector 43 is connected to another connector (not shown). By connecting the connector 43 to another connector, the touch sensor unit 20 is electrically connected to the controller 13 b (FIG. 1 and FIG. 2), allowing the detection signal output from the touch sensor unit 20 to be input to the controller 13 b. As shown in FIG. 4, the sensor holder 31 has a housing part 31 a and a base part 31 b that are integrally formed. The housing part 31 a is hollow, and the sensor body 30 is housed in the housing part 31 a, while the base part 31 b is joined to the bracket 32 (FIG. 3). The tubular insulator 40 shown in FIG. 4 is a tube composed of insulating rubber and has elasticity. That is, the tubular insulator 40 is elastically deformed when an external force is applied, and returns to the original shape when the external force is removed. Further, the inner diameter of the tubular insulator 40 is about three times the outer diameter of the electrodes 41 and 42. As shown in FIG. 5, the electrodes 41 and 42 housed in the tubular insulator 40 are linear electrodes.
The two linear electrodes 41 and 42 are provided spirally inside the tubular insulator 40, and usually, repeatedly intersect each other in a non-contact state. As shown in FIG. 4, the outer peripheral surface of each of the linear electrodes 41 and 42 is fixed (welded) to the inner peripheral surface of the tubular insulator 40, and there is a gap between the two linear electrodes 41 and 42 which is set so that another similar linear electrode may fit in. As shown in FIG. 4, each of the linear electrodes 41 and 42 includes a core wire 50 composed of a plurality of strands 50 a twisted together, and a covering layer (sheath 51) covering the core wire 50. The strand 50 a in the present embodiment is a copper wire. That is, the core wire 50 in the present embodiment is a stranded wire composed of a plurality of copper wires. Further, the sheath 51 in the present embodiment is formed of a conductive resin extruded around the core wire 50. As described above, the tubular insulator 40 that houses the linear electrodes 41 and 42 has elasticity, and the housing part 31 a of the sensor holder 31 that holds the sensor body 30 including the tubular insulator 40 also has elasticity. Therefore, when the housing part 31 a of the sensor holder 31 receives an external force of a certain level or more and is elastically deformed (collapsed), the external force is applied to the tubular insulator 40 accordingly. Then, the tubular insulator 40 is elastically deformed (collapsed), and the two linear electrodes 41 and 42 come close to each other and come into contact with each other in the tubular insulator 40. Specifically, the sheath 51 of one linear electrode 41 and the sheath 51 of the other linear electrode 42 come into contact with each other. As a result, the two linear electrodes 41 and 42 are electrically connected (short-circuited). As shown in FIG. 5 to FIG. 
7, the core wires 50 of the linear electrodes 41 and 42 are drawn out from one opening 40 a of the tubular insulator 40. Each of the two core wires 50 drawn out from the opening 40 a of the tubular insulator 40 is a part of the core wire 50 exposed to the outside by partially removing the sheath 51 (FIG. 4) of the linear electrodes 41 and 42, and corresponds to the connection wire in the disclosure. Thus, in the following description, the exposed portion of the core wire 50 in the linear electrode 41 is referred to as a “connection wire 41 a”, and the exposed portion of the core wire 50 in the linear electrode 42 is referred to as a “connection wire 42 a”. The sensor body 30 further has a resistor R as an electrical component disposed on the outer side of the end of the tubular insulator 40. One end of the resistor R is provided with a short connection part C1, and the other end of the resistor R is provided with a long connection part C2. The long connection part C2 is folded 180 degrees and is arranged in parallel to the short connection part C1. The connection wire 41 a of the linear electrode 41 and the short connection part C1 are connected to each other by a connection member SW1, and the connection wire 42 a of the linear electrode 42 and the long connection part C2 are connected to each other by another connection member SW2. As shown in FIG. 5 to FIG. 7, the sensor body 30 further has a separator 60 as an insulating member. As shown in FIG. 8, the separator 60 has a substantially flat plate-shaped separator body 61, and a substantially columnar insertion protrusion 62 that protrudes from one end of the separator body 61 in the longitudinal direction. However, the separator body 61 and the insertion protrusion 62 are integrally formed of an insulating material such as plastic. As shown in FIG. 5 and FIG. 
9, the insertion protrusion 62 of the separator 60 is inserted between the two linear electrodes 41 and 42 housed in the tubular insulator 40 from the opening 40 a of the tubular insulator 40. Further, as shown in FIG. 5 to FIG. 7, the separator body 61 of the separator 60 is interposed between the two connection wires 41 a and 42 a and prevents contact (short circuit) between the connection wires 41 a and 42 a. Specifically, the resistor R, the short connection part C1, the connection wire 41 a, and the connection member SW1 are disposed on one side (upper side) of the separator body 61, and the long connection part C2, the connection wire 42 a, and the connection member SW2 are disposed on the other side (lower side) of the separator body 61. As shown in FIG. 8, two closing parts 63 are formed at the tip of the separator body 61 so as to surround the root of the insertion protrusion 62. Then, in the center of the closing part 63 formed on the upper side of the separator body 61, a concave part 63 a is provided for avoiding the connection wire 41 a (FIG. 6). In the center of the closing part 63 formed on the lower side of the separator body 61, a concave part 63 b is formed for avoiding the connection wire 42 a (FIG. 6). The two concave parts 63 a and 63 b are provided at positions different by 180 degrees in the circumferential direction of the insertion protrusion 62. As shown in FIG. 5 to FIG. 7 and FIG. 9, the connection wire 41 a is drawn out on the separator body 61 through the inner side of the concave part 63 a and is connected to the short connection part C1. In addition, the connection wire 42 a is drawn out on the separator body 61 through the inner side of the concave part 63 b and is connected to the long connection part C2. Moreover, as shown in FIG. 5 to FIG. 7, the front surfaces of the two closing parts 63 abut against the end surface of the tubular insulator 40. 
In other words, the insertion protrusion 62 is inserted into the tubular insulator 40 until the front surfaces of the two closing parts 63 abut against the end surface of the tubular insulator 40. As a result, the opening 40 a of the tubular insulator 40 is closed by the closing parts 63. More specifically, most of the gap between the inner peripheral surface of the tubular insulator 40 and the outer peripheral surfaces of the linear electrodes 41 and 42 (sheaths 51) in the opening 40 a of the tubular insulator 40 is closed by the closing parts 63. In the following description, the connection wires 41 a and 42 a, the resistor R, the connection members SW1 and SW2, and the separator body 61 may be collectively referred to as an “electrical connection part”. That is, the sensor body 30 has the electrical connection part provided on the outer side of the end of the tubular insulator 40. As described above, the mold part 44 is provided on one end side of the sensor body 30 (see FIG. 3). As shown in FIG. 6 and FIG. 7, the mold part 44 includes therein an end of the sensor holder 31, an end of the tubular insulator 40 protruding from the end, and the electrical connection part provided on the outer side of the end. Further, the mold part 44 is covered with a cover member 70 that covers at least a part of the components of the electrical connection part via the mold part 44. In other words, the cover member 70 is provided around the mold part 44 including the electrical connection part therein, and covers a part of the surface of the mold part 44. As shown in FIG. 10, the mold part 44 includes an upper surface 80 covered by the cover member 70, a bottom surface 81 located on the opposite side of the upper surface 80, and a pair of side surfaces 82 and 83 located between the upper surface 80 and the bottom surface 81. The mold part 44 is a resin molded body made by injection molding using a mold. 
A molding process of the mold part 44 includes at least a “separator assembly process” and a “mold resin injection process”. In the separator assembly process, as shown in FIG. 11, the separator 60 is arranged at a predetermined position in a predetermined direction. Specifically, the separator 60 is inserted between the body of the resistor R and the long connection part C2 so that the resistor R, the short connection part C1, the connection wire 41 a, and the connection member SW1 are arranged on one side (upper side) of the separator body 61, and the long connection part C2, the connection wire 42 a, and the connection member SW2 are arranged on the other side (lower side) of the separator body 61. Thereafter, the insertion protrusion 62 of the separator 60 is inserted between the two linear electrodes 41 and 42 in the tubular insulator 40 from the opening 40 a of the tubular insulator 40. At this time, the insertion protrusion 62 is inserted into the tubular insulator 40 until the front surfaces of the closing parts 63 of the separator 60 abut against the end surface of the tubular insulator 40. As a result, the separator body 61 is interposed between the short connection part C1, the connection wire 41 a, and the connection member SW1 and the long connection part C2, the connection wire 42 a, and the connection member SW2, and prevents contact (short circuit) between these components. Further, the opening 40 a of the tubular insulator 40 is closed by the closing parts 63 with substantially no gap. The tip of the insertion protrusion 62 is formed to be tapered to facilitate insertion between the linear electrodes 41 and 42. In addition, the diameter of the insertion protrusion 62 is slightly larger than the diameter of the linear electrodes 41 and 42, and enters the gap between the linear electrodes 41 and 42 while slightly pushing away the linear electrodes 41 and 42. 
Thus, the insertion protrusion 62 inserted between the two linear electrodes 41 and 42 does not come out accidentally. In the mold resin injection process, as shown in FIG. 11 and FIG. 12, the end of the sensor holder 31, the end of the tubular insulator 40, and the electrical connection part are disposed on the inner side of the cover member 70 set in the mold (not shown). As shown in FIG. 11, an insertion hole 31 c is formed in the sensor holder 31 over substantially the entire length thereof, and a core metal (not shown) is inserted into the insertion hole 31 c. Therefore, a cap CP that closes the insertion hole 31 c is attached to the sensor holder 31 before the mold resin injection process. Specifically, a protrusion protruding from one end surface of the cap CP is press-fitted into the insertion hole 31 c. Thereby, the insertion hole 31 c is closed and the mold resin is prevented from flowing into the insertion hole 31 c. Thereafter, mold resin is supplied into the mold to mold the mold part 44. At this time, the opening 40 a of the tubular insulator 40 is closed by the closing parts 63 of the separator 60. Thus, the mold resin does not flow into the tubular insulator 40, and even if it flows into the tubular insulator 40, the amount is small. Here, when the mold resin injection process is completed, the separator 60 and the cover member 70 are integrated via the mold part 44. However, the separator 60 and the cover member 70 are originally separate members and are independent from each other before the mold resin injection process. Therefore, in the separator assembly process, the separator 60 can be disposed at a predetermined position in a predetermined direction without being restricted by the position and direction of the cover member 70. That is, the assembly of the separator 60 is highly flexible. 
In other words, it is not required to rotate the separator 60 with the insertion protrusion 62 inserted between the linear electrodes 41 and 42 in the tubular insulator 40 in the circumferential direction of the tubular insulator 40 to adjust the position and direction of the cover member 70. Therefore, the separator 60 can be disposed at an appropriate position according to the positions of the two linear electrodes 41 and 42 in the opening 40 a of the tubular insulator 40 (which differ depending on the cutting positions of the tubular insulator 40 and the linear electrodes 41 and 42). In addition, the separator 60 is not moved by the elastic restoring force of the linear electrodes 41 and 42. For example, as a result of the separator 60 being moved by the elastic restoring force of the linear electrodes 41 and 42, the opening 40 a of the tubular insulator 40 may not be sufficiently closed by the separator 60 and the mold resin may flow into the tubular insulator 40. Occurrence of such a situation is easily and reliably prevented. The disclosure is not limited to the above embodiment, and various changes can be made without departing from the scope of the disclosure. For example, in some embodiments, as shown in FIG. 13A and FIG. 13B, grooves 84 which communicate with the end surface of the cover member 70 at one end (upper end) and communicate with the bottom surface 81 of the mold part 44 at the other end (lower end) are formed on the side surfaces 82 and 83 of the mold part 44. In the example as shown, two grooves 84 are formed on each of the side surfaces 82 and 83. As shown in FIG. 3, the mold part 44 is located at one end of the touch sensor unit 20. Thus, when attaching the touch sensor unit 20 to the vehicle 10 (FIG. 1 and FIG. 2), that is, when joining the bracket 32 to the edge of the tailgate 12 (FIG. 1 and FIG. 2), the operator often grips the mold part 44 to align one end of the touch sensor unit 20. 
However, the mold part 44 is smaller than a standard human fingertip and is not easy to grip. The plurality of grooves 84 shown in FIG. 13A and FIG. 13B function as an anti-slip when the mold part 44 is gripped so that the operator can grip the mold part 44 easily. Further, when the grooves 84 as shown are provided on the mold part 44, the mold part 44 is formed using a mold that has convex parts corresponding to the grooves 84. That is, the above mold resin injection process is performed using a mold having convex parts corresponding to the grooves 84. At this time, by putting one end of the convex part of the mold against the end surface of the cover member 70 shown in FIG. 12, it is also possible to prevent the position of the cover member 70 in the mold from shifting. In some embodiments, as shown in FIG. 14, a concave part 85 is formed on the bottom surface 81 of the mold part 44. As shown in FIG. 3, the bottom surface 81 of the mold part 44 is covered by the bracket 32 at the end. Thus, by forming the concave part 85 as shown in FIG. 14 on the bottom surface 81 of the mold part 44, the amount of usage of the resin material can be reduced without impairing the appearance of the touch sensor unit 20. In addition, a pair of protrusion parts 86 and 87 are integrally formed on the mold part 44 shown in FIG. 14. Specifically, a pair of protrusion parts 86 and 87 protruding downward from the bottom surface 81 are provided on two sides of the bottom surface 81 in the width direction. The inner surfaces of the protrusion parts 86 and 87 face each other in the width direction of the bottom surface 81. Furthermore, the outer surface of one protrusion part 86 is flush with the side surface 82 of the mold part 44, and the outer surface of the other protrusion part 87 is flush with the side surface 83 of the mold part 44. 
In other words, the outer surface of the protrusion part 86 forms a part of the side surface 82 of the mold part 44, and the outer surface of the protrusion part 87 forms a part of the side surface 83 of the mold part 44. As described above, the sensor holder 31 is joined to the bracket 32 (FIG. 3), but not only the sensor holder 31 but also the mold part 44 may be joined to the bracket 32 (FIG. 3). At this time, a series of double-sided tape is affixed from the bottom surface of the sensor holder 31 to the bottom surface 81 of the mold part 44. The protrusion parts 86 and 87 cover up the side surfaces of the double-sided tape affixed to the bottom surface 81 of the mold part 44 to enhance the aesthetic appearance of the touch sensor unit 20. In addition, a protrusion part 31 d connected to the protrusion part 87 is integrally formed on the bottom surface of the sensor holder 31 shown in FIG. 14. The protrusion part 31 d is also for covering up the side surface of the double-sided tape affixed to the bottom surface of the sensor holder 31 to enhance the aesthetic appearance of the touch sensor unit 20. Other Configurations In one embodiment of the disclosure, a touch sensor unit is provided, including a sensor body and a sensor holder holding the sensor body. 
The sensor body includes: a tubular insulator housed in the sensor holder and elastically deformed when an external force is applied; a plurality of electrodes provided inside the tubular insulator and coming into contact with each other as the tubular insulator is elastically deformed; an electrical component disposed on an outer side of an end of the tubular insulator; a plurality of connection wires connecting each of the electrodes and the electrical component; an insulating member interposed between the plurality of connection wires and preventing contact between the connection wires; a mold part including at least the connection wires, the electrical component, and the insulating member; and a cover member covering at least a part of the connection wires, the electrical component, and the insulating member via the mold part. According to an embodiment of the disclosure, the cover member is provided around the mold part and covers a part of a surface of the mold part. According to another embodiment of the disclosure, the mold part includes an upper surface covered by the cover member, a bottom surface located on an opposite side of the upper surface, and a pair of side surfaces located between the upper surface and the bottom surface. Further, a groove is formed on each of the side surfaces, in which one end of the groove communicates with an end surface of the cover member and the other end of the groove communicates with the bottom surface. According to another embodiment of the disclosure, a concave part is formed on the bottom surface of the mold part. According to another embodiment of the disclosure, a pair of protrusion parts protruding downward from the bottom surface are provided on two sides in a width direction of the bottom surface of the mold part. Inner surfaces of the protrusion parts face each other in the width direction of the bottom surface.
In addition, an outer surface of one protrusion part is flush with one side surface of the mold part, and an outer surface of the other protrusion part is flush with the other side surface of the mold part. The disclosure can improve the flexibility of arrangement of the separator provided at an end of the touch sensor unit to ensure electrical insulation. It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed embodiments without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the disclosure covers modifications and variations provided that they fall within the scope of the following claims and their equivalents. What is claimed is: 1. A touch sensor unit, comprising: a sensor body; and a sensor holder holding the sensor body, wherein the sensor body comprises: a tubular insulator housed in the sensor holder and elastically deformed when an external force is applied; a plurality of electrodes provided inside the tubular insulator and coming into contact with each other as the tubular insulator is elastically deformed; an electrical component disposed on an outer side of an end of the tubular insulator; a plurality of connection wires connecting each of the electrodes and the electrical component; an insulating member interposed between the plurality of connection wires and preventing contact between the connection wires; a mold part including at least the connection wires, the electrical component, and the insulating member; and a cover member covering at least a part of the connection wires, the electrical component, and the insulating member via the mold part, wherein the cover member is provided around the mold part and covers a part of a surface of the mold part, wherein the mold part comprises an upper surface covered by the cover member, a bottom surface located on an opposite side of the upper surface, and a pair of side surfaces located 
between the upper surface and the bottom surface, wherein a groove is formed on each of the side surfaces for serving as an anti-slip, wherein one end of the groove communicates with an end surface of the cover member and the other end of the groove communicates with the bottom surface. 2. The touch sensor unit according to claim 1, wherein a concave part is formed on the bottom surface of the mold part. 3. The touch sensor unit according to claim 1, wherein a pair of protrusion parts protruding downward from the bottom surface are provided on two sides in a width direction of the bottom surface of the mold part, inner surfaces of the protrusion parts face each other in the width direction of the bottom surface, an outer surface of one protrusion part is flush with one side surface of the mold part, and an outer surface of the other protrusion part is flush with the other side surface of the mold part. 4. The touch sensor unit according to claim 2, wherein a pair of protrusion parts protruding downward from the bottom surface are provided on two sides in a width direction of the bottom surface of the mold part, inner surfaces of the protrusion parts face each other in the width direction of the bottom surface, an outer surface of one protrusion part is flush with one side surface of the mold part, and an outer surface of the other protrusion part is flush with the other side surface of the mold part.
ERROR [sawtooth-simple-supply-shell 2/5]

Hello,

While attempting to run the 'docker-compose up' command, the following components fail to build: simple-supply-rest-api, simple-supply-subscriber, simple-supply-shell, and simple-supply-tp. The failing instruction is:

    RUN apt-get update && apt-get install -y -q curl gnupg && curl -sSL 'http://p80.pool.sks-keyservers.net/pks/lookup?op=get&search=0x8AA7AF1F1091A5FD' | apt-key add - && echo 'deb [arch=amd64] http://repo.sawtooth.me/ubuntu/chime/stable bionic universe' >> /etc/apt/sources.list && apt-get update

The problem appears to be the apt key fetch:

    Warning: apt-key output should not be parsed (stdout is not a terminal)
    curl: (6) Could not resolve host: p80.pool.sks-keyservers.net
    gpg: no valid OpenPGP data found.

The last message is:

    failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c apt-get update && apt-get install -y -q curl gnupg && curl -sSL 'http://p80.pool.sks-keyservers.net/pks/lookup?op=get&search=0x8AA7AF1F1091A5FD' | apt-key add - && echo 'deb [arch=amd64] http://repo.sawtooth.me/ubuntu/chime/stable bionic universe' >> /etc/apt/sources.list && apt-get update]: exit code: 2

Any insight on how to handle this error would be appreciated. Thank you.

One way to narrow this down is to run each command by hand on the command line and see which one fails. It looks like the keyserver p80.pool.sks-keyservers.net does not exist, at least at the moment. Whether that is temporary or permanent is unclear, but if it has been permanently removed it would have an impact on Sawtooth everywhere.

I managed to get a fix in the meantime by using pgp.surfnet.nl as a replacement for p80.pool.sks-keyservers.net. I will be closing the issue.
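The fix described above amounts to swapping the defunct keyserver host in the Dockerfile's RUN instruction. A sketch of the patched instruction, assuming pgp.surfnet.nl exposes the same HKP lookup path (the key ID and repository line are copied verbatim from the error output above):

```dockerfile
# Hypothetical patch: only the keyserver hostname changes.
RUN apt-get update && apt-get install -y -q curl gnupg \
 && curl -sSL 'http://pgp.surfnet.nl/pks/lookup?op=get&search=0x8AA7AF1F1091A5FD' | apt-key add - \
 && echo 'deb [arch=amd64] http://repo.sawtooth.me/ubuntu/chime/stable bionic universe' >> /etc/apt/sources.list \
 && apt-get update
```

Public SKS pool hosts come and go; if this mirror also disappears, any HKP keyserver that still serves the key, or a vendored copy of the key added with COPY plus apt-key add, works the same way.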
Method for making a new metal-insulator-metal (MIM) capacitor structure in copper-CMOS circuits using a pad protect layer
ABSTRACT
A metal-insulator-metal (MIM) capacitor structure and method of fabrication for CMOS circuits having copper interconnections are described. The method provides metal capacitors with high figure of merit Q (X_c/R) and does not require additional masks and metal layers. The method forms a copper capacitor bottom metal (CBM) electrode while concurrently forming the pad contacts and level of copper interconnections by the damascene process. An insulating (Si 3 N 4 ) metal protect layer is formed on the copper and a capacitor interelectrode dielectric layer is formed. A metal protecting buffer is used to protect the thin interelectrode layer, and openings are etched to pad contacts and interconnecting lines. A TiN/AlCu/TiN metal layer is deposited and patterned to form the capacitor top metal (CTM) electrodes, the next level of interconnections, and to provide a pad protect layer on the copper pad contacts. The thick TiN/AlCu/TiN CTM electrode reduces the capacitor series resistance and improves the capacitor figure of merit Q, while the pad protect layer protects the copper from corrosion.
BACKGROUND OF THE INVENTION
(1) Field of the Invention
The present invention relates to a method for making metal capacitors for integrated circuits, and more particularly relates to a method for making metal-insulator-metal (MIM) capacitor structures compatible with copper metallization schemes for wiring-up CMOS circuits. The MIM capacitors utilize the pad protect layer with copper (Cu) bottom electrodes and aluminum/copper (Al/Cu) top electrodes to achieve high capacitance per unit area while providing low series resistance, resulting in a circuit having capacitors with high figure of merit Q.
(2) Description of the Prior Art
Capacitors on semiconductor chips are used for various integrated circuit applications.
For example, these on-chip MIM capacitors can be used for mixed signal (analog/digital circuits) applications and radio frequency (RF) circuits. These capacitors can also serve as decoupling capacitors to provide improved voltage regulation and noise immunity for power distribution. Typically these capacitors are integrated into the semiconductor circuit when the semiconductor devices are formed on the substrate. For example, the one or two doped patterned polysilicon layers used to make the field effect transistors (FETs) and/or bipolar transistors can also be used to form the capacitors. Alternatively, the capacitors can be fabricated using the multiple levels of interconnecting metal patterns (e.g., Al/Cu) used to wire up the individual semiconductor devices (FETs). In recent years portions of the AlCu metallization have been replaced with copper (Cu) to significantly reduce the resistivity of the conductive metal lines and thereby improve the RC (resistance×capacitance) delay time and improve circuit performance. Generally the capacitors can be integrated into the circuit with few additional process steps. The capacitance C for the capacitor is given by the expression C=eA/d, where e is the dielectric constant, A is the capacitor area, and d is the thickness of the capacitor dielectric layer between the two capacitor electrodes. Typically the figure of merit Q for a capacitor in a circuit is X_c/R, where X_c is the capacitor reactance expressed in ohms, and R is the resistance (ohms) in series with the capacitive reactance. To improve the figure of merit it is desirable to maximize X_c while minimizing R. In conventional capacitor structures multiple contacts are made to the relatively thin capacitor top metal (CTM) electrode to minimize resistance and improve the figure of merit Q. This is best understood with reference to FIG. 1. As shown in FIG. 1, when a more conventional MIM capacitor C is formed on the partially completed CMOS substrate 10, the CBM electrode is formed from an upper interconnecting metallurgy layer 15 of TiN/AlCu/TiN. An interelectrode dielectric layer 17 is formed on the CBM electrode top surface. A capacitor top metal (CTM) electrode is formed from a patterned, relatively thin AlCu/TiN layer 19, and a planar insulating layer 21 is formed over the capacitor to insulate the capacitor and provide support for the next level of metal interconnections 25. A TiN/AlCu/TiN layer is then deposited and patterned to form the next level of metal interconnections. Vias (holes) 23 are etched in the insulating layer 21 to make contact to the CBM electrode 15 and the CTM electrode 19. Unfortunately, to minimize the series resistance R to the capacitor it is necessary to etch a series of closely spaced vias 23. For example, U.S. Pat. No. 5,926,359 to Greco et al., and U.S. Pat. No. 5,946,567 to Weng et al. are similar to the capacitor structure depicted above. In U.S. Pat. No. 5,406,447 to Miyazaki, a method is described for making an MOS, MIS, or MIM capacitor incorporating a high-dielectric material, such as tantalum oxide, strontium nitrate and the like, as the interelectrode dielectric layer. In U.S. Pat. No. 5,812,364 to Oku et al., a method is described for making a compatible MIM capacitor on a gallium arsenide substrate, but does not address the method of making MIM capacitors integrated with copper metallization schemes for CMOS devices on silicon substrates. There is still a need in the semiconductor industry to form metal-insulator-metal (MIM) capacitors with high capacitance and low series resistance for improved figure of merit Q for advanced Cu metallization schemes on integrated circuits.
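The parallel-plate capacitance and figure-of-merit expressions given in the Background section (C = eA/d and Q = X_c/R) can be checked numerically. A minimal sketch; the example dimensions and frequency are illustrative assumptions, not values from the patent:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(eps_r: float, area_m2: float, d_m: float) -> float:
    """Parallel-plate capacitance C = e*A/d, with e = eps_r * EPS0."""
    return eps_r * EPS0 * area_m2 / d_m

def reactance(c_farads: float, freq_hz: float) -> float:
    """Capacitive reactance X_c = 1 / (2*pi*f*C), in ohms."""
    return 1.0 / (2.0 * math.pi * freq_hz * c_farads)

def figure_of_merit(x_c_ohms: float, r_series_ohms: float) -> float:
    """Q = X_c / R: lowering the series resistance R raises Q."""
    return x_c_ohms / r_series_ohms

# Illustrative (assumed) numbers: 100 um x 100 um SiO2 MIM capacitor,
# 50 nm dielectric, evaluated at 1 GHz.
c = capacitance(3.9, 100e-6 * 100e-6, 50e-9)  # roughly 6.9 pF
xc = reactance(c, 1e9)
# Halving the series resistance doubles Q for the same reactance.
assert figure_of_merit(xc, 1.0) == 2 * figure_of_merit(xc, 2.0)
```

This makes the design point of the patent concrete: the interelectrode dielectric fixes X_c, so the remaining lever for Q is the series resistance of the electrodes and contacts.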
SUMMARY OF THE INVENTION

A principal object of the present invention is to fabricate a Metal-Insulator-Metal (MIM) capacitor structure having a high figure of merit Q for improved circuit performance using CMOS technology. A second object of this invention is to provide this improved capacitor using a Cu damascene process to form the Capacitor Bottom Metal (CBM) and using a patterned pad protection layer for the Capacitor Top Metal (CTM), which is also patterned to form a level of metal interconnections and to protect the pad contacts. A third object of this invention is to use an insulating protecting buffer layer, which is used to protect the Cu CBM layer from reacting with the SiO₂ interelectrode dielectric layer for the MIM capacitor, and that also serves as a portion of the interelectrode dielectric layer. A further object of the present invention, by a second embodiment, is another method of fabricating a Metal-Insulator-Metal (MIM) capacitor having a high figure of merit Q requiring no additional masks or metal layers. In accordance with the objects of the present invention, a method is described for making MIM capacitors having a high figure of merit Q by reducing the series resistance associated with the capacitor. The method is compatible with the copper damascene process for CMOS circuits having planar surfaces formed by chemical/mechanical polishing (CMP). The method for making MIM capacitors for CMOS circuits begins by providing a semiconductor substrate having partially completed CMOS circuits including several levels of electrical interconnections.
The next level of metal interconnections is formed by the damascene process, which involves depositing a first insulating layer, for example a chemical-vapor-deposited (CVD) SiO₂, and etching recesses for CBM electrodes, pad contacts, and the next level of metal interconnections. Typically for the damascene process, a barrier layer is deposited on the first insulating layer, and a Cu layer is deposited by either physical vapor deposition (PVD) or by electroplating. The Cu layer is then polished back to form the CBM electrodes, the pad contacts, and interconnections in the recesses. Next an insulating protecting buffer layer is deposited. This buffer layer prevents the underlying Cu layer from direct contact with the SiO₂ layer (the interelectrode dielectric layer) that would cause Cu corrosion. Then a second insulating layer, for example SiO₂, is deposited on the insulating protecting buffer layer to serve as a portion of a capacitor interelectrode dielectric layer. Alternatively, if the capacitor interelectrode dielectric layer is silicon nitride (Si₃N₄), then the insulating protecting buffer layer is not required. Continuing with the process, a conducting metal protect buffer layer, such as TiN, TaN, Ta, or Ti, is deposited to protect the second insulating layer during the photoresist processing. A first photoresist layer is deposited. A pad contact mask is used to expose and develop the first photoresist, and plasma etching is used to etch pad contact openings (windows) and interconnect contact openings in the metal protect buffer layer and the second insulating layer down to the insulating protecting buffer layer. The remaining first photoresist layer is then removed. Next the insulating protecting buffer layer in the contact openings is removed down to the underlying Cu layer, and a blanket pad protection metal layer is deposited.
The pad protection metal layer, which is preferably TaN/Al/TaN or TaN/AlCu/TaN, protects the Cu from corrosion and also serves as the next level of interconnections. A second photoresist mask and plasma etching are used to pattern the blanket pad protection metal layer to form pad protection over the pads, and to also form capacitor top metal (CTM) electrodes, and to form the next level of interconnecting lines. A passivating third insulating layer, such as PECVD silicon nitride and high-density plasma (HDP) oxide, is deposited to protect the underlying patterned metallurgy. A third photoresist layer is deposited and is exposed through the pad contact mask and developed to provide openings. The third photoresist mask is now used to plasma etch openings through the passivating third insulating layer to the pad protection over the pads and to the CTM electrodes. The remaining third photoresist layer is then removed to complete the MIM capacitor integrated with the CMOS circuit. In a second embodiment the process is similar to the first embodiment up to and including the formation of the interelectrode dielectric layer. In this second embodiment pad contact openings and vias are etched in the insulating protecting buffer layer and the second insulating layer. The pad contact openings are etched to the Cu pad contacts, and the vias are etched for the electrical interconnections. The next level of metal, consisting of TiN/AlCu/TiN, is deposited and patterned to form the next level of metal interconnections and concurrently form the capacitor top metal (CTM) electrodes. This results in a relatively thick CTM electrode having low series resistance and a low-series-resistance Cu CBM electrode that provides a higher figure of merit Q (X_(c)/R). The remaining process steps are similar to the first embodiment. A passivation layer is deposited and openings are etched for the pad contacts.
BRIEF DESCRIPTION OF THE DRAWINGS

The objects and other advantages of this invention are best understood with reference to the preferred embodiments when read in conjunction with the following drawings. FIG. 1 shows a schematic cross-sectional view of a prior-art capacitor requiring multiple contacts to the top metal plate to reduce resistance. FIG. 2 is a schematic top view of a MIM capacitor made by the method of this invention for the first embodiment. FIGS. 3 through 12 are schematic cross-sectional views for the sequence of process steps for making a MIM capacitor while concurrently protecting the Cu pad contacts for a CMOS circuit formed by the first embodiment of this invention. FIG. 13 shows a schematic cross-sectional view of a MIM capacitor for the method of a second embodiment of this invention, in which a layer is removed to simplify the process.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present invention relates to a method for making metal-insulator-metal (MIM) capacitors with increased figure of merit Q while concurrently forming pad contacts and electrical interconnections for the CMOS circuits. The method is compatible with the Cu damascene process for CMOS circuits having planar surfaces formed by CMP. These MIM capacitors are used in many mixed-signal (analog/digital) and radio frequency (RF) circuit applications. The method of this invention utilizes an insulating protecting buffer layer and a blanket pad protection metal layer to form these improved MIM capacitors. The partially completed CMOS circuits on which these MIM capacitors are built are not explicitly shown in the figures to simplify the drawings and discussion. However, these MIM capacitors are formed on a semiconductor substrate having partially completed CMOS circuits that include P-channel and N-channel FETs and several levels of electrical interconnections. FIG.
2 shows a schematic top view of a MIM capacitor and a pad contact completed up to and including the CTM electrode, shown in cross section in FIG. 10, and fabricated by the method of a first embodiment. The figure shows a patterned capacitor bottom metal electrode 14A, a pad contact 14B, and a level of metal interconnections 14C formed by a Cu damascene process on a partially completed substrate 10. An insulating protective buffer layer 16, an interelectrode dielectric layer 18, and a metal protect buffer layer 20 are formed on the patterned Cu. Openings 3 are etched to the pad contacts 14B and openings 5 are etched to the metal interconnections 14C. A blanket pad protection metal layer 26, composed of TiN/AlCu/TiN, is deposited to protect the exposed Cu. The TiN/AlCu/TiN layer 26, the conducting metal protect buffer layer 20, and the interelectrode dielectric layer 18 are patterned to form a protective buffer layer 26B over the pad contact 14B, and also patterned to form the capacitor top metal electrode 26A, and to form another level of metal interconnections 26C. FIG. 3 shows a schematic cross-sectional view of the upper portion of a semiconductor substrate. The method for making these improved MIM capacitors for CMOS circuits begins by providing the semiconductor substrate having partially completed CMOS circuits including several levels of electrical interconnections. Although this partially completed substrate is not depicted in the figures, the substrate is typically a single-crystal silicon substrate having a <100> crystallographic orientation. The CMOS circuits are typically formed from P-channel and N-channel field effect transistors (FETs), and the electrical interconnections are typically formed from patterned polycide layers and several patterned metal layers, such as AlCu, and in more advanced circuits, formed from Cu. Continuing with FIG. 3, the next level of metal interconnections is formed by the damascene process.
A first insulating layer 12 is deposited on the partially completed substrate 10. Layer 12 is preferably a CVD SiO₂ deposited, for example, using tetraethoxysilane (TEOS) or TEOS/ozone as the reactant gas. First insulating layer 12 is deposited to a thickness of between about 10000 and 12000 Angstroms. Using conventional photolithographic techniques and anisotropic etching, recesses (trenches) are etched for capacitor bottom metal (CBM) electrodes 2, for pad contacts 4, and for the next level of metal interconnections 6, a portion of which is shown in FIG. 3. The recesses are etched to a preferred depth of between about 6000 and 8000 Angstroms. A conformal barrier layer (not shown) is deposited on the first insulating layer 12 and in the trenches. Typically the barrier layer is TaN. Next, a Cu layer 14 is deposited to fill the trenches (2, 4, and 6), and more specifically to a thickness of between about 6000 and 8000 Angstroms. The Cu layer 14 is deposited preferably using electroplating, but physical vapor deposition can also be used. The Cu layer 14 is then chemically-mechanically polished back to form the CBM electrodes 14A, the pad contacts 14B, and the metal interconnections 14C. Referring next to FIG. 4, an insulating protecting buffer layer 16 is deposited. Layer 16 is preferably composed of Si₃N₄ and is used to protect the Cu layer 14 from reacting with the SiO₂ layer that is formed next. The insulating protecting buffer layer 16 is formed by PECVD using, for example, dichlorosilane (SiCl₂H₂) and ammonia (NH₃) as the reactant gas mixture, and is deposited to a thickness of between about 150 and 300 Angstroms. Referring now to FIG. 5, a second insulating layer 18 is deposited on the insulating protecting buffer layer 16 to serve as a portion of the capacitor interelectrode dielectric layer. Layer 18 is preferably SiO₂, deposited by plasma-enhanced CVD (PECVD), using a reactant gas such as TEOS.
Second insulating layer 18 is deposited to a thickness of between about 150 and 300 Angstroms. Alternatively, the capacitor interelectrode dielectric layer 18 can be Si₃N₄, and then the insulating protecting buffer layer 16 is not required. Referring to FIG. 6, a conducting metal protect buffer layer 20 is deposited to protect the second insulating layer 18 during photoresist processing. Layer 20 is preferably composed of TaN and is deposited by PVD. Layer 20 is deposited to a thickness of between about 500 and 700 Angstroms. Referring to FIG. 7, a first photoresist layer is deposited on the conducting metal protect buffer layer 20. A pad contact mask 24 is used to expose the first photoresist, and the photoresist is developed to provide an etch mask 22. Referring now to FIG. 8, the photoresist mask 22 and anisotropic plasma etching are used to etch pad contact openings (windows) 3 and interconnect contact openings 5 in the metal protect buffer layer 20 and the second insulating layer 18 down to the insulating protecting buffer layer 16. The metal protect buffer layer 20 and the second insulating layer 18 are etched using dry etching. Referring to FIG. 9, the remaining first photoresist layer 22 is removed, for example by plasma ashing in oxygen (O₂) and/or by wet stripping. Next the insulating protecting buffer layer 16 exposed in the contact openings 3 and 5 is removed down to the underlying Cu layer 14. The Si₃N₄ insulating protecting buffer layer 16 is removed by etching. Continuing with FIG. 9, a blanket pad protection metal layer 26 is deposited. Layer 26 is preferably composed of TiN/Al/TiN or TiN/AlCu/TiN, and is deposited by PVD. The TiN is deposited to a thickness of between about 200 and 500 Angstroms, and the Al or AlCu alloy is deposited to a thickness of between about 8000 and 10000 Angstroms.
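Where both the Si₃N₄ buffer layer 16 and the SiO₂ layer 18 sit between the electrodes, the two films behave as capacitors in series, so per unit area 1/C = 1/C₁ + 1/C₂. A rough sketch of that combination, using commonly cited (assumed, not patent-specified) relative permittivities and the 300-Angstrom upper ends of the thickness ranges given above:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m
K_SI3N4 = 7.5     # assumed relative permittivity for Si3N4
K_SIO2 = 3.9      # assumed relative permittivity for SiO2

def cap_per_area(k, thickness_m):
    """Parallel-plate capacitance per unit area, C/A = k * eps0 / d."""
    return k * EPS0 / thickness_m

def series_stack(*caps):
    """Stacked dielectrics combine like series capacitors: 1/C = sum(1/Ci)."""
    return 1.0 / sum(1.0 / c for c in caps)

# 300 Angstrom Si3N4 buffer under 300 Angstrom SiO2; 1 Angstrom = 1e-10 m.
c_nitride = cap_per_area(K_SI3N4, 300e-10)
c_oxide = cap_per_area(K_SIO2, 300e-10)
c_total = series_stack(c_nitride, c_oxide)
```

The combined capacitance is always smaller than that of either film alone, which is one reason the text notes the buffer layer can be omitted when Si₃N₄ itself serves as the interelectrode dielectric.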
The pad protection layer 26 protects the exposed Cu in the openings 3 and 5 from corrosion and contamination from the ambient. Layer 26 also serves as the next level of interconnections. Referring to FIG. 10, a second photoresist layer 28 is deposited on the blanket pad protection metal layer 26. A second photoresist mask 30 is used to expose the second photoresist, and the photoresist is developed to provide an etch mask 28. The etch mask 28 and anisotropic plasma etching are used to pattern the blanket pad protection metal layer 26 to form pad protection 26B over the pads 14B, and concurrently to form the capacitor top metal (CTM) electrode 26A and to form the next level of metal interconnections 26C. The TiN/AlCu/TiN layer 26 is patterned using reactive ion etching and a chlorine-based etchant gas. The conducting metal protect buffer layer 20 is also etched using a chlorine-based etchant gas. The second insulating layer 18 (interelectrode dielectric layer) is etched selectively to the insulating protect buffer layer 16, also using RIE and a chlorine-based etchant gas. The above etching process steps can be carried out sequentially in the same etching chamber. This etching step electrically isolates the CTM electrodes 26A from the pad protection over the pads 26B. A key feature of this invention is that the CTM electrodes 26A are formed directly on the conducting metal protect buffer layer 20, which provides low resistance in series to the capacitor and improves the figure of merit Q. Referring to FIG. 11, a passivating third insulating layer 32 is deposited. Layer 32 is preferably composed of a CVD silicon nitride layer and an HDP oxide layer deposited using PECVD to a thickness of between about 17000 and 20000 Angstroms. Referring to FIG. 12, a third photoresist layer 34 is deposited by spin coating and is exposed through the pad contact mask 24 and developed to provide openings 7 in the photoresist over the pad contacts 14B.
The photoresist mask 34 and RIE are used to etch vias in the passivation layer 32 to the pad protection 26B over the pad contacts 14B. Concurrently, openings 9 are etched to the interconnecting metallurgy 26C. Another key feature of this invention is that the pad contact mask 24 used to etch openings 3 and 5 is also used to etch the openings 7 and 9, thereby reducing the mask set. The remaining third photoresist layer is then removed to complete the MIM capacitor integrated with the CMOS circuit. Referring to FIG. 13, a method for making a MIM capacitor is described by a second embodiment. The second embodiment is similar to the first embodiment, but without including the conducting metal protect buffer layer 20. While the invention has been particularly shown and described with reference to the preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made without departing from the spirit and scope of the invention. 1-20. (Canceled). 21.
A metal-insulator-metal (MIM) capacitor for CMOS circuits on a semiconductor substrate comprising: a semiconductor substrate having partially completed CMOS circuits including several layers of metal interconnections; a first insulating layer having recesses for capacitor bottom metal (CBM) electrodes, pad contacts, and a level of metal interconnections; copper in said recesses that forms CBM electrodes, pad contacts, and a level of metal interconnections; an insulating protecting buffer layer and a second insulating layer over said CBM electrodes that serve as a portion of an interelectrode dielectric layer; a conducting metal protect buffer layer on said second insulating layer; a patterned titanium nitride/aluminum copper/titanium nitride blanket pad protection metal layer to form capacitor top metal (CTM) electrodes, pad protection on said pad contacts, and a next level of metal interconnecting lines; and a passivating third insulating layer having openings to said pad protection over said pads and to said CTM electrodes. 22. The structure of claim 21, wherein said first insulating layer is silicon oxide having a thickness of between about 10000 and 12000 Angstroms, and said recesses have a depth of between about 6000 and 8000 Angstroms. 23. The structure of claim 21, wherein said insulating protective buffer layer is silicon nitride having a thickness of between about 150 and 300 Angstroms. 24. The structure of claim 21, wherein said second insulating layer is silicon oxide having a thickness of between about 150 and 300 Angstroms. 25. The structure of claim 21, wherein said conducting metal protect buffer layer is tantalum nitride having a thickness of between about 150 and 300 Angstroms. 26. The structure of claim 21, wherein said conducting metal protect buffer layer is titanium nitride having a thickness of between about 150 and 300 Angstroms. 27.
The structure of claim 21, wherein said pad protection metal layer has titanium nitride layers with a thickness of between about 200 and 500 Angstroms and has an aluminum copper alloy layer with a thickness of between about 8000 and 10000 Angstroms. 28. The structure of claim 21, wherein said passivating third insulating layer is composed of a layer of silicon nitride and a silicon oxide and is formed to a thickness of between about 17000 and 20000 Angstroms.
using System.Collections.Generic;
using System.Collections.ObjectModel;
using Xendor.Extensions.Collections.Generic;

namespace Xendor.CommandModel.Validation
{
    public class Notification : INotification
    {
        private readonly List<Error> _errors;

        public Notification()
        {
            _errors = new List<Error>();
        }

        public bool HasErrors => !_errors.IsEmpty();

        public IEnumerable<Error> Errors => new ReadOnlyCollection<Error>(_errors);

        public void AddError(Error error)
        {
            _errors.Add(error);
        }

        public void AddErrors(ErrorCollection errors)
        {
            _errors.AddRange(errors.BrokenDomainRules);
        }

        public void Clear()
        {
            _errors.Clear();
        }
    }
}
#!/usr/bin/env python
from importlib import import_module
import json
import os
from flask import Flask, render_template, Response, request, url_for, jsonify, g
# from camera.camera_opencv import Camera
# Raspberry Pi camera module (requires picamera package)
from camera.camera_pi import Camera
import smbus
import time
import sys
from PIL import Image
import picamera
import io

'''
camera = picamera.PiCamera()
camera.resolution = (300, 300)
camera.framerate = 10
'''

bus = smbus.SMBus(1)
app = Flask(__name__)
frame = 0


@app.route('/')
def index():
    """Video streaming home page."""
    return render_template('index.html')


def gen(camera):
    """Video streaming generator function."""
    global frame
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')


@app.route('/video_feed')
def video_feed():
    """Video streaming route. Put this in the src attribute of an img tag."""
    return Response(gen(Camera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')


@app.route('/move', methods=['POST'])
def move_robot():
    state = ''
    move = {
        'left': int(request.form['left']),
        'up': int(request.form['up']),
        'right': int(request.form['right']),
        'down': int(request.form['down'])
    }
    if move['up'] and move['down']:
        move['up'] = 0
        move['down'] = 0
    if move['left'] and move['right']:
        move['left'] = 0
        move['right'] = 0
    # Move the car
    if move['up'] and move['right']:
        state = "Moving: up-right"
        bus.write_byte_data(0x21, 0x00, 9)
    elif move['up'] and move['left']:
        state = "Moving: up-left"
        bus.write_byte_data(0x21, 0x00, 7)
    elif move['down'] and move['right']:
        state = "Moving: down-right"
        bus.write_byte_data(0x21, 0x00, 3)
    elif move['down'] and move['left']:
        state = "Moving: down-left"
        bus.write_byte_data(0x21, 0x00, 1)
    elif move['up']:
        state = "Moving: up"
        bus.write_byte_data(0x21, 0x00, 8)
    elif move['down']:
        state = "Moving: down"
        bus.write_byte_data(0x21, 0x00, 2)
    elif move['left']:
        state = "Moving: left"
        bus.write_byte_data(0x21, 0x00, 4)
    elif move['right']:
        state = "Moving: right"
        bus.write_byte_data(0x21, 0x00, 6)
    else:
        state = "Stopped"
        bus.write_byte_data(0x21, 0x00, 5)
    return jsonify({'state': state})


@app.route('/recognize', methods=['POST'])
def recognize_picture():
    global first
    # os.system("./simple_google_tts en 'please wait i am thinking'")
    os.system("flite -voice kal16 -t 'please wait i am thinking'")
    state = 'Recognizing...'
    global frame
    image = Image.open(io.BytesIO(frame))
    image.save('/home/pi/Pics/2', 'jpeg')
    sys.stdout.write("qq")
    sys.stdout.flush()
    text = ''
    length = sys.stdin.read(4)
    sys.stdout.write("ZZzz")
    sys.stdout.flush()
    # if length.isdigit() == False:
    #     length = length[0:2]
    # print(int(length))
    sys.stdout.write(str(length))
    sys.stdout.flush()
    rec = sys.stdin.read(int(length))
    state = rec
    temp = ""
    for x in state:
        if x == ',' or x == '(':
            break
        else:
            temp = temp + x
    # DO SOMETHING HERE
    # temp = "flite -voice kal16 -t " + "' i guess it is" + temp + "'"
    temp = "flite -voice kal16 -t " + "' i guess it is" + temp + "'"
    os.system(temp)
    return jsonify({'state': state})


if __name__ == '__main__':
    app.debug = False
    app.run(host='0.0.0.0', threaded=True, port=5000)
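The multipart framing that gen() emits can be exercised off the Pi with a stand-in camera; the sketch below is illustrative only (StubCamera and mjpeg_chunks are hypothetical names, not part of the app), but it produces chunks in the same `--frame` boundary shape the browser consumes:

```python
class StubCamera:
    """Stand-in for camera_pi.Camera so the framing can run without hardware."""
    def __init__(self, frames):
        self._frames = iter(frames)

    def get_frame(self):
        return next(self._frames)


def mjpeg_chunks(camera, count):
    """Yield `count` multipart chunks shaped like the ones gen() produces."""
    for _ in range(count):
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')


chunks = list(mjpeg_chunks(StubCamera([b'jpeg-1', b'jpeg-2']), 2))
```

Each chunk starts with the `--frame` boundary declared in the route's `multipart/x-mixed-replace; boundary=frame` mimetype, which is what lets the browser replace the previous JPEG with each new one.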
from pagerduty_events_api.pagerduty_incident import PagerdutyIncident


class PagerdutyService:
    def __init__(self, key):
        self.__service_key = key

    def get_service_key(self):
        return self.__service_key

    def trigger(self, description, additional_params=None):
        # Use None instead of a mutable {} default to avoid sharing state
        # between calls.
        incident = PagerdutyIncident(self.__service_key)
        incident.trigger(description, additional_params or {})
        return incident
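A hedged usage sketch of the pattern above. To keep it runnable without the pagerduty_events_api package or network access, the incident class is a stub injected through a hypothetical constructor parameter; the real PagerdutyService hard-codes PagerdutyIncident:

```python
class FakeIncident:
    """Stub with the same trigger() shape assumed for PagerdutyIncident."""
    def __init__(self, service_key):
        self.service_key = service_key
        self.triggered = None

    def trigger(self, description, additional_params):
        self.triggered = (description, additional_params)


class Service:
    """Mirrors PagerdutyService above, parameterized on the incident class
    so the example stays offline (illustrative, not the real API)."""
    def __init__(self, key, incident_cls=FakeIncident):
        self._key = key
        self._incident_cls = incident_cls

    def trigger(self, description, additional_params=None):
        incident = self._incident_cls(self._key)
        incident.trigger(description, additional_params or {})
        return incident


incident = Service('my-service-key').trigger('Disk full on db-1')
```

Injecting the incident class is one way to unit-test the service wrapper without mocking at the module level.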
User talk:Saz/Sazbox Yaaaaay Sazbox! :3 also, FURST -- Super Igor *ninja!!* 09:47, 3 June 2008 (EDT)
Dr. Melvin. Since 1905.
Mr. Sloan. And during that time have the investigations of your bureau been directed toward the discovery of a remedy for the disease commonly known as hog cholera?
Mr. Sloan. I will ask you to state, for the record, the approximate value of the swine products annually in the United States. I do not care whether you put it in the record right now, or whether you supply it later.
Dr. Melvin. The Bureau of Statistics of our department estimates that the average annual losses of hogs from disease is slightly in excess of 5 per cent. This bureau estimates that out of this loss probably not less than 90 per cent is produced by hog cholera, and we believe that the yearly loss in money is in the neighborhood of $18,000,000. There are, however, no exact statistics on this subject available.
BUREAU OF ANIMAL INDUSTRY.
Mr. Sloan. What other disease, if any there is, afflicting any of the valuable farm animals that anywhere nearly compares in amount with the loss suffered through hog cholera?
Dr. Melvin. There are two diseases which cause tremendous loss. Of course the exact loss is very roughly estimated. The loss through the tick fever has been variously estimated at from $25,000,000 to $40,000,000 a year. The loss on account of tuberculosis in cattle has been estimated at over $10,000,000 a year, and there would be a considerable proportion of loss among swine on account of tuberculosis. That would run into the millions of dollars. I am not prepared to say just how much, though.
Mr. Sloan. Then there is only one disease afflicting the farm animals that exceeds in amount of loss to the animal raiser that caused by cholera — that is the Texas tick fever; is that what you call it?
Dr. Melvin. The indirect loss due to tuberculosis is very great.
It is harder to estimate the loss from tuberculosis because of its insidious nature; it is not as rapidly fatal as hog cholera.
Dr. Melvin. And not so well understood, but the losses in tick fever are comparatively easily arrived at, and those three diseases, I should say, cause the great proportion of the losses to the live stock in the country.
The Chairman. On that point, in estimating a loss that comes from the tick fever, are you taking into account actually the loss that occurs, or constructive losses; I mean to say, the retardment of the improvement of the breed, and the expansion of it, etc.?
Dr. Melvin. All those factors we have estimated in computing the loss on account of tick fever. The direct loss by death alone, of course, is not nearly so great as that amount.
The Chairman. So that one of the great purposes and one of the great advantages of the eradication of the tick fever is to make possible the expansion of the cattle industry in the South and the improvement of the grade of cattle, is it not?
Dr. Melvin. Yes, sir. There is a great loss, too, which extends somewhat beyond that, namely, that arising from the restricted facilities in marketing such cattle and in the loss in hides through the damage made by ticks, but all those factors have entered into the estimates.
Dr. Melvin. Yes.
Mr. Sloan. And in that estimate you did not include the general, incidental loss to the business suffered by deterring the breeder from going on with his work, or loss incident to the dissemination of his work outside of the actual death loss; you did not take that into account at all, did you?
The Chairman. Dr. Craig, of the Indiana agricultural department, made a statement to me that in certain portions of Indiana, under present conditions, men could not grow hogs profitably, and he had advised them not to grow hogs on account of the ravages of the cholera.
Do you know of any other section of the country where that statement might be said to obtain?
doubt that there are quite a good many such sections.
The Chairman. It is also true, is it not, that under present conditions hogs, in a great many sections, have to be kept in smaller herds than otherwise they would be if there was no danger of cholera?
Dr. Melvin. There is a general law of 1884 that prohibits the interstate shipment of any live stock affected with any contagious, infectious, or communicable disease, and that applies equally to hog cholera.
Dr. Melvin. Well, I do not believe it is very well observed by owners of hogs. It is the usual custom, when cholera develops in a herd, for the owner to try to market them as early as possible so as to avoid losing his entire herd. We, however, where we have evidence presented or can obtain evidence that these interstate shipments have been knowingly made, bring prosecution for violation of the statute.
Mr. Sloan. Is this disease readily communicable from one animal to the other direct, and also through leaving the germs in freight cars, live-stock cars, and other methods?
tances.
Dr. Melvin. In many outbreaks of cholera the disease is subacute, and in this form an owner could ship his apparently well hogs from some western State to a live-stock center, such as Chicago, without the disease having developed sufficiently at the time of their arrival at Chicago to cause suspicion. These hogs might then be bought, and frequently are bought, for shipment to eastern markets, such as Buffalo, New York, or points in Massachusetts. Under the provisions of the 28-hour law these hogs would either have to be fed, watered, and rested in the cars or unloaded in stock yards en route for feed, water, and rest. In the latter case, we would be very apt to have several centers of infection established at these unloading yards between Chicago and this eastern point.
Following this it would be quite a common matter for some farmer or stockraiser in one of these intermediate States to bring in some stock hogs, unload them in these infected pens, and distribute them among his neighbors, and thus have a number of centers of infection established. This same thing would apply to hogs which might be shipped out of a State through one of these infected railroad yards. On account of the prevalence of cholera the department does not now permit the shipping out from any of the large stock-yard centers of hogs for breeding or feeding purposes, because these yards are considered as being constantly infected with cholera.
Mr. Sloan. Doctor, the department has developed a serum. You may state briefly as to its production, efficacy, and what the department has done toward distributing it and demonstrating its efficacy?
Dr. Melvin. We have a small rented farm situated near Ames, Iowa, where we produce each year a limited amount of serum, and at this farm we also conduct further research work looking to some cheaper method of production and studying the disease in its various phases, also testing various serums now on the market. In 1908 we invited representatives of all the States to attend a demonstration at this Ames farm to observe our methods of its production and use. Some 23 States were represented during the three meetings we held that year. This was with a view to encourage the States to take up the work and supply the hog raisers of their States with the serum. Previously, and since then, we have made demonstrations in different sections of the country to show its efficacy. We tried to distribute this work as widely as possible in order to call it to the attention of a large number of people. We have met with almost universal success in arresting the disease, but it has never been a cure, and is not a cure, but is a preventive.
Dr. Melvin. No, sir; we have not considered it as such.
It is possible, if used in the very earliest stages of the disease, that it might cure, but we do not look upon it as a cure.
Dr. Melvin. We have issued circulars at different times suggesting various remedies or formulae that might be used looking to the cure of cholera, but the disease is of such a nature that we have looked upon any medical treatment as of very little value. Usually the disease progresses so rapidly that medicines can not have time to operate, and we feel that the only success which will be attained is in preventing the balance of the infected herd from becoming sick, or treating them with serum in the first instance and preventing the sickness entering the herd at all.
The Chairman. As a matter of fact, a few years ago the department submitted to the hog growers of the country a certain receipt or compound of medicines as a cholera cure, did you not?
the best we thought we could suggest at that time.
The Chairman. At that time it represented the sum of the best knowledge the Department of Agriculture had, and I say you rested your reputation upon that, did you not?
since it was originally issued.
The Chairman. Have you carried on actively the work of testing and changing it from the time you published it generally over the country until the present time, Doctor?
three or four years ago.
The Chairman. How much money in your department has been devoted to that particular phase of the work, of testing that formula or other formulae of that kind, perfecting it as a medicinal cure of hog cholera, since the publication of that paper?
The Chairman. Then you wish to give the committee to understand that for a series of years you have not been attempting to discover and perfect a cure of hog cholera?
Dr. Melvin. We simply have not been able to get any new thought on the subject of a cure.
I should not like the committee to think we have not given it any thought, because we have, but we have been absolutely at sea as to what to recommend further than what has been recommended.

The Chairman. I beg your pardon, but you did not quite comprehend my question. My inquiry was not as to the results, but as to the efforts. How much money are you spending each year, and how much have you been spending, devoted to the finding of a cure for hog cholera, and what experiments have you conducted?

Dr. Melvin. No, sir.

The Chairman. Have you submitted to the Committee on Agriculture at any time an estimate asking for money to carry on original research work looking to the perfecting of a cure of hog cholera?

The Chairman. Have you sufficient authority under the present terms of your appropriation bill to take up and carry on the work of discovering a cure for hog cholera?

beyond what we had already prescribed as remedies.

The Chairman. I made a visit to Mount Weather, in Virginia, and I saw Prof. Moore flying kites in the air, at a pretty heavy expense. When I asked him what purpose he had in view, he said he did not know; that there was so little known about the weather that they were not sure what they could find out by flying these kites, but so long as there was so much not known about the weather he felt perfectly justified in keeping up this activity in his department. I should like to ask you whether you feel the same way — that so long as there is so much not known about hog cholera, and it is of so great importance to agriculture, if you do not feel justified in keeping up your activity along that line?

Dr. Melvin. The two propositions, Mr. Chairman, are very different.
We must have some new thought to pursue, to investigate, and we have tried in the past various remedies and are in the dark as to any new treatment that we might adopt; then, again, in treating hogs for cholera as compared to flying kites, there would be a great difference, as it would be a constant, heavy expense. Sick hogs, as a rule, die of cholera in a very short time, and it would be a rather difficult matter to continue indefinitely along that line, except at a tremendous expense.

The Chairman. You say that in times past you tried new remedies for hog cholera. How many years would you have to go back before you reach the limit of that term, "in times past," when you tried new remedies for hog cholera?

Dr. Melvin. We knew it would not in all cases, yes, sir.

The Chairman. The point I wish to know and get before the committee, and it is not in a critical vein, but we want to get the actual facts about it. When was it your department quit the search for a cure for hog cholera? That is the point I wish to know.

Dr. Melvin. We practically quit the study of medical treatment when we discovered a preventive treatment. The whole line of medical thought nowadays is looking toward the prevention of disease, rather than the cure, and I presume that the money that is expended in human medicine is in the proportion of 100 to 1 in favor of preventive treatment rather than curative treatment.

The Chairman. Let us admit that fact, but we should still like to fix the date when you quit, practically, the study to the end of attempting to find a cure for hog cholera.

Dr. Melvin. I would say about five or six years ago.

Mr. Sloan. Just in that connection, right there, have there, for a few years prior thereto, been any practical advances made by anybody in the world that seemed to promise an effective remedy for the disease, when once contracted?

manufacturers of various proprietary substances and fake remedies.

Mr. Sloan. You spoke about the demonstrations of this serum.
About how many demonstrations did you make showing the manner of application and also demonstrating the efficacy of the serum treatment?

Dr. Melvin. We gave two public demonstrations; one in the Kansas City stockyards during the year 1910, and later in the next year we gave a demonstration in the South Omaha stockyards. We have made a great many demonstrations in connection with State officials and private individuals on farms where we thought it was advisable to conduct the work.

Mr. Sloan. To what extent were these demonstrations made; that is, what I want to arrive at is to show how much has been done to get this home to the swine breeders and owners?

Dr. Melvin. In Kansas City our demonstration involved work requiring 35 young shoats. In Omaha about 30 pigs were used. These were the most important ones that we conducted and the only public ones.

Mr. Sloan. Had you sent men from the department or from the bureau to distant parts of the country to swine breeders' gatherings or associations to make demonstrations for the purpose of familiarizing the swine owners with the method of application and the efficacy of the preventive?

Dr. Melvin. We have not made any demonstrations at such gatherings. We have had employees go and address the meetings with reference to its use. We have also, for this same purpose of demonstrating its efficiency for the last three or four years, treated the hogs which were exhibited at the International Livestock Exhibit, at Chicago, and during the last summer, I think, this was done at several State and other prominent fairs in the different States. This, of course, is one of the many means of disseminating hog cholera, through the sending out from these fairs to various farmers these stock hogs, which may become infected at these fairs, and if this could be taken up generally by the States and by the General Government it would no doubt result in lessening these outbreaks.
As a matter of fact, during the year 1911 there was one herd of hogs which arrived at the International Stock Show in Chicago affected with cholera, and several of them died. The balance of the herd were treated with serum and saved, and there were no outbreaks among the other hundreds of hogs which had also been treated and were on exhibition there.

Dr. Melvin. There is a fund, but the estimates as submitted to the committee were only sufficient to provide for work which we now have on hand and not for increasing this demonstration work. We would not be able to conduct or increase this demonstration work without a considerable increase in those funds.

Mr. Sloan. If an increase in appropriations should be made, is your bureau so organized that it has or could have available competent men who could be sent to different points in the several States to make these demonstrations?

Dr. Melvin. Yes, sir.

Mr. Sloan. If such funds were provided and such competent persons sent, in your opinion would it be wisely spent money looking toward the reduction of this very large annual loss which you have recited to the committee?

Dr. Melvin. I think undoubtedly it would. I think that if we were provided with funds, so we could go into several States and take up a considerable section, involving three or four counties in a block, and there demonstrate that losses from cholera need not necessarily exist, that it would be of immense value to the country. I think we could demonstrate to the States that by careful organization and the use of an efficient serum the cholera could be reduced to a minimum and probably eventually eradicated.

Dr. Melvin. There are inefficient serums, as we have been able to demonstrate by experimentation made at our Ames farm. These were prepared by private manufacturers. It is possible, unless close observation is constantly had of the method of manufacture, that serum which has a low protecting value may be prepared.
All serum does not have the same protective value, and unless these serums are tested before they are sent out the manufacturer may think he is sending out efficient serum when, in fact, it is one that under ordinary conditions would not protect. All these things have to be carefully guarded in order to know just how efficient the serum is before it is sent out and in what doses it should be given. We find that all serum can not be used under the same dosage.

materially be decreased later, as when the methods were put into practical operation by States, and we would have to, in making any estimates, figure on the maximum expenses rather than on the minimum expenses, because we would not care to undertake an experiment of this sort without being able to put it through successfully. I think that in taking a block of counties of, say, four in a State, considering the maximum number of men that we would require and the maximum amount of serum that we would require, and all that, it would be an item of probably $15,000 for that block of counties. Now, of course, the number of sections that we would take up would be increased in proportion.

The Chairman. Just on that point let me ask you one or two questions, please. I have understood from your general statement that hog cholera is a very difficult disease to handle?

Dr. Melvin. Much more so, yes, sir.

The Chairman. Do you consider that at the present time the knowledge of serum, its proper manufacture and the proper manner to use it, so as to insure the highest state of protection, is so well understood that the serum in private hands, or even in a great many instances in State hands, is manufactured properly and applied properly to the farmer's herd?

Dr. Melvin.
I think there are many instances where it has been improperly used, but I think these difficulties can be overcome if the department could oversee these methods of preparation along the lines of supervision, such as the Public Health Service now exercises over manufacturers of diphtheria antitoxin and similar substances for human use.

Dr. Melvin. There might be several specific reasons why this treatment failed to protect, but undoubtedly there was some failure in the proper technique in its manufacture or application; either the serum was produced from hyperimmune hogs that were not as thoroughly immunized as they should have been, or the serum was used in too small doses.

Dr. Melvin. I do not think very materially. It has been claimed that in some sections the disease has been increased through the distribution of impotent serum used in connection with virulent vaccine virus.

The Chairman. We have had a remedy against hog cholera — a preventative, I mean — for six years, but in practice it has not reduced the ravages of the disease among the herds in the United States, practically speaking. Is that the case?

Dr. Melvin. I think so.

The Chairman. That is the point I wish to get at. I am going to ask you whether during these six years it has been the custom of your department to send your experts to centers of hog cholera infection to take up the work and to show that those losses can be avoided, and to stop the ravages of the disease. Have you done that?

The Chairman. During these six years, then, you have not sent your department workers to demonstrate the efficacy of the treatment where this disease was ravaging?

The Chairman. I will ask you to put in the record precisely the instances where you did do it in the last six years. Please furnish specific incidents of those six years where you have sent the agents of your department to centers of hog cholera infection and demonstrated your work.

32 per cent.
December 21-22, 1909 (Virginia): Herd very badly infected. About 35 animals had died, and practically all of the survivors were showing symptoms of hog cholera to a greater or less degree. Treated, 113, many of which were sick; untreated, 179. Results: Treated, survived, 79, or approximately 69.91 per cent; treated, died, 34, or approximately 30 per cent; untreated, survived, 111, or approximately 62 per cent; untreated, died, 68, or approximately 38 per cent. Only the general statement received that all treated animals remained well.

July 9, 1910 (Iowa): Owner had lost the greater portion of his herd and had procured 14 pigs from one of his neighbors for this experiment. The simultaneous method was employed. Treated by, 11; virus alone, 3. Untreated, exact number not known. Results: Treated, survived, 14, or approximately 78 per cent; treated, died, 4, or approximately 22 per cent; untreated, no exact data. Received only the statement that all untreated animals died.

December —, 1911 (Maryland): Herd badly infected. Approximately 200 hogs had died. This herd is a very valuable one, being composed of pure-bred Duroc-Jerseys. Treated, 120; untreated, approximately 40. Results: Treated — survived, 118, or approximately 98 per cent; died, 2, or approximately 2 per cent. Untreated — no exact figures could be ascertained concerning these hogs, but the asylum veterinarian placed it at approximately 90 per cent.

July, 1908 (Kansas City, Kans.): Experiment. Thirty-five young shoats were purchased from a farm where hog cholera had not existed. These pigs, having been carried to the Kansas City stockyards, and being in charge of a committee appointed by the exchange, were treated as follows: Twenty-two were injected with anti-hog-cholera serum prepared by the bureau. Four were injected with virulent hog-cholera blood. Nine were not treated in any manner. All were placed in a pen together.
As was expected, the 4 pigs inoculated with the virulent blood contracted hog cholera within a short time and all died. The 9 "checks" contracted hog cholera from those which were inoculated with hog-cholera blood, and they also died. The 22 pigs treated with the serum remained well with the exception of one or two, which were slightly affected on one or two days. It is not certain, however, that the trouble with the treated hogs was hog cholera, as none died. All of the autopsies on the check animals showed typical lesions of hog cholera.

August, 1910 (South Omaha, Nebr.): Experiment. This experiment was undertaken at the request of State officials and the Nebraska Swine Breeders' Association. The Union Stock Yards Co., of South Omaha, also offered to cooperate and to bear the expense incident to the purchase and care of hogs used in the experiment. Thirty pigs, weighing from 40 to 60 pounds, were purchased from a farm which had been free from hog cholera for several years. These hogs were carried to the stockyards and, on July 23, 1910, four of them were injected with blood from hogs sick of hog cholera. These injected pigs, which were placed in a pen by themselves, became sick on the 28th of July, at which time 18 of the remaining pigs were given one dose of the serum, while the other 8 pigs were not treated in any way. The 18 serum-treated pigs and the 8 untreated pigs were then placed in the same pen with the 4 pigs which had been made sick of hog cholera. The four pigs which were inoculated with hog cholera all died. The eight untreated check pigs all contracted hog cholera from the four inoculated ones. The 18 pigs which were given serum and which were confined in the same pen with the 4 original sick pigs and with the 8 untreated pigs, which became sick, remained perfectly well and were finally turned over to the officials of the stockyards company upon the completion of the experiment on September 17, 1910.
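The outcome arithmetic of the two public demonstrations above is simple enough to check mechanically. As a modern illustration only (nothing of the kind appears in the hearing record), a short script tallying the counts as reported in the testimony:

```python
# Hypothetical arithmetic check of the reported demonstration outcomes.
# All counts are taken from the testimony above; the function name and
# structure are illustrative, not part of the record.

def survival_rate(survived, total):
    """Return the percentage of animals surviving, rounded to one decimal."""
    return round(100 * survived / total, 1)

# Kansas City, 1908: 22 serum-treated pigs, all survived;
# 9 untreated "checks", all died.
# South Omaha, 1910: 18 serum-treated pigs, all survived;
# 8 untreated checks, all died.
demonstrations = {
    "Kansas City treated": (22, 22),
    "Kansas City checks": (0, 9),
    "South Omaha treated": (18, 18),
    "South Omaha checks": (0, 8),
}

for group, (survived, total) in demonstrations.items():
    print(f"{group}: {survival_rate(survived, total)} per cent survived")
```

Run as written, the script reproduces the stark contrast the testimony reports: every serum-treated pig survived, and every untreated check died.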
In conclusion, the total number of hogs treated by both the serum-alone and the serum-simultaneous methods in the above demonstrations was 744, of which 613, or approximately 82 per cent, survived, while of the untreated hogs, which numbered 362, 228, or approximately 65 per cent, died. The figures given showing the percentage of the untreated animals which died are not absolutely correct, in that in the case of two herds the report was to the effect that a large number of untreated hogs died, while in four herds it was reported that all untreated animals died. As we had no definite data as to the number of untreated animals in these herds, they were not considered in figuring the percentage.

Dr. Melvin. That was spent for inspection and quarantine; that includes our work in the eradication of scabies among cattle and sheep; our live stock import and export work; the maintenance of our quarantine stations, and what work we have done looking to the eradication and investigation of tuberculosis in cattle.

The Chairman. Was any of this $620,000 available, in any degree whatever, in the treatment of hog cholera? It is limited just to the work here enumerated, is it not?

Dr. Melvin. Scabies is a contagious disease due to a parasite or a scab mite which affects cattle, and there is another variety affecting sheep, which it has been the endeavor of the department to eradicate for several years.

Dr. Melvin. The section under quarantine on account of cattle scabies includes portions of Montana, the two Dakotas, Nebraska, Colorado, New Mexico, Texas, and Oklahoma.

Dr. Melvin. Not ordinarily. In old cattle, especially those that are thin and lacking in vigor, it is frequently fatal during the winter months. Cattle on good feed and grass in the summer generally improve, even with the presence of the disease, but it is very detrimental to the economical handling of live stock.

Dr. Melvin. We have districts where this disease abounds under quarantine.
Then we have inspectors under general instructions of a field station who inspect systematically the cattle in those vicinities and require their dipping and treatment so as to cure the disease. We also are required to inspect and certify to the clean live stock which go out of these quarantine districts to any unquarantined section, to prevent its further spread.

Dr. Melvin. Usually that is done by the owners under the direction of the State officials; all of the work is done in cooperation with the State officers, and under their State laws, except that pertaining to interstate shipments, which of course is strictly Federal work.

Dr. Melvin. That is inspection of cattle in the quarantined areas. That is, the areas quarantined on account of ticks, which are to go outside of that area into interstate commerce, and we are required to see that the cars are properly placarded and the billing marked and that separate pens are maintained for the ticky cattle en route to market centers. The law provides that southern cattle may be shipped to market centers for slaughter; it exempts such cattle from the general provision of the law.

Dr. Melvin. That is to cover any work we might have to do in the case of some unexpected outbreak of a contagious disease. We have had this past year, and the year before, an outbreak of dourine among horses in Montana and in Iowa, and that is a general law we have to cover any such instances that might arise.

spending for experimental work from a different fund.

The Chairman. You have never, then, spent any money from this item here, "inspection work relative to the existence of contagious diseases;" you have never spent any money from this item upon hog cholera, have you?

Dr. Melvin. Yes; tuberculosis and glanders, and then during the last year we had quite an expense paid out of that fund investigating a contagious disease that appeared among horses in Kansas, Nebraska, and eastern Colorado.

Dr. Melvin.
No specific subdivision was made. The men were in our regular employ, but their expenses were charged to this sum of $620,000, but no specific subdivision has been made under that heading.

The Chairman. I will ask you, Doctor, to put into the record the amount of money that you have spent out of this lump sum during the present year, 1912, for the inspection work relative to the contagious diseases, and not only the amount but also to give a general itemized statement as to what diseases you have recognized and what work has been done under this item.

For the manufacture of tuberculin and mallein and the distribution of same to State officials, bureau officials, and other Government officials, for use in the work of testing cattle for tuberculosis and the testing of

For supervising the transportation of live stock moving interstate, to ascertain if the Federal quarantine regulations are complied with, and for collecting evidence of the violations of such regulations, $2,333.58

For the inspection of sheep for scabies, and for supervising the dipping of animals so affected; also for supervising the cleaning and disinfection of cars in which such animals have been shipped, $274,405.45

For the inspection of cattle for scabies and for supervising the dipping of animals so affected; also for supervising the cleaning and disinfection of cars in which such animals have been shipped, $174,018.44

For the inspection of sheep affected with, or exposed to, the disease known as lip-and-leg ulceration, and for supervising the area quarantined on account of this disease, $1,001.45

For the inspection and testing of cattle moving interstate for purposes other than immediate slaughter, in compliance with the laws of the States to which destined; also for the inspection and testing of horses and mules intended for interstate movement, $28,592.42

For supervising the movement of cattle out of the area quarantined for Texas fever, to markets for slaughter, and for cleaning and disinfecting cars in which such animals were shipped, $25,388.36

For the slaughter of animals affected with dourine, and for other expenses incidental to cooperative work with the State of Iowa in the eradication of an outbreak of such disease in that State, $1,834.78

For supervising the enforcement of the so-called 28-hour law and the collection of evidence of alleged violations thereof, in cooperation with the United States attorneys in charge of the prosecution of such cases, $3,033.27

import along the Canadian border, $25,732.36

For the inspection and tuberculin testing of dairy cattle in cooperation with State and municipal officials, with a view of developing herds free of tuberculosis, $20,167.61

Dr. Melvin. We have always felt that tuberculosis in hogs was primarily contracted by them from cattle, and that if tuberculosis in cattle was eradicated it would necessarily follow that tuberculosis of hogs would also cease; and we have thought that all activity along that line should be directed toward reducing it or eradicating it in cattle, rather than in hogs, on that account.

Dr. Melvin. It is rather doubtful. As a matter of fact, we do not find many so-called "generalized" cases of tuberculosis in hogs. The majority of them are localized cases, and I do not think the per cent of cases that would spread tuberculosis among hogs — that is, from one hog to another — are very many.

localized and stay that way and not become generalized.

The Chairman. I am not speaking about hogs now, but, in a general way, that represents the progress of the disease from the local to the generalized, does it not?

The Chairman. Would the fact that hogs are butchered at a much younger age than cattle have anything to do with the fact that you find it localized more often in hogs than in cattle?

Dr. Melvin.
It undoubtedly has considerable to do with it, but the condition of the lesions rather indicates that its spread within the animal is not continuous; that these foci became calcified; that these lesions in the lymphatic glands become calcified and apparently would remain in that condition unless some new infection was received, so that it does not seem to be as progressive a disease as in cattle.

Dr. Melvin. No; I could not say that, because experimentally hogs become infected quite easily; but for some reason, I do not know why, it does not seem to have the effect of going on and continuing into a generalized form of tuberculosis the same as it does with cattle.

Dr. Melvin. Mallein is a preparation prepared from glanders bacilli, which are afterwards filtered out of this preparation, and which may be injected into horses, and usually when they are affected it will give a reaction consisting of an elevation of temperature and a large swelling at the place where the subcutaneous injection was made. It is used as a diagnostic agent only, not as a cure.

The Chairman. As to what particular phase of this infection and quarantine work, and others that you have described, was your fund deficient for, or the work seemed to indicate that you needed more money for?

The Chairman. I am not speaking particularly about your estimate. I just want to know on what phase of it your work is increasing in such a way that your last appropriation was insufficient.

Dr. Melvin. We were unable to take up tuberculosis work as much as we would like. We had requests from different States to assist them in cooperative work looking to the eradication of tuberculosis, and we were unable to do so on account of our funds being engaged in other activities; that is, to the extent to which we were called upon.

Dr. Melvin. We would have done that, and we would have spent more money on hog-cholera work. We have had many requests for assistance which we were unable to comply with.

Dr. Melvin.
I think the general field work should come out of this item, although the funds which we have used heretofore have been spent out of the item providing for scientific investigations of diseases of animals. We think this has reached a stage where it is beyond the experimental stage and should be used in a practical way.

The Chairman. If you were going to demonstrate, then, to the farmers the efficacy and the proper mode of serum treatment, it would necessarily come out of this particular item, would it not?

The Chairman. Coming to the next item, "For all necessary expenses for the eradication of the southern cattle tick, $250,000," is that the amount you spent this past year or this present year?

Dr. Melvin. That is a cooperative piece of work with the various Southern States — I think all Southern States are engaged in that work with the exception of Florida. We provide part of the force, the main force, the leading force, and the State provides what force they can. It consists in the systematic dipping in an arsenical solution of cattle so as to rid them of these ticks, with the idea of relieving these sections, as soon as the ticks are eradicated, from quarantine. This requires a farm-to-farm campaign and the confining of cattle during the time it is necessary to undergo treatment.

Dr. Melvin. After the female has become fertilized by the male on the cow or other animal, she drops to the ground and lays her eggs under some grass or leaves or in some secluded place; then these eggs hatch out and the young animals crawl up the stalks of the grass or weeds or vegetation, and when the cattle brush against them in passing along they attach themselves to the animal and crawl up their legs and become attached to the skin and remain there until the time arrives for mating.

The Chairman. I see, as reported to the House, it is $325,000.
This $325,000, if it should become enacted into law, will again be used in this work of destroying the tick, in carrying the work to the individual farmers, and upon the individual farms?

amount with the Government.

The Chairman. Take the next article, "for all necessary expenses for investigations and experiments in the dairy industry, cooperative investigations of the dairy industry in the various States, inspection of renovated butter factories and markets, $177,900," is it not?

that item, which I see has not been granted by the committee.

The Chairman. "For all necessary expenses for investigations and experiments in animal husbandry, $52,180." How much have you spent under that item this past year?

The Chairman. Will you tell us just exactly what is comprehended under "necessary expenses for investigations and experiments in animal industry"? What line of work have you carried on under that appropriation?

Dr. Melvin. We have an experiment station, located at Beltsville, Md., where we are conducting experiments in poultry, with breeding of poultry and egg production and poultry feeding, and also experiments in swine; also with milk goats. We have some experiments under that division with some of the States in carrying on animal husbandry investigations.

Dr. Melvin. We have several varieties of poultry that are being bred, and I think some cross-breeding is being done. At the same time they are doing this they are studying methods of housing and of feeding to determine the best kinds of feed to use to produce the best birds for table and for egg production.

The Chairman. While I am asking about it I shall ask you what specific work have you done, for instance, on the disease known as the "black head" of turkeys?

cating disease of the liver and intestines.

The Chairman.
The loss from that disease in the Middle West is very great, so much so that a great many farmers have been driven entirely out of the business of raising turkeys, and you have found no remedy for it, have you?

The Chairman. There is one point I wish to speak about in taking up the subject of the diseases of poultry. In diseases of poultry, as in diseases of swine, vast amounts of money have been spent in cures for cholera. People ought to be protected against that. If there is not any cure for cholera, fakirs ought not be allowed to go around and sell a little package for 50 cents, and millions and millions of dollars are spent for it, without question, throughout the country. You can not go into a western drug store but you will find a shelf full of hog and poultry cholera cure.

Dr. Melvin. Those are a mixture of feeding and breeding experiments. Just at this time I am not familiar enough to go into it in detail. We have, however, taken the precaution to immunize these hogs with our serum and vaccine treatment; that was recently done, and very effectively.

The Chairman. Coming to the next item, "For all necessary expenses for scientific investigations in diseases of animals, including the maintenance and improvement of the bureau experiment station at Bethesda, Maryland, and the necessary alterations of buildings thereon, and the necessary expenses for investigations of tuberculin, serums, antitoxins, and analogous products, $78,680." How much of that have you spent this present year?

Dr. Melvin. No, sir; no further than this. It is very difficult to make any greater subdivision of that. Our hog-cholera work comes out of that; our study of diseases, such as blackhead in poultry, and other poultry diseases, comes out of that; also any swine diseases; parasitic diseases of sheep, which we have been studying — all of those are paid out of that fund, and from year to year they vary in kind and extent.
Some projects will need more money one year and less the next, and we can use it best in that form.

The Chairman. The committee understands this is a sum set aside for the original scientific investigation into the cause of diseases and how to cure them; is that right?

The Chairman. Inasmuch as you have not been studying the hog cholera in that form for some years, the committee understands you have spent no money out of this appropriation for hog cholera?

The Chairman. Give the committee the main items under that.

Dr. Melvin. The salaries, $5,236.67; the travel was $722.47; miscellaneous, $6,200.26. I presume that the miscellaneous item will include principally the cost of hogs and rental of farm and item of feed and items of that nature.

The Chairman. Were any other diseases of domestic animals taken up by your bureau other than this on hogs this past year? Under this item of $78,680, the committee would like to know precisely how much of this lump sum was spent under this first heading, "Diseases of animals."

Dr. Melvin. We have spent practically all of that, as indicated, for those purposes. We are now studying, at quite a considerable expense, the effect of spoilt grains and sulphured oats upon horses, if any. We have some work under tuberculosis; that is, immunizing against tuberculosis of cattle, that we are carrying on under that item. The study of the nodular diseases of the intestines of sheep is being conducted there. The expenses of our laboratories at Washington and at Bethesda are maintained out of this amount.

Dr. Melvin. The maintenance would include, of course, the help to take care of the animals, and we grow some forage there. We have not made any improvements of any considerable extent during the present or the past year at Bethesda.
The language is carried from year to year in order that if we should find it necessary to put up a small building we might have it, or to make some alterations to meet the requirements of some particular investigation, but in fact there have not been many alterations there or changes in the farm.

The Chairman. I will ask you to place in the record a complete itemized statement of this particular item, $78,680, showing just precisely what it has been spent for.

For experiments and investigations in the study of hog cholera, and for conducting experiments concerning the practical application of anti-hog-cholera serum for combating hog cholera: $9,172.73
animal diseases: $3,742.55
For investigations of the frozen and desiccated egg industries, with special reference to the bacterial content of the finished products and the sources of contamination: $1,939.52
For miscellaneous experiments and investigations in the study of roundworms, gid, and tapeworms in sheep, parasites of hogs, and measles of sheep and cattle, and the treatment of cattle mange: $10,245.77
For repairs, improvements, and general maintenance of the Experiment Station, Bethesda, Md., and for conducting experiments and investigations there, concerning the study of tuberculosis, Texas fever, and other diseases of animals: $30,691.57

The Chairman. I will venture it as a guess, and will look it up more accurately, that the State of Indiana is spending more money on the question of hog cholera than the United States Department of Agriculture is, as shown by the doctor's testimony. Part of that, however, will probably be reimbursed by the sale of the serum to the farmers.

Dr. Dorset. Since 1894.

Mr. Sloan. Have you and some of your coworkers in the Bureau of Animal Industry devoted time to the investigation and discovery and development of a serum calculated to be a preventive for hog cholera?

since I have been in the Bureau of Animal Industry.

Mr. Sloan. Will you state when the work was begun?
Go a little into the history of its development, and, among other things, state the manner of production of this serum, just in your own way.

Dr. Dorset. I believe that the Bureau of Animal Industry began to study hog cholera from the time of its organization, in the early eighties, and as a result of those early investigations, about the year 1889, or possibly earlier, they discovered a germ similar to the typhoid germ, that they decided — the men in the bureau at that time decided — was the cause of the disease. This was called the hog cholera bacillus, and was generally accepted all over the world as the cause of hog cholera. Later, at the time I entered the bureau, efforts were being made to secure some sort of serum to prevent hog cholera by the employment of this germ, which was supposed to be the cause. Serums are generally prepared by inoculating animals with the products of the germ that causes the disease, or with the germ itself. In this case this hog cholera germ was being employed, and animals, such as horses and cattle, were inoculated, and the serum from those animals was drawn and used to treat hogs to prevent hog cholera in the field. I did a large amount of field work with that serum myself up until about the year 1901, possibly later than that. The work at that time was under the direction of my predecessor, Dr. de Schweinitz. Dr. de Schweinitz about that time came to doubt that this so-called germ of hog cholera was the real cause of it. He made a good many experiments to see if he could find out the real cause. He thought the hog lice transmitted it. He tried to find out if that was so. He thought a germ like the malarial organism, in the blood of hogs, caused it. While these investigations were in progress, and before they reached positive determination, Dr. de Schweinitz was taken sick, first in the summer of 1903, and died later, in February, 1904, at which time I succeeded him in charge of the division.
About that time, and immediately subsequent thereto, experiments demonstrated that the disease is not caused by this germ that looks like the typhoid fever germ, at all, but that it is caused by an invisible organism that is so small that it passes through the finest porcelain filters, and is not discernible with microscopes of the highest power. The work of the division in that respect has been confirmed, not only in this country, but practically in all foreign countries, with reference to the similar disease in the foreign countries. I refer to the work of the Imperial Board of Health in Berlin, and to the Austro-Hungarian, and to the French and English authorities. Our ideas of the cause of the disease being thus changed, we could understand why we had failed in the early efforts to produce a serum, because we were using a thing to produce the serum that did not cause the disease at all. So there began these first attempts to produce a serum by the method we use now; that was begun in the summer of 1903, under my personal direction, Dr. de Schweinitz being ill at that time. That early, first experiment was inconclusive, so that later, in 1905, we were first able to demonstrate conclusively, by a sufficient number of experiments, that if you take a hog that has recovered from hog cholera, or is immune from any other cause, and inoculate that hog with a sufficient amount of blood taken from a sick hog, the effect of that injection will be to heighten the immunity of the immune so that his blood serum will contain protective substances in such amount that comparatively small portions of this treated immune serum will protect nonimmune hogs from hog cholera. That is the way the serum was produced. I consider it was first definitely and absolutely established during the summer of 1905.
This being found, the serum was patented in 1906; a patent was taken out in my name and the rights were assigned to the Government and to anyone in the United States to use without the payment of royalty, which is the common custom.

Dr. Dorset. We were; yes, sir. I will say, if the Chairman will allow me, that these patents in foreign countries were applied for by me only after I had asked the permission of the Secretary of Agriculture to make application for them. He said he considered that inasmuch as I derived no pecuniary benefits whatever in this country, there should be no objection to my making application for patents in foreign countries.

Dr. Dorset. I do not remember what the rules were at that time.

The Chairman. Is it not a matter of fact that an employee of the United States Government, working on a salary and using the public funds to discover a matter of public interest, is prevented by the rules of the department from becoming the exclusive owner of that discovery?

Dr. Dorset. Mr. Chairman, I think that, if I am not mistaken, there has been since the time of this patent a law passed which makes such a provision. I quite agree that whether he is legally bound to do that or not, he is certainly morally bound not to take private advantage of the discovery in the United States.

The Chairman. I will say that I did not intend to bring this matter up now, although I had intended to do so before we concluded the hearing, so I will ask you a question or two on that subject. Are you the owner of those foreign patents now?

Dr. Dorset. I do not think so, Mr. Chairman. I must confess that I think the foreign patents in the countries where they were originally secured — and there were only a few — have all lapsed through failure to prosecute the work of producing the serum, although I am not perfectly clear as to that. I confess I do not know.
I do not think, though, the work is being hampered in any country on account of these patents.

Dr. Dorset. No, sir.

The Chairman. As you are employees of the Department of Agriculture, I still think it is within the province of the committee, but if it is not I do not wish you to answer it. Mr. Sloan will judge as to that. Other employees of the Department of Agriculture were joined with you in this work of taking out the foreign patents on this serum?

Dr. Dorset. I should like, if the Chairman will allow me, to state just exactly who was associated with me, because there was one man associated with me not connected with the department.

The Chairman. Of course you are at perfect liberty to state who was associated with you, although I have no right to inquire except as to the members of the Department of Agriculture.

Dr. Dorset. I am very glad indeed to make a full and free statement in regard to this, and if we do not have time for it to-day, I hope that I may sometime have the opportunity to give you and the committee the fullest information in regard to all this hog cholera work. I am very desirous and anxious to do it. There were associated with me my brother-in-law, Gilmer Meriwether, of Kansas City, Mo., who, according to my recollection, put up more than one half the money used in securing these patents. Col. S. R. Burch, formerly chief clerk of the Bureau of Animal Industry, put in $100, and I put in personally $100, and Mr. McCabe put in personally $100. Dr. J. A. Emery was also interested. So far as I have knowledge, all of that money was spent on these patents, and there has never been one cent of revenue from them.

Dr. Dorset. The first idea of producing serum of this sort — I find I have made a note of it — was in 1902. I can give you the exact date because I made it a practice to keep a notebook in which I inserted what original ideas occurred to me. This one is dated November 13, 1902.

Dr. Dorset. Dr. E. A. de Schweinitz.

The Chairman.
You spoke yesterday in regard to the first doubt that the department had in regard to the course they had been taking on the serum matter; you said this doubt first arose in the mind of your predecessor. At what time did that doubt enter his mind?

the cause of the disease.

The Chairman. But the fact of it being a doubt in regard to the cause of the disease finally resulted in an entirely new course of investigation by the department, did it not?

Dr. Dorset. I am not able to say how the application was drawn exactly, Mr. Chairman; the whole matter was conducted by the solicitor for the department, who filed the application. I do not know just how it was filed.

Dr. Dorset. I have a copy of the patent, which I think must be a copy of the application. I might correct my previous answer and say that I find in the patent that my declaration at the opening states:

Be it known that I, Marion Dorset, a citizen and officer of the United States, residing at Washington, in the District of Columbia, have invented certain new and useful improvements in the manufacture of hog cholera antitoxin, of which the following is a specification.

Dr. Dorset. I believe that the list that I will give you is correct. It is, to the best of my knowledge. The foreign countries are Canada, France, Spain, Denmark, Norway, Sweden, England, Germany, and Italy.

The Chairman. Assuming that these patents for which you applied had all been granted, would not that have given you an absolute monopoly of the preventive cure of hog cholera, so far as the known world is concerned, speaking in a hog-raising sense?

Dr. Dorset.
Yes; but in the application for the United States it was provided that the invention described may be used by the Government, or by any of its officers or employees in the prosecution of work for the Government, or by any other person in the United States without the payment of any royalty thereon, so that the object of taking out the patent in the United States was to secure its free use to the entire people, without the payment of any royalty, and to prevent the patenting of the same process by any individual for his private benefit.

The Chairman. Then it was not your purpose at any time to secure or attempt to secure any fees of any kind from the people of the United States, or a monopoly in the United States?

just read, and the original is here.

The Chairman. As a matter of fact, Doctor, would not a prompt publication of this formula by the Government of the United States have been effectual in preventing any other man from patenting it in the United States?

Dr. Dorset. That is my understanding.

The Chairman. Then, as a matter of fact, if this discovery had been made known by publication by the United States Government, would it be reasonable to suppose that any other man, under the patent laws, could have taken out a patent?

Dr. Dorset. I do not know.

The Chairman. What is the regulation at the present time about an officer or employee of the Department of Agriculture who makes an original discovery while engaged in his official work in taking out a patent?

Dr. Dorset. So far as I am aware, there are two circulars from the Secretary of Agriculture bearing upon this point. One is dated May 8, 1905, which is a department order. Shall I file a copy of this with the record? This prohibits an employee of the department from patenting any discovery or invention which has been made through the expenditure of Government time and money or while connected with the department.
I will say that the second circular from the department is a copy of an act of Congress approved June 25, 1910, which, I believe, prohibits the patenting of any device discovered or invented by an employee during the time of his employment or service.

Government were to make an original discovery, as, for instance, the hog-cholera serum, and not patent it, that it would be possible for some one to patent it, and thus secure a monopoly of it?

Dr. Dorset. I believe that was the idea when this patent was taken out. Now, I will tell you that the idea of patenting this process had never occurred to me. It did not occur to me originally, but my recollection — while I have no record of the fact, my recollection is that I received word one day, probably in the latter part of 1905, that the Secretary desired me to patent this process. It may have been that the idea was to protect the people from me. I have no idea what the Secretary's ideas were, but I had not thought of it up to that time, and, of course, I never should have applied for a patent as an individual, and I do not know that I would ever have thought of applying for it as I did finally.

Dr. Dorset. I think, although I can not say positively, that I received a telephone message either from the solicitor's office or from Dr. Melvin, the chief of the bureau. That I can not say positively.

The Chairman. When Solicitor McCabe drew up this application for a patent in the United States, do you know whether or not he was acting under orders from the Secretary?

The Chairman. Did you have knowledge that a certain idea which originated in the Weather Bureau was sought to be patented under practically the same terms that this serum was patented, and that it brought up a controversy similar to the one we are discussing?

Dr. Dorset.
The matter of taking out foreign patents, to the best of my recollection, was first suggested to me by my brother-in-law in Kansas City, who knew of the work I had done, and who knew I had previously patented the article in the United States, and I took it up first with him. He and I arranged to make application for foreign patents in certain countries, and he suggested to me that I might be able to get patents in more countries than he was able to furnish the money for, if I could secure others to go in with me and apply for patents in more foreign countries. That is the way the thing originated. I do not know whether I have fully answered your question or not.

The Chairman. The point I wished to get at is this, that men rarely ever contribute money unless they expect money will be returned to them. What I wish to know is, under what agreement you were acting by which people were contributing money; how the profits were to be divided?

Dr. Dorset. The other half was to be given to those who contributed the money; and, as I contributed some myself, I was to share in that in proportion to the money I put in, in addition to my rights as inventor.

The Chairman. One-half of the total profits were to come to you as the originator and the other half to be prorated in proportion to the expenses of taking out the patents?

The Chairman. And you consider that in your work in the Department of Agriculture you were not correlated with other men in any helpful degree, so that with you remains the distinctive sense or right of being the inventor or the originator of this idea?

Dr. Dorset. Mr. Chairman, there is absolutely no question that I originated the idea. I did have, in the department, the assistance of certain men, who carried out certain experiments under my direction, but the invention was mine.

Dr. Dorset. I consider the invention was the new idea that hog cholera could be prevented by the preparation of a serum in the manner I described here yesterday.
issued to you.

Dr. Dorset. Canada, France, Spain, Denmark, and I have no record as to Norway and Sweden. I do not know whether they did or not. The patents were disallowed in England, Germany, and

The Chairman. At any time have you sought, or has any person associated with you sought, to enforce your rights as the patentee, as the holder of the patents, in any of these countries?

Dr. Dorset. No, sir; with this possible exception, that I was advised — I do not now know just how — that the rights in Canada would lapse if work was not done before a certain date, and I then communicated with an acquaintance of mine in Canada and asked him to undertake the preparation of serum there according to this method, as an agent of mine, and he started to do it, but was not successful and never produced any serum. That is the only thing that has been done.

Dr. Dorset. Not to my knowledge. I do not think they could have done it, because the patents, as I say, stand in my name as an individual, and they would not appear on the patents or have any connection with them.

Dr. Dorset. I will say, Mr. Chairman, that the serum is being made extensively in some foreign countries, but as to those in which I received the patent, I do not know.

Dr. Dorset. Yes, sir.

The Chairman. Then, as a matter of fact, those countries where it is patented have not used the process, and those countries where it is not patented have been using the process. That is the statement, is it not?

Dr. Dorset. That may be correct. The cause for that, I believe, is owing to the prevalence of the disease and to the position and attitude of officials of the Government. In England the officials there seem to have felt that the method was not exactly as good as it ought to be, or something of that sort, so that is one country where I did not get a patent where they are not using it to any extent. The same is true in Italy, so far as my knowledge goes.
Sir: As you are aware, I have patented a method for immunizing hogs from hog cholera. The rights to use this patent have been assigned to any person in the United States. If you have no objection, I should like to patent this same process in foreign countries, my intention being to sell the rights or secure royalties on the use of this method in such countries, if it shall prove to be of practical value. I will probably have associated with me one or two other men in the attempt to secure foreign rights. It goes without saying that I will have no connection whatever with the manufacture of this material in the United States. I respectfully request that I be notified promptly whether you have any objection to my patenting this process in foreign countries for my personal benefit, in the manner outlined above.

Dr. Marion Dorset, Chief Biochemic Division, Bureau of Animal Industry.

Dear Sir: I have received, through the Chief of the Bureau of Animal Industry, your application dated December 1, 1906, for permission to take out patents in foreign countries on the method of immunizing hogs from hog cholera patented by you in the United States for the public benefit. I see no objection to your patenting this process in foreign countries for your personal benefit as outlined in your letter, and have accordingly approved your application.

Secretary.

I will say, Mr. Chairman, that this action of the Secretary, so far as I know, was not a special or individual case, but represents a custom of the department.
There have been other men, I believe, in the department who have been given similar privileges, and it might be well if I insert a statement recently made by the Secretary of Agriculture before the Committee on Agriculture of the House of Representatives on January 4, 1913, in which he says, referring to the taking out of patents by department employees:

But our department regards anything done of that kind by one of our scientists, to whom you are paying a salary and whose expenses you are paying — we expect that that discovery shall be patented in the name of that man for the benefit of the American people within the United States. If he can sell it abroad we let him do it.

Mr. Sloan. Touching the purpose of this patenting, is not that application made for and on behalf of the United States, so that the patent, once granted, will almost conclusively prevent the patenting by anybody else of substantially the same idea, or the raising of a question of fact before the Patent Office or elsewhere as to who the discoverer of the method, substance, or appliance might be?

Dr. Dorset. That was my understanding.

Mr. Sloan. Then, if as a matter of fact the United States had not patented this process, but merely announced it in a general way, and some other person should have been making investigations somewhat along the same lines, and should make an application before the Patent Department for a patent, it would lay the foundation for a complicated question of fact to be determined before the Patent Department, which would involve the taking of a lot of evidence and everything of that kind?

Mr. Sloan.
I do not know whether I caught the trend of the chairman's questions, but as I understand it that is the purpose of taking a patent, either by the individual or by the Government: to set aside the real discovery; to show by whom it was made, and not to leave any broad question of fact unsettled that could be settled very quickly when the facts are fresh in the mind, and the other fellow has not had the opportunity to be operating for a few weeks or a few months in manufacturing facts.

Dr. Dorset. The sole object of taking out patents, so far as I have ever known, was to protect the people of the United States. Just how this was to be accomplished, I had not given particular thought.

Mr. Sloan. Have you had anything to do with any of these demonstrations as to the manner of treatment through the use of serum, concerning which Dr. Melvin testified yesterday?

tions, but they have been under my general direction.

Mr. Sloan. What is your opinion with regard to demonstrations which might be made through the various States, where the hog industry is large, as to the probability of their value, and some suggestion as to how they ought to be carried out?

Dr. Dorset. In reply to that question I will say that I agree entirely with the statement of the chairman at yesterday's session that although we have known this serum for some time, and although a great many hogs have been treated with it, with varying success, we are making little progress toward the eradication of the disease, although a great deal of money has been saved by the treatment of hogs in infected herds. My idea of a demonstration experiment is that it is a most desirable thing to carry out.
I believe that from one to four counties should be selected as an experimental area; that from the States should be secured authority to quarantine that area; that we should have the assistance of State authorities in work in those counties, so far as it is possible to get it; that the Bureau of Animal Industry should then place in each of these counties one or more men to control the situation; that these men should, through voluntary or State agents, secure information concerning the number of hogs and the disease in their territory; and that immediately upon the outbreak of disease they should proceed to treat the infected herd, clean up, disinfect the premises, and probably apply serum to all exposed adjoining herds. I believe that at the beginning of the work probably much could be done by placing in this area where the experiment is to be conducted a lecturer of some sort, some department man, to give talks and explain to the farmers what was advisable to be done — an educational campaign, in other words. It is my opinion that, following a method of that sort, the disease can be completely controlled in the county or in a block of counties. The whole matter of success will depend on the organization and the funds available and the serum, of course, to use as needed. No such demonstration as that has ever been carried out. We have worked almost exclusively on individual farms or on experiments to show that the serum will protect hogs from hog cholera. The latter fact has been absolutely demonstrated, not only in this country but abroad, and we now need to go further and control the disease by linking the serum with an efficient organization. That is essentially my opinion of how the work should be carried out.

Mr. Sloan. In what essential particular is the hog cholera proposition now different from the southern cattle-tick proposition — first, as to whether an efficient remedy is discovered, and, second, as to the manner of handling it?

Dr.
Dorset. First of all, I would say that hog cholera exists in all parts of the United States, no one section being free. We have an efficient remedy against the disease. There is little progress being made in eradication, although there seems to be no reason why we should not proceed to eradicate hog cholera in the same way that the Government is now cooperating with States for the eradication of the Texas fever. The hog-cholera work will probably be more difficult, and for that reason may require the expenditure of more funds in the end; but there should at least be a start made, showing the States what can be done, so that they will then provide funds to cooperate with the National Government.
Method, graphical user interface, and system for categorizing financial records

ABSTRACT

A method for categorizing financial records involves obtaining multiple financial records from a financial institution. Each financial record in the multiple financial records is categorized using a category selected from multiple categories, where the multiple categories include multiple business-related categories and at least one non-business-related category. Each financial record categorized using a business-related category selected from the multiple business-related categories is mapped to a tax category selected from multiple tax categories, where the tax category is associated with the business-related category. A financial report is generated that includes the tax category for each financial record categorized using one of the multiple business-related categories.

CROSS-REFERENCE TO RELATED APPLICATIONS

The subject matter of the present application may be related in part to subject matter contained in U.S. patent application Ser. No. 11/073,396, entitled "Categorization of Financial Transactions," filed on Mar. 4, 2005 in the names of Matt E. Hart, Gordon D. Whitten, Jr., Rupesh D. Shah, and Kevin M. Reeth II, the entire contents of which are incorporated herein by reference.

BACKGROUND

Small businesses face unique challenges when filing tax returns. Frequently, finances (i.e., income and/or expenses) for a small business are spread across multiple financial accounts (e.g., checking accounts, credit card accounts, money market accounts, or any other type of financial account). In such cases, consolidating finances from all of the financial accounts may be helpful to determine total financial amounts to be entered on a tax form for the small business.
Further, in many cases, one or more financial accounts is used not only for business finances, but also for personal finances (e.g., finances for a sole proprietor of the small business, or any other individual(s) associated with the small business). Accordingly, in preparation for a tax filing, it may be helpful to categorize financial records associated with the shared financial account(s), to determine which financial records are related to business finances, and which financial records are related to personal finances.

Additionally, for many or all of the financial records associated with the small business (i.e., financial records, from one or more financial accounts, that are not associated with personal finances), it may be necessary to determine to which tax categories the financial records apply. Proper categorization of financial records may impact the accuracy of the tax filing and/or any tax deductions applicable to the tax filing.

Typically, to categorize financial records for a small business, historical financial documents (e.g., receipts, invoices, ledgers, or any other physical financial documents) are maintained throughout the tax year. In preparation for a tax filing, the financial documents are then organized based on tax categories available on a tax form for the small business. This method of categorization depends heavily on reliable maintenance of the financial documents and good knowledge of the relationships between the financial documents and the available tax categories. Further, maintenance and/or categorization of the financial documents may be quite time-consuming.
SUMMARY

In general, in one aspect, the invention relates to a method for categorizing financial records, comprising obtaining a plurality of financial records from a financial institution, for each financial record in the plurality of financial records, categorizing the financial record using a category selected from a plurality of categories, wherein the plurality of categories comprises a plurality of business-related categories and at least one non-business-related category, for each financial record categorized using a business-related category selected from the plurality of business-related categories, mapping the financial record to a tax category selected from a plurality of tax categories, wherein the tax category is associated with the business-related category, and generating a financial report comprising the tax category for each financial record categorized using one of the plurality of business-related categories.

In general, in one aspect, the invention relates to a graphical user interface displaying a categorization interface on a display device, comprising a plurality of financial records from a financial institution, and for each financial record in the plurality of financial records, a category selector configured to associate the financial record with a category selected from a plurality of categories, wherein the plurality of categories comprises a plurality of business-related categories and at least one non-business-related category, wherein each financial record associated with a category selected from the plurality of business-related categories is mapped to a tax category selected from a plurality of tax categories, wherein the tax category is associated with the business-related category, and wherein a financial report is generated comprising the tax category for each financial record associated with one of the plurality of business-related categories.
In general, in one aspect, the invention relates to a system comprising a financial records collector configured to obtain a plurality of financial records from a financial institution, and a categorization interface configured to categorize each financial record in the plurality of financial records using a category selected from a plurality of categories, wherein the plurality of categories comprises a plurality of business-related categories and at least one non-business-related category, wherein each financial record categorized using a business-related category selected from the plurality of business-related categories is mapped to a tax category selected from a plurality of tax categories, wherein the tax category is associated with the business-related category, and wherein a financial report is generated comprising the tax category for each financial record categorized using one of the plurality of business-related categories.

In general, in one aspect, the invention relates to a computer readable medium comprising executable instructions for categorizing financial records by obtaining a plurality of financial records from a financial institution, for each financial record in the plurality of financial records, categorizing the financial record using a category selected from a plurality of categories, wherein the plurality of categories comprises a plurality of business-related categories and at least one non-business-related category, for each financial record categorized using a business-related category selected from the plurality of business-related categories, mapping the financial record to a tax category selected from a plurality of tax categories, wherein the tax category is associated with the business-related category, and generating a financial report comprising the tax category for each financial record categorized using one of the plurality of business-related categories.
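The flow recited in the aspects above — categorize each record as business-related or non-business-related, map each business-related category to an associated tax category, and generate a report — can be sketched in a few lines of Python. This is an illustrative sketch only: the category names, tax-category labels, and record fields below are assumptions for demonstration and are not part of the claimed invention.

```python
# Assumed mapping from business-related categories to tax categories,
# in the spirit of the table of mappings referenced in FIG. 6.
TAX_CATEGORY_MAP = {
    "office supplies": "Schedule C: Supplies",
    "business travel": "Schedule C: Travel",
    "advertising": "Schedule C: Advertising",
}

NON_BUSINESS = "personal"  # the at-least-one non-business-related category

def generate_report(records):
    """Total amounts by tax category for business-related records only."""
    report = {}
    for record in records:
        category = record["category"]
        if category == NON_BUSINESS:
            continue  # non-business-related records are excluded from the report
        tax_category = TAX_CATEGORY_MAP[category]
        report[tax_category] = report.get(tax_category, 0.0) + record["amount"]
    return report

# Hypothetical financial records, already categorized by the user.
records = [
    {"payee": "Print Shop", "amount": 42.50, "category": "advertising"},
    {"payee": "Grocery", "amount": 88.10, "category": "personal"},
    {"payee": "Airline", "amount": 310.00, "category": "business travel"},
    {"payee": "Stationer", "amount": 19.99, "category": "office supplies"},
]

print(generate_report(records))
```

Note that the personal record never reaches the report; only the three business-related records are mapped and totaled.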
Other aspects of the invention will be apparent from the following description and the appended claims.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 shows a diagram of a system in accordance with one or more embodiments of the invention.

FIGS. 2-5 show flow charts in accordance with one or more embodiments of the invention.

FIG. 6 shows a table of mappings in accordance with one or more embodiments of the invention.

FIGS. 7-8 show diagrams of graphical user interfaces in accordance with one or more embodiments of the invention.

FIG. 9 shows a diagram of a computer system in accordance with one embodiment of the invention.

DETAILED DESCRIPTION

Specific embodiments of the invention will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency.

In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a more thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.

In general, embodiments of the invention provide a method and graphical user interface to categorize financial records. Financial records from one or more financial institutions are categorized as business-related or non-business-related. Business-related categories are mapped to tax categories, and the categorizations and/or mappings are used to generate a financial report.

In one or more embodiments of the invention, financial records needing to be categorized may be obtained from one or more financial institutions. FIG. 1 shows a diagram of a system in accordance with one or more embodiments of the invention. Specifically, FIG.
1 shows a diagram of a system for categorizing financial records from one or more financial institutions (e.g., 105, 110), in accordance with one or more embodiments of the invention.

As shown in FIG. 1, in one or more embodiments of the invention, a financial records collector (115) may be configured to communicate with the financial institution(s) (e.g., 105, 110) to obtain financial records (not shown). In one or more embodiments of the invention, the financial records collector (115) may be a network server (i.e., a server operating on a wide area network (WAN), a local network (LAN), or any other type of private or public network), a software application executing on an end-user's computer system, or any other type of hardware or software module configured to obtain financial records from the financial institution(s) (e.g., 105, 110). More specifically, in one or more embodiments of the invention, the financial records collector (115) may be a web server configured to serve web pages over the Internet (or other network), or a server configured to communicate the financial records, directly or indirectly, to a web server. Those skilled in the art will appreciate that, in one or more embodiments of the invention, the financial records collector (115) includes functionality to facilitate consolidation of financial records from multiple financial institutions (e.g., 105, 110).

In one or more embodiments of the invention, the financial records obtained by the financial records collector (115) may then be presented to a user in a categorization interface (120). More specifically, the categorization interface (120) may be used to categorize the financial records. Categorization of financial records is discussed in detail below.
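The consolidation behavior of the financial records collector described above can be sketched as follows. This is an illustrative Python sketch only; the function and field names (`collect_records`, `partner`, `amount`) are assumptions, not taken from the specification, and a real collector would fetch records over a network using stored credentials.

```python
def collect_records(institutions):
    """Consolidate financial records obtained from multiple institutions."""
    consolidated = []
    for institution in institutions:
        # In practice this would be a network call to the institution;
        # here each institution is a plain dict holding its records.
        consolidated.extend(institution.get("records", []))
    return consolidated

banks = [
    {"name": "Bank A", "records": [{"partner": "Office Depot", "amount": -42.10}]},
    {"name": "Bank B", "records": [{"partner": "Client X", "amount": 500.00}]},
]
records = collect_records(banks)
```

The consolidated list can then be handed to the categorization interface (120) regardless of which institution each record came from.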
In one or more embodiments of the invention, the categorization interface (120) may be a web page, a software application executing on a user's computer system (e.g., financial management software, tax preparation software, etc.), an electronic message (e.g., e-mail, text message, etc.), or any other type of interface for presenting the financial records.

Further, in one or more embodiments of the invention, the financial records collector (115) and/or categorization interface (120) may be integrated with a software application (125). For example, in one or more embodiments of the invention, the software application (125) may be financial management software, tax preparation software, or any other similar type of software. Those skilled in the art will appreciate that integration of the financial records collector (115) and/or categorization interface (120) with the software application (125) may involve communicating via a network protocol, a shared database (not shown), application program interface (API) calls, or any other type of software integration.

FIG. 2 shows a flow chart in accordance with one or more embodiments of the invention. Specifically, FIG. 2 shows a flow chart of a method for generating a financial report, in accordance with one or more embodiments of the invention. Initially, access to one or more financial accounts is configured (Step 205). Next, a determination is made about which categories (i.e., business-related and/or non-business-related categories) apply to the financial account(s) (Step 210). Financial records from the financial account(s) are subsequently obtained (Step 215) and categorized (Step 220). After the financial records are categorized, a financial report is generated using the categorizations (Step 225). Each of these steps is discussed in further detail below. Those skilled in the art will appreciate that the method shown in FIG.
2 provides a convenient means for categorizing financial records in preparation for a tax filing, especially in cases where a financial account contains business-related financial records and non-business-related financial records, or when financial records for a small business are spread across multiple financial accounts.

FIG. 3 shows a flow chart in accordance with one or more embodiments of the invention. Specifically, FIG. 3 shows a flow chart of a method for configuring access to one or more financial accounts (e.g., Step 205 of FIG. 2), in accordance with one or more embodiments of the invention. Initially, an account type to configure is obtained (Step 305). In one or more embodiments of the invention, the account type may be checking, credit card, money market, equity, or any other type of financial account. In one or more embodiments of the invention, the account type may be obtained using a dropdown menu, a textbox, a checkbox, a radio button, a menu, a voice command, an automated programmatic lookup (e.g., a query issued to a database or software application), or by any other similar means.

Next, a financial institution identification (ID) is obtained (Step 310). In one or more embodiments of the invention, the financial institution ID may be a name of the financial institution, a branch number of the financial institution, a unique character string associated with the financial institution, any other type of identification, or any combination thereof. In one or more embodiments of the invention, the financial institution ID may be obtained using a dropdown menu, a textbox, a checkbox, a radio button, a menu, a voice command, an automated programmatic lookup (e.g., a query issued to a database or software application), or by any other similar means.
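The account-access configuration steps of FIG. 3 (Steps 305-320) can be sketched as a simple function that collects the account type, institution ID, and credentials, then limits the selectable accounts to the chosen type. All names here are illustrative assumptions, not drawn from the specification:

```python
def configure_account_access(account_type, institution_id, credentials, all_accounts):
    """Sketch of FIG. 3: build an access configuration for one institution."""
    # Step 320: accounts offered for selection may be limited to those
    # financial accounts of the type selected in Step 305.
    available = [a for a in all_accounts if a["type"] == account_type]
    return {
        "type": account_type,           # Step 305
        "institution": institution_id,  # Step 310
        "credentials": credentials,     # Step 315
        "accounts": available,          # Step 320
    }

cfg = configure_account_access(
    "checking", "Bank A", {"user": "alice", "pin": "****"},
    [{"id": 1, "type": "checking"}, {"id": 2, "type": "credit card"}],
)
```

A real implementation would obtain `all_accounts` by querying the institution with the login credentials, as the text describes for Step 320.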
Those skilled in the art will appreciate that, in one or more embodiments of the invention, financial institution IDs available to obtain may be filtered based on the account type selected in Step 305. For example, if an account type of “checking” is obtained in Step 305, then IDs of financial institutions that do not offer checking accounts may not be available to obtain in Step 310. Though not shown in FIG. 3, in one or more embodiments of the invention, if the financial institution ID obtained is not supported (e.g., if the financial institution does not support obtaining financial records in the manner discussed below), then the method proceeds directly to Step 325, discussed in detail below.

Returning to discussion of FIG. 3, login credentials for the financial institution are subsequently obtained (Step 315). In one or more embodiments of the invention, the login credentials may be obtained using a dropdown menu, a textbox, a checkbox, a radio button, a menu, a voice command, an automated programmatic lookup (e.g., a query issued to a database or software application), or by any other similar means. Those skilled in the art will appreciate that, in one or more embodiments of the invention, the login credentials obtained may be based on the account type obtained in Step 305 and/or the financial institution ID obtained in Step 310.

Next, one or more available financial accounts is selected (Step 320). In one or more embodiments of the invention, selecting an available financial account may involve querying the financial institution using the login credentials obtained in Step 315, to obtain a list of available financial accounts. Further, in one or more embodiments of the invention, the financial accounts available to select may be limited to those financial accounts of the type selected in Step 305.
In one or more embodiments of the invention, if only a single financial account is available to select, then the financial account may be automatically selected and the method may proceed directly to Step 325, discussed in detail below. In one or more embodiments of the invention, the financial account(s) may be selected using a dropdown menu, a textbox, a checkbox, a radio button, a menu, a voice command, an automated programmatic lookup (e.g., a query issued to a database or software application), or by any other similar means.

In one or more embodiments of the invention, access to multiple financial accounts may be configured. Accordingly, after access to a financial account has been configured, a determination may be made whether to configure access to another financial account (Step 325). In one or more embodiments of the invention, the determination may be made using a dropdown menu, a textbox, a checkbox, a radio button, a menu, a voice command, an automated programmatic lookup (e.g., a query issued to a database or software application), or by any other similar means. If there are no more financial accounts to configure, then the method ends.

In one or more embodiments of the invention, when configuring access to a financial account, the user may be prompted to indicate an approximate percentage of transactions (e.g., a transaction volume and/or transaction amount percentage) for the financial account that are directed to business-related expenses and/or income (not shown). If the user's response coincides with configuration and/or historical transaction data for a financial account that has already been configured, then automatic categorization settings may be configured for the new account based on the data for the previously configured account. Automatic categorization is discussed in detail below.

FIG. 4 shows a flow chart in accordance with one or more embodiments of the invention. Specifically, FIG.
4 shows a flow chart of a method for determining which categories apply to one or more financial accounts (e.g., Step 210 of FIG. 2), in accordance with one or more embodiments of the invention.

Initially, a business industry associated with the financial account(s) is specified (Step 405). In one or more embodiments of the invention, the business industry may be advertising (e.g., network marketing), agriculture, automotive services (e.g., trucking, sales, repairs, etc.), biotechnology, construction, consulting (e.g., management, human resources, training, information services, etc.), e-commerce, education, entertainment, financial services, food services, healthcare, hospitality, international business, legal services, manufacturing, media (e.g., print, radio, television, Internet, publishing, etc.), mining, oil services (e.g., drilling, gas distribution, etc.), pharmaceutical, property maintenance (e.g., gardening, plumbing, electrical, etc.), real estate (e.g., agents, brokers, etc.), retail, travel (e.g., travel agencies, transportation services, etc.), software development, or any other type of business industry. In one or more embodiments of the invention, the business industry may be specified using a dropdown menu, a textbox, a checkbox, a radio button, a menu, a voice command, an automated programmatic lookup (e.g., a query issued to a database or software application), or by any other means.

Next, a determination is made whether the business industry selected is recognized (Step 410)—i.e., whether industry-specific tax-related questions are available for the selected business industry. In one or more embodiments of the invention, the determination may be made by consulting a database, a network resource, a software application, or by any other similar means. In one or more embodiments of the invention, when a business industry is specified in Step 405, only recognized business industries may be available to specify.
Accordingly, in one or more embodiments of the invention, Step 410 may not be performed.

In one or more embodiments of the invention, if Step 410 is performed and the business industry is not recognized, then the method proceeds directly to Step 420, discussed in detail below. Alternatively, if the business industry is recognized, then industry-specific tax-related questions are presented to and subsequently answered by the user (Step 415). In one or more embodiments of the invention, the industry-specific tax-related questions may include questions about vehicle use, rent, mortgages, utilities, clientele, business partners, or any other similar type of question. Those skilled in the art will appreciate that the industry-specific tax-related questions are questions associated with the specific business industry specified. Further, those skilled in the art will appreciate that multiple business industries may have industry-specific tax-related questions in common. In one or more embodiments of the invention, the questions may be answered using a dropdown menu, a textbox, a checkbox, a radio button, a menu, a voice command, an automated programmatic lookup (e.g., a query issued to a database or software application), or by any other similar means.

Next, general tax-related questions are presented to and subsequently answered by the user (Step 420). In one or more embodiments of the invention, the general tax-related questions may include questions about vehicle use, rent, mortgages, utilities, clientele, business partners, or any other type of question. Those skilled in the art will appreciate that the general tax-related questions are questions that may be applicable/common to a significant percentage of business industries.
In one or more embodiments of the invention, the questions may be answered using a dropdown menu, a textbox, a checkbox, a radio button, a menu, a voice command, an automated programmatic lookup (e.g., a query issued to a database or software application), or by any other similar means. Those skilled in the art will appreciate that, in one or more embodiments of the invention, the industry-specific tax-related questions of Step 415 and the general tax-related questions of Step 420 may be combined and presented to and answered by the user in any order.

Based on the business industry specified and/or the answers to the tax-related questions (i.e., the industry-specific and/or general tax-related questions), categories are subsequently determined for the financial account(s) (Step 425) (i.e., categories that may later be used to categorize financial records, as discussed in further detail below). For example, in one or more embodiments of the invention, if an answer to a question indicates that a small business uses a vehicle for business purposes, then a vehicle expenses category may be used. As another example, if the business industry selected typically involves frequent travel, then a travel expenses category may be used.

Those skilled in the art will appreciate that many different ways to determine categories based on a business industry and/or answers to tax-related questions exist. For example, in one or more embodiments of the invention, a pre-determined list of categories may exist for each recognized business industry—i.e., a list including at least one business-related category based on the business industry specified. Further, in one or more embodiments of the invention, if a pre-determined list of categories is used, the list may be modified based on the answers to the tax-related questions.
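One plausible reading of Step 425 is a pre-determined category list per recognized industry, extended by answers to the tax-related questions. The sketch below is illustrative Python only; the industries, question keys, and category names are assumptions (loosely echoing the examples in this description), not the patented logic:

```python
# Pre-determined list of business-related categories per recognized industry
# (hypothetical contents).
INDUSTRY_CATEGORIES = {
    "truckers": ["claims & damages", "lumpers", "tolls/scales/pre-pass"],
    "real estate agents": ["client gifts", "showing expenses", "broker fees"],
}

# Categories added when a tax-related question is answered affirmatively
# (hypothetical question keys).
ANSWER_CATEGORIES = {
    "uses_vehicle": "vehicle expenses",
    "travels_frequently": "travel expenses",
}

def determine_categories(industry, answers):
    """Sketch of Step 425: industry list, modified by question answers."""
    categories = list(INDUSTRY_CATEGORIES.get(industry, []))
    for question, answered_yes in answers.items():
        extra = ANSWER_CATEGORIES.get(question)
        if answered_yes and extra and extra not in categories:
            categories.append(extra)
    return categories

cats = determine_categories("truckers", {"uses_vehicle": True})
```

An unrecognized industry simply yields the answer-derived categories, mirroring the path from Step 410 directly to Step 420.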
Those skilled in the art will appreciate that some or all of the categories may be general business-related categories—i.e., categories that are not based on the business industry specified.

For example, in one or more embodiments of the invention, the business industries “truckers,” “real estate agents,” and/or “network marketing” may be available to specify. In one or more embodiments of the invention, if the business industry “truckers” is specified, then the business-related categories “claims & damages,” “lumpers,” and/or “tolls/scales/pre-pass” may be used. In one or more embodiments of the invention, if the business industry “real estate agents” is specified, then the business-related categories “client gifts,” “showing expenses,” “broker fees,” and/or “education” may be used.

In one or more embodiments of the invention, if the business industry “network marketing” is specified, then the business-related categories “promotions & contests,” “demonstration expenses,” “shipping & postage,” and/or “event registration fees” may be used. An example of how these business-related categories may be mapped to tax categories, in accordance with one or more embodiments of the invention, is provided in FIG. 6, discussed in detail below. Those skilled in the art will appreciate that the aforementioned business industries and business-related categories are provided for exemplary purposes only and should not be construed as limiting the scope of the invention.

Those skilled in the art will appreciate that, in one or more embodiments of the invention, categories provided to a user for categorizing financial records (e.g., categories determined as discussed above) may be user-friendly categories.
In other words, the categories may be easier for a user to understand than tax categories, thereby allowing the user to more easily categorize financial records correctly. More specifically, because the user-friendly categories may be mapped to specific tax categories, as described in further detail below, a user may be able to identify the correct tax categories for financial records without dealing with the tax categories directly.

FIG. 5 shows a flow chart in accordance with one or more embodiments of the invention. Specifically, FIG. 5 shows a flow chart of a method for categorizing a financial record (e.g., Step 220 of FIG. 2), in accordance with one or more embodiments of the invention. In one or more embodiments of the invention, categorizing a financial record involves selecting a business-related category or a non-business-related category to associate with the financial record. More specifically, in one or more embodiments of the invention, multiple business-related categories are available to select and at least one non-business-related category is available to select. Examples of business-related categories are provided below. In one or more embodiments of the invention, each business-related category is associated with a tax category. Accordingly, each financial record categorized using a business-related category may then be mapped to the corresponding tax category. Mapping a financial record to a tax category is discussed in further detail below.

As shown in FIG. 5, initially, an uncategorized financial record is obtained (Step 505). For example, the financial record may be obtained from a financial account associated with a financial institution (e.g., Step 215 of FIG. 2).
In one or more embodiments of the invention, the financial record may be obtained in response to a user-issued command, in accordance with a predetermined schedule for obtaining financial records, upon configuring access to a financial account (e.g., upon completing Step 205 of FIG. 2), upon determining which categories to apply to the financial account (e.g., upon completing Step 210 of FIG. 2), or at any other time prior to categorizing the financial records. In one or more embodiments of the invention, the financial records may be obtained by a software application executing on a user's computer system (e.g., financial management software, tax preparation software, etc.), software executing on a network server (e.g., the financial records collector (115) of FIG. 1), or any other type of hardware or software module configured to obtain financial records from a financial institution.

In one or more embodiments of the invention, some of the financial records retrieved from the financial institution(s) may already be associated with categories (e.g., categories provided by a financial institution, by a user of a separate interface (not shown) provided by the financial institution, by a transaction partner associated with the particular financial record, etc.). Alternatively, in one or more embodiments of the invention, all of the financial records retrieved from the financial institution(s) may be uncategorized. Those skilled in the art will appreciate that even if a financial record is already associated with a category, the category may not correspond to a category used by the present invention. Accordingly, in one or more embodiments of the invention, even if a financial record is already associated with a category, the following categorization method may still be required.

Returning to discussion of FIG. 5, after the financial record is obtained, the financial record may then be categorized by a user (Step 510).
In one or more embodiments of the invention, categorization by a user may be performed using a dropdown menu, a textbox, a checkbox, a radio button, a menu, a voice command, or any other similar type of user input. Those skilled in the art will appreciate that after the user has categorized the financial record, in one or more embodiments of the invention, the user may subsequently re-categorize the financial record (i.e., Step 510 may be repeated).

Alternatively, in one or more embodiments of the invention, the financial record may be categorized automatically (Step 515). In one or more embodiments of the invention, automatic categorization may involve categorization based on historical data. For example, the financial record may be categorized based on a categorization trend associated with previously categorized financial records. More specifically, in one or more embodiments of the invention, the financial record may be categorized based on having one or more characteristics (e.g., transaction amount, transaction partner, transaction time, etc.) in common with the previously categorized financial records. Different characteristics of financial records are discussed in detail below. Further, as discussed above, in one or more embodiments of the invention, automatic categorization may be based, in full or in part, on a percentage of transactions (i.e., transactions for the financial account with which the financial record is associated) that are directed to business-related expenses and/or income.

In one or more embodiments of the invention, the automatic categorization may be performed by a software application executing on a user's computer system (e.g., financial management software, tax preparation software, etc.), software executing on a network server (e.g., the financial records collector (115) of FIG. 1), or any other similar type of hardware or software module configured to perform an automatic categorization.
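A minimal sketch of the automatic categorization described above: a new record inherits the category most often assigned to previously categorized records that share a characteristic, here the transaction partner. This is one plausible reading of "categorization trend", written as illustrative Python; it is not the patented logic, and field names are assumptions:

```python
from collections import Counter

def auto_categorize(record, history):
    """Step 515 sketch: follow the majority category of records sharing
    the same transaction partner; return None if no trend exists."""
    matches = [h["category"] for h in history
               if h["partner"] == record["partner"]]
    if not matches:
        return None  # no trend to follow; leave for the user (Step 510)
    return Counter(matches).most_common(1)[0][0]

history = [
    {"partner": "Shell", "category": "vehicle expenses"},
    {"partner": "Shell", "category": "vehicle expenses"},
    {"partner": "Shell", "category": "personal"},
]
guess = auto_categorize({"partner": "Shell", "amount": -35.0}, history)
```

The user may then accept the suggested category (Step 520) or override it (Step 510), as the flow chart describes.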
In one or more embodiments of the invention, after the financial record is automatically categorized, the financial record may be automatically re-categorized (i.e., Step 515 may be repeated). For example, the financial record may subsequently be automatically re-categorized based on an update to a process performing the categorization, newly available data (e.g., categorizations of other financial records, defining a new categorization trend), a user-selected categorization preference, or any other factor.

Further, in one or more embodiments of the invention, after the financial record is automatically categorized (i.e., after a first or subsequent instance of Step 515), a user may categorize the financial record using a different category (Step 510). Alternatively, the user may accept the automatic categorization (Step 520). In one or more embodiments of the invention, even after accepting the automatic categorization, the user may still categorize the financial record using a different category (Step 510).

After the user has accepted an automatic categorization (Step 520) and/or categorized the financial record (Step 510), a determination is made whether the financial record is associated with a business-related category (Step 525). Alternatively, in one or more embodiments of the invention, financial records associated with business-related categories may be stored separately from financial records associated with non-business-related categories. Those skilled in the art will appreciate that in this case, Step 525 may not be required.

In one or more embodiments of the invention, if the financial record is not associated with a business-related category, no additional categorization steps are taken. Alternatively, if the financial record is associated with a business-related category, then the financial record is mapped to a tax category, based on the business-related category (Step 525).
More specifically, in one or more embodiments of the invention, the tax category that the financial record is mapped to indicates how the financial record should be used when filing a tax form. Tax categories are discussed in detail below.

In one or more embodiments of the invention, the mapping may be performed by a software application executing on a user's computer system (e.g., financial management software, tax preparation software, etc.), software executing on a network server (e.g., the financial records collector (115) of FIG. 1), or any other type of hardware or software module configured to perform mapping of a financial record to a tax category. Those skilled in the art will appreciate that, in one or more embodiments of the invention, the mapping may simply be a lookup of a predetermined association between a business-related category and a tax category.

In one or more embodiments of the invention, the tax form with which the mapping is associated may be a tax form provided by a government entity, such as the Internal Revenue Service (IRS). Further, in one or more embodiments of the invention, the tax form may be a tax form for sole proprietorships (e.g., IRS Form 1040, Schedule C), supplemental income and loss (e.g., IRS Form 1040, Schedule E), S corporations (e.g., IRS Form 1120S), partnerships (e.g., IRS Form 1065), or any other type of small business.

Those skilled in the art will appreciate that, in one or more embodiments of the invention, one or more of the aforementioned steps (i.e., the steps shown in FIGS. 2-5 individually, collectively, or in any sub-combination) may be performed in a different order than the order shown in FIGS. 2-5. Further, in one or more embodiments of the invention, one or more of the steps shown in FIGS. 2-5 may be omitted. Additionally, in one or more embodiments of the invention, one or more of the steps shown in FIGS. 2-5 may be interspersed.
Those skilled in the art will appreciate that many different ways to order, omit, and/or intersperse the steps shown in FIGS. 2-5 exist. Accordingly, the specific arrangement of the steps shown in FIGS. 2-5 should not be construed as limiting the scope of the invention.

FIG. 6 shows a table of mappings in accordance with one or more embodiments of the invention. Specifically, the table shown in FIG. 6 is an example of how business-related categories (e.g., 602) may be mapped to tax categories (e.g., 652), in accordance with one or more embodiments of the invention.

In one or more embodiments of the invention, multiple business-related categories may be mapped to a single tax category. For example, in FIG. 6, the groups of business-related categories (604, 606, 610, 612, 614, 616, 618, 620) are mapped to the tax categories (654, 656, 660, 662, 664, 666, 668, 670), respectively. Further, in one or more embodiments of the invention, a single business-related category may be mapped to a single tax category. For example, in FIG. 6, the business-related categories (608, 622) are mapped to the tax categories (658, 672), respectively. Those skilled in the art will appreciate that any number of business-related categories may be mapped to a single tax category.

As shown in FIG. 6, in one or more embodiments of the invention, business-related categories may be named differently from their corresponding tax categories. Further, as shown in FIG. 6, business-related categories may have the same name as their associated tax categories. For example, in FIG. 6, the business-related category (622) has the same name as its corresponding tax category (672). Those skilled in the art will appreciate that the business-related categories, tax categories, and mappings shown in FIG. 6 are provided for exemplary purposes only, and should not be construed as limiting the scope of the invention.
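In the spirit of FIG. 6, the many-to-one mapping from business-related categories to tax categories can be sketched as a plain dictionary lookup of a predetermined association, as the description suggests. The category and tax-category names below are invented for illustration and are not the contents of FIG. 6:

```python
# Hypothetical predetermined association: several user-friendly
# business-related categories map to a single tax category.
CATEGORY_TO_TAX = {
    "client gifts": "Other expenses",
    "showing expenses": "Other expenses",
    "broker fees": "Commissions and fees",
}

def map_to_tax_category(business_category):
    """Map a business-related category to its associated tax category."""
    return CATEGORY_TO_TAX[business_category]
```

Because the association is predetermined, the mapping step reduces to a constant-time lookup, and any number of business-related categories can share one tax category.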
In one or more embodiments of the invention, financial records may be categorized (e.g., some or all of the steps of FIG. 5) using a graphical user interface. FIG. 7 shows a diagram of a graphical user interface in accordance with one or more embodiments of the invention. Specifically, FIG. 7 shows a diagram of a categorization interface (700), in accordance with one or more embodiments of the invention. Those skilled in the art will appreciate that the categorization interface (700) shown in FIG. 7 may correspond to the categorization interface (120) of FIG. 1.

In one or more embodiments of the invention, the categorization interface (700) is configured to display one or more financial records (e.g., 705). Specifically, in one or more embodiments of the invention, each financial record (e.g., 705) corresponds to a financial transaction. Accordingly, in one or more embodiments of the invention, each financial record (e.g., 705) may include a transaction time (e.g., 710) indicating the time of the transaction, a transaction partner (e.g., 715) indicating an entity with which the financial transaction occurred, a transaction amount (e.g., 725) indicating an amount of the transaction, a transaction description (e.g., 720) providing textual details about the financial record (e.g., 705), or any other data associated with the financial record (e.g., 705).

Those skilled in the art will appreciate that, in one or more embodiments of the invention, one or more of the characteristics of a financial record (e.g., 705) described above may be omitted. Further, in one or more embodiments of the invention, one or more of these characteristics may not be present in the financial record (e.g., 705) as obtained from the financial institution.
Accordingly, in one or more embodiments of the invention, the categorization interface (700) may provide a means to add and/or edit one or more of the characteristics. In one or more embodiments of the invention, a characteristic may be added and/or edited by selecting a specific location of the categorization interface (700) (e.g., an editable field associated with a financial record (e.g., 705)), using a separate interface (not shown), or by any other similar means. Those skilled in the art will appreciate that adding and/or editing one or more of these characteristics may provide for more reliable tracking of financial records (e.g., 705) and/or more informative financial reports. Such financial reports are discussed in detail below.

For example, as shown in FIG. 7, the categorization interface (700) may allow a user to add and/or edit the transaction description (e.g., 720) for a financial record (e.g., 705). Further, in one or more embodiments of the invention, the categorization interface (700) may provide functionality for editing the transaction amount (e.g., 725) associated with a financial record (e.g., 705). Those skilled in the art will appreciate that editing a transaction amount (e.g., 725) may allow a user to specify a portion of the transaction amount (e.g., 725) that is business-related, thereby providing for more accurate financial reports. In one or more embodiments of the invention, if a transaction amount (e.g., 725) is edited, then the categorization interface (700) may be configured to display both the edited transaction amount (not shown) and the original transaction amount (e.g., 725) in association with the financial record (e.g., 705).

Further, in one or more embodiments of the invention, each financial record (e.g., 705) may be associated with a category indicator (e.g., 730).
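The amount-editing behavior described above, where a user narrows a transaction amount to its business-related portion while the original amount is retained for display, can be sketched as follows. This is an illustrative Python sketch; the field names are assumptions, not taken from FIG. 7:

```python
def edit_amount(record, business_amount):
    """Replace the transaction amount with its business-related portion,
    keeping the original amount so both can be displayed."""
    edited = dict(record)  # do not mutate the record as obtained
    edited["original_amount"] = record["amount"]
    edited["amount"] = business_amount
    return edited

rec = {"partner": "Verizon", "amount": -120.00, "description": "phone bill"}
rec = edit_amount(rec, -60.00)  # e.g., half the bill is business-related
```

Keeping both amounts lets a later financial report use the business-related portion while the interface still shows the transaction as obtained from the institution.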
Specifically, in one or more embodiments of the invention, the category indicator (e.g., 730) may be configured to display a category associated with the financial record (e.g., 705). Those skilled in the art will appreciate that the category indicator (e.g., 730) may indicate a business-related category or non-business-related category, as described above. In one or more embodiments of the invention, the category indicator (730) may be an un-editable text field or image, a list, a dropdown menu, a menu, an icon group, a textbox, or any other similar type of indicator and/or control. For example, the category indicator (730) may be a control for selecting a category, as described below.

In one or more embodiments of the invention, the categorization interface (700) may be used to select a business-related or non-business-related category for a financial record (e.g., 705). Accordingly, in one or more embodiments of the invention, the categorization interface may include a category selector (740) configured to select a category for the financial record (e.g., 705). In one or more embodiments of the invention, the category selector (740) may be a list, a dropdown menu, a menu, an icon group, a textbox, or any other similar type of control for selecting a category. Further, in one or more embodiments of the invention, the categorization interface (700) may include a category entry control (not shown) for entering a user-defined category. Those skilled in the art will appreciate that the category entry control may alternatively be displayed in a separate interface (not shown). Further, those skilled in the art will appreciate that functionality of the category indicator (730) and category selector (740) may be included in a single categorization control (not shown).
Further, in one or more embodiments of the invention, the categorization interface (700) may include a category navigator (745) configured to provide access to additional categories not initially available in the categorization interface (700). Those skilled in the art will appreciate that the categories initially available in the categorization interface (700) may be categories that are common to a significant number of business industries, categories that are specific to a business industry (e.g., a business industry selected for a financial account, as discussed above), one or more user-selected preferences, or any other filtering of available categories. Alternatively, in one or more embodiments of the invention, all categories may be initially available.

In one or more embodiments of the invention, the categorization interface (700) may include a keyword help (735) configured to accept a keyword and, based thereon, obtain advice associated with the keyword. In one or more embodiments of the invention, the keyword may be a business-related category, a tax category, a title of a tax form, or any other type of keyword. In one or more embodiments of the invention, the advice may be directed to usage of the categorization interface (700), a tax deduction, description of a business-related category and/or tax category, or any other type of advice. For example, in one or more embodiments of the invention, the advice may indicate a probable business-related category associated with the keyword. Further, in one or more embodiments of the invention, the advice may indicate a probable tax category associated with the keyword. Those skilled in the art will appreciate that many different types of advice for keywords exist.

Further, in one or more embodiments of the invention, the categorization interface (700) may include a progress indicator (755).
Specifically, in one or more embodiments of the invention, the progress indicator (755) may be configured to display a number of financial records (e.g., 705) that have not yet been categorized, a number of financial records (e.g., 705) that have been categorized, or any other type of categorization progress. In one or more embodiments of the invention, the progress indicator (755) may include text, a number, an image, a progress bar, a percentage, or any other similar type of indicator.

Further, in one or more embodiments of the invention, the categorization interface (700) may include a financial status indicator (760). Specifically, as shown in FIG. 7, in one or more embodiments of the invention, the financial status indicator (760) may be configured to display a tax savings (or estimated tax savings) and/or tax liability (or estimated tax liability) based on categorization of one or more financial records (e.g., 705). Those skilled in the art will appreciate that many different ways to calculate or estimate a tax savings and/or tax liability based on categorization of financial records (e.g., 705) exist.

Further, in one or more embodiments of the invention, the categorization interface (700) may include a financial record entry control (765). Specifically, in one or more embodiments of the invention, the financial record entry control (765) may be configured to access an interface (not shown) for entering a financial record (e.g., 705) to be displayed in the categorization interface (700). Those skilled in the art will appreciate that a financial record (e.g., 705) thus entered may be a financial record that was not retrieved and/or not available from the financial institution(s), i.e., not retrieved in the manner described above. However, in one or more embodiments of the invention, the financial record entry control (765) may provide functionality to retrieve additional financial records from the financial institution(s).
In one or more embodiments of the invention, the financial records (e.g., 705) displayed in the categorization interface (700) may only be those financial records (e.g., 705) that were not previously categorized, or financial records (e.g., 705) corresponding only to a particular range of transaction times (e.g., 710). Accordingly, in one or more embodiments of the invention, the categorization interface (700) may include a financial history link (770) configured to provide functionality for displaying financial records that have already been categorized and/or corresponding to a historical range of transaction times. In one or more embodiments of the invention, financial records accessed using the financial history link (770) may be displayed in the categorization interface (700) or in another interface (not shown).

As described above, in one or more embodiments of the invention, after financial records are categorized, the categorizations are used to generate a financial report. FIG. 8 shows a graphical user interface in accordance with one or more embodiments of the invention. Specifically, FIG. 8 shows a financial report interface (800), in accordance with one or more embodiments of the invention. In one or more embodiments of the invention, the financial report interface (800) is configured to display a financial report (850).

In one or more embodiments of the invention, the financial report (850) may include a report title (805), configured to display a project and/or client name, a report description, one or more dates (e.g., a current date, earliest and latest dates of financial records (e.g., 825) included in the financial report, or any other type of date or date range), or any other type of information about the financial report (850) and/or financial report interface (800).

In one or more embodiments of the invention, the financial report (850) may include one or more financial records (e.g., 825).
More specifically, in one or more embodiments of the invention, the financial records (e.g., 825) may be displayed in association with a tax category heading (e.g., 815), indicating a result of mapping each financial record (e.g., 825) from a business-related category (e.g., a business-related category selected using the categorization interface of FIG. 7) to a tax category. Further, in one or more embodiments of the invention, the tax category heading (e.g., 815) may include a transaction amount total for all of the financial records (e.g., 825) included in the tax category. Additionally, in one or more embodiments of the invention, the financial report (850) may include preparer instructions (e.g., 820) for one or more tax categories, providing information about how to use the financial report (850) when preparing a tax filing.

In one or more embodiments of the invention, every characteristic (e.g., transaction time, transaction partner, transaction amount, transaction description, etc.) of the financial records (e.g., 825) may be displayed in the financial report (850). Alternatively, in one or more embodiments of the invention, none or a subset of characteristics of the financial records (e.g., 825) may be displayed. Further, in one or more embodiments of the invention, the financial report interface (800) may include a summary view link (830) and/or a detailed view link (835), configured to modify the financial report (850).

In one or more embodiments of the invention, the summary view link (830) may be configured to modify the financial report (850) to include transaction amount totals for each tax category, without including specific characteristics of each financial record (e.g., 825). Conversely, in one or more embodiments of the invention, the detailed view link (835) may be configured to modify the financial report (850) to include one or more characteristics of individual financial records (e.g., 825) (e.g., as shown in FIG. 8).
Those skilled in the art will appreciate that the summary view link (830) and/or detailed view link (835) may be configured to modify the financial report (850) to include any combination and/or subset of characteristics of financial records (e.g., 825). In one or more embodiments of the invention, the summary view link (830) and/or detailed view link (835) may be a hyperlink, a button, an icon, a menu item, a tab, or any other similar type of link.

In one or more embodiments of the invention, the financial report interface (800) may include an output control (840) configured to output the financial report (850). In one or more embodiments of the invention, the financial report (850) may be output to a printer, a portable document format (PDF) file, a text file, a network location, or any other type of output resource. Those skilled in the art will appreciate that when the output control (840) is selected, additional steps (e.g., using an output settings interface (not shown)) may be required prior to outputting the financial report (850). In one or more embodiments of the invention, the output control (840) may be a hyperlink, a button, an icon, a menu item, a tab, or any other similar type of control.

In one or more embodiments of the invention, the financial report (850), or the financial data (i.e., financial records (e.g., 825), etc.) represented therein, may be used by tax preparation software to automatically populate a tax form. Different types of tax forms are discussed in greater detail above. Accordingly, in one or more embodiments of the invention, the output control (840) may be configured to provide some or all of the financial data to tax preparation software. Those skilled in the art will appreciate that the categorization interface (e.g., 700 of FIG. 7), financial report interface (800), and aforementioned tax preparation software may be components of a single software package.
In one or more embodiments of the invention, the financial report interface (800) may include a report guide (845) configured to supply additional information about the financial report interface (800). More specifically, the report guide (845) may supply information about using the financial report interface (800), using the financial report (850) to prepare a tax filing, tax categories, or any other type of information. In one or more embodiments of the invention, the information may be supplied via one or more links displayed in the report guide (845). For example, the links may be hyperlinks, buttons, icons, menu items, tabs, or any other type of link.

The invention may be implemented on virtually any type of computer regardless of the platform being used. For example, as shown in FIG. 9, a computer system (900) includes a processor (902), associated memory (904), a storage device (906), and numerous other elements and functionalities typical of today's computers (not shown), including mobile devices such as PDAs, cellular phones, and any other computing devices. The computer (900) may also include input means, such as a keyboard (908) and a mouse (910), and output means, such as a monitor (912). The computer system (900) may be communicatively coupled to a network (914), such as a local area network (LAN) or a wide area network (e.g., the Internet) via a network interface connection (not shown). Those skilled in the art will appreciate that these input and output means may take other forms.

Further, those skilled in the art will appreciate that one or more elements of the aforementioned computer system (900) may be located at a remote location and connected to the other elements over a network. Further, the invention may be implemented on a distributed system having a plurality of nodes, where each portion of the invention (e.g., financial institution(s), financial records collector, categorization interface, financial report interface, etc.)
may be located on a different node within the distributed system. In one embodiment of the invention, the node corresponds to a computer system. Alternatively, the node may correspond to a processor with associated physical memory. The node may alternatively correspond to a processor with shared memory and/or resources. Further, software instructions to perform embodiments of the invention may be stored on a computer readable medium such as a compact disc (CD), a diskette, a tape, a file, or any other computer readable storage device.

While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as disclosed herein. Accordingly, the scope of the invention should be limited only by the attached claims.

1. A method for categorizing financial records of a business, comprising: identifying, from a plurality of business industries, a business industry of the business, wherein each of the plurality of business industries has a distinct corresponding pre-determined list comprising a plurality of industry specific categories; selecting, by a software application using a processor, the plurality of industry specific categories corresponding to the business industry; obtaining, using the processor, a plurality of uncategorized financial records of the business from a plurality of financial institutions; for each uncategorized financial record in the plurality of uncategorized financial records, automatically categorizing, by the software application using the processor, the uncategorized financial record using a category selected from a plurality of categories to create a plurality of categorized financial records, wherein the plurality of categories comprises a plurality of business-related categories comprising the plurality of industry specific categories and at least one non-business-related
category, and wherein, for at least one uncategorized financial record, the automatic categorization is based on a trend observed in previously categorized financial records, wherein the trend comprises characteristics of the at least one financial record being categorized that is in common with characteristics of previously categorized financial records; for each categorized financial record automatically categorized using a business-related category selected from the plurality of business-related categories, automatically mapping, by the software application using the processor, the categorized financial record to a tax category selected from a plurality of tax categories based upon a predetermined association between the business-related category and the tax category, wherein the tax category is associated with the business-related category, wherein at least two of the plurality of industry specific categories map to the same tax category; grouping the plurality of categorized financial records according to the plurality of tax categories to obtain a plurality of financial record tax category groups; and generating, using the processor and based upon the mapping, a financial report comprising the plurality of financial record tax category groups, wherein at least a portion of the plurality of uncategorized financial transaction records categorized according to the plurality of industry specific categories are expense transactions.

2. The method of claim 1, wherein the plurality of business-related categories further comprises a user-defined category.

3. The method of claim 1, wherein the plurality of financial records is associated with a sole proprietorship as defined by the Internal Revenue Service (IRS), and wherein the plurality of tax categories is defined for sole proprietorships.

4.
An apparatus comprising: a computer and display device configured with a graphical user interface displaying a categorization interface, comprising: a plurality of categorized financial records of a business from a plurality of financial institutions, wherein the business is in a business industry of a plurality of business industries, wherein each of the plurality of business industries has a distinct corresponding pre-determined list comprising a plurality of industry specific categories, wherein the plurality of categorized financial records are obtained as a plurality of uncategorized financial records from the plurality of financial institutions, wherein each uncategorized financial record in the plurality of uncategorized financial records are automatically categorized using a first category selected from a plurality of categories to create the plurality of categorized financial records, wherein the plurality of categories comprises a plurality of business-related categories comprising the plurality of industry specific categories corresponding to the business industry of the business and at least one non-business-related category, and wherein, for at least one uncategorized financial record, the automatic categorization is based on a trend observed in previously categorized financial records, wherein the trend comprises characteristics of the at least one financial record being categorized that is in common with characteristics of previously categorized financial records; and for each categorized financial record in the plurality of uncategorized financial records, a category selector configured to decline the automatic categorization of the categorized financial record and further configured to re-categorize the categorized financial record by associating the financial record with a second category selected from the plurality of categories, wherein the categorized financial record associated with a first category selected from the plurality of business-related categories is
automatically mapped to a tax category selected from a plurality of tax categories based on a predetermined association between the business-related category and the tax category, wherein the tax category is associated with the business-related category, wherein at least two of the plurality of industry specific categories map to the same tax category; and wherein a financial report generated based upon the mapping comprises a plurality of financial record tax category groups grouped according to the plurality of tax categories, and wherein at least a portion of the plurality of uncategorized financial transaction records categorized according to the plurality of industry specific categories are expense transactions.

5. The graphical user interface of claim 4, further comprising: a keyword help configured to accept a keyword and, based thereon, obtain advice associated with a probable business-related category associated with the keyword.

6. The graphical user interface of claim 4, wherein the plurality of business-related categories further comprises a user-defined category.

7. The graphical user interface of claim 4, wherein the plurality of financial records is associated with a sole proprietorship as defined by the Internal Revenue Service (IRS), and wherein the plurality of tax categories is defined for sole proprietorships.

8.
A system comprising: a processor associated with a memory and a storage device; a financial records collector configured to obtain a plurality of uncategorized financial records of a business from a plurality of financial institutions; and a categorization interface configured to: identifying, from a plurality of business industries, a business industry of the business, wherein each of the plurality of business industries has a distinct corresponding pre-determined list comprising a plurality of industry specific categories; automatically categorize each uncategorized financial record in the plurality of uncategorized financial records using a category selected from a plurality of categories to create a plurality of categorized financial records, wherein the plurality of categories comprises a plurality of business-related categories and at least one non-business-related category, wherein the business-related categories comprises the plurality of industry specific categories corresponding to the business industry, wherein, for at least one uncategorized financial record, the automatic categorization is based on a trend observed in previously categorized financial records, wherein the trend comprises characteristics of the at least one financial record being categorized that is in common with characteristics of previously categorized financial records, wherein each categorized financial record automatically categorized using a business-related category selected from the plurality of business-related categories is automatically mapped to a tax category selected from a plurality of tax categories based upon a predetermined association between the business-related category and the tax category, wherein the tax category is associated with the business-related category, wherein at least two of the plurality of industry specific categories map to the same tax category; and wherein a financial report generated based upon the mapping comprises a plurality of financial record tax category
groups grouped according to the plurality of tax categories, and wherein at least a portion of the plurality of uncategorized financial transaction records categorized according to the plurality of industry specific categories are expense transactions.

9. The system of claim 8, wherein the plurality of business-related categories further comprises a user-defined category.

10. The system of claim 8, wherein the plurality of financial records is associated with a sole proprietorship as defined by the Internal Revenue Service (IRS), and wherein the plurality of tax categories is defined for sole proprietorships.

11. A computer readable non-transitory medium comprising executable instructions for categorizing financial records of a business by: identifying, from a plurality of business industries, a business industry of the business, wherein each of the plurality of business industries has a distinct corresponding predetermined list comprising a plurality of industry specific categories; selecting the plurality of industry specific categories corresponding to the business industry; obtaining a plurality of uncategorized financial records of the business from a plurality of financial institutions; for each uncategorized financial record in the plurality of uncategorized financial records, automatically categorizing the uncategorized financial record using a category selected from a plurality of categories to create a plurality of categorized financial records, wherein the plurality of categories comprises a plurality of business-related categories comprising the plurality of industry specific categories and at least one non-business-related category, and wherein, for at least one uncategorized financial record, the automatic categorization is based on a trend observed in previously categorized financial records, wherein the trend comprises characteristics of the at least one financial record being categorized that is in common with characteristics of each previously categorized
financial record; for each categorized financial record automatically categorized using a business-related category selected from the plurality of business-related categories, mapping the categorized financial record to a tax category selected from a plurality of tax categories based upon a predetermined association between the business-related category and the tax category, wherein the tax category is associated with the business-related category, wherein at least two of the plurality of industry specific categories map to the same tax category; grouping the plurality of categorized financial records according to the plurality of tax categories to obtain a plurality of financial record tax category groups; and generating, based upon the mapping, a financial report comprising the plurality of financial record tax category groups, wherein at least a portion of the plurality of uncategorized financial transaction records categorized according to the plurality of industry specific categories are expense transactions.

12. The computer readable medium of claim 11, wherein the plurality of business-related categories further comprises a user-defined category.

13. The computer readable medium of claim 11, wherein the plurality of financial records is associated with a sole proprietorship as defined by the Internal Revenue Service (IRS), and wherein the plurality of tax categories is defined for sole proprietorships.
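The pipeline recited in the claims (auto-categorize records based on a trend in previously categorized records, map business-related categories to tax categories via a predetermined association, group by tax category, and total each group for the report) can be sketched in Python. This is an illustrative sketch only: the category names, the mapping table, the record fields, and the partner-matching trend heuristic are hypothetical examples, not taken from the patent.

```python
# Illustrative sketch of the claimed pipeline: categorize financial
# records, map business-related categories to tax categories, and
# group them for a report.  All names below are hypothetical.
from collections import defaultdict

# Hypothetical predetermined association between business-related
# categories and tax categories; note that two business categories
# map to the same tax category, as the claims require.
TAX_MAPPING = {
    "web-hosting": "Utilities",
    "software-subscription": "Utilities",
    "client-lunch": "Meals and Entertainment",
}

def categorize(record, history):
    """Pick a category from a trend: reuse the category of a previously
    categorized record that shares the same transaction partner."""
    for prior in history:
        if prior["partner"] == record["partner"]:
            return prior["category"]
    return "uncategorized"

def build_report(uncategorized, history):
    """Categorize, map to tax categories, group, and total each group."""
    groups = defaultdict(list)
    for record in uncategorized:
        category = categorize(record, history)
        record = dict(record, category=category)
        history.append(record)
        tax_category = TAX_MAPPING.get(category)  # None => non-business
        if tax_category is not None:
            groups[tax_category].append(record)
    # Financial report: tax-category groups with transaction totals.
    return {tax: (sum(r["amount"] for r in rs), rs)
            for tax, rs in groups.items()}

history = [{"partner": "HostCo", "amount": 20.0, "category": "web-hosting"}]
records = [{"partner": "HostCo", "amount": 25.0},
           {"partner": "Grocery", "amount": 60.0}]
report = build_report(records, history)
print(report["Utilities"][0])  # -> 25.0, total for the Utilities group
```

The Grocery record finds no trend match, falls into the non-business bucket, and is excluded from the report, mirroring the claim's split between business-related and non-business-related categories.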
196 LIFE WITH THE TROTTERS.

rule has always been that if I want my horse shod to get the blacksmith to do it. The same rule holds good in case of sickness. If my horse needs blistering or any other medical attendance, I get what I consider to be the best veterinary surgeon within reach and let him take charge of the case, and I know of no reason why this rule should not be followed at all times. "Every man his own lawyer" generally gets the prisoner locked up, and I think every man his own doctor would be a great scheme for the undertakers. In this case, that the horse did not die was due, I am satisfied, more to good luck than good management. I called in a veterinary, and he suggested a treatment that he thought would serve to allay the inflammation and pain. He managed to effect this result, but the swelling never entirely left the horse's legs.

As the season for training approached, I commenced giving Ford moderate exercise on the road, and then took him to the West Side Driving Park to prepare him for the race. The spring being very backward and cold and the track unfit for fast work, I made up my mind I would either have to hunt another training ground or I would not have my horse in any condition by the 12th of June. I concluded that Indiana would be a good place to go to, as they have an early spring and sandy tracks there, but when I suggested this to Ford's owner I met with opposition that I had not expected, as he did not see why a horse could not be prepared just as well in Chicago as at any other place, and freed his mind to that effect. I was so sure that my judgment was right that I told him to do one of two things: he could take my interest in the race, and have Ford trained wherever he chose, or I would take his interest in the race and have my way about it.
Under this pressure he finally consented that I should use my own judgment, and so I took Ford with my other horses to Elkhart, Ind., where I found a good half-mile track, first-class stabling, good roads, and everything favorable for an early preparation. I discovered, when I began to drive Ford along, that his
var socket = io.connect();
var mode;
var sounds;

socket.emit('getData');

// The documented socket.io client API is .on(), not addEventListener.
socket.on('quiz', function (response) {
  var assetPath = '/audio/';
  var quiz = response.quizData;
  var audio = response.quizAudio;

  // Quiz modes such as "cooked-food" are stored under keys such as
  // "cooked_food", so one lookup replaces the long if/else chain.
  var key = quiz.mode.replace(/-/g, '_');
  mode = quiz[key];
  sounds = audio[key];

  createjs.Sound.registerSounds(sounds, assetPath);

  // Build one slide per quiz item.
  $(mode).each(function (index, el) {
    $('#sliderObj').append(
      '<li><h3 class="slide-title">' + el.answer.replace(/-/g, ' ') + '</h3>' +
      '<img src="./img/' + el.img + '" class="img-thumbnail">' +
      '<button id="' + el.id + '" class="btn btn-block btn-slider">' +
      el.answer.replace(/-/g, ' ') + '</button></li>'
    );
  });

  $('#currentMode').append(quiz.mode);

  $(document).ready(function ($) {
    // Auto-advance every 3 seconds while the checkbox is ticked; the
    // original registered a new interval on every change and never
    // cleared it, so toggling off had no effect.
    var autoAdvanceId = null;
    $('#checkbox').change(function () {
      if (this.checked) {
        autoAdvanceId = setInterval(moveRight, 3000);
      } else {
        clearInterval(autoAdvanceId);
      }
    });

    var slideCount = $('#slider ul li').length;
    var slideWidth = $('#slider ul li').width();
    var slideHeight = $('#slider ul li').height();
    var sliderUlWidth = slideCount * slideWidth;

    $('#slider').css({ width: slideWidth, height: slideHeight });
    $('#slider ul').css({ width: sliderUlWidth, marginLeft: -slideWidth });
    $('#slider ul li:last-child').prependTo('#slider ul');

    function moveLeft() {
      $('#slider ul').animate({ left: +slideWidth }, 200, function () {
        $('#slider ul li:last-child').prependTo('#slider ul');
        $('#slider ul').css('left', '');
      });
    }

    function moveRight() {
      $('#slider ul').animate({ left: -slideWidth }, 200, function () {
        $('#slider ul li:first-child').appendTo('#slider ul');
        $('#slider ul').css('left', '');
      });
    }

    $('a.control_prev').click(moveLeft);
    $('a.control_next').click(moveRight);

    $('.btn-slider').click(function () {
      if (quiz.mode !== 'numbers') {
        // Sound IDs match the hyphenated answer text.
        createjs.Sound.play(this.innerHTML.replace(/ /g, '-'));
      } else {
        // For numbers, derive the sound ID from the image filename.
        createjs.Sound.play($(this).prev('img').attr('src').slice(14, -4));
      }
    });
  });
});
Talk:Axe of Perun

Untitled

I don't think that this is properly part of WP:Milhist, so I've removed its banner. Sturmvogel 66 (talk) 03:54, 4 August 2009 (UTC)

The link to a merchant has been removed, because I believe it was violating Wikipedia's terms of use. It contributed nothing to the article, and was just an advertisement. The article (Axe of Perun) was created first, therefore it can not be referencing it. VoivodeZmey (talk) 00:49, 14 December 2011 (UTC)

Cleaning up needed

I fixed the most glaring formatting errors. More work is needed here, and more refs. Zezen (talk) 12:17, 4 November 2015 (UTC)

No sources, seems to be pure fantasy

It's completely lacking any proper sources and the few links down there are broken. Besides that it looks like some 14 year old wrote a fantasy article about some item found in Russia. I mean, "The Axe of Perun, Hammer of Perun or Molnia / Mjolnir, also called a "hatchet amulet", is an archaeological artifact worn as a pendant and shaped like a battle axe. The Amulet is named after Molnia (Cyrillic: Молния), and is a counterpart to a Nordic Mjolnir amulet." is ridiculous. It's the counterpart to a Nordic Mjolnir amulet? Sources please. This article should be deleted, or at least rewritten and sourced properly. — Preceding unsigned comment added by <IP_ADDRESS> (talk) 06:15, 10 December 2019 (UTC)
Discharge lamp lighting device

ABSTRACT

A discharge lamp lighting device having: a power control circuit for controlling power to be supplied to a discharge lamp; an AC converter circuit provided between the power control circuit and the discharge lamp for converting a DC current into an AC current; a timer circuit for controlling operation of the AC converter circuit; and an igniter circuit for generating a high voltage pulse to thereby activate the discharge lamp; wherein the timer circuit includes: a second timer for starting as a timer in accordance with a power supply activating signal; a first timer for starting as a timer in accordance with lighting of the discharge lamp; and an OR circuit for outputs of the first and second timers. Thus, the DC lighting time is designed not to change, and stable lighting can be achieved.

FIELD OF THE INVENTION

The present invention relates to a discharge lamp lighting device suitable for a projection type display such as a liquid crystal projector.

DESCRIPTION OF THE RELATED ART

High voltage discharge lamps such as metal halide lamps or high-pressure mercury lamps are used as light sources of projection type displays because it is easy for these lamps to obtain high conversion efficiency close to that of point sources. Dedicated discharge lamp lighting devices for supplying the voltages and currents required for lighting are used to light the high voltage discharge lamps. An AC type high voltage discharge lamp is typically first DC-driven for a fixed time by a timer circuit, and then shifted to AC drive. In the related art, however, the time between the operation of an igniter circuit and the lighting of the lamp fluctuates due to conditions of the high voltage discharge lamp (e.g. lamp temperature) or variations in the ease of lighting peculiar to the discharge lamp.
Accordingly, there is a disadvantage that the time for which the discharge lamp is DC-lit is so short that the discharge lamp is shifted to AC lighting while its arc remains unstable, and the discharge lamp thus blacks out. In addition, the related-art discharge lamp can be activated even if there is a failure such as an I/O short-circuit in a power control circuit. Accordingly, there is a disadvantage that a circuit located after the power control circuit, such as an igniter circuit, or the discharge lamp is broken in a chain reaction.

SUMMARY OF THE INVENTION

It is a first object of the invention to provide a discharge lamp lighting device which can light a high voltage discharge lamp quickly and stably regardless of the conditions or variations of the discharge lamp. It is a second object of the invention to provide a discharge lamp lighting device which can prevent electric circuits or the like located after a power control circuit from being broken in a chain reaction even if there is a failure such as an I/O short-circuit in the power control circuit.

In order to achieve the first object, according to a first aspect of the invention, there is provided a discharge lamp lighting device having: a power control circuit for controlling power to be supplied to a discharge lamp; an AC converter circuit provided between the power control circuit and the discharge lamp for converting a DC current into an AC current; a timer circuit for controlling operation of the AC converter circuit; and an igniter circuit for generating a high voltage pulse to thereby activate the discharge lamp. The timer circuit includes: a second timer for starting as a timer in accordance with a power supply activating signal; a first timer for starting as a timer in accordance with lighting of the discharge lamp; and an OR circuit for outputs of the first and second timers.
In order to achieve the second object, according to a second aspect of the invention, there is provided a discharge lamp lighting device having: a power control circuit for controlling power to be supplied to a discharge lamp. The discharge lamp lighting device further has: a lamp voltage detection terminal for detecting a lamp voltage of the power control circuit; and a protection circuit which is provided so that when a lamp voltage detection signal is outputted from the lamp voltage detection terminal, the protection circuit suspends activation of the discharge lamp even if a lamp-switching-on signal is supplied, and so that when no lamp voltage detection signal is outputted from the lamp voltage detection terminal, the protection circuit allows activation of the discharge lamp in accordance with a lamp-switching-on signal supplied to the protection circuit.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of a discharge lamp lighting device according to an embodiment of the invention;
FIG. 2 is a circuit diagram showing an embodiment of a timer circuit for use in the discharge lamp lighting device;
FIG. 3 is a timing chart for explaining the operation of the timer circuit;
FIG. 4 is a circuit diagram showing an embodiment of a protection circuit for use in the discharge lamp lighting device;
FIG. 5 is a timing chart for explaining the operation of the protection circuit; and
FIG. 6 is a schematic configuration view of a projection type display using the discharge lamp lighting device according to the invention.

DESCRIPTION OF THE PREFERRED EMBODIMENT

Description will be made below on an embodiment of the invention with reference to the drawings. FIG. 1 is a block diagram of a discharge lamp lighting device according to the embodiment of the invention, and FIG. 6 is a schematic configuration view of a projection type display using the discharge lamp lighting device.
The discharge lamp lighting device according to the embodiment of the invention is used suitably in the projection type display shown in FIG. 6 by way of example. As shown in FIG. 6, a reflector 74 and a high voltage discharge lamp 75 constitute a light source irradiating an image display device 73 with light from the back of the image display device 73. The light transmitted through the image display device 73 is projected onto a screen 71 by an optical system 72. The image display device 73 is, for example, a liquid crystal display. The image display device 73 is driven by an image display device drive circuit 76 so that an image is displayed. Thus, a large screen image is obtained on the screen 71. A discharge lamp lighting device 77 controls the activation and lighting of the high voltage discharge lamp 75. The discharge lamp lighting device 77 is configured as shown in FIG. 1.

In FIG. 1, the reference numeral 1 represents a power supply input terminal; 2, a MOS-FET; 3, a diode; 4, a choke coil; 5, a capacitor; 6 and 7, resistors; 8 to 11, MOS-FETs; 12, a resistor; 13, a discharge lamp; 14 and 16, lamp voltage detection terminals; 15, a timer circuit output terminal; 17, a protection circuit power supply output terminal; 18, an input terminal of a lamp-switching-on signal from a lamp switch (not shown); 19, a protection circuit power supply input terminal; 21, a first drive circuit; 22, a PWM control circuit; 23, an overvoltage protection circuit (OVP circuit); 24, a second drive circuit; 25, an oscillating circuit; 26, an igniter circuit; 31, a timer circuit; and 32, a protection circuit. As shown by the chain lines, a power control circuit 27 is constituted by the MOS-FET 2, the diode 3, the choke coil 4, the capacitor 5, the resistors 6, 7 and 12, the drive circuit 21 and the PWM control circuit 22.
A voltage and a current to be supplied to the discharge lamp 13 are controlled by the PWM control circuit 22 in accordance with the detection results of the voltage and the current. An AC converter circuit 28 is constituted by the MOS-FETs 8 to 11, the drive circuit 24 and the oscillating circuit 25. The AC converter circuit 28 is provided between the power control circuit 27 and the discharge lamp 13 so as to convert a DC current into an AC current. The igniter circuit 26 generates a high voltage pulse so as to activate the high voltage discharge lamp 13. The overvoltage protection (OVP) circuit 23 suspends the operation of the power control circuit 27 when an overvoltage appears in the output due to abnormality in the discharge lamp 13 or the like. Thus, the circuits and the discharge lamp 13 are protected.

The timer circuit 31 is connected to the overvoltage protection circuit 23 and the oscillating circuit 25. To activate the discharge lamp 13 stably, it is necessary to generate a high voltage. To this end, control is carried out in accordance with a signal from the timer circuit 31 so as to suspend the operation of the overvoltage protection circuit 23. In addition, to activate the discharge lamp 13 stably, it is necessary to DC-drive the discharge lamp 13. To this end, control is carried out in accordance with a signal from the timer circuit 31 so as to suspend the oscillation of the oscillating circuit 25. For example, the MOS-FETs 8 and 11 are turned on while the MOS-FETs 9 and 10 are turned off. When the timer time has passed, the operation of the overvoltage protection circuit 23 and the oscillation of the oscillating circuit 25 are released from suspension.

The protection circuit 32 controls the power supply to the PWM control circuit 22 or the oscillating circuit 25 in accordance with the lamp-switching-on signal supplied to the terminal 18.
At the same time, the protection circuit 32 carries out protection operations such as power-off at the time of overheating. FIG. 2 is a circuit diagram showing an embodiment of the timer circuit 31 in the discharge lamp lighting device shown in FIG. 1. In FIG. 2, the reference numeral 14 represents a lamp voltage detection terminal; 15, a timer circuit output terminal; 40, a reference voltage terminal; 41, a power supply input terminal of the timer circuit 31; 42, a comparator; 43 and 44, resistors; 45 and 46, capacitors; 47, a first timer; 48, a second timer; and 49, an OR circuit. These members are connected as shown in FIG. 2. Next, description will be made on the operation centering on the timer circuit 31 with reference to FIG. 3. In FIG. 3, the reference sign S1 represents an output voltage of the timer circuit output terminal 15 shown in FIGS. 1 and 2; S2, an output voltage of the second timer 48 shown in FIG. 2; S3, an output voltage of the first timer 47 shown in FIG. 2; and S4, a voltage of the power supply input terminal 41 of the timer circuit 31 shown in FIG. 2. As shown in FIG. 3, when power is supplied at time t0, a maximum voltage V3 is outputted from the power control circuit 27 because the discharge lamp 13 is not lit. At this time, the output of the second timer 48 is in a high level. Accordingly, the overvoltage protection circuit 23 is not operated. A high voltage pulse from the igniter circuit 26 is superposed on the voltage V3 so that a voltage V4 is applied to the discharge lamp 13. Thus, the discharge lamp is activated. Then, high-voltage low-current glow discharge is started at time t1, and further shifted to low-voltage high-current arc discharge at time t2. When this voltage change is detected through the lamp voltage detection terminal 14, the positive terminal of the comparator 42 drops to a low level so that the first timer 47 starts. Then, the lamp voltage increases while the lamp temperature increases. 
The output of the first timer 47 drops to a low level at time t3, so that the AC converter circuit 28 is operated. Thus, the discharge lamp 13 is shifted to an AC-lighting mode. After that, the output from the power control circuit 27 reaches a steady-state voltage V1 at time t4 so that the power control circuit 27 supplies fixed power to the discharge lamp 13 by constant power control. As a result, quick and stable lighting can be carried out regardless of the conditions or variations of the discharge lamp 13. FIG. 4 is a circuit diagram showing an embodiment of the protection circuit 32 shown in FIG. 1. In FIG. 4, the reference numeral 16 represents a lamp voltage detection terminal; 17, a protection circuit power supply output terminal; 18, a lamp-switching-on signal input terminal; 19, a protection circuit power supply input terminal; 50, a thermistor; 51 to 61, resistors; and 62 to 67, transistors. With reference to FIG. 5, description will be made on the operation of the protection circuit 32 shown in FIG. 4. In FIG. 5, the reference sign P1 represents an input voltage of the protection circuit power supply input terminal 19; P2, a base voltage of the transistor 64; P3, a partial voltage of the input voltage P1 divided by the thermistor 50 and the resistor 51; P4, a detected voltage of the lamp voltage detection terminal 16; P5, a lamp-switching-on signal (activating signal) of the input terminal 18; and P6, a voltage of the protection circuit power supply output terminal 17. As shown in FIG. 5, first, the voltage P1 is supplied to the protection circuit power supply input terminal 19. When the voltage P4 of the lamp voltage detection terminal 16 is high enough to turn ON the transistor 66, the base voltage P2 of the transistor 64 becomes low. As a result, even if the lamp-switching-on signal P5 (activating signal) is supplied to the lamp-switching-on signal input terminal 18, the transistors 64 and 65 cannot be turned ON. 
Thus, the voltage P6 of the protection circuit power supply output terminal 17 remains 0 V so that the discharge lamp 13 cannot be activated. Accordingly, if there is a failure such as an I/O short-circuit in the power control circuit 27, the discharge lamp 13 cannot be activated, so that chain breakdown can be prevented.

When a voltage high enough to turn ON the transistor 66 is not supplied to the lamp voltage detection terminal 16, the transistor 64 is turned ON in accordance with the lamp-switching-on signal P5 (activating signal) so that the transistor 65 is turned ON and the voltage P6 of the protection circuit power supply output terminal 17 is outputted. Thus, the discharge lamp 13 can be activated. After the activation, the voltage P4 indeed appears in the lamp voltage detection terminal 16. The transistor 66 is, however, kept OFF because the transistor 67 is ON. Thus, there is no fear that the protection circuit 32 malfunctions.

Incidentally, after the activation, when the resistance value of the thermistor 50 increases due to overheating caused by the increase of loss or the like so that the voltage P3 increases to a level high enough to turn on the transistor 62, the base voltage P2 of the transistor 64 becomes low so that the transistors 64 and 65 are turned off. As a result, the voltage P6 of the protection circuit power supply output terminal 17 drops so that the operations of the PWM control circuit 22 and the oscillating circuit 25 are suspended. Thus, protection can be carried out over all the circuits.

According to the first aspect of the invention, a timer circuit includes a second timer for starting as a timer in accordance with a power supply activating signal, a first timer for starting as a timer in accordance with lighting of a discharge lamp, and an OR circuit for outputs of the first and second timers.
Since the DC lighting time is thus designed not to change, quick and stable lighting can be carried out regardless of the conditions and variations of the discharge lamp. According to the second aspect of the invention, a protection circuit is provided so that when a lamp voltage detection signal is outputted from a lamp voltage detection terminal, the protection circuit suspends the activation of the discharge lamp even if a lamp-switching-on signal is supplied. Thus, chain breakdown can be prevented when there is a failure such as an I/O short-circuit in a power control circuit.

What is claimed is:

1. A discharge lamp lighting device comprising: a power control circuit for controlling power to be supplied to a discharge lamp; an AC converter circuit provided between said power control circuit and said discharge lamp for converting a DC current into an AC current; a timer circuit for controlling operation of said AC converter circuit; and an igniter circuit for generating a high voltage pulse to thereby activate said discharge lamp; wherein said timer circuit includes: a second timer for starting as a timer in accordance with a power supply activating signal; a first timer for starting as a timer in accordance with lighting of said discharge lamp; and an OR circuit for outputs of said first and second timers.

2.
A discharge lamp lighting device comprising: a power control circuit for controlling power to be supplied to a discharge lamp; a lamp voltage detection terminal for detecting a lamp voltage of said power control circuit; and a protection circuit which is provided so that when a lamp voltage detection signal is outputted from said lamp voltage detection terminal, said protection circuit suspends activation of said discharge lamp even if a lamp-switching-on signal is supplied, and so that when no lamp voltage detection signal is outputted from said lamp voltage detection terminal, said protection circuit allows activation of said discharge lamp in accordance with a lamp-switching-on signal supplied to said protection circuit. 3. A discharge lamp lighting device according to claim 1 or 2, wherein said discharge lamp is a discharge lamp for a projection type display.
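The timer and protection logic claimed above can be summarized as simple boolean conditions. The following Python model is illustrative only: the function names and the 5 s / 2 s timer durations are assumptions, not values from the patent. It shows how the OR circuit 49 holds the lamp in DC drive for a fixed interval after the arc actually strikes, and how the protection circuit blocks activation whenever a lamp voltage is already detected before switch-on.

```python
# Hypothetical sketch of the first aspect: the second timer starts on
# power-up, the first timer starts when the glow-to-arc voltage drop is
# detected, and the OR of their outputs keeps the AC converter (and the
# overvoltage protection circuit) suspended, i.e. the lamp DC-driven.

def dc_drive_active(t, power_on_at, arc_detected_at,
                    second_timer_s=5.0, first_timer_s=2.0):
    """True while the lamp should remain DC-driven (AC converter suspended).
    Times in seconds; the timer durations are assumed example values."""
    second = power_on_at is not None and t < power_on_at + second_timer_s
    first = arc_detected_at is not None and t < arc_detected_at + first_timer_s
    return second or first  # the OR circuit 49 combines both timer outputs

# Hypothetical sketch of the second aspect: a lamp voltage present before
# activation indicates a failure such as an I/O short-circuit in the power
# control circuit, so start-up is blocked even with a switch-on signal.
def activation_allowed(switch_on, lamp_voltage_detected):
    return switch_on and not lamp_voltage_detected

print(dc_drive_active(1.0, power_on_at=0.0, arc_detected_at=None))  # True
print(dc_drive_active(6.5, power_on_at=0.0, arc_detected_at=6.0))   # True
print(dc_drive_active(8.5, power_on_at=0.0, arc_detected_at=6.0))   # False
print(activation_allowed(True, True))                               # False
```

Because the first timer restarts at the instant the arc is detected, the DC-drive interval after lighting is the same however long ignition takes, which is the stated point of the first aspect.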
[Congressional Bills 113th Congress]
[H. Res. 770 Reported in House (RH)]

House Calendar No. 148
113th CONGRESS
2d Session
H. RES. 770
[Report No. 113-646]

Providing for consideration of the Senate amendment to the bill (H.R. 3979) to amend the Internal Revenue Code of 1986 to ensure that emergency services volunteers are not taken into account as employees under the shared responsibility requirements contained in the Patient Protection and Affordable Care Act; providing for consideration of the bill (H.R. 5759) to establish a rule of construction clarifying the limitations on executive authority to provide certain forms of immigration relief; and providing for consideration of the bill (H.R. 5781) to provide short-term water supplies to drought-stricken California.

_______________________________________________________________________

IN THE HOUSE OF REPRESENTATIVES

December 3, 2014

Mr. Nugent, from the Committee on Rules, reported the following resolution; which was referred to the House Calendar and ordered to be printed

_______________________________________________________________________

RESOLUTION

Providing for consideration of the Senate amendment to the bill (H.R. 3979) to amend the Internal Revenue Code of 1986 to ensure that emergency services volunteers are not taken into account as employees under the shared responsibility requirements contained in the Patient Protection and Affordable Care Act; providing for consideration of the bill (H.R. 5759) to establish a rule of construction clarifying the limitations on executive authority to provide certain forms of immigration relief; and providing for consideration of the bill (H.R. 5781) to provide short-term water supplies to drought-stricken California.

Resolved, That upon adoption of this resolution it shall be in order to take from the Speaker's table the bill (H.R.
3979) to amend the Internal Revenue Code of 1986 to ensure that emergency services volunteers are not taken into account as employees under the shared responsibility requirements contained in the Patient Protection and Affordable Care Act, with the Senate amendment thereto, and to consider in the House, without intervention of any point of order, a motion offered by the chair of the Committee on Armed Services or his designee that the House concur in the Senate amendment with an amendment consisting of the text of Rules Committee Print 113-58 modified by the amendments printed in part A of the report of the Committee on Rules accompanying this resolution. The Senate amendment and the motion shall be considered as read. The motion shall be debatable for one hour equally divided and controlled by the chair and ranking minority member of the Committee on Armed Services. The previous question shall be considered as ordered on the motion to its adoption without intervening motion. Sec. 2. Upon adoption of this resolution it shall be in order to consider in the House the bill (H.R. 5759) to establish a rule of construction clarifying the limitations on executive authority to provide certain forms of immigration relief. All points of order against consideration of the bill are waived. The amendment in the nature of a substitute printed in part B of the report of the Committee on Rules accompanying this resolution shall be considered as adopted. The bill, as amended, shall be considered as read. All points of order against provisions in the bill, as amended, are waived. The previous question shall be considered as ordered on the bill, as amended, and on any further amendment thereto, to final passage without intervening motion except: (1) one hour of debate equally divided and controlled by the chair and ranking minority member of the Committee on the Judiciary; and (2) one motion to recommit with or without instructions. Sec. 3. 
Upon adoption of this resolution it shall be in order to consider in the House the bill (H.R. 5781) to provide short-term water supplies to drought-stricken California. All points of order against consideration of the bill are waived. The amendment printed in part C of the report of the Committee on Rules accompanying this resolution shall be considered as adopted. The bill, as amended, shall be considered as read. All points of order against provisions in the bill, as amended, are waived. The previous question shall be considered as ordered on the bill, as amended, and on any further amendment thereto, to final passage without intervening motion except: (1) one hour of debate equally divided and controlled by the chair and ranking minority member of the Committee on Natural Resources; and (2) one motion to recommit with or without instructions. Sec. 4. The chair of the Committee on Armed Services may insert in the Congressional Record at any time during the remainder of the second session of the 113th Congress such material as he may deem explanatory of defense authorization measures for the fiscal year 2015.
Insulated fiber cement siding

ABSTRACT

A method for installing siding panels to a building includes providing a foam backing board having alignment ribs on a front surface and a drainage grid on a back surface and then establishing a reference line at a lower end of the building for aligning a lower edge of a first backing board and tacking thereon. Tabs and slots along vertical edges of the foam backing board align and secure adjacent backing boards to each other. A siding panel is butted against one of the lower alignment ribs and secured thereto. Another siding panel is butted against and secured to an adjacent alignment rib to form a shadow line between the adjacent siding panels on the building.

This application claims priority of U.S. provisional patent application Ser. No. 60/600,845 filed on Aug. 12, 2004.

FIELD OF THE INVENTION

The invention is related to an insulated fiber cement siding.

BACKGROUND OF THE INVENTION

A new category of lap siding, made from fiber cement or composite wood materials, has been introduced into the residential and light commercial siding market during the past ten or more years. It has replaced a large portion of the wafer board siding market, which has been devastated by huge warranty claims and lawsuits resulting from delamination and surface irregularity problems. Fiber cement siding has a number of excellent attributes which are derived from its fiber cement base. Painted fiber cement looks and feels like wood. It is strong, has good impact resistance and will not rot. It has a Class 1(A) fire rating and requires less frequent painting than wood siding. It will withstand termite attacks. Similarly, composite wood siding has many advantages. Fiber cement is available in at least 16 different faces that range in exposures from 4 inches to 10.75 inches. The panels are approximately 5/16 inch thick and are generally 12 feet in length. They are packaged for shipment and storage in units that weigh roughly 5,000 pounds.
Fiber cement panels are much heavier than wood and are hard to cut, requiring diamond tipped saw blades or a mechanical shear. Composite wood siding can also be difficult to work with. For example, a standard 12 foot length of the most popular 8¼ inch fiber cement lap siding weighs 20.6 pounds per piece. Moreover, installers report that it is both difficult and time consuming to install.

Fiber cement lap siding panels, as well as wood composite siding panels, are installed starting at the bottom of a wall. The first course is positioned with a starter strip and is then blind nailed in the 1¼ inch high overlap area at the top of the panel (see FIG. 1). The next panel is installed so that the bottom 1¼ inch overlaps the piece that it is covering. This overlap is maintained on each successive course to give the siding the desired lapped siding appearance. The relative height of each panel must be meticulously measured and aligned before the panel can be fastened to each subsequent panel. If any panel is installed incorrectly, the entire wall will thereafter be mis-spaced.

The current fiber cement lap siding has a very shallow 5/16 inch shadow line. The shadow line, in the case of this siding, is dictated by the 5/16 inch base material thickness. In recent years, to satisfy customer demand for the impressive appearance that is afforded by more attractive and dramatic shadow lines, virtually all residential siding manufacturers have gradually increased their shadow lines from ½ inch and ⅝ inch to ¾ inch and 1 inch.

SUMMARY OF THE INVENTION

The present invention provides a novel installation method for fiber cement siding panels or composite wood siding panels. In particular, the present invention provides for a variety of different arrangements including an expanded polystyrene (EPS) contoured backing or other foam material backing to which the fiber cement siding or composite wood panel may be attached.
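The overlap arithmetic of the prior-art method described above can be made concrete with a short sketch. The 1¼ inch overlap and the 8¼ inch panel height are figures from the text; the helper functions themselves are hypothetical.

```python
# Illustrative arithmetic for the prior-art lapped installation: each course
# overlaps the one below by 1.25 in, so the exposed face equals panel height
# minus overlap, and every course position depends on the courses below it.

OVERLAP_IN = 1.25  # blind-nail overlap cited in the text

def exposure(panel_height_in):
    """Exposed face height of one course."""
    return panel_height_in - OVERLAP_IN

def course_tops(panel_height_in, n_courses, start_in=0.0):
    """Height of the top of each successive course above the starter line;
    an error in any one course propagates to all the courses above it."""
    exp = exposure(panel_height_in)
    return [start_in + panel_height_in + i * exp for i in range(n_courses)]

print(exposure(8.25))        # 7.0
print(course_tops(8.25, 3))  # [8.25, 15.25, 22.25]
```

This cumulative dependence is why, as the text notes, a single mis-set panel mis-spaces every course above it.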
An installer may abut a fiber cement board or a composite wood product against the contoured foam backing to achieve pre-defined alignment of the siding panel. This eliminates the meticulous measuring of overlap and leveling tasks associated with prior art installation methods. According to a second preferred embodiment of the novel installation method of fiber cement or composite wood panels, a foam backing may be attached to the fiber cement or composite wood board. This foam backing has pre-defined dimensions which permit siding panels to be set one atop the next in such a fashion as to achieve pre-defined spacing and level boards.

In solving the problems associated with fiber cement and wood composite siding, improvements to contoured foam backing have been discussed which have applicability to any type of siding product. These improvements include a tab and notch arrangement which allows laterally adjacent foam backers (i.e., side to side) to be mechanically fastened together. Further, it has been discovered that through the use of a foam backer the siding may be manufactured with a thinner gauge, including manufactured fiber cement and wood composite products.

The present invention also provides for a new and novel siding configuration which may be used with siding manufactured of any material including fiber cement, engineered composite wood and plastic, and cellulose-polyethylene materials to make the shadow line appear greater. This method provides for the utilization of a thinner siding panel which is substantially supported by a foam backing.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a sectional view of a prior art fiber cement panel installation;
FIG. 2 is a plan view of a contoured alignment installation board according to a first preferred embodiment of the present invention;
FIG. 2a is a portion of the installation board shown in FIG. 2 featuring interlocking tabs;
FIG. 3 is a sectional view of a fiber cement or wood composite installation using a first preferred method of installation;
FIG. 4 is a rear perspective view of the installation board of FIG. 2;
FIG. 5 is a plan view of an installation board according to a first preferred embodiment of the present invention attached to a wall;
FIG. 6 is a plan view of an installation board on a wall;
FIG. 7 is a sectional view of the installation board illustrating the feature of a ship lap utilized to attach multiple EPS foam backers or other foam material backers when practicing the method of the first preferred embodiment of the present invention;
FIG. 7a is a sectional view of an upper ship lap joint;
FIG. 7b is a sectional view of a lower ship lap joint;
FIG. 8a is a sectional view of the fiber cement board of the prior art panel;
FIGS. 8b-8d are sectional views of fiber cement boards having various sized shadow lines;
FIG. 9 is a second preferred embodiment of a method to install a fiber cement panel;
FIG. 10a shows the cement board in FIG. 8b installed over an installation board of the present invention;
FIG. 10b shows the cement board in FIG. 8c installed over an installation board of the present invention;
FIG. 10c shows the cement board in FIG. 8d installed over an installation board of the present invention;
FIG. 11 illustrates the improved fiber cement or wood composite panel utilizing an installation method using a cement starter board strip;
FIG. 12 is a sectional view of a starter board strip having a foam backer; and
FIG. 13 illustrates a method for installing a first and second layer of fiber cement or wood composite panels.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

The invention outlined hereinafter addresses the concerns of the aforementioned shortcomings or limitations of current fiber cement siding 10. A shape molded, extruded or wire cut foam board 12 has been developed to serve as a combination installation/alignment tool and an insulation board.
This rectangular board 12, shown in FIG. 2, is designed to work with 1¼ inch trim accessories. The board's 12 exterior dimensions will vary depending upon the profile it has been designed to incorporate; see FIG. 3.

With reference to FIG. 2, there is shown a plan view of a contoured foam alignment backer utilized with the installation method of the first preferred embodiment. Installation and alignment foam board 12 includes a plurality of registration or alignment ribs 14 positioned longitudinally across board 12. Alignment board 12 further includes interlocking tabs 16 which interlock into grooves or slots 18. As illustrated in FIG. 2a, and in the preferred embodiment, this construction is a dovetail arrangement 16, 18. It is understood that the dovetail arrangement could be used with any type of siding product, including composite siding and the like, where it is beneficial to attach adjacent foam panels.

Typical fiber cement lap siding panels 10 are available in 12 foot lengths and heights ranging from 5¼ inches to 12 inches. However, the foam boards 12 are designed specifically for a given profile height and face such as Dutch lap, flat, beaded, etc. Each foam board 12 generally is designed to incorporate between four and twelve courses of a given fiber cement lap siding 10. Spacing between alignment ribs 14 may vary dependent upon a particular fiber cement siding panel 10 being used. Further size changes will naturally come with market requirements.

Various materials may also be substituted for the fiber cement lap siding panels 10. One commercially available material is an engineered wood product coated with special binders to add strength and moisture resistance, and further treated with a zinc borate-based treatment to resist fungal decay and termites. This product is available under the name of LP Smart Side® manufactured by LP Specialty Products, a unit of Louisiana-Pacific Corporation (LP) headquartered in Nashville, Tenn.
Other substituted materials may include a combination of cellulose, wood and a plastic, such as polyethylene. Therefore, although this invention is discussed with, and is primarily beneficial for use with, fiber cement board, the invention is also applicable with the aforementioned substitutes and other alternative materials such as vinyl and rubber. The foam boards 12 incorporate a contour cut alignment configuration on the front side 20, as shown in FIG. 3. The back side 22 is flat to support it against the wall, as shown in FIG. 4. The flat side 22 of the board, FIG. 4, will likely incorporate a drainage plane system 24 to assist in directing moisture runoff, if moisture finds its way into the wall 28. It should be noted that moisture, in the form of vapor, will pass through the foam from the warm side to the cold side with changes in temperature. The drainage plane system is incorporated by reference as disclosed in Application Ser. No. 60/511,527 filed on Oct. 15, 2003. To install the fiber cement siding according to the present invention, the installer must first establish a chalk line 26 at the bottom of the wall 28 of the building to serve as a straight reference line to position the foam board 12 for the first course 15, following the siding manufacturer's instructions. The foam boards 12 are designed to be installed or mated tightly next to each other on the wall 28, both horizontally and vertically. The first course foam boards 12 are to be laid along the chalk line 26 beginning at the bottom corner of an exterior wall 28 of the building (as shown in FIG. 5) and tacked into position. When installed correctly, the grid formation provided will help insure the proper spacing and alignment of each piece of lap siding 19. As shown in FIGS. 5 and 6, the vertical edges 16a, 18a of each foam board 12 are fabricated with an interlocking tab 16 and slot 18 mechanism that insures proper height alignment.
Ensuring that the tabs 16 are fully interlocked and seated in the slots 18 provides proper alignment of the cement lap siding. As shown in FIGS. 7, 7a, 7b, the horizontal edges 30, 32 incorporate ship-lapped edges 30, 32 that allow both top and bottom foam boards 12 to mate tightly together. The foam boards 12 are also designed to provide proper horizontal spacing and alignment up the wall 28 from one course to the next, as shown in phantom in FIGS. 7 and 7a. As the exterior wall 28 is covered with foam boards 12, it may be necessary to cut and fit the foam boards 12 as they mate next to doorways, windows, gable corners, electrical outlets, water faucets, etc. This cutting and fitting can be accomplished using a circular saw, a razor knife, or a hot knife. The opening (not shown) should be set back no more than ⅛ inch to allow for foundation settling. Once the first course 15 has been installed, the second course 15′ of foam boards 12 can be installed at any time. The entire first course 15 on any given wall should be covered before the second course 15′ is installed. It is important to insure that each foam board 12 is fully interlocked and seated on the interlocking tabs 16 to achieve correct alignment. The first piece of fiber cement lap siding 10 is installed on the first course 15 of the foam board 12 and moved to a position approximately ⅛ inch set back from the corner and pushed up against the foam board registration or alignment rib 14 (see FIG. 8) to maintain proper positioning of the panel 10. The foam board registration or alignment rib 14 is used to align and space each fiber cement panel 10 properly as the siding job progresses. Unlike installing the fiber cement lap siding in the prior art, there is no need to measure the panel's relative face height to insure proper alignment. All the system mechanics have been accounted for in the rib 14 location on the foam board 12.
The applicator simply places the panel 10 in position and pushes it tightly up against the foam board alignment rib 14 immediately prior to fastening. A second piece of fiber cement lap siding can be butted tightly to the first, pushed up against the registration or alignment rib, and fastened securely with fasteners 17 using either a nail gun or hammer. Because the alignment ribs 14 are preformed and pre-measured to correspond to the appropriate overlap 30 between adjacent fiber cement siding panels 10, no measurement is required. Further, because the alignment ribs 14 are level with respect to one another, an installer need not perform the meticulous leveling tasks associated with the prior art methods of installation. With reference to FIGS. 7, 7a, 7b, vertically adjacent boards 12 include a ship lap 30, 32 mating arrangement which provides for a continuous foam surface. Furthermore, the interlocking tabs 16 and slots 18, together with the ship lap 30, 32, ensure that adjacent foam boards 12, whether they be vertically adjacent or horizontally adjacent, may be tightly and precisely mated together such that no further measurement or alignment is required to maintain appropriate spacing between adjacent boards 12. It is understood that as boards 12 are mounted and attached to one another it may be necessary to trim such boards when windows, corners, electrical outlets, water faucets, etc. are encountered. These cuts can be made with a circular saw, razor knife, or hot knife. Thereafter, a second course of fiber cement siding 10′ can be installed above the first course 10 by simply repeating the steps and without the need for a leveling or measuring operation. When fully seated up against the foam board alignment rib 14, the fiber cement panel 10′ will project down over the first course 10 to overlap 34 by a desired 1¼ inches, as built into the system as shown in FIG. 3. The next course is fastened against wall 28 using fasteners 36 as previously described.
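Since the alignment ribs are pre-measured so that each course's exposed face equals the panel height minus the built-in 1¼-inch overlap, the rib spacing follows from simple arithmetic. The sketch below is illustrative only (the function names are not from the patent), using the panel heights of 5¼ to 12 inches and the 1¼-inch overlap described above:

```python
# Illustrative sketch (not from the patent): alignment-rib spacing on the
# foam board equals the exposed face of each siding course, i.e. the panel
# height minus the built-in 1-1/4 inch course-to-course overlap.

OVERLAP_IN = 1.25  # desired overlap between courses, inches

def course_exposure(panel_height_in: float) -> float:
    """Exposed face height (and rib-to-rib spacing) for one course."""
    return panel_height_in - OVERLAP_IN

def board_height(panel_height_in: float, courses: int) -> float:
    """Approximate wall coverage of a foam board sized for `courses` courses."""
    return courses * course_exposure(panel_height_in)

# Example: 8-1/4 inch panels, a board sized for six courses
exposure = course_exposure(8.25)   # 7.0 inches between ribs
height = board_height(8.25, 6)     # 42.0 inches of wall coverage
```

Because the rib locations encode this arithmetic once, at fabrication time, the installer never repeats it on the wall.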
The foam board 12 must be fully and properly placed under all of the fiber cement panels 10. The installer should not attempt to fasten the fiber cement siding 10 in an area where it is not seated on and protected by a foam board 12. The board 12, described above, will be fabricated from foam with a peak thickness of approximately 1¼ inches. Depending on the siding profile, the board 12 should offer a system "R" value of 3.5 to 4.0. This addition is dramatic considering that the average home constructed in the 1960s has an "R" value of 8. An R-19 side wall is thought to be the optimum in thermal efficiency. The use of the foam board will provide a building that is cooler in the summer and warmer in the winter. The use of the foam board 12 of the present invention also increases thermal efficiency, decreases drafts and provides added comfort to a home. In an alternate embodiment, a family of insulated fiber cement lap siding panels 100 has been developed, as shown in FIG. 9, to address several limitations associated with present fiber cement lap sidings. These composite panels 100 incorporate a foam backer 112 that has been bonded or laminated to a complementary fiber cement lap siding panel 110. Foam backing 112 preferably includes an angled portion 130 and a complementary angled portion 132 to allow multiple courses of composite fiber cement siding panels 100 to be adjoined. Foam backer 112 is positioned against fiber cement siding 110 in such a manner as to leave an overlap region 134 which will provide for an overlap of siding panels on installation. The fiber cement composite siding panels 100 of the second preferred embodiment may be formed by providing appropriately configured foam backing pieces 112 which may be adhesively attached to the fiber cement siding panel 110. The composite siding panels 100 according to the second preferred embodiment may be installed as follows with reference to FIGS. 10b, 10c and 13.
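The quoted system R-value of 3.5 to 4.0 is consistent with a rough check. The figures below are assumptions for illustration only, not from the patent: EPS foam is commonly rated near R-3.85 per inch, and a contoured board with a 1¼-inch peak has a somewhat thinner average section:

```python
# Rough plausibility check (assumption: EPS insulates at about R-3.85 per
# inch, a commonly quoted nominal value). A contoured board with a 1-1/4
# inch peak but a thinner average section lands in the R-3.5 to R-4.0
# range cited in the text.

R_PER_INCH_EPS = 3.85  # assumed nominal R-value per inch for EPS

def r_value(avg_thickness_in: float) -> float:
    """System R-value estimate for a given average foam thickness."""
    return R_PER_INCH_EPS * avg_thickness_in

low = r_value(0.91)   # thinner average section -> roughly R-3.5
high = r_value(1.04)  # thicker average section -> roughly R-4.0
```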
A first course 115 is aligned appropriately against sill plate 40 adjacent to the foundation 42 to be level and is fastened into place with fasteners 36. Thereafter, adjacent courses 115′ may be merely rested upon the previously installed course and fastened into place. The complementary nature of angled portions 130, 132 will create a substantially uniform and sealed foam barrier behind composite siding panels 100. Overlap 134, which has been pre-measured in relation to the foam pieces, allows multiple courses to be installed without the need for measuring or further alignment. This dramatic new siding of the present invention combines an insulation component with an automatic self-aligning, stack-on siding design. The foam backer 112 provides a system "R" value in the range of 3.5 to 4.0. The foam backer 112 will also be fabricated from expanded polystyrene (EPS), which has been treated with a chemical additive to deter termites and carpenter ants. The new self-aligning, stack-on siding design of the present invention provides fast, reliable alignment, as compared to the time-consuming, repeated face measuring and alignment required on each course with the present lap design. The new foam backer 112 has significant flexural and compressive strength. The fiber cement siding manufacturer can reasonably take advantage of these attributes. The weight of the fiber cement siding 110 can be dramatically reduced by thinning, redesigning and shaping some of the profiles of the fiber cement 110. FIG. 8a shows the current dimensions of fiber cement boards; FIGS. 8b, 8c, and 8d show thinner fiber cement boards. Experience with other laminated siding products has shown that dramatic reductions in the base material can be made without adversely affecting the product's performance. The combination of weight reduction with the new stack-on design provides the installers with answers to their major objections.
It is conceivable that the present thickness (D) of fiber cement lap siding panels 110 of approximately 0.313 inches could be reduced to a thickness (D′) of 0.125 inches or less. The fiber cement siding panel may include a lip 144 which, when mated to another course of similarly configured composite fiber cement siding, can give the fiber cement siding 110 the appearance of being much thicker, thus achieving the appearance of an increased shadow line. Further, it is understood, although not required, that the fiber cement siding panel 110 may be of substantially reduced thickness, as stated supra, compared to the 5/16″ thickness provided by the prior art. Reducing the thickness of the fiber cement siding panel 110 yields a substantially lighter product, thereby making it far easier to install. A pair of installed fiber cement composite panels having a thickness (D′) of 0.125 inches or less is illustrated in FIGS. 8b-8d, 10b and 10c. Such installation is carried out in similar fashion to that described in the second preferred embodiment. The present invention provides for an alternate arrangement of foam 112 supporting the novel configuration of fiber cement paneling. In particular, the foam may include an undercut recess 132 which is configured to accommodate an adjacent piece of foam siding. As shown in FIGS. 10a, 10b and 10c, the new, thinner, insulated fiber cement lap siding panel 110 will allow the siding manufacturers to market panels with virtually any desirable shadow line, such as the popular new ¾ inch vinyl siding shadow line, with the lip 144 formation. The lip 144 can have various lengths, such as approximately 0.313 inch (E), 0.50 inch (F), and 0.75 inch (G), to illustrate a few variations, as shown in FIGS. 8b, 8c, and 8d, respectively. This new attribute would offer an extremely valuable, previously unattainable selling feature that is simply beyond the reach of the current system.
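The weight saving implied by the thickness reduction above is easy to quantify: for a given face area, panel weight scales roughly with thickness, so thinning the face from 0.313 to 0.125 inches removes about 60% of the material. A back-of-envelope check:

```python
# Back-of-envelope arithmetic from the figures in the text: thinning the
# fiber cement face from the prior-art 5/16 inch (0.313 in) to 0.125 in.
# Weight scales roughly with thickness for a given face area and density.

old_t = 0.313  # prior-art panel thickness, inches
new_t = 0.125  # reduced panel thickness, inches

reduction = (old_t - new_t) / old_t  # fraction of material removed
```

This is the arithmetic behind the claim that the thinner panel is "substantially lighter" and far easier to install.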
No special tools or equipment are required to install the new insulated fiber cement lap siding 100. However, a new starter adapter or strip 150 has been designed for use with this system, as shown in FIGS. 11 and 12. It is preferable to drill nail holes 152 through the adapter 150 prior to installation. The installer must first establish a chalk line 26 at the bottom of the wall 28 to serve as a straight reference line to position the starter adapter 150 for the first course of siding, and follow the siding manufacturer's instructions. The siding job can be started at either corner 29. The siding is placed on the starter adapter or strip 150, seated fully, and positioned, leaving a gap 154 of approximately ⅛ inch from the corner 29 of the building. Thereafter, the siding 100 is fastened per the siding manufacturer's installation recommendations, using a nail gun or hammer to install the fasteners 36. Thereafter, a second course of siding 115′ can be installed above the first course 115 by simply repeating the steps, as shown in FIG. 13. Where practical, it is preferable to fully install each course 115 before working up the wall, to help insure the best possible overall alignment. Installation in difficult and tight areas under and around windows, in gable ends, etc. is the same as in the manufacturer's instructions for the current fiber cement lap siding 10. The lamination methods and adhesive system will be the same as those outlined in U.S. Pat. Nos. 6,019,415 and 6,195,92B1. The insulated fiber cement stack-on siding panels 100 described above will have a composite thickness of approximately 1¼ inches. Depending on the siding profile, the composite siding 100 should offer a system "R" value of 3.5 to 4.0. This addition is dramatic considering that the average home constructed in the 1960s has an "R" value of 8. An "R-19" side wall is thought to be the optimum in energy efficiency.
A building will be cooler in the summer and warmer in the winter with the use of the insulated fiber cement siding of the present invention. While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the fiber cement siding board disclosed in the invention can be substituted with the aforementioned disclosed materials, and the invention is not to be limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law.

1. A method for installing siding panels to a building comprises the steps of: providing a foam backing board having predetermined dimensions, having a flat back side for supporting against a wall of the building and a contour cut alignment configuration on the front side; establishing a reference line at the bottom of the wall for aligning and positioning the foam backing board for a first course of the backing board; laying a first lower edge of a first backing board along the reference line and tacking the first backing board into position; laying another backing board adjacent the first backing board; and installing a siding panel over the first backing board.
2. The method of claim 1, wherein the flat back side of the foam backing board has a drainage grid thereon.
3. The method of claim 1, wherein the step of laying another backing board adjacent the first backing board includes the step of interlocking the first and other backing boards together with tabs and slots located on vertical edges of each backing board.
4. The method of claim 3, wherein the step of interlocking includes the step of seating the tabs into the slots.
5. The method of claim 1, further comprising the step of cutting and fitting the foam backing board around at least one of a doorway, window, gable corner, electrical outlet and water faucet.
6. The method of claim 5, wherein the foam backing board has a front surface with alignment ribs and the step of installing a siding panel includes the step of pushing the siding panel against one of the alignment ribs.
7. The method of claim 1, wherein the step of installing a siding panel includes the step of providing a fiber cement siding panel having a thickness of less than 0.13 inches.
8. The method of claim 7, wherein the step of providing a siding panel includes the step of providing a panel having a lip formation at one end for providing a shadow line.
9. The method of claim 8, wherein the lip formation is between 0.3 and 0.8 inches long.
10. The method of claim 1, wherein the siding panel is bonded to the foam backing board.
11. The method of claim 10, wherein the siding panel has a thickness less than 0.13 inches.
12. The method of claim 6, further comprising the step of abutting a second siding panel against an adjacent alignment rib so that the second siding panel overlaps the first siding panel for forming a shadow line.
13. The method of claim 1, wherein the step of providing a foam backing board includes the step of providing a foam backing board with an undercut recess at at least one end configured to accommodate an adjacent piece of foam backing board.
14. The method of claim 1, further comprising the step of treating the foam backing board with a chemical additive for deterring termites and carpenter ants.
15. The method of claim 1, further comprising the step of installing a starter adapter adjacent the reference line.
16. The method of claim 15, wherein the foam backing board and siding panel are placed on the starter adapter and secured thereto.
17. The method of claim 1, wherein the siding panel is a fiber cement siding panel.
18. The method of claim 1, wherein the siding panel is one of an engineered composite wood product, an engineered composite plastic product and a combination cellulose, wood and plastic material.
19. A lap siding for a building comprising: a first panel having a predetermined length, height, and thickness and an L-shaped profile; and a second panel having alignment means for aligning the first panel relative to the building, said second panel positionable between the building and the first panel.
20. The lap siding of claim 19, wherein the length of the first panel is at least ten times longer than the height and the thickness is less than 0.13 inches.
21. The lap siding of claim 20, wherein the L-shaped profile of the first panel includes a lip having a depth between 0.3-0.8 inches for providing the appearance of a shadow line.
Development and Implementation of an Integrated Approach to Improving the Operating Cycle and Design of an Energy-Efficient Forced Diesel Engine

The methods and means of improving the operating cycle and design of a forced diesel engine, its fuel efficiency, and the limitation of its thermomechanical loading are considered.

Introduction

The aim of this study is to increase competitiveness; this is possible through the intensification of scientific research and the finding of new technical solutions for improving the characteristics of a diesel engine. This fully applies to the developed six-cylinder V-shaped diesel engine, with a cylinder diameter of 150 mm and a piston stroke of 160 mm (hereinafter, the diesel), with turbocharging and intercooling, intended for application in ground transportation vehicles. Earlier exploratory research into advanced models identified a number of problems, the most important of which require an integrated approach to further improve the operating cycle [9, p. 2; 1, p. 2; 8, pp. 3-15] and design, using mathematical simulation of the main processes in the inside-cylinder space and in the gas path components, and of the hydrodynamic phenomena in the fuel supply, cooling, and lubrication systems of the diesel.
An integrated approach is based on the following key assumptions: common design, structural, and functional technical solutions for creating a forced modification of the diesel; the availability of technical solutions for computational and theoretical control, with subsequent adjustment based on the results of tests of the diesel engine; compliance with the requirements of unification of technical design solutions, low metal content, adaptability to production, and economic feasibility; the use of new-generation mechanisms and systems and of structural and operational materials to provide design improvement; management of the energy-efficient diesel engine by means of modern software and computing complexes; and the inclusion of a modern microprocessor element base in the control structure of the executive power and high-speed hydromechanical devices. The aim of the study is to develop, to the level of production, technical solutions based on Russian components for an energy-efficient forced diesel with a specific power of at least 35 kW/l for ground transport vehicles. To achieve these goals, the following problems need to be solved. 1.
Adaptation and development of existing and new mathematical models, methods of calculation, and technical solutions for the elements of forced diesel engines, including: improving the operating cycle of the diesel; providing the required thermodynamic state of the gas environment and the conditions of heat and gas exchange in the inside-cylinder space in the basic processes of the operating cycle; providing high turbine supercharging with intercooling; intensification of the process of fuel injection with electronic control; improvement of the configuration of the combustion chamber in the inside-cylinder space and of the gas distribution phases; evaluation and improvement of the strength characteristics of the piston, connecting rod, crankshaft, and crankcase of a diesel engine with a high maximum gas pressure in the cylinder; determination of tribological characteristics to enhance the bearing capacity of the crank and turbocharger rotor bearings and the performance of the "piston-cylinder liner" system; and evaluation and improvement of the strength characteristics of the elements of the fuel injector for high-pressure injection. 2. Experimental evaluation of the main technical solutions for improving the diesel systems and units. 3. Creation of a prototype of the energy-efficient forced diesel with domestic component parts. 4. Bench testing of the developed prototype of the diesel engine.

Literature Review

The operating cycle of a diesel with direct injection is based on the widely proven volumetric mixture formation and combustion method, implemented in an undivided combustion chamber located in the bottom of the piston. The geometric compression ratio is 13.5…14 units. The compression ratio must be chosen as a compromise, owing to the need to limit the maximum gas pressure in the cylinder while ensuring reliable start-up at low temperatures.
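The compromise behind the 13.5…14 compression ratio can be illustrated with a simple isentropic estimate. The intake conditions and effective ratio of specific heats below are assumptions for illustration, not values from the paper; the point is only that end-of-compression pressure rises steeply with the ratio, which is why it must be limited:

```python
# Hedged isentropic estimate (assumed intake conditions, not from the
# paper): end-of-compression state for the stated geometric compression
# ratio of about 14. Illustrates why the ratio must be limited to cap the
# maximum gas pressure in the cylinder.

GAMMA = 1.35  # assumed effective ratio of specific heats for the charge

def compression_state(p_intake_mpa: float, t_intake_k: float, cr: float):
    """End-of-compression pressure (MPa) and temperature (K), isentropic."""
    p2 = p_intake_mpa * cr ** GAMMA
    t2 = t_intake_k * cr ** (GAMMA - 1.0)
    return p2, t2

# Assumed boosted, intercooled intake: 0.25 MPa and 320 K
p2, t2 = compression_state(0.25, 320.0, 14.0)
```

Under these assumed conditions the compression pressure alone approaches 9 MPa before combustion, leaving limited headroom below the peak firing pressures reported later for the engine.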
The piston of a forced diesel is bimetallic, with oil gallery cooling and a special profile of the skirt. The complex design of the piston is due to the high thermomechanical loading of the elements of the inside-cylinder space [12, p. 798]. The undivided combustion chamber has a small heat-sensitive surface, which, together with a small swirl ratio, causes a reduction in the heat transfer to the piston and cooling system. The individual cylinder heads include four valves and combined inlet and outlet channels with minimal hydraulic losses. The liquid cooling cavities in the heads require an individual coolant supply, separate from that of the unit. The fuel injection equipment includes a common rail accumulator system with electronically controlled fuel injection. The inclined, centrally located fuel nozzle sprays directly into the chamber and has eight to ten sprayer holes [7, p. 4]. The diesel has high turbocharging with deep intercooling. Based on the analysis of flow and working characteristics, a turbocharger with a single-stage compressor was selected. Particular attention is paid to improving the efficiency of mixing and the completeness of fuel burn, taking into account the limitations imposed by the thermomechanically stressed parts [12, p. 798]. To achieve a specific power of at least 35 kW/l, it is necessary to solve the following problems: providing charge air parameters guaranteeing mixture formation with an excess air ratio of not less than 1.8…2.0; intensification of injection and uniform distribution of fuel in the combustion chamber to achieve the best homogeneity of the mixture; and providing timely and complete combustion of fuel within the least possible duration to increase the efficiency of the combustion process.
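The rated power implied by the 35 kW/l target follows directly from the engine geometry given in the introduction (six cylinders, 150 mm bore, 160 mm stroke). A quick arithmetic check:

```python
# Arithmetic check from the stated geometry: six cylinders, 150 mm bore,
# 160 mm stroke. The implied displacement, combined with the 35 kW/l
# specific-power target, fixes the required rated power.
import math

BORE_M, STROKE_M, CYLINDERS = 0.150, 0.160, 6

# Swept volume in litres: n * (pi/4) * bore^2 * stroke, converted m^3 -> l
disp_l = CYLINDERS * math.pi / 4.0 * BORE_M**2 * STROKE_M * 1000.0

# Rated power needed to meet the 35 kW/l target
target_power_kw = 35.0 * disp_l
```

The displacement comes out near 17 litres, so the 35 kW/l target corresponds to a rated power on the order of 590-600 kW.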
Methods

The objectives of improving the operating cycle of the diesel were met using well-known and new methods: intensification of injection and uniform distribution of fuel in the combustion chamber were achieved using fuel supply apparatus with high-pressure injection (160…200 MPa) and an optimum number of holes in the sprayer [7, p. 4]; timely and complete combustion of fuel within the least possible duration was achieved by the implementation of the above-mentioned methods, combined with increased use of the air charge in the combustion chamber and electronically controlled fuel injection; reduction of the thermal and mechanical loading of the components and parts of the inside-cylinder space of the diesel engine was achieved by limiting the heat-release intensity in the initial stage of the combustion process and minimizing the thermal losses; reliability of the cylinder group is achieved by using composite bimetallic pistons with a specially profiled skirt forming the undivided combustion chamber; and the required intensity of heat transfer and heat exchange in the cooling system is achieved by regulation of a dual feed of the coolant to the cylinder head and block. Increased efficiency of the turbocharger in the diesel is provided by using a single-stage high-pressure compressor with a two-level vane diffuser and a turbine vane nozzle assembly, completed with a turbocharger having a small rotor inertia and a short gas path system. As intermediate charge air coolers, air-liquid heat exchangers with minimal hydraulic resistances for the air and fluid paths are used. The use in the diesel of an accumulator-type fuel supply apparatus with high-pressure injection (160…200 MPa) and an optimum number (up to 10) of sprayer atomizer holes reduced the duration of fuel injection to 1.5 ms at a sufficiently uniform distribution of fuel in the combustion chamber [7, p. 4]. This agrees with the results of other authors [2, pp. 302-303; 11, p. 192].
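The injection figures above are mutually consistent, as a simple Bernoulli orifice-flow sketch shows. The hole diameter, discharge coefficient, and fuel density below are assumptions for illustration only (the paper does not state them); the injection pressure, hole count, and 1.5 ms duration are from the text:

```python
# Order-of-magnitude sketch of the injection figures. Hole diameter,
# discharge coefficient, and fuel density are ASSUMPTIONS, not values from
# the paper; pressure, hole count, and duration are from the text.
import math

RHO = 830.0        # diesel fuel density, kg/m^3 (assumed)
CD = 0.70          # orifice discharge coefficient (assumed)
N_HOLES = 10       # stated upper bound on atomizer holes
D_HOLE = 0.30e-3   # hole diameter, m (assumed for illustration)
DP = 180e6         # injection pressure differential, Pa (within 160...200 MPa)
T_INJ = 1.5e-3     # stated injection duration, s

v = math.sqrt(2.0 * DP / RHO)                    # jet velocity, m/s
area = N_HOLES * math.pi / 4.0 * D_HOLE**2       # total orifice area, m^2
mass_g = RHO * CD * area * v * T_INJ * 1000.0    # grams delivered per injection
```

With these placeholder values the delivered mass comes out near 0.4 g, the same order as the cyclic feed of up to 0.4 g per cycle mentioned later in the paper.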
The use of two inlet channels in the cylinder heads, with profiled boundary contours and minimal flow resistance at the operating speed of the engine, increased the efficiency of filling the cylinder with a fresh charge to up to 0.95. At the same time, we considered the dimensions (depth, maximum diameter of the piston head profile) of the undivided combustion chamber in the piston, the specific recommendations of the leading manufacturers of diesel engines for the relevant cylinder diameter, and the location and inclination of the fuel injector in the cylinder head. Electronic fuel injection control allowed biphasic injection and adjustment of the fuel injection advance angle depending on the rotational speed of the crankshaft of the diesel engine and on the load. As a result, the maximum pressure, the speed of its rise in the cylinder, and the combustion temperature were reduced. This effect reduces the heat transfer from the gases to the surrounding elements of the inside-cylinder space, the temperature level of parts, the stresses in dangerous areas, the loads on the main and connecting rod bearings, and the vibroacoustic activity of the diesel. The reduced heat loss also contributes to increasing the excess air ratio.

Theoretical and Experimental Evaluation of the Parameters of the Operating Cycle and Characteristics of the Diesel

At the initial stage of creating the diesel engine (the design stage), using computational studies we established the indicators of the operating cycle for the expected characteristics of the fuel burn. For these purposes, we used the method of synthesis of the operating cycle [5, pp. 874-875; 6, pp. 489-492], which was developed at South Ural State University. An indicator diagram of the pressure P and the temperature T of gases is shown in Fig. 1.
Analysis of the indicator diagram shows that the indicators and parameters of the combustion process are as follows: maximum gas pressure of 18.4 MPa, maximum rate of pressure rise of 0.38 MPa/deg., maximum cycle temperature of 1850 K, start of combustion timing angle of 5 deg. before TDC, combustion process duration of 135 deg., and maximum fuel burn rate of 0.022 deg.−1.

Fig. 1 Indicator diagram of the pressure P (MPa) and the temperature T (K) of gases, versus crank angle (deg.), in the cylinder of the diesel with single-stage supercharging and two-stage supercharging

Using computer modeling we determined and optimized the following: the basic processes of the operating cycle (fuel supply, mixture formation, combustion) and the parameters of heat transfer, based on mathematical models of fuel jet mixing, chemical kinetics, fuel combustion, and nonreacting turbulent gas flow in the combustion chamber, with determination of the temperature fields of the combustion chamber surfaces by the finite element method; the combustion chamber configuration, based on a mathematical model of the gas dynamics of a reacting turbulent two-phase flow in a three-dimensional setting, with simultaneous selection of the boost parameters, the injector nozzle design, and the fuel delivery law; the geometry of the inlet cylinder head channels, based on a mathematical model of the dynamics of a nonreacting turbulent gas flow with high Mach numbers in a three-dimensional setting and minimization of the areas with the greatest dissipation of turbulent kinetic energy; the gas distribution phases, based on mathematical models of the mass and energy balances of gas in the combustion chamber and the intake and exhaust manifolds, one-dimensional flow in the channels of the gas path, and submodels of the elements of the turbocharging and charge air systems; the boost parameters and characteristics of the aggregates with intercooling, using synthesis of the operating cycle of the diesel; the dynamics and lubrication of a complex loaded tribosystem with reciprocating and rotational movements
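The reported burn-rate figures can be illustrated with a standard single-zone surrogate. The Wiebe function below is a common textbook heat-release model, not the synthesis method the paper actually uses, and its shape parameters are assumptions; with the stated 135-degree combustion duration it yields a peak burn rate of the same order as the reported 0.022 deg.−1:

```python
# Illustrative single-zone heat-release sketch. The Wiebe function is a
# standard surrogate, NOT the synthesis method used in the paper; the
# shape parameters A and M are assumptions. Only the 135-degree duration
# is taken from the text.
import math

A, M = 6.9, 2.0        # assumed Wiebe efficiency and shape parameters
DURATION = 135.0       # stated combustion duration, crank-angle degrees

def burn_fraction(theta: float) -> float:
    """Mass fraction burned at `theta` degrees after start of combustion."""
    y = min(max(theta / DURATION, 0.0), 1.0)
    return 1.0 - math.exp(-A * y ** (M + 1.0))

def burn_rate(theta: float, d: float = 1e-3) -> float:
    """Numerical burn rate dx/dtheta, per degree."""
    return (burn_fraction(theta + d) - burn_fraction(theta - d)) / (2.0 * d)

# Scan the burn for its peak rate, in deg^-1
peak_rate = max(burn_rate(t * 0.5) for t in range(1, 2 * int(DURATION)))
```

For these assumed parameters the peak rate lands in the 0.01-0.03 deg.−1 band, consistent in order of magnitude with the figure read from the indicator diagram.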
of the elements, with solution of the boundary problem for the hydrodynamic pressure using multigrid algorithms for bearings with a complex geometry of the lubricating layer, taking into account the lubricant sources on the friction surfaces, the micro- and macrogeometry, and the non-Newtonian behavior of the lubricants [10, pp. 220-221; 13, pp. 46-47], and the solution of multi-criteria optimization problems; the effect of misalignment of the bearings and crankshaft journals and of the elastic properties of the crankcase and crankshaft bearings on the characteristics of the diesel; and the thermal and mechanical loading and the stress-strain state of the cylinder head and crankcase of the diesel engine, on the basis of modeling of the heat transfer processes in the solid, liquid, and gaseous systems under consideration. The implementation activities included optimization of the mixture formation parameters, the characteristics of the gas-turbine supercharging, and the fuel injection advance angle, as well as reducing the duration and increasing the pressure of fuel injection. This provides the required fuel efficiency at a high specific power of the diesel. The dynamics of the piston on the lubricating layer in the cylinder of the engine largely depend on the profile of the skirt of the piston. The main hydromechanical characteristics (HMC) of the "piston-cylinder" tribosystem are: h_min(s), the instantaneous values of the minimum oil film thickness; p_max(s), the instantaneous values of the maximal hydrodynamic pressure; h*_min, the average value of h_min(s); p*_max, the average value of p_max(s); N(s) and N*, the instantaneous and average power loss of friction; Q*, the average flow rate of oil in the direction of the combustion chamber; and T*_eff, the average effective temperature of the lubricating layer. To assess the influence of the design parameters on the HMC of the "piston-cylinder" interface of the diesel, we performed parametric studies for the maximum torque mode. In accordance with the calculation method given in [3, p. 249; 4, p.
519], we defined the piston skirt profiles in the cold and hot states, and the profile was built up in the form of a parabolic approximation (Fig. 2). Table 1 presents the calculated HMC of the "piston-cylinder" tribosystem for the initial and recommended interface designs, before and after optimization. Additionally, we analyzed a non-symmetrical design of the piston skirt (see Table 1). These results show that optimizing the geometric parameters of the original design improves the HMC by 7%.

Discussion

The results given above can help to develop integrated technologies with the following features:
- optimization of the parameters of the boost units at a mass air flow of up to 1 kg/s and a high pressure ratio with deep intercooling, according to the criteria of fuel efficiency and thermomechanical loading of the diesel;
- surface profiling of the undivided combustion chamber and of the piston skirt, forming a bimetallic composite piston, with longitudinal contouring of the inlet and exhaust channels in the cylinder head to improve gas exchange in the cylinder;
- optimization of the accumulator-type fuel injection system for fuel pressures up to 200 MPa and a cyclic feed of up to 0.4 g per cycle;
- improved elements and cooling structure to reduce heat transfer to the coolant;
- improved reliability of the diesel, taking into account the high thermomechanical loading, providing a fluid-friction mode in the basic tribounits.

The practical significance of the study is the implementation of an integrated approach to designing energy-efficient forced diesels on the basis of a differentiated analysis of the functioning of their basic mechanisms and systems. The suggested recommendations were checked using a new motorless test bench [7, pp. 2-3]. The original design of the bench allows us to record the development of the fuel sprays at up to 40,000 frames per second (Fig. 3), and it can be recommended for the study of different fuel systems.
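The averaged characteristics (h*_min, p*_max, N*) reported for the tribosystem are cycle averages of the instantaneous values. A minimal sketch of that averaging step, using invented sample data rather than the study's actual model output:

```python
# Illustrative sketch of cycle-averaging the hydromechanical
# characteristics (HMC) of the "piston-cylinder" tribosystem.
# The sample values below are invented for demonstration; in the study
# h_min(s) and p_max(s) come from the hydrodynamic lubrication model.

def cycle_average(samples):
    """Average an instantaneous characteristic over one engine cycle."""
    return sum(samples) / len(samples)

# invented instantaneous values at a few crank-angle sample points
h_min = [4.2, 3.1, 2.5, 3.0, 4.8, 5.5]       # min oil film thickness, um
p_max = [12.0, 35.0, 58.0, 40.0, 15.0, 9.0]  # max hydrodynamic pressure, MPa

h_min_avg = cycle_average(h_min)  # h*_min
p_max_avg = cycle_average(p_max)  # p*_max

print(f"h*_min = {h_min_avg:.2f} um, p*_max = {p_max_avg:.2f} MPa")
```

In the actual analysis the sample points would span a full working cycle at fine crank-angle resolution, but the averaging itself is exactly this simple arithmetic mean.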
Motor testing of the diesel on the HORIBA-SCHENCK DT-2100-1 motor bench confirmed the results of the preliminary analysis of the effectiveness of the implemented measures to improve the operating cycle. The suggested new technical solutions in the design of diesel engines can be recommended for the practical design of forced diesel engines, based on existing and newly developed technologies for their practical implementation. Comparison of the results of this study with the results of similar work indicates that competitor companies achieve the given diesel performance level only at substantially greater weight and size, with a similar level of fuel efficiency.

Conclusions

A brief analysis of current trends in the development of diesel engines and of the theoretical aspects of their implementation shows that the reserves in this direction are not exhausted. It is necessary to combine known and new methods and means of improving the operating cycle with advanced solutions in the design of the basic mechanisms and systems. As a result, we were able to achieve the required technical level of the designed diesel.

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material.
If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
#!/bin/sh
#
# Copyright (c) 2018 Alejandro Liu
# Licensed under the MIT license:
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the
# following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
# NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
# USE OR OTHER DEALINGS IN THE SOFTWARE.
#
set -euf -o pipefail

mydir=$(dirname "$(readlink -f "$0")")
export LIBDIR="$(dirname "$mydir")/lib"
export PATH=$PATH:$mydir:$LIBDIR
. pp.sh

noxml() {
  python3 "$mydir/noxml.py" "$@"
}

if [ $# -eq 0 ] ; then
  echo "Usage: $0 {input.nxm} ..."
  exit 1
fi

while [ $# -gt 0 ]
do
  case "$1" in
    -I*) export PATH="$PATH:${1#-I}" ;;
    -D*) eval "${1#-D}" ;;
    *) break ;;
  esac
  shift
done

rc=0
done='[OK]'
for input in "$@"
do
  if [ x"$input" = x"-" ] ; then
    set -
    (export PATH="$(pwd):$PATH" ; pp | noxml) || rc=1
  else
    if [ ! -f "$input" ] ; then
      echo "$input: not found" 1>&2
      continue
    fi
    output=$(echo "$input" | sed -e 's/\.nxm$/.xml/')
    name=$(basename "$output" .xml)
    [ x"$input" = x"$output" ] && output="$input.xml"
    [ $# -gt 1 ] && echo -n "$input " 1>&2
    if ! ( exec <"$input" >"$output" ; export PATH="$(dirname "$input"):$PATH" ; pp | noxml ) ; then
      rc=1
      done='[ERROR]'
    fi
  fi
done
[ $# -gt 1 ] && echo "$done" 1>&2
exit $rc
Implement Azure Service Bus Topic (Create and Delete) Operators

[x] Add Create Topic Operator
[ ] Add Delete Topic Operator
[ ] Add test cases
[ ] Add docstrings
[ ] Add example DAG

Completed the Create Topic Operator with test cases, an example DAG, docstrings, and the rst file changes. There are some PR review comments that still need to be addressed.

Addressed the PR review comments and added a test case for them.
using UnityEngine;
using UnityEngine.Serialization;

namespace lisandroct.ScriptableValues
{
    /// <summary>
    /// Base class for ScriptableObject-backed values. The serialized field holds
    /// the design-time value; runtime changes only touch the in-memory copy, so
    /// edits made while playing never persist back to the asset.
    /// </summary>
    public abstract class ScriptableValue<T> : ScriptableObject, ISerializationCallbackReceiver
    {
        [FormerlySerializedAs("m_Value")]
        [SerializeField]
        private T _value;

        private T _runtimeValue;

        public T Value
        {
            get { return _runtimeValue; }
            set { _runtimeValue = value; }
        }

        // Copy the serialized (design-time) value into the runtime value
        // whenever Unity deserializes the asset.
        public void OnAfterDeserialize()
        {
            _runtimeValue = _value;
        }

        public void OnBeforeSerialize() { }
    }
}
Board Thread:Fun and Games/@comment-4844180-20130814020359/@comment-4844180-20130904030002

(Btw, it is now an arm's reach away)

Inter: You bandage the creature with leaves... badly... The water hasn't done much, apart from making the creature yelp. I said before: "2 metres tall", and it has equestrian proportions, if not a little more slender. The rumbling is getting louder.
System time off by a small amount

On Karmic (9.10), my system time is currently off by 17 minutes. For example, when I type date in the terminal I get

Thu Jan 6 16:22:29 CST 2011

while the correct time is 16:05. I went through the standard time-setting process and cannot fix this. If it were off by a multiple of an hour I would blame timezone or daylight savings settings, but 17 minutes? I have no idea. Thanks.

What happens when you try to set the time?

Nothing. I go through the process but the problem is unchanged. Under Administration -> Time and Date, I have it set to sync with Internet servers.

It may be your BIOS time. "Your computer has two timepieces; a battery-backed one that is always running (the hardware, BIOS, or CMOS clock), and another that is maintained by the operating system currently running on your computer (the system clock). The hardware clock is generally only used to set the system clock when your operating system boots, and then from that point until you reboot or turn off your system, the system clock is the one used to keep track of time."

This description of this bug might be related. The fix will be available in the 11.04 release in April.

Ubuntu is supposed to sync with Internet time servers by default, so that everyone connected to the Internet should have a reasonably accurate clock. Check whether /etc/ntp.conf exists and is empty. If so, sudo rm the file.
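For reference, the size of the offset can be computed directly from the two clock readings; here is a minimal sketch using the times from the question (GNU date is assumed for the -d option; in practice the reference time would come from an NTP server or from hwclock):

```shell
# Quantify clock drift by comparing the system clock reading to a
# reference time. Both times are hard-coded from the question.
system_time="16:22:29"
reference_time="16:05:29"

# GNU date: convert HH:MM:SS (interpreted as today) to seconds since epoch
sys_s=$(date -d "$system_time" +%s)
ref_s=$(date -d "$reference_time" +%s)

drift_min=$(( (sys_s - ref_s) / 60 ))
echo "system clock is ${drift_min} minutes ahead of the reference"
```

Running the same comparison against the hwclock output would show whether it is the BIOS clock, rather than the system clock, that carries the 17-minute error.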
Talk:Ay (pharaoh) Ay and Aegyptus Some questions by IonnKorr - Was Ay the same person with Aegyptus (of historian Manetho)? - Was Akhenaton (or an naval-general of Egyptian fleet) the same person with Danaus, king of Argos? - Was Nefertiti post-developed ('mutated") into Aphrodite, Greek goddess of beauty? Take into account the article "Danaus", from Wikipedia. "It has been suggested that the figure Danaus represents an actual Egyptian monarch, possibly identified with the pharaoh Akhenaton (as accounted by the ancient Greco-Aegyptian, Manetho). Furthering the parallel, the character of Aegyptus bears similarities with the pharaoh Ay. This leads some to believe that the Aegyptiads were an Egyptian army that was sent by Ay and Ammonian priests to punish Akhenaton and Atenists, and, following from this presumption, that the Danaids were Egyptians who followed Akenaton to Greece after his escape from Egypt."--Ionn-Korr 18:07, 26 October 2005 (UTC) * Josephus gives Orus a reign of 36 years and 5 months and Amenhotep III had a high year mark of 37 years, making an excellent match. For this and other reasons, several Egyptologists have endorsed the correlation between the two. Josephus’s Rathotis has a reign of 9 years, which coincides precisely with that of Tutankhamen, who ruled only 9 years. Josephus’s Harmais has a reign of 4 years and 1 month, which makes an excellent fit with Aye, who has a high-year mark of 4 years. Finally, Josephus’s Ramesses has a reign of 1 year and 4 months, which coincides with Ramesses I, who ruled about two years, partly as coregent with Horemheb. This takes care of the more obvious correspondences. . Not sure whether this is accepted fact though! Markh 13:14, 27 October 2005 (UTC) Concerning the intro... Why is the Osman thing there of all places? Come now. It's not a mainstream theory, and the prevolance of minority unorthodox theses makes these articles crummy. 
Thanatosimii 21:02, 5 September 2006 (UTC) Titulary Full titulary comes from my copy of the stela of Nxt-mnw.--Cliau 07:55, 11 September 2006 (UTC) I am wondering if the title : "Father of the God of pure hands" is hereditary. As it looks like his brother Anen also bore the title. Didn't their father also have this title? It was most certainly a title held by priests. There are folks out there who take the limited view that the title of "God's father" to mean that Ay was the father-in-law of Akhenaten. If Ay were actually Nefertiti's father, I believe that he would have stated it. He was definately married to Nefertiti's nurse, and Nefertiti claims their daughter as her "sister", which also is pointed to as proof that Ay was Nefertiti's father. I think it would be a nice clarification to identify the "Father of the God of pure hands", "Father of the Divinity" as a familial/inherited title and a priestly position. This would help to negate the automatic assumption that Ay was Nefertiti's father because he is claiming to be the father-in-law of Akhenaten. rmeyermn 29 June 2008 —Preceding comment was added at 23:50, 29 June 2008 (UTC) One more thought, I do not think that Ay was Nefertiti's father, nor Ankhesenamun's grandfather. The Amarna letter written to Supiluliumas asking for a husband and attributed to Queen Ankhesenamun,(in the Hittite records as Queen Dahamunzu ) states : "'My husband died. A son I have not. But to thee, they say, the sons are many. If thou wouldst give me one son of thine, he would become my husband. Never shall I pick out a servant of mine and make him my husband!...I am afraid.'" Why would Ankhesenamun think of her grandfather as a servant? Ay was officiating at Tutankhamun's funeral and was going to be the next king by rights of sending Tut properly into the next world. So in my mind Ankhesenamun was writing about Ay. It is possible that she was referring to Horemheb though. 
I think that Horemheb married Mutnodjmet to ascend the throne, not because she was Nefertiti's sister, but because she was Ay's daughter. He married the previous pharaoh's daughter and took the throne. rmeyermn —Preceding comment was added at 14:36, 30 June 2008 (UTC) Reverting I have reverted the article, removing anonymous edits, to reinstate the Possible historical context section and clean up the introduction. Markh 11:27, 27 September 2006 (UTC) Ay/Horemheb I agree with Thanatosimi. Just because Markh thinks Ay is Ephraim, the son of Joseph--based on Ahmad Osman's unorthodox views doesn't make it so. All we know is that Ay was an EGYPTIAN whose family came from Akhmin. Osman hasn't published in a scholarly journal like JEA, JNES, BASOR, Orientalia where all articles are subject to academic scrutiny prior to publishing. Its just all nonsense and lowers Wikipedia's standards. Encyclopaedia Brittanica must be laughing at us all! The problem here is that Kitchen has shown in his 1993 paper "He Swore and Oath" that Joseph was sold into slavery for 20 shekels--which was the standard price for slaves in the 20th Century BC Mesopotamia. So, there is no way his sons like Ephrayim could have lived into the 13th Century and ruled as Pharaoh Ay in the 1320's. Osman's ideas are not only a frimge theory--they are untenable. The problem with Wikipedia is that someone like Thanatosimii CANNOT edit out these impossible views. Major forums like Thoth Web have Moderators who can remove theses speculative articles. <IP_ADDRESS> 19:31, 2 October 2006 (UTC) * Would like to point out that I belive NO SUCH THING! I reverted the changes that User:Therealmikelvee made to my changes, so the Osman "speculation" didn't appear in the introduction to the article. Which by the way have now been reinstated again by the same user, everywhere they were removed. 
He seems to be the only one on Wikipedia that actually believes this, so how do we deal with these changes without having an on going edit war. Markh 11:12, 3 October 2006 (UTC) * (Actually, if Ay was Yuya's son, we don't even know he was necesarrily an egyptian... at least not paternally.Thanatosimii 14:38, 3 October 2006 (UTC) Too much information about Tutankhamun? I'm a bit perplexed that so much space is devoted to the possible murder of Tutankhamun in this article. No context is provided -- viz., how T.'s death affects our understanding of Ay's reign; this section reads as if it is the remains of a forgotten edit war. Unless this can be better integrated, I think it would be best to move this discussion to the article on T. -- llywrch (talk) 00:48, 7 April 2008 (UTC) * Yes, I would agree that that some of the information in that section is unnecessary. Do other editors agree with the removal of the part shown below? Opinions please. * The National Geographic forensic researchers instead presented a new theory that Tutankhamun died from an infection caused by a badly broken leg since he is often portrayed as walking with a cane due to spina bifida, a hereditary trait in his family on his father's side.[9] The bone fragments found in Tutankhamun's skull were most likely the result of post-mortem damage caused by Howard Carter's initial examination of the boy king "because they show no evidence of being inundated with the embalming fluid used to preserve the pharaoh for the afterlife."[10] --Zanthorp (talk) 03:14, 26 March 2009 (UTC) * Made minor edits to match Brier's actual murder theory as presented in his 1999 lecture series. He also said the bone fragments were postmortem damage because they they were not inundated by the embalming fluid, instead identifying a different part of the X-ray at the base the skull as signs of an injury. 
Brier's theory, which he presents as a theory, only suggests murder in the cases of Tutankamen, his queen, and the Hittite prince. Brier makes no claim that Ankhesenamen's two miscarriages, Akhenaten, Nefertiti, or Smenkhare were murders, so I have struck that. Edward321 (talk) 04:51, 5 August 2010 (UTC) grammatical error corrected I've corrected a minor grammatical error in the Tutankhamun section. I removed 'with' from the following sentence: He also alleged that Ankhesenamen and the Hittite Prince she was about to marry with were also murdered at his orders... --Zanthorp (talk) 03:00, 26 March 2009 (UTC) a couple more points To me at least, there seems to be a certain amount of visual similarity between the statue of Ay and the more famous one of Nefertititi, as you would expect from Father and Daughter. The letters to the Hittite king smells like a put up job to me.The letters could only have been organised at the highest level and the Prince Zannaza was killed by the "men and chariots of Egypt". If Tut died in March then the second letter of "where is your son" means that the military campaign season is advanced with the ambush already arranged and likely to leak. The Hittite king may have bought the story but seems to have had some suspicions and sent only a surprising junior prince to marry the crown princess of a superpower. After the plague, whatever it was, borders were in a mess and populations on the move. Ay and horemheb needed a short war against the Hittites to stabilise their borders. The Hitler organised Polish border incident readily comes to mind. —Preceding unsigned comment added by <IP_ADDRESS> (talk) 10:27, 23 June 2010 (UTC) speculative and unencyclopedic " His death was natural" (about Tut) There is no reason to assume this and to pass it off as fact does a disservice to the reader. 
—Preceding unsigned comment added by <IP_ADDRESS> (talk) 03:18, 2 November 2010 (UTC) Ay and Horemhab The whole country must have been in an uproar after a plague and the death of pharoah Tut. It seems unlikely that Ay could have taken the throne while an active General Horemhab still had control of the army. One scenario is that they were surviving oldest and youngest brothers and a degree of trust between them. With the older brother as pharoah he would ensure that the younger, more active brother received all the necessary supplies to go on campaign and sort out Egypts borders and anybody else who wanted to try their luck as the plague had abated. Both brothers must have been aware that this was a good policy as Horemhab would be next pharaoh anyway but with settled borders. — Preceding unsigned comment added by AT Kunene (talk • contribs) 12:33, 29 October 2011 (UTC) External links modified Hello fellow Wikipedians, I have just modified 1 one external link on Ay. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes: When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at ). 
* Added archive https://web.archive.org/web/20100731230154/http://www.drhawass.com:80/blog/press-release-king-tuts-chariot-travels-new-york to http://www.drhawass.com/blog/press-release-king-tuts-chariot-travels-new-york Cheers.— InternetArchiveBot (Report bug) 19:53, 22 October 2016 (UTC) "Ay" Origin - Reg (Tamil Nadu, India) There may be a possible connection between these both https://en.wikipedia.org/wiki/Ay (Egypt) https://en.wikipedia.org/wiki/Ay_kingdom (IT IS A TAMIL KINGDOM - More then 3000 - 4000 yeses back) Please take this is a note — Preceding unsigned comment added by <IP_ADDRESS> (talk) 09:44, 26 October 2016 (UTC) Ay as Nefertiti & Mutnodjmet/Mutbenret's father? This theory is brought up on both Ay and Nakhtmin's pages with no citation, and is even dismissed outright on Nefertiti's page. The only evidence to make this connection is that Ay's wife Tey was Nefertiti's wetnurse, so the theory is thin at best. This should probably be clarified. — Preceding unsigned comment added by Shoom'lah (talk • contribs) 22:17, 15 January 2019 (UTC) Requested move 28 March 2022 The result of the move request was: Moved. (non-admin closure) Turnagra (talk) 10:07, 7 April 2022 (UTC) Ay → Ay (pharaoh) – No primary topic for this 2 letter word. Though the pharaoh gets the most views of topics called "Ay" some of the uses of the acronym have many more views[] or long-term significance. Google, Images and Books seem split though many results aren't topic WP covers. Redirect to AY per WP:DABCOMBINE. Crouch, Swale ( talk ) 21:18, 28 March 2022 (UTC) The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion. * Support per nomination. There are 19 entries upon the AY disambiguation page, with no indication that the short-lived Egyptian pharaoh from more than 3000 years ago left such an imprint upon history that it overwhelms the remaining 18 entries. 
Ay should indeed redirect to the AY dab page or the dab page's main title header should be moved to Ay. —Roman Spinner (talk • contribs) 23:14, 28 March 2022 (UTC) * Oppose There is no other competitor to be the primary topic that isn't a partial title match. Disambiguation pages are not a search index; the current situation is perfectly fine without meddling. It can be differentiated from the acronym due to WP:DIFFCAPS. ᴢxᴄᴠʙɴᴍ (ᴛ) 20:17, 29 March 2022 (UTC) * Support - yes there is a competitor, Ay (river), a river that flows for hundreds of miles by hundreds of thousands of people. That's enough for me to say we have no primary topic. Red Slash 20:00, 30 March 2022 (UTC) * Oppose per Zxcbvnm. Daniel Case (talk) 05:00, 3 April 2022 (UTC) * Support and comment While there is another pharaoh with the birth name Ay, from the 13th Dynasty, he is at Merneferre Ay. Technically Ay (18th Dynasty) would have been called pharaoh, since the word didn't exist in the reign on Merneferre Ay. Might be a cause of minor confusion since in common use 'pharaoh' could apply to both of these kings Merytat3n (talk) 00:50, 5 April 2022 (UTC) * Support. I think there is some confusion above about partial title matches. You need to read the whole of WP:PTM not just the first phrase. I count at least thirteen entries at the DAB that would be valid destinations for Ay if the others did not exist. That's the important thing. For example, people searching for the river Ay will search for Ay even if we have decided to name the article on the river something else. Seen in this sense, there is no possibility of a primary topic here. Andrewa (talk) 05:23, 5 April 2022 (UTC)
ember-stateful-promise
==============================================================================

[![Download count all time](https://img.shields.io/npm/dt/ember-stateful-promise.svg)](https://badge.fury.io/js/ember-stateful-promise)
[![GitHub Actions Build Status](https://img.shields.io/github/workflow/status/snewcomer/ember-stateful-promise/CI/main)](https://github.com/snewcomer/ember-stateful-promise/actions/workflows/ci.yml?query=branch%3Amain)
[![npm version](https://badge.fury.io/js/ember-stateful-promise.svg)](https://badge.fury.io/js/ember-stateful-promise)
[![Ember Observer Score](https://emberobserver.com/badges/ember-stateful-promise.svg)](https://emberobserver.com/addons/ember-stateful-promise)

[ember-concurrency](http://ember-concurrency.com/docs/introduction/) is the go-to solution in the Ember community for tracking async action state and many other tasks around async behaviour. `ember-stateful-promise` seeks to simplify things with native `async/await` instead of generators and exposes a few flags on a promise object for you to use. Moreover, they are tracked! This library can be used if you simply need derived state for your async functions and/or a lightweight version of ember-concurrency.

Also, [ember-promise-helpers](https://github.com/fivetanley/ember-promise-helpers) is another great library if you want to calculate state from your promises. `ember-stateful-promise` is different in that it seeks to provide derived state.

## API

Supports:

1. Derived state
2. Debouncing async functions
3. Cleanup of async functions wired up with `@ember/destroyable`

- `isRunning`
- `isResolved`
- `isError`
- `isCanceled`
- `performCount`

## Usage

There are a few ways to use this addon. Likely, you only need the `stateful-function` decorator. However, if you need the lower-level util, we make that available as `StatefulPromise` as well.
### Stateful Promise

- Promise `interface`

```js
import { StatefulPromise } from 'ember-stateful-promise/utils/stateful-promise';

const promise = fetch(url);
let result = new StatefulPromise((resolveFn, rejectFn) => {
  promise.then((data) => resolveFn(data)).catch((e) => rejectFn(e));
});

result.isRunning; // true
result.isResolved; // false
result.isError; // false

await result;

result.isRunning; // false
result.isResolved; // true
result.isError; // false
```

- `create` method with destroyable

```js
import { StatefulPromise } from 'ember-stateful-promise/utils/stateful-promise';

const promise = fetch(url);
let result = new StatefulPromise().create(this, promise);

result.isRunning; // true
result.isResolved; // false
result.isError; // false

await result;

result.isRunning; // false
result.isResolved; // true
result.isError; // false
```

```js
import { StatefulPromise } from 'ember-stateful-promise/utils/stateful-promise';
import { action } from '@ember/object';

class MyComponent extends Component {
  @action
  async clickMe() {
    const promise = fetch(url);
    // Destroyable registered
    let result = new StatefulPromise().create(this, (resolveFn, rejectFn) => {
      promise.then((data) => resolveFn(data)).catch((e) => rejectFn(e));
    });

    // Component destroyed
    // and then
    try {
      await result;
    } catch (e) {
      // WILL ERROR here!
    }
  }
}
```

### Decorator

```js
import Component from '@glimmer/component';
import { statefulFunction } from 'ember-stateful-promise/decorators/stateful-function';

class MyComponent extends Component {
  @statefulFunction
  async clickMe() {
    await fetch(url);
  }
}
```

```hbs
<button disabled={{if this.clickMe.isRunning "true"}} {{on "click" this.clickMe}}>
  Click
</button>
<p>(Clicked this many times - {{this.clickMe.performCount}})</p>
```

Note - the default behaviour out of the box is to `debounce` the action. When clicked again while running, the first promise will be rejected and a new promise will be created.
Note - If you decorate a function with the `@action` decorator, you will lose the derived state. `@statefulFunction` will bind `this` for you. As a result, `@statefulFunction` replaces `@action` while giving you all the features of this addon!

Compatibility
------------------------------------------------------------------------------

* Ember.js v3.20 or above
* Ember CLI v3.20 or above
* Node.js v12 or above

Installation
------------------------------------------------------------------------------

```
ember install ember-stateful-promise
```

Contributing
------------------------------------------------------------------------------

See the [Contributing](CONTRIBUTING.md) guide for details.

License
------------------------------------------------------------------------------

This project is licensed under the [MIT License](LICENSE.md).
Can I travel to Croatia after being refused entry in 2018?

I was refused entry to Croatia in 2018 because I had only a single-entry Schengen visa. However, I received no refusal stamps or letter from the Croatian border control. In June 2019, I managed to successfully travel to Schengen countries such as Spain, Portugal, France, Poland, Lithuania, Norway, Iceland, Luxembourg, and Belgium with a Schengen visa issued by Spain.

The question is: I still have a dream of one day visiting Dubrovnik. Will I be turned away at the border because I was previously refused entry, even if this time I obtain a valid Croatian visa or a multiple-entry Schengen visa?

I have edited your question for clarity. If I have misstated anything, please revert it to your original text.

Yes, you can travel to Croatia on your current Schengen visa. Your formal entry denial due to the single-entry visa will not affect a later entry, because you did not violate any regulations. Had you violated any regulation and been denied, it would be a different matter, but since that is not the case you are good to go. As always in these matters, it is a good idea to place a call to the visiting country's border control agency and ask; they are normally quite helpful, albeit sometimes hard to get hold of.

Yes, could not agree more about rechecking. Thank you so much, this is so helpful.

@user96563 Email <EMAIL_ADDRESS> and <EMAIL_ADDRESS>. Start the message with "To the Border police" and ask your question in simple English. You should receive a reply, though most likely in Croatian.
import React, { createContext, useContext, useReducer } from "react";
import { reducer, setName, setLocation } from "./reducer";
import data from "./data";
import "./styles.scss";

const PersonContext = createContext();

const App = () => {
  const [person, dispatch] = useReducer(reducer, data);

  return (
    <div className="App component">
      <PersonContext.Provider value={[person, setName, setLocation, dispatch]}>
        <h1>Main App</h1>
        <SubComponent1 />
      </PersonContext.Provider>
    </div>
  );
};

const SubComponent1 = () => {
  return (
    <div className="component">
      <h1>SubComponent 1</h1>
      <SubComponent2 />
    </div>
  );
};

const SubComponent2 = () => {
  const [person] = useContext(PersonContext);

  return (
    <div className="component">
      <h1>SubComponent 2</h1>
      <h3>
        Name: {person.name.title} {person.name.first} {person.name.last}
      </h3>
      <SubComponent3 />
    </div>
  );
};

const SubComponent3 = () => {
  // Destructure in the same order the Provider supplies the values:
  // [person, setName, setLocation, dispatch].
  const [person, setName, setLocation, dispatch] = useContext(PersonContext);

  const changeLocation = () => {
    dispatch(setLocation("222 N 22 Street", "Philadelphia", "PA"));
  };

  const changeName = () => {
    dispatch(setName("Mr", "Warren", "Longmire"));
  };

  return (
    <div className="component">
      <h1>SubComponent 3</h1>
      <h3>
        Location: {person.location.street} {person.location.city},{" "}
        {person.location.state}
      </h3>
      <br />
      <button onClick={changeLocation}>Change Location</button>
      <button onClick={changeName}>Change Name</button>
    </div>
  );
};

export default App;
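The imported `./reducer` module is not shown above. One plausible shape for it, inferred from how `reducer`, `setName`, and `setLocation` are used in the components (the real module may differ), is:

```javascript
// Hypothetical sketch of the ./reducer module, inferred from its usage.
// Action creators return plain action objects; the reducer applies them
// immutably to the { name, location } person state.

const setName = (title, first, last) => ({
  type: "SET_NAME",
  payload: { title, first, last },
});

const setLocation = (street, city, state) => ({
  type: "SET_LOCATION",
  payload: { street, city, state },
});

const reducer = (state, action) => {
  switch (action.type) {
    case "SET_NAME":
      return { ...state, name: { ...action.payload } };
    case "SET_LOCATION":
      return { ...state, location: { ...action.payload } };
    default:
      return state;
  }
};

// In the real file these would be exported:
// export { reducer, setName, setLocation };
```

With this shape, `dispatch(setName("Mr", "Warren", "Longmire"))` produces a new person object whose `name` is replaced while `location` is left untouched, which is what keeps the context consumers re-rendering correctly.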
whitespace management should work via a change accessor sort of thing Issue created from code comment with imdone.io TODO @Alex whitespace management should work via a change accessor sort of thing id:55 src/vs/editor/common/viewModel/viewModel.ts:93 @imdone - Efficiently manage your project's technical debt. imdone.io Issue closed by removing a comment. TODO @Alex whitespace management should work via a change accessor sort of thing id:55 gh:64 src/vs/editor/common/viewModel/viewModel.ts:93 @imdone - Efficiently manage your project's technical debt. imdone.io
Shirley OUTLAW v. Thomas L. OUTLAW. Civ. 6361. Court of Civil Appeals of Alabama. June 15, 1988. J.R. Herring of Herring, Bennett & Young, Dothan, for appellant. Robert H. Brogden of Brogden & Quatt-lebaum, Ozark, for appellee. HOLMES, Judge. This is a divorce case. After an ore tenus hearing, the trial court divorced the parties and effectuated a division of the property. The wife appeals, contending, in effect, that the trial court’s judgment was inequitable and thus due to be reversed. We find no error requiring reversal and affirm. It is well settled, of course, that in divorce cases a trial court’s judgment is presumed to be correct and will not be set aside on appeal unless it is so contrary to the evidence as to be plainly and palpably wrong or unjust. Sayles v. Sayles, 495 So.2d 1131 (Ala.Civ.App.1986). Moreover, a trial court’s judgment regarding such matters as the division of property and the award of alimony is committed to the sound discretion of the trial court and will not be disturbed on appeal except where such discretion was plainly and palpably abused. Turquitt v. Turquitt, 506 So.2d 1014 (Ala.Civ.App.1987); Lucero v. Lucero, 485 So.2d 347 (Ala.Civ.App.1986); Burns v. Burns, 473 So.2d 1085 (Ala.Civ.App.1985). Although we do not think that a detailed summary of the evidence is necessary in this case, we would note that both parties were approximately fifty years old and had good jobs at the time of trial. The wife argues, in the main, that the trial court abused its discretion by awarding the husband the marital residence and other property when compared to what she received, and in view of the fact that the husband’s income was almost twice that of hers. We disagree. In Edge v.
Edge, 494 So.2d 71 (Ala.Civ.App.1986), we held that a trial court did not abuse its discretion by awarding the marital home and a larger share of the personal property to the wife, and in not awarding the husband periodic alimony, despite the fact that the wife’s income was considerably larger than that of the husband. We find that case to be dispositive of the issue on appeal. This is especially so in view of the fact that the trial court awarded the wife another home of comparable value to that awarded the husband. The trial court also awarded the wife the sum of $17,500 as her interest in the property awarded to the husband. Having reviewed the record with the attendant presumptions, we cannot hold that the trial court erred regarding its awards to the wife in this case. That is, we are not persuaded that the trial court’s judgment in this case constitutes a plain abuse of discretion. We would note that we would not necessarily have made the same judgment in this case, but to reverse would be to substitute our judgment for that of the trial court. This the law does not permit. Brannon v. Brannon, 477 So.2d 445 (Ala.Civ.App.1985). The husband has requested an attorney’s fee for representation on appeal. That request is denied. This case is due to be affirmed. AFFIRMED. BRADLEY, P.J., and INGRAM, J., concur.
Two Week Iditarod Project: ELA Component

1. Who and what will you need for this project to be successful?
3. How will you meet the needs of diverse learners?
4. What are the summative/formative assessments?
5. What are the unique needs of the teacher/learner in this situation?

Wednesday: Reading Day in Class. HW: Post First Blog

Week Two

Thursday: Review: Jeopardy Game to go over major topics in the novel. HW: Ch 13-14

[[file:Rubric.pdf]]
package it.uniparthenope.fairwind.services.logger.filepacker.security;

import mjson.Json;

import javax.crypto.*;
import java.io.UnsupportedEncodingException;
import java.security.*;
import java.security.spec.InvalidKeySpecException;
import java.security.spec.PKCS8EncodedKeySpec;
import java.security.spec.X509EncodedKeySpec;
import java.util.Base64;

/**
 * Created by marioruggieri on 16/11/2017.
 */
public class TLS {

    // TLS layer is composed of an encryptor and a digital signer
    private Encryptor encryptor;
    private DigitalSigner digitalSigner;

    public TLS(String destPublicKey, String srcPrivateKey)
            throws NoSuchAlgorithmException, NoSuchPaddingException, InvalidKeySpecException,
            InvalidAlgorithmParameterException, UnsupportedEncodingException,
            InvalidKeyException, NoSuchProviderException {
        // get publicKey from string
        PublicKey destPBK = stringToPublicKey(destPublicKey);
        // get privateKey from string
        PrivateKey srcPRK = stringToPrivateKey(srcPrivateKey);

        Security.addProvider(new org.bouncycastle.jce.provider.BouncyCastleProvider());

        encryptor = new Encryptor(destPBK);
        digitalSigner = new DigitalSigner(srcPRK);
    }

    public String obfuscate(Json json) throws IllegalBlockSizeException, InvalidKeyException,
            BadPaddingException, NoSuchAlgorithmException, NoSuchPaddingException {
        String jsonAsString = json.toString();
        return encryptor.encrypt(jsonAsString);
    }

    public String getIV() throws BadPaddingException, IllegalBlockSizeException {
        // Get the clear IV encoded as a Base64 string to send it
        return encryptor.getIV(); // No problem for a clear IV
    }

    public String getObfuscatedKey() throws BadPaddingException, IllegalBlockSizeException {
        // encrypt the AES key using the RSA public key and
        // encode the encrypted AES key as a Base64 string to send it
        return encryptor.getObfuscatedAESKey(); // Key must be encrypted using RSA
    }

    public String sign(Json json) throws InvalidKeyException, BadPaddingException,
            NoSuchAlgorithmException, IllegalBlockSizeException, NoSuchPaddingException {
        String data = json.toString();
        return digitalSigner.sign(data);
    }

    private PublicKey stringToPublicKey(String pbk)
            throws NoSuchAlgorithmException, InvalidKeySpecException {
        // decode from Base64 string to byte[]
        X509EncodedKeySpec spec = new X509EncodedKeySpec(Base64.getDecoder().decode(pbk));
        // generate a PublicKey object from the byte array
        KeyFactory kf = KeyFactory.getInstance("RSA");
        return kf.generatePublic(spec);
    }

    private PrivateKey stringToPrivateKey(String pvk)
            throws NoSuchAlgorithmException, InvalidKeySpecException {
        PKCS8EncodedKeySpec specPriv = new PKCS8EncodedKeySpec(Base64.getDecoder().decode(pvk));
        KeyFactory kf = KeyFactory.getInstance("RSA");
        return kf.generatePrivate(specPriv);
    }
}
In connection with any of the features and/or embodiments described herein, in some embodiments, the system can be configured to analyze, characterize, track, and/or utilize one or more plaque features derived from a medical image. For example, in some embodiments, the system can be configured to analyze, characterize, track, and/or utilize one or more dimensions of plaque and/or an area of plaque, in two dimensions, three dimensions, and/or four dimensions, for example over time or changes over time. In addition, in some embodiments, the system can be configured to rank one or more areas of plaque and/or utilize such ranking for analysis. In some embodiments, the ranking can be binary, ordinal, continuous, and/or mathematically transformed. In some embodiments, the system can be configured to analyze, characterize, track, and/or utilize the burden or one or more geometries of plaque and/or an area of plaque. For example, in some embodiments, the one or more geometries can comprise spatial mapping in two dimensions, three dimensions, and/or four dimensions over time. As another example, in some embodiments, the system can be configured to analyze transformation of one or more geometries. In some embodiments, the system can be configured to analyze, characterize, track, and/or utilize diffuseness of plaque regions, such as spotty v. continuous. For example, in some embodiments, pixels or voxels within a region of interest can be compared to pixels or voxels outside of the region of interest to gain more information. In particular, in some embodiments, the system can be configured to analyze a plaque pixel or voxel with another plaque pixel or voxel. In some embodiments, the system can be configured to compare a plaque pixel or voxel with a fat pixel or voxel. In some embodiments, the system can be configured to compare a plaque pixel or voxel with a lumen pixel or voxel. 
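The pixel-level comparisons described above (plaque versus fat, plaque versus lumen) can be sketched roughly as follows in Python; the function name and the summary statistics are illustrative assumptions, not part of the disclosure:

```python
from statistics import mean

def compare_plaque_to_surroundings(plaque_hu, fat_hu, lumen_hu):
    """Compare the mean radiodensity (Hounsfield units) of plaque voxels
    against neighboring fat and lumen voxels, one simple way to gain
    information from pixels outside the region of interest."""
    plaque_mean = mean(plaque_hu)
    return {
        "plaque_mean_hu": plaque_mean,
        "plaque_vs_fat": plaque_mean - mean(fat_hu),      # positive: plaque denser than fat
        "plaque_vs_lumen": plaque_mean - mean(lumen_hu),  # negative: contrast-filled lumen denser
    }
```

Richer comparisons (texture, gradients, rank transforms) would follow the same pattern of contrasting voxel populations inside and outside the region of interest.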
In some embodiments, the system can be configured to analyze, characterize, track, and/or utilize location of plaque or one or more areas of plaque. For example, in some embodiments, the location of plaque determined and/or analyzed by the system can include whether the plaque is within the left anterior descending (LAD), left circumflex artery (LCx), and/or the right coronary artery (RCA). In particular, in some embodiments, plaque in the proximal LAD can influence plaque in the mid-LAD, and plaque in the LCx can influence plaque in the LAD, such as via mixed effects modeling. As such, in some embodiments, the system can be configured to take into account neighboring structures. In some embodiments, the location can be based on whether it is in the proximal, mid, or distal portion of a vessel. In some embodiments, the location can be based on whether a plaque is in the main vessel or a branch vessel. In some embodiments, the location can be based on whether the plaque is myocardial facing or pericardial facing (for example as an absolute binary dichotomization or as a continuous characterization around 360 degrees of an artery), whether the plaque is juxtaposed to fat or epicardial fat or not juxtaposed to fat or epicardial fat, subtending a substantial amount of myocardium or subtending small amounts of myocardium, and/or the like. For example, arteries and/or plaques that subtend large amounts of subtended myocardium can behave differently than those that do not. As such, in some embodiments, the system can be configured to take into account the relation to the percentage of subtended myocardium. In connection with any of the features and/or embodiments described herein, in some embodiments, the system can be configured to analyze, characterize, track, and/or utilize one or more peri-plaque features derived from a medical image. 
In particular, in some embodiments, the system can be configured to analyze lumen, for example in two dimensions in terms of area, three dimensions in terms of volume, and/or four dimensions across time. In some embodiments, the system can be configured to analyze the vessel wall, for example in two dimensions in terms of area, three dimensions in terms of volume, and/or four dimensions across time. In some embodiments, the system can be configured to analyze peri-coronary fat. In some embodiments, the system can be configured to analyze the relationship to myocardium, such as for example a percentage of subtended myocardial mass. In connection with any of the features and/or embodiments described herein, in some embodiments, the system can be configured to analyze and/or use medical images obtained using different image acquisition protocols and/or variables. In some embodiments, the system can be configured to characterize, track, analyze, and/or otherwise use such image acquisition protocols and/or variables in analyzing images. For example, image acquisition parameters can include one or more of mA, kVp, spectral CT, photon counting detector CT, and/or the like. Also, in some embodiments, the system can be configured to take into account ECG gating parameters, such as retrospective v. prospective ECG helical. Another example can be prospective axial v. no gating. In addition, in some embodiments, the system can be configured to take into account whether medication was used to obtain the image, such as for example with or without a beta blocker, with or without contrast, with or without nitroglycerin, and/or the like. Moreover, in some embodiments, the system can be configured to take into account the presence or absence of a contrast agent used during the image acquisition process. For example, in some embodiments, the system can be configured to normalize an image based on a contrast type, contrast-to-noise ratio, and/or the like. 
Further, in some embodiments, the system can be configured to take into account patient biometrics when analyzing a medical image. For example, in some embodiments, the system can be configured to normalize an image to Body Mass Index (BMI) of a subject, normalize an image to signal-to-noise ratio, normalize an image to image noise, normalize an image to tissue within the field of view, and/or the like. In some embodiments, the system can be configured to take into account the image type, such as for example CT, non-contrast CT, MRI, x-ray, nuclear medicine, ultrasound, and/or any other imaging modality mentioned herein. In connection with any of the features and/or embodiments described herein, in some embodiments, the system can be configured to normalize any analysis and/or results, whether or not based on image processing. For example, in some embodiments, the system can be configured to standardize any reading or analysis of a subject, such as those derived from a medical image of the subject, to a normative reference database. Similarly, in some embodiments, the system can be configured to standardize any reading or analysis of a subject, such as those derived from a medical image of the subject, to a diseased database, such as for example patients who experienced heart attack, patients who are ischemic, and/or the like. In some embodiments, the system can be configured to utilize a control database for comparison, standardization, and/or normalization purposes. For example, a control database can comprise data derived from a combination of subjects, such as 50% of subjects who experience heart attack and 50% who did not, and/or the like. In some embodiments, the system can be configured to normalize any analysis, result, or data by applying a mathematical transform, such as a linear, logarithmic, exponential, and/or quadratic transform. 
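As one concrete and deliberately simple illustration of standardizing a reading to a reference database, a z-score against the reference population could be computed as below; this is a sketch under the assumption that the database exposes raw reference values, not a described implementation:

```python
from statistics import mean, stdev

def standardize_to_reference(value, reference_values):
    """Express a subject-level reading (e.g., derived from a medical image)
    as a z-score against a normative, diseased, or control reference database."""
    mu = mean(reference_values)
    sigma = stdev(reference_values)
    return (value - mu) / sigma
```

The same call works unchanged whether `reference_values` comes from a normative database, a diseased database, or a mixed control database.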
In some embodiments, the system can be configured to normalize any analysis, result, or data by applying a machine learning algorithm.

In connection with any of the features and/or embodiments described herein, in some embodiments, the term “density” can refer to radiodensity, such as in Hounsfield units. In connection with any of the features and/or embodiments described herein, in some embodiments, the term “density” can refer to absolute density, such as for example when analyzing images obtained from imaging modalities such as dual energy, spectral, photon counting CT, and/or the like.

In some embodiments, one or more images analyzed and/or accessed by the system can be normalized to contrast-to-noise. In some embodiments, one or more images analyzed and/or accessed by the system can be normalized to signal-to-noise. In some embodiments, one or more images analyzed and/or accessed by the system can be normalized across the length of a vessel, such as for example along a transluminal attenuation gradient. In some embodiments, one or more images analyzed and/or accessed by the system can be mathematically transformed, for example by applying a logarithmic, exponential, and/or quadratic transformation. In some embodiments, one or more images analyzed and/or accessed by the system can be transformed using machine learning.

In connection with any of the features and/or embodiments described herein, in some embodiments, the term “artery” can include any artery, such as for example, coronary, carotid, cerebral, aortic, renal, lower extremity, and/or upper extremity.

In connection with any of the features and/or embodiments described herein, in some embodiments, the system can utilize additional information obtained from various sources in analyzing and/or deriving data from a medical image. For example, in some embodiments, the system can be configured to obtain additional information from patient history and/or physical examination.
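The transluminal attenuation gradient mentioned above is commonly computed as the least-squares slope of luminal attenuation versus distance along the vessel; a minimal sketch follows, where the function and argument names are assumptions:

```python
def transluminal_attenuation_gradient(distances_mm, hu_values):
    """Least-squares slope of luminal attenuation (HU) against distance
    along the vessel centerline, in HU per mm."""
    n = len(distances_mm)
    mean_x = sum(distances_mm) / n
    mean_y = sum(hu_values) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(distances_mm, hu_values))
    den = sum((x - mean_x) ** 2 for x in distances_mm)
    return num / den
```

A strongly negative slope indicates attenuation falling off along the vessel, which is one way the text proposes normalizing images across a vessel's length.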
In some embodiments, the system can be configured to obtain additional information from other biometric data, such as those which can be gleaned from wearable devices, which can include for example heart rate, heart rate variability, blood pressure, oxygen saturation, sleep quality, movement, physical activity, chest wall impedance, chest wall electrical activity, and/or the like. In some embodiments, the system can be configured to obtain additional information from clinical data, such as for example from Electronic Medical Records (EMR). In some embodiments, additional information used by the system can be linked to serum biomarkers, such as for example of cholesterol, renal function, inflammation, myocardial damage, and/or the like. In some embodiments, additional information used by the system can be linked to other omics markers, such as for example transcriptomics, proteomics, genomics, metabolomics, microbiomics, and/or the like. In connection with any of the features and/or embodiments described herein, in some embodiments, the system can utilize medical image analysis to derive and/or generate assessment of a patient and/or provide assessment tools to guide patient assessment, thereby adding clinical importance and use. In some embodiments, the system can be configured to generate risk assessment at the plaque-level (for example, will this plaque cause heart attack and/or does this plaque cause ischemia), vessel-level (for example, will this vessel be the site of a future heart attack and/or does this vessel exhibit ischemia), and/or patient level (for example, will this patient experience heart attack and/or the like). In some embodiments, the summation or weighted summation of plaque features can contribute to segment-level features, which in turn can contribute to vessel-level features, which in turn can contribute to patient-level features. 
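The hierarchical roll-up just described, where plaque-level features contribute to segment-, vessel-, and finally patient-level features via weighted summation, can be sketched as follows; the feature names, weights, and nesting structure are placeholders, not the disclosed scheme:

```python
def weighted_sum(features, weights):
    """Weighted sum of one plaque's features; unlisted features get weight 1."""
    return sum(weights.get(name, 1.0) * value for name, value in features.items())

def patient_level_score(vessels, weights):
    """Aggregate plaque -> segment -> vessel -> patient by summation.
    `vessels` is a list of vessels, each a list of segments, each a list
    of per-plaque feature dicts (structure assumed for illustration)."""
    total = 0.0
    for segments in vessels:
        for plaques in segments:
            total += sum(weighted_sum(p, weights) for p in plaques)
    return total
```

Intermediate segment- or vessel-level scores fall out of the same loop if they are accumulated per level rather than into a single patient total.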
In some embodiments, the system can be configured to generate a risk assessment of future major adverse cardiovascular events, such as for example heart attack, stroke, hospitalizations, unstable angina, stable angina, coronary revascularization, and/or the like. In some embodiments, the system can be configured to generate a risk assessment of rapid plaque progression, medication non-response (for example if plaque progresses significantly even when medications are given), benefit (or lack thereof) of coronary revascularization, new plaque formation in a site that does not currently have any plaque, development of symptoms (such as angina, shortness of breath) that is attributable to the plaque, ischemia, and/or the like. In some embodiments, the system can be configured to generate an assessment of other artery consequences, such as for example carotid (stroke), lower extremity (claudication, critical limb ischemia, amputation), aorta (dissection, aneurysm), renal artery (hypertension), cerebral artery (aneurysm, rupture), and/or the like.

Additional Detail—Determination of Non-Calcified Plaque from a Medical Image(s)

As discussed herein, in some embodiments, the system can be configured to determine non-calcified plaque from a medical image, such as a non-contrast CT image and/or an image obtained using any other image modality such as those mentioned herein. Also, as discussed herein, in some embodiments, the system can be configured to utilize radiodensity as a parameter or measure to distinguish and/or determine non-calcified plaque from a medical image. In some embodiments, the system can utilize one or more other factors, which can be in addition to and/or used as an alternative to radiodensity, to determine non-calcified plaque from a medical image. For example, in some embodiments, the system can be configured to utilize absolute material densities via dual energy CT, spectral CT, or photon-counting detectors.
In some embodiments, the system can be configured to analyze the geometry of the spatial maps that “look” like plaque, for example compared to a known database of plaques. In some embodiments, the system can be configured to utilize smoothing and/or transform functions to remove image noise and heterogeneity from a medical image to help determine non-calcified plaque. In some embodiments, the system can be configured to utilize auto-adjustable and/or manually adjustable thresholds of radiodensity values based upon image characteristics, such as for example signal-to-noise ratios, body morph (for example, obesity can introduce more image noise), and/or the like. In some embodiments, the system can be configured to utilize different thresholds based upon different arteries. In some embodiments, the system can be configured to account for potential artifacts, such as beam hardening artifacts that may preferentially affect certain arteries (for example, the spine may affect the right coronary artery in some instances). In some embodiments, the system can be configured to account for different image acquisition parameters, such as for example, prospective vs. retrospective ECG gating, how much mA and kVp, and/or the like. In some embodiments, the system can be configured to account for different scanner types, such as for example fast-pitch helical vs. traditional helical. In some embodiments, the system can be configured to account for patient-specific parameters, such as for example heart rate, scan volume in imaged field of view, and/or the like. In some embodiments, the system can be configured to account for prior knowledge. For example, in some embodiments, if a patient had a contrast-enhanced CT angiogram in the past, the system can be configured to leverage findings from the previous contrast-enhanced CT angiogram for a non-contrast CT image(s) of the patient moving forward.
In some embodiments, in cases where epicardial fat is not present outside an artery, the system can be configured to leverage other Hounsfield unit threshold ranges to depict the outer artery wall. In some embodiments, the system can be configured to utilize a normalization device, such as those described herein, to account for differences in scan results (such as for example density values, etc.) between different scanners, scan parameters, and/or the like.

Additional Detail—Determination of Cause of Change in Calcium

As discussed herein, in some embodiments, the system can be configured to determine a cause of change in calcium level of a subject by analyzing one or more medical images. In some embodiments, the change in calcium level can be caused by some external force, such as for example medication treatment, lifestyle change (such as improved diet or physical activity), stenting, surgical bypass, and/or the like. In some embodiments, the system is configured to include one or more assessments of treatment and/or recommendations of treatment based upon these findings. In some embodiments, the system can be configured to determine a cause of change in calcium level of a subject and use the same for prognosis. In some embodiments, the system can be configured to enable improved diagnosis of atherosclerosis, stenosis, ischemia, inflammation in the peri-coronary region, and/or the like. In some embodiments, the system can be configured to enable improved prognostication, such as for example forecasting of some clinical event, such as major adverse cardiovascular events, rapid progression, medication non-response, need for revascularization, and/or the like. In some embodiments, the system can be configured to enable improved prediction, such as for example enabling identification of who will benefit from what therapy and/or enabling monitoring of those changes over time.
In some embodiments, the system can be configured to enable improved clinical decision making, such as for example which medications may be helpful, which lifestyle interventions might be helpful, which revascularization or surgical procedures may be helpful, and/or the like. In some embodiments, the system can be configured to enable comparison to one or more normative databases in order to standardize findings to a known ground truth database. In some embodiments, a change in calcium level can be linear, non-linear, and/or transformed. In some embodiments, a change in calcium level can be on its own or in other words involve just calcium. In some embodiments, a change in calcium level can be in relation to one or more other constituents, such as for example, other non-calcified plaque, vessel volume/area, lumen volume/area, and/or the like. In some embodiments, a change in calcium level can be relative. For example, in some embodiments, the system can be configured to determine whether a change in calcium level is above or below an absolute threshold, whether a change in calcium level comprises a continuous change upwards or downwards, whether a change in calcium level comprises a mathematical transform upwards or downwards, and/or the like. As discussed herein, in some embodiments, the system can be configured to analyze one or more variables or parameters, such as those relating to plaque, in determining the cause of a change in calcium level. For example, in some embodiments, the system can be configured to analyze one or more plaque parameters, such as a ratio or function of volume or surface area, heterogeneity index, geometry, location, directionality, and/or radiodensity of one or more regions of plaque within the coronary region of the subject at a given point in time. As discussed herein, in some embodiments, the system can be configured to characterize a change in calcium level between two points in time. 
For example, in some embodiments, the system can be configured to characterize a change in calcium level as one of positive, neutral, or negative. In some embodiments, the system can be configured to characterize a change in calcium level as positive when the ratio of volume to surface area of a plaque region has decreased, as this can be indicative of how homogeneous and compact the structure is. In some embodiments, the system can be configured to characterize a change in calcium level as positive when the size of a plaque region has decreased. In some embodiments, the system can be configured to characterize a change in calcium level as positive when the density of a plaque region has increased or when an image of the region of plaque comprises more pixels with higher density values, as this can be indicative of stable plaque. In some embodiments, the system can be configured to characterize a change in calcium level as positive when there is a reduced diffuseness. For example, if three small regions of plaque converge into one contiguous plaque, that can be indicative of non-calcified plaque calcifying along the entire plaque length. In some embodiments, the system can be configured to characterize a change in calcium level as negative when the system determines that a new region of plaque has formed. In some embodiments, the system can be configured to characterize a change in calcium level as negative when more vessels with calcified plaque appear. In some embodiments, the system can be configured to characterize a change in calcium level as negative when the ratio of volume to surface area has increased. In some embodiments, the system can be configured to characterize a change in calcium level as negative when there has been no increase in Hounsfield density per calcium pixel. 
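Two of the cues above, the volume-to-surface-area ratio and the density per calcium voxel, are enough to sketch a rule-of-thumb characterization; the field names and the exact rules are illustrative, not the disclosed method:

```python
def characterize_calcium_change(before, after):
    """Label a change in a calcified plaque region as 'positive',
    'negative', or 'neutral' from two simple cues."""
    ratio_before = before["volume"] / before["surface_area"]
    ratio_after = after["volume"] / after["surface_area"]
    denser = after["mean_hu"] > before["mean_hu"]
    if ratio_after < ratio_before and denser:
        return "positive"   # more compact and denser: stabilizing
    if ratio_after > ratio_before and not denser:
        return "negative"   # less compact with no density increase
    return "neutral"
```

A fuller version would also look at new plaque regions, diffuseness, and the number of vessels with calcified plaque, as enumerated above.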
In some embodiments, the system can be configured to utilize a normalization device, such as those described herein, to account for differences in scan results (such as for example density values, etc.) between different scanners, scan parameters, and/or the like.

Additional Detail—Quantification of Plaque, Stenosis, and/or CAD-RADS Score

As discussed herein, in some embodiments, the system can be configured to generate quantifications of plaque, stenosis, and/or CAD-RADS scores from a medical image. In some embodiments, as part of such quantification analysis, the system can be configured to determine a percentage of higher or lower density plaque within a plaque region. For example, in some embodiments, the system can be configured to classify higher density plaque as pixels or voxels that comprise a Hounsfield density unit above 800 and/or 1000. In some embodiments, the system can be configured to classify lower density plaque as pixels or voxels that comprise a Hounsfield density unit below 800 and/or 1000. In some embodiments, the system can be configured to utilize other thresholds. In some embodiments, the system can be configured to report measures on a continuous scale, an ordinal scale, and/or a mathematically transformed scale. In some embodiments, the system can be configured to utilize a normalization device, such as those described herein, to account for differences in scan results (such as for example density values, etc.) between different scanners, scan parameters, and/or the like.

Additional Detail—Disease Tracking

As discussed herein, in some embodiments, the system can be configured to track the progression and/or regression of an arterial and/or plaque-based disease, such as atherosclerosis, stenosis, ischemia, and/or the like. For example, in some embodiments, the system can be configured to track the progression and/or regression of a disease over time by analyzing one or more medical images obtained from two different points in time.
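The higher/lower-density split described under the quantification discussion above reduces to simple thresholding of voxel Hounsfield values; 800 and 1000 are the example cutoffs given in the text, and the percentage report is one of the possible scales mentioned:

```python
def classify_plaque_density(hu_values, threshold=1000):
    """Split plaque voxels at a Hounsfield threshold and report
    the share of higher-density plaque as a percentage."""
    higher = sum(1 for v in hu_values if v > threshold)
    return {
        "higher": higher,
        "lower": len(hu_values) - higher,
        "percent_higher": 100.0 * higher / len(hu_values),
    }
```

Swapping the default threshold for 800, or for any other value, matches the text's note that other thresholds can be utilized.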
As an example, in some embodiments, one or more normal regions from an earlier scan can turn into abnormal regions in the second scan or vice versa. In some embodiments, the one or more medical images obtained from two different points in time can be obtained from the same modality and/or different modalities. For example, scans from both points in time can be CT, whereas in some cases the earlier scan can be CT while the later scan can be ultrasound. Further, in some embodiments, the system can be configured to track the progression and/or regression of disease by identifying and/or tracking a change in density of one or more pixels and/or voxels, such as for example Hounsfield density and/or absolute density. In some embodiments, the system can be configured to track change in density of one or more pixels or voxels on a continuous basis and/or dichotomous basis. For example, in some embodiments, the system can be configured to classify an increase in density as stabilization of a plaque region and/or classify a decrease in density as destabilization of a plaque region. In some embodiments, the system can be configured to analyze surface area and/or volume of a region of plaque, ratio between the two, absolute values of surface area and/or volume, gradient(s) of surface area and/or volume, mathematical transformation of surface area and/or volume, directionality of a region of plaque, and/or the like. In some embodiments, the system can be configured to track the progression and/or regression of disease by analyzing vascular morphology. For example, in some embodiments, the system can be configured to analyze and/or track the effects of the plaque on the outer vessel wall getting bigger or smaller, the effects of the plaque on the inner vessel lumen getting smaller or bigger, and/or the like. 
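The dichotomous stabilization/destabilization read described above can be sketched with a small tolerance band; the band itself is an assumption, added only to absorb image noise:

```python
def track_density_change(mean_hu_scan1, mean_hu_scan2, tolerance=5.0):
    """Classify the change in mean density of a plaque region between
    two scans: an increase suggests stabilization, a decrease
    destabilization."""
    delta = mean_hu_scan2 - mean_hu_scan1
    if delta > tolerance:
        return "stabilization"
    if delta < -tolerance:
        return "destabilization"
    return "no significant change"
```

The same comparison applied per voxel, rather than per region, would support the continuous tracking the text also mentions.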
In some embodiments, the system can be configured to utilize a normalization device, such as those described herein, to account for differences in scan results (such as for example density values, etc.) between different scanners, scan parameters, and/or the like.

Global Ischemia Index

Some embodiments of the systems, devices, and methods described herein are configured to determine a global ischemia index that is representative of risk of ischemia for a particular subject. For example, in some embodiments, the system is configured to generate a global ischemia index for a subject based at least in part on analysis of one or more medical images and/or contributors of ischemia as well as consequences and/or associated factors to ischemia along the temporal ischemic cascade. In some embodiments, the generated global ischemia index can be used by the systems, methods, and devices described herein for determining and/or predicting the outcome of one or more treatments and/or generating or guiding a recommended medical treatment, therapy, medication, and/or procedure for the subject. In particular, in some embodiments, the systems, devices, and methods described herein can be configured to automatically and/or dynamically analyze one or more medical images and/or other data to identify one or more features, such as plaque, fat, and/or the like, for example using one or more machine learning, artificial intelligence (AI), and/or regression techniques. In some embodiments, one or more features identified from medical image data can be inputted into an algorithm, such as a second-tier algorithm which can be a regression algorithm or multivariable regression equation, for automatically and/or dynamically generating a global ischemia index.
In some embodiments, the AI algorithm for determining a global ischemia index can be configured to utilize one or more variables as input, such as different temporal stages of the ischemia cascade as described herein, and compare the same to an output, such as myocardial blood flow, as a ground truth. In some embodiments, the output, such as myocardial blood flow, can be indicative of the presence or absence of ischemia as a binary measure and/or one or more gradations of ischemia, such as none, mild, moderate, severe, and/or the like. In some embodiments, the system can be configured to utilize a normalization device, such as those described herein, to account for differences in scan results (such as for example density values, etc.) between different scanners, scan parameters, and/or the like. In some embodiments, by utilizing one or more computer-implemented algorithms, such as for example one or more machine learning and/or regression techniques, the systems, devices, and methods described herein can be configured to analyze one or more medical images and/or other data to generate a global ischemia index and/or a recommended treatment or therapy within a clinically reasonable time, such as for example within about 1 minute, about 2 minutes, about 3 minutes, about 4 minutes, about 5 minutes, about 10 minutes, about 20 minutes, about 30 minutes, about 40 minutes, about 50 minutes, about 1 hour, about 2 hours, about 3 hours, and/or within a time period defined by two of the aforementioned values. In generating the global ischemia index, in some embodiments, the systems, devices, and methods described herein are configured to: (a) temporally integrate one or more variables along the “ischemic” pathway and weight their input differently based upon their temporal sequence in the development and worsening of coronary ischemia; and/or (b) integrate the contributors, associated factors, and consequences of ischemia to improve diagnosis of ischemia.
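A minimal sketch of point (a), temporally weighted integration of ischemic-cascade variables into a single index: the variable names, weights, and the logistic squashing below are illustrative placeholders, not coefficients from the disclosure:

```python
import math

def global_ischemia_index(stage_scores, stage_weights):
    """Weighted combination of variables along the ischemic pathway,
    mapped to (0, 1) with a logistic function. Earlier or later stages
    of the cascade can be emphasized simply by giving them larger weights."""
    z = sum(stage_weights[name] * score for name, score in stage_scores.items())
    return 1.0 / (1.0 + math.exp(-z))
```

In practice, the text indicates the weights would be fitted (for example by multivariable regression) against a ground truth such as measured myocardial blood flow.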
Furthermore, in some embodiments, the systems, devices, and methods described herein transcend analysis beyond just the coronary arteries or just the left ventricular myocardium, and instead can include a combination of one or more of: coronary arteries; coronary arteries after nitroglycerin or vasodilator administration; relating coronary arteries to the fractional myocardial mass; non-coronary cardiac examination; relationship of the coronary-to-non-coronary cardiac; and/or non-cardiac examinations. In addition, in some embodiments, the systems, devices, and methods described herein can be configured to determine the fraction of myocardial mass or subtended myocardial mass to vessel or lumen volume, for example in combination with any of the other features described herein such as the global ischemia index, to further determine and/or guide a recommended medical treatment or procedure, such as revascularization, stenting, surgery, medication such as statins, and/or the like. As such, in some embodiments, the systems, devices, and methods described herein are configured to evaluate ischemia and/or provide recommended medical treatment for the same in a manner that does not currently exist today, accounting for the totality of information contributing to ischemia. In some embodiments, the system can be configured to differentiate between micro and macro vascular ischemia, for example based on analysis of one or more of epicardial coronaries, measures of myocardium densities, myocardium mass, volume of epicardial coronaries, and/or the like. In some embodiments, by differentiating between micro and macro vascular ischemia, the system can be configured to generate different prognostic and/or therapeutic approaches based on such differentiation.
In some embodiments, when a medical image(s) of a patient is obtained, such as for example using CT, MRI, and/or any other modality, not only information relating to coronary arteries but other information is also obtained, which can include information relating to the vascular system and/or the rest of the heart and/or chest area that is within the frame of reference. While certain technologies may simply focus on the information relating to coronary arteries from such medical scans, some embodiments described herein are configured to leverage more of the information that is inherently obtained from such images to obtain a more global indication of ischemia and/or use the same to generate and/or guide medical therapy. In particular, in some embodiments, the systems, devices, and methods described herein are configured to examine both the contributors as well as consequences and associated factors to ischemia, rather than focusing only on either contributors or consequences. In addition, in some embodiments, the systems, devices, and methods described herein are configured to consider the entirety and/or a portion of the temporal sequence of ischemia or the “ischemic pathway.” Moreover, in some embodiments, the systems, devices, and methods described herein are configured to consider the non-coronary cardiac consequences as well as the non-cardiac associated factors that contribute to ischemia. Further, in some embodiments, the systems, devices, and methods described herein are configured to consider the comparison of pre- and post-coronary vasodilation. Furthermore, in some embodiments, the systems, devices, and methods described herein are configured to consider a specific list of variables, rather than a general theme, appropriately weighting their contribution to ischemia.
Also, in some embodiments, the systems, devices, and methods described herein can be validated against multiple “measurements” of ischemia, including absolute myocardial blood flow, myocardial perfusion, and/or flow ratios. Generally speaking, ischemia diagnosis is currently evaluated by either stress tests (myocardial ischemia) or flow ratios in the coronary artery (coronary ischemia), the latter of which can include fractional flow reserve, instantaneous wave-free pressure ratio, hyperemic resistance, coronary flow, and/or the like. However, coronary ischemia can be thought of as only an indirect measure of what is going on in the myocardium, and myocardial ischemia can be thought of as only an indirect measure of what is going on in the coronary arteries. Further, certain tests measure only individual components of ischemia, such as contributors of ischemia (such as stenosis) or sequelae of ischemia (such as reduced myocardial perfusion or blood flow). However, there are numerous other contributors to ischemia beyond stenosis, numerous associated factors that increase likelihood of ischemia, and many other early and late consequences of ischemia. One technical shortcoming of such existing techniques is that if you only look at factors that contribute or are associated with ischemia, then you are always too early—i.e., in the pre-ischemia stage. Conversely, if you only look at factors that are consequences/sequelae of ischemia, then you are always too late—i.e., in the post-ischemia stage. And ultimately, if you do not look at everything (including associative factors, contributors, early and late consequences), you will not understand where an individual exists on the continuum of coronary ischemia.
This may have very important implications in the type of therapy an individual should undergo—such as for example medical therapy, intensification of medical therapy, coronary revascularization by stenting, and/or coronary revascularization by coronary artery bypass surgery. As such, in some embodiments described herein, the systems, methods, and devices are configured to generate or determine a global ischemia index for a particular patient based at least in part on analysis of one or more medical images or data of the patient, wherein the generated global ischemia index is a measure of ischemia for the patient along the continuum of coronary ischemia or the ischemic cascade as described in further detail below. In other words, in some embodiments, unlike in existing technologies or techniques, the global ischemia index generated by the system can be indicative of a stage or risk or development of ischemia of a particular patient along the continuum of coronary ischemia or the ischemic cascade. Further, there can be a relationship between the things that contribute/cause ischemia and the consequences/sequelae of ischemia that occur in a continuous and overlapping fashion. Thus, it can be much more accurate to identify ischemic individuals by combining various factors that contribute/cause ischemia with factors that are consequences/sequelae of ischemia. As such, in some embodiments described herein, the systems, devices, and methods are configured to analyze one or more associative factors, contributors, as well as early and late consequences of ischemia in generating a global ischemia index, which can provide a more global indication of the risk of ischemia. 
Further, in some embodiments described herein, the systems, devices, and methods are configured to use such generated global ischemia index to determine and/or guide a type of therapy an individual should undergo, such as for example medical therapy, intensification of medical therapy, coronary revascularization by stenting, and/or coronary revascularization by coronary artery bypass surgery. As discussed herein, in some embodiments, the systems, devices, and methods are configured to generate a global ischemia index indicative and/or representative of a risk of ischemia for a particular subject based on one or more medical images and/or other data. More specifically, in some embodiments, the system can be configured to generate a global ischemia index as a measurement of myocardial ischemia. In some embodiments, the generated global ischemia index provides a much more accurate and/or direct measurement of myocardial ischemia compared to existing techniques. Ischemia, by its definition, is an inadequate blood supply to an organ or part of the body. By this definition, the diagnosis of ischemia can be best performed by examining the relationship of the coronary arteries (blood supply) to the heart (organ or part of the body). However, this is not the case, as current generation tests measure either the coronary arteries (e.g., FFR, iFR) or the heart (e.g., stress testing by nuclear SPECT, PET, CMR or echo). Because current generation tests fail to examine the relationship of the coronary arteries to the heart, they do not account for the temporal sequence of events that occurs in the evolution of ischemia (from none-to-some, as well as from mild-to-moderate-to-severe) or the “ischemic pathway,” as will be described in more detail herein.
Quantifying the relationship of the coronary arteries to the heart and other non-coronary structures to the manifestation of ischemia, as well as the temporal findings associated with the stages of ischemia in the ischemic cascade, can improve our accuracy of diagnosis—as well as our understanding of ischemia severity—in a manner not possible with current generation tests. As discussed above, no test currently exists for directly measuring ischemia; rather, existing tests only measure certain specific factors or surrogate markers associated with ischemia, such as for example hypoperfusion or fractional flow reserve (FFR) or wall motion abnormalities. In other words, the current approaches to ischemia evaluation are entirely too simplistic and do not consider all of the variables. Ischemia has historically been “measured” by stress tests. The possible stress tests that exist include: (a) exercise treadmill ECG testing without imaging; (b) stress testing by single photon emission computed tomography (SPECT); (c) stress testing by positron emission tomography (PET); (d) stress testing by computed tomography perfusion (CTP); (e) stress testing by cardiac magnetic resonance (CMR) perfusion; and (f) stress testing by echocardiography. Also, SPECT, PET, CTP and CMR can measure relative myocardial perfusion, in that you compare the most normal appearing portion of the left ventricular myocardium to the abnormal-appearing areas. PET and CTP can have the added capability of measuring absolute myocardial blood flow and using these quantitative measures to assess the normality of blood supply to the left ventricle. In contrast, exercise treadmill ECG testing measures ST-segment depression as an indirect measure of subendocardial ischemia (reduced blood supply to the inner portion of the heart muscle), while stress echocardiography evaluates the heart for stress-induced regional wall motion abnormalities of the left ventricle. 
Abnormal relative perfusion, absolute myocardial blood flow, ST segment depression and regional wall motion abnormalities occur at different points in the “ischemic pathway.” Furthermore, in contrast to myocardial measures of the left ventricle, alternative methods to determine ischemia involve direct evaluation of the coronary arteries with pressure or flow wires. The two most common measurements are fractional flow reserve (FFR) and instantaneous wave-free ratio (iFR). These techniques can compare the pressure distal to a given coronary stenosis to the pressure proximal to the stenosis. While easy to understand and potentially intuitive, these techniques do not account for important parameters that can contribute to ischemia, including diffuseness of “mild” stenoses and types of atherosclerosis causing stenosis; and these techniques take into account neither the left ventricle in whole nor the % left ventricle subtended by a given artery. In some embodiments, the global ischemia index is a measure of myocardial ischemia, and leverages the quantitative information regarding the contributors, associated factors and consequences of ischemia. Further, in some embodiments, the system uses these factors to augment ischemia prediction by weighting their contribution accordingly. In some embodiments, the global ischemia index aims to serve as a direct measure of both myocardial perfusion and coronary pressure and to integrate these findings to improve ischemia diagnosis. In some embodiments, unlike existing ischemia “measurement” techniques that focus only on a single factor or a single point in the ischemic pathway, the systems, devices, and methods described herein are configured to analyze and/or use as inputs one or more factors occurring at different points in the ischemic pathway in generating the global ischemia index.
In other words, in some embodiments, the systems, devices, and methods described herein are configured to take into account the whole temporal ischemic cascade in generating a global ischemia index for assessing the risk of ischemia and/or generating a recommended treatment or therapy for a particular subject. FIG. 20A illustrates one or more features of an example ischemic pathway. While the ischemic pathway is not definitively proven, it is thought to be as shown in FIG. 20A. Having said this, this ischemic pathway may not actually occur in this exact sequence. The ischemic pathway may in fact occur in a different order, or many of the events may occur simultaneously and overlap. Nonetheless, the different points along the ischemic pathway can occur at different points in time, thereby adding a temporal aspect in the development of ischemia that some embodiments described herein consider. As illustrated in FIG. 20A, the ischemic pathway can illustrate different conditions that can occur when you have a blockage in a heart artery that reduces blood supply to the heart muscle. In other words, the ischemic pathway can illustrate a sequence of pathophysiologic events caused by coronary artery disease. As illustrated in FIG. 20A, ischemia can occur or gradually develop in a number of different steps rather than as a binary event. The ischemic pathway illustrates different conditions that may arise as a patient gets more and more ischemic. Different existing tests can show ischemia at different stages along the ischemic pathway. For example, a nuclear stress test can show ischemia sooner than an echo test, because nuclear imaging probes hypoperfusion, which is an earlier event in the ischemic pathway, whereas stress echocardiography probes a later event such as systolic dysfunction. Further, exercise treadmill ECG testing can show ischemia sometime after an echo stress test, because ECG changes become abnormal later in the cascade.
In addition, a PET scan can measure flow maldistribution, and as such can show signs of ischemia before nuclear stress tests. As such, different tests exist for measuring different conditions and steps along the ischemic cascade. However, there does not exist a global technique that takes into account all of these different conditions that arise throughout the course of the ischemic pathway. As such, in some embodiments herein, the systems, devices and methods are configured to analyze multiple different measures along the temporal ischemic pathway and/or weight them differently in generating a global ischemia index, which can be used to diagnose ischemia and/or provide a recommended therapy and/or treatment. In some embodiments, such multiple measures along the temporal ischemic pathway can be weighted differently in generating the global ischemia index; for example, certain measures that come earlier can be weighted less than those measures that arise later in the ischemic cascade in some embodiments. More specifically, in some embodiments, one or more measures of ischemia can be weighted from less to more heavily in the following general order: flow maldistribution, hypoperfusion, diastolic dysfunction, systolic dysfunction, ECG changes, angina, and/or regional wall motion abnormality. In some embodiments, the system can be configured to take the temporal sequence of the ischemic pathway and integrate and weight various conditions or events accordingly in generating the global ischemia index. Further, in some embodiments, the system can be configured to identify certain conditions or “associative factors” well before actual signs of ischemia occur, such as for example fatty liver, which is associated with diabetes, which in turn is associated with coronary disease.
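The temporal weighting described above can be sketched as a simple weighted logistic score; every weight and the bias below are hypothetical placeholders, chosen only to illustrate that later-stage findings are weighted more heavily than earlier ones:

```python
import math

# Hypothetical temporal weights: findings later in the ischemic cascade are
# weighted more heavily than earlier ones. All numbers are illustrative only.
STAGE_WEIGHTS = {
    "flow_maldistribution": 0.5,
    "hypoperfusion": 1.0,
    "diastolic_dysfunction": 1.5,
    "systolic_dysfunction": 2.0,
    "ecg_changes": 2.5,
    "angina": 3.0,
    "regional_wall_motion_abnormality": 3.5,
}

def global_ischemia_index(findings: dict) -> float:
    """Combine normalized stage findings (each 0..1) into a score in [0, 1]
    via a logistic squash; the bias of -3.0 is an arbitrary placeholder."""
    z = sum(STAGE_WEIGHTS[name] * value for name, value in findings.items()) - 3.0
    return 1.0 / (1.0 + math.exp(-z))
```

Under this illustrative weighting, a subject with only an early finding (e.g., flow maldistribution) scores lower than a subject with late findings such as angina and regional wall motion abnormality.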
In other words, in some embodiments, the system can be configured to integrate one or more factors that are associated, causal, contributive, and/or consequential to ischemia, take into account the temporal sequence of the same and weight them accordingly to generate an index representative of and/or predicting risk of ischemia and/or to generate a recommended treatment. As discussed herein, the global ischemia index generated by some embodiments provides substantial technical advantages over existing techniques for assessing ischemia, which have a number of shortcomings. For example, coronary artery examination alone does not consider the wealth of potential contributors to ischemia, including for example: (1) 3D flow (lumen, stenosis, etc.); (2) endothelial function/vasodilation/vasoconstrictive ability of the coronary artery (e.g., plaque type, burden, etc.); (3) inflammation that may influence the vasodilation/vasoconstrictive ability of the coronary artery (e.g., epicardial adipose tissue surrounding the heart); and/or (4) location (plaques that face the myocardium are further away from the epicardial fat, and may be less influenced by the inflammatory contribution of the fat; plaques that are at the bifurcation, trifurcation or proximal/ostial location may influence the likelihood of ischemia more than those that are not at the bifurcation, trifurcation or proximal/ostial location). One important consideration is that current methods for determining ischemia by CT rely primarily on computational fluid dynamics which, by its definition, does not include fluid-structure interactions (FSI). However, the use of FSI requires the understanding of the material densities of coronary artery vessels and their plaque constituents, which are not well known.
Thus, in some embodiments described herein, one important component is that the lateral boundary conditions in the coronary arteries (lumen wall, vessel wall, plaque) can be known in a relative fashion by setting Hounsfield unit thresholds that represent different material densities or setting absolute material densities to pixels based upon comparison to a known material density (i.e., normalization device in our prior patent). By doing so, and coupling to a machine learning algorithm, some embodiments herein can improve upon the understanding of fluid-structure interactions without having to understand the exact material density, which may inform not only ischemia (blood flow within the vessel) but the ability of a plaque to “fatigue” over time. In addition, in some embodiments, the system is configured to take into account non-coronary cardiac examination and data in addition to coronary cardiac data. The coronary arteries supply blood to not only the left ventricle but also the other chambers of the heart, including the left atrium, the right ventricle and the right atrium. While perfusion is not well measured in these chambers by current generation stress tests, in some embodiments, the end-organ effects of ischemia can be measured in these chambers by determining increases in blood volume or pressure (i.e., size or volumes). Further, if blood volume or pressure increases in these chambers, they can have effects of “backing up” blood flow due to volume overload into the adjacent chambers or vessels. So, as a chain reaction, increases in left ventricular volume may increase volumes in sequential order of: (1) left atrium; (2) pulmonary vein; (3) pulmonary arteries; (4) right ventricle; (5) right atrium; (6) superior vena cava or inferior vena cava. 
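The relative labeling of materials by Hounsfield unit thresholds described above can be sketched as follows; the ranges are illustrative approximations only, as actual boundaries are scanner- and protocol-dependent and would be calibrated, for example, against a normalization device:

```python
# Sketch of relative material labeling by Hounsfield unit (HU) thresholds.
# All range boundaries are illustrative placeholders, not validated values;
# in practice they vary with scanner, contrast protocol, and calibration.
HU_RANGES = [
    ("low_density_non_calcified_plaque", -100, 30),
    ("non_calcified_plaque", 30, 130),
    ("contrast_enhanced_lumen", 130, 350),
    ("calcified_plaque", 350, 3000),
]

def classify_voxel(hu: float) -> str:
    """Assign a voxel to a material class by its HU value (first match wins)."""
    for name, lo, hi in HU_RANGES:
        if lo <= hu < hi:
            return name
    return "other"
```

Labeled voxels of this kind could then serve as the lateral boundary conditions (lumen wall, vessel wall, plaque) referenced above.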
In some embodiments, by taking into account non-coronary cardiac examination, the system can be configured to differentiate the role of ischemia on the heart chambers based upon how “upstream” or “downstream” they are in the ischemic pathway. Moreover, in some embodiments, the system can be configured to take into account the relationship of coronary arteries and non-coronary cardiac examination. Existing methods of ischemia determination limit their examination to either the coronary arteries (e.g., FFR, iFR) or the left ventricular myocardium. However, in some embodiments herein, the relationship of the coronary arteries with the heart chambers may act synergistically to improve our diagnosis of ischemia. Further, in some embodiments, the system can be configured to take into account non-cardiac examination. At present, no method of coronary/myocardial ischemia determination accounts for the effects of clinical contributors (e.g., hypertension, diabetes) on the likelihood of ischemia. However, these clinical contributors can manifest several image-based end-organ effects which may increase the likelihood of an individual to manifest ischemia. These can include image-based signs such as aortic dimension (aneurysms are a common end-organ effect of hypertension) and/or non-alcoholic steatohepatitis (fatty liver is a common end-organ effect of diabetes or pre-diabetes). As such, in some embodiments, the system can be configured to account for these features to augment the likelihood of ischemia diagnosis in a scan-specific, individualized manner. Furthermore, at present, no method of myocardial ischemia determination incorporates other imaging findings that may not be ascertainable by a single method, but can be determined through examination by other methods.
For example, the ischemia pathway is often thought to occur, in sequential order, from metabolic alterations (laboratory tests), perfusion abnormalities (stress perfusion), diastolic dysfunction (echocardiogram), systolic dysfunction (echocardiogram or stress test), ECG changes (ECG) and then angina (chest pain, human patient report). In some embodiments, the system can be configured to integrate these factors with the image-based findings of the CT scan and allow for improvements in ischemia determination by weighting these variables in accordance with their stage of the ischemic cascade. As described herein, in some embodiments, the systems, methods, and devices are configured to generate a global ischemia index to diagnose ischemia. In some embodiments, the global ischemia index considers the totality of findings that contribute to ischemia, including, for example, one or more of: coronary arteries+nitroglycerin/vasodilator administration+relating coronary arteries to the fractional myocardial mass+non-coronary cardiac examination+relationship of the coronary-to-non-coronary cardiac+non-cardiac examinations, and/or a subset thereof. In some embodiments, the global ischemia index provides weighted increases of variables to contribution of ischemia based upon where the image-based finding is in the pathophysiology of ischemia. In some embodiments, in generating the global ischemia index, the system is configured to input into a regression model one or more factors that are associative, contributive, causal, and/or consequential to ischemia to optimally diagnose whether a subject is ischemic or not. FIG. 20B is a block diagram depicting one or more contributors and one or more temporal sequences of consequences of ischemia utilized by an example embodiment(s) described herein. As illustrated in FIG. 20B, in some embodiments, the system can be configured to analyze a number of factors, including contributors, associated factors, causal factors, and/or consequential factors of ischemia and/or use the same as input for generating the global ischemia index. Some of such factors can include those conditions shown in FIG. 20B. For example, signs of a fatty liver and/or emphysema in the lungs can be associated factors used by the system as inputs for generating the global ischemia index. Some examples of contributors used as an input(s) by the system can include the inability to vasodilate with nitric oxide and/or nitroglycerin, low density non-calcified plaque, a small artery, and/or the like. Some examples of early consequences of ischemia used as an input(s) by the system can include reduced perfusion in the heart muscle and an increase in the volume of the heart. An example of late consequences of ischemia used as an input(s) by the system can include blood starting to back up into other chambers of the heart in addition to the left ventricle. In some embodiments, the global ischemia index accounts for the direct contributors to ischemia, the early consequences of ischemia, the late consequences of ischemia, the associated factors with ischemia and other test findings in relation to ischemia. In some embodiments, one or more of these factors can be identified and/or derived automatically, semi-automatically, and/or dynamically using one or more algorithms, such as a machine learning algorithm. Some example algorithms for identifying such features are described in more detail below. Without such trained algorithms, it can be difficult, if not impossible, to take into account all of these factors in generating the global ischemia index within a reasonable time. In some embodiments, these factors, weighted differently and appropriately, can improve diagnosis of ischemia. FIG. 20C is a block diagram depicting one or more features of an example embodiment(s) for determining ischemia by weighting different factors differently. In some embodiments, in generating the global ischemia index, the system is configured to take into account the temporal aspect of the ischemic cascade and weight one or more factors according to the temporal aspect, for example where early signs of ischemia can be weighted less heavily compared to later signs of ischemia. In some embodiments, the system can automatically and/or dynamically determine the different weights for each factor, for example using a regression model. In some embodiments, the system can be configured to derive one or more appropriate weighting factors based on previous analysis of data to determine which factor should be more or less heavily weighted compared to others. In some embodiments, a user can guide and/or otherwise provide input for weighting different factors. As described herein, in some embodiments, the global ischemia index can be generated by a machine learning algorithm and/or a regression algorithm that condenses this multidimensional information into an output of “ischemia” or “no ischemia” when compared to a “gold standard” of ischemia, as measured by myocardial blood flow, myocardial perfusion or flow ratios. In some embodiments, the system can be configured to output an indication of the gradation of ischemia, such as none, mild, moderate, severe, and/or the like. In some embodiments, the output indication of ischemia can be on a continuous scale. FIG. 20D is a block diagram depicting one or more features of an example embodiment(s) for calculating a global ischemia index. As illustrated in FIG.
20D, in some embodiments, the system can be configured to validate the outputted global ischemia index against absolute myocardial blood flow, which can be measured for example by PET and/or CT scans to measure different regions of the heart to see if there are different flows of blood within different regions. As absolute myocardial blood flow can provide an absolute value of volume per time, in some embodiments, the system can be configured to compare the absolute myocardial blood flow of one region to another region, which would not be possible using relative measurements, such as for example using nuclear stress testing. As discussed herein, in some embodiments, the systems, devices, and methods can be configured to utilize a machine learning algorithm and/or regression algorithm for analyzing and/or weighting different factors for generating the global ischemia index. By doing so, in some embodiments, the system can be configured to take into account one or more statistical and/or machine learning considerations. More specifically, in some embodiments, the system can be configured to deliberately duplicate the contribution of particular variables. For example, in some embodiments, non-calcified plaque (NCP), low density non-calcified plaque (LD-NCP), and/or high-risk plaque (HRP) may all contribute to ischemia. In traditional statistics, collinearity could be a reason to select only one out of these three variables, but by utilizing machine learning in some embodiments, the system may allow for data driven exploration of the contribution of multiple variables, even if they share a specific feature. In addition, in some embodiments, the system may take into account certain temporal considerations when training and/or applying an algorithm for generating the global ischemia index. 
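The derivation of data-driven weights referenced above can be sketched with a minimal logistic regression fit by gradient descent; the training examples below are fabricated toy data, and the two features stand in for an early-stage and a late-stage finding along the ischemic cascade:

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Minimal gradient-descent logistic regression: learns one weight per
    feature (e.g., findings along the ischemic cascade) from labeled data."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            for j, xj in enumerate(xi):
                w[j] -= lr * err * xj
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Predicted probability of ischemia for a feature vector."""
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy, fabricated examples: [early_finding, late_finding] -> ischemia (0/1).
X = [[1, 0], [0, 0], [1, 1], [0, 1], [1, 1], [0, 0]]
y = [0, 0, 1, 1, 1, 0]
w, b = fit_logistic(X, y)
```

In this toy setup the fitted weight on the late-stage feature ends up larger than the weight on the early-stage feature, consistent with the heavier weighting of later findings described above.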
For example, in some embodiments, the system can be configured to give greater weight to consequences/sequelae rather than causes/contributors, as the consequences/sequelae have already occurred. In addition, in some situations, coronary vasodilation is induced before a coronary CT scan because it allows the coronary arteries to be maximal in their size/volume. Nitroglycerin is an endothelium-independent vasodilator as compared to, for example, nitric oxide, which is an endothelium-dependent vasodilator. As nitroglycerin-induced vasodilation occurs in the coronary arteries—and because a “timing” iodine contrast bolus is often administered before the actual coronary CT angiogram—comparison of the volume of coronary arteries before and after a nitroglycerin administration may allow a direct evaluation of coronary vasodilatory capability, which may significantly augment accurate ischemia diagnosis. Alternatively, an endothelium-dependent vasodilator—like nitric oxide or carbon dioxide—may allow for augmentation of coronary artery size in a manner that can be either replaced or coupled to endothelium-independent vasodilation (by nitroglycerin) to maximize understanding of the ability of coronary arteries to vasodilate. In some embodiments, the system can be configured to measure vasodilatory effects, for example by measuring the diameter of one or more arteries before and/or after administration of nitroglycerin and/or nitric oxide, and use such vasodilatory effects as a direct measurement or indication of ischemia. Alternatively and/or in addition to the foregoing, in some embodiments, the system can be configured to measure such vasodilatory effects and use the same as an input in determining or generating the global ischemia index and/or developing a recommended medical therapy or treatment for the subject. Further, in some embodiments, the system can be configured to relate the coronary arteries to the heart muscle that they provide blood to.
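The pre- and post-nitroglycerin comparison described above can be sketched as a fractional volume change; the 5% cutoff for a blunted response is a hypothetical placeholder:

```python
# Sketch of pre/post-vasodilator comparison: fractional change in coronary
# volume as a simple proxy for vasodilatory capability. The 5% cutoff for a
# "blunted" response is a hypothetical placeholder, not a clinical value.

def vasodilatory_response(volume_pre_mm3: float, volume_post_mm3: float) -> float:
    """Fractional increase in coronary volume after vasodilator administration."""
    return (volume_post_mm3 - volume_pre_mm3) / volume_pre_mm3

def blunted_response(volume_pre_mm3: float, volume_post_mm3: float,
                     cutoff: float = 0.05) -> bool:
    """Flag a vasodilatory response below the (illustrative) cutoff."""
    return vasodilatory_response(volume_pre_mm3, volume_post_mm3) < cutoff
```

Either value could be used directly as an indication of impaired vasodilation or fed as one input into the global ischemia index.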
In other words, in some embodiments, the system can be configured to take into account fractional myocardial mass when generating a global ischemia index. For ischemia diagnosis, stress testing can be, at present, limited to the left ventricle. For example, in stress echocardiogram (ultrasound), the effects of stress-induced left ventricular regional wall motion abnormalities are examined, while in SPECT, PET and cardiac Mill, the effects of stress-induced left ventricular myocardial perfusion are examined. However, no currently existing technique relates the size (volume), geometry, path and relation to other vessels with the % fractional myocardial mass subtended by that artery. Further, one assumes that the coronary artery distribution is optimal but, in many people, it may not be. Therefore, understanding an optimization platform to compute optimal blood flow through the coronary arteries may be useful in guiding treatment decisions. As such, in some embodiments, the system is configured to determine the fractional myocardial mass or the relationship of coronary arteries to the left ventricular myocardium that they subtend. In particular, in some embodiments, the system is configured to determine and/or tack into account the subtended mass of myocardium to the volume of arterial vessel. Historically, myocardial perfusion evaluation for myocardial ischemia has been performed using stress tests, such as nuclear SPECT, PET, cardiac Mill or cardiac CT perfusion. These methods have relied upon a 17-segment myocardial model, which classifies perfusion defects by location. There can be several limitations to this, including: (1) assuming that all 17 segments have the same size; (2) assuming that all 17 segments have the same prognostic importance; and (3) does not relate the myocardial segments to the coronary arteries that provide blood supply to them. 
As such, to address such shortcomings, in some embodiments, the system can be configured to analyze fractional myocardial mass (FMM). Generally speaking, FMM aims to relate the coronary arteries to the amount of myocardium that they subtend. This can have important implications for prognostication and treatment. For example, a patient may have a 70% stenosis in an artery, which has been a historical cut point where coronary revascularization (stenting) is considered. However, there may be very important prognostic and therapeutic implications for patients who have a 70% stenosis in an artery that subtends 1% of the myocardium vs. a 70% stenosis in an artery that subtends 15% of the myocardium. This FMM has been historically calculated using a “stem-and-crown” relationship between the myocardium on CT scans and the coronary arteries on CT scans and has been reported to have the following relationship: M=kL^(3/4), where M=mass, k=constant, and L=length. However, this relationship, while written about quite frequently, has not been validated extensively. Nor have there been any cut points that can effectively guide therapy. The guidance of therapy can come in many regards, including: (1) decision to perform revascularization: high FMM, perform revascularization to improve event-free survival; low FMM, medical therapy alone without revascularization; (2) different medical therapy regimens: high FMM, give several medications to improve event-free survival; low FMM, give few medications; (3) prognostication: high FMM, poor prognosis; low FMM, good prognosis. Further, in the era of 3D imaging, the M=kL^(3/4) relationship should be expanded to an M=kV^n relationship, where V=volume of the vessel or volume of the lumen.
As such, in some embodiments, the system is configured to (1) describe the allometric scaling law in 3 dimensions, i.e., M=kV^n; (2) use FMM as a cut point to guide coronary revascularization; and/or (3) use FMM cut points for clinical decision making, including (a) use of medications vs. not; (b) different types of medications (cholesterol lowering, vasodilators, heart rate slowing medications, etc.) based upon FMM cut points; (c) number of medications based upon FMM cut points; and/or (d) prognostication based upon FMM cut points. In some embodiments, the use of FMM cut points by 3D FMM calculations can improve decision making in a manner that improves event-free survival. As described above, in some embodiments, the system can be configured to utilize one or more contributors or causes of ischemia as inputs for generating a global ischemia index. An example of a contributor or cause of ischemia that can be utilized as input and/or analyzed by the system can include vessel caliber. In particular, in some embodiments, the system can be configured to analyze and/or utilize as an input the percentage diameter stenosis, wherein the greater the stenosis, the more likely the ischemia. In addition, in some embodiments, the system can be configured to analyze and/or utilize as an input lumen volume, wherein the smaller the lumen volume, the more likely the ischemia. In some embodiments, the system can be configured to analyze and/or utilize as an input lumen volume indexed to % fractional myocardial mass, body surface area (BSA), body mass index (BMI), left ventricle (LV) mass, or overall heart size, wherein the smaller the lumen volume, the more likely the ischemia. In some embodiments, the system can be configured to analyze and/or utilize as an input vessel volume, wherein the smaller the vessel volume, the more likely the ischemia.
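The allometric scaling relationship M=kV^n referenced above can be estimated from paired volume/mass measurements by ordinary least squares in log-log space. The following is a minimal sketch (function and variable names are illustrative; no validated values of k or n are implied):

```python
import math

def fit_allometric(volumes, masses):
    # Fit M = k * V**n by linear least squares on log(M) = log(k) + n*log(V).
    # volumes: arterial lumen/vessel volumes (e.g., mm^3)
    # masses:  subtended fractional myocardial masses (e.g., g)
    xs = [math.log(v) for v in volumes]
    ys = [math.log(m) for m in masses]
    count = len(xs)
    mean_x = sum(xs) / count
    mean_y = sum(ys) / count
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    n = sxy / sxx                      # allometric exponent
    k = math.exp(mean_y - n * mean_x)  # scaling constant
    return k, n
```

Fitting in log space turns the power law into a straight line, so the exponent n is simply the regression slope; with a classic "three-quarters" relationship, n would come out near 0.75.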
In some embodiments, the system can be configured to analyze and/or utilize as an input minimal luminal diameter (MLD), minimal luminal area (MLA), and/or a ratio between MLD and MLA, such as MLD/MLA. Another example contributor or cause of ischemia that can be utilized as input and/or analyzed by the system can include plaque, which may have marked effects on the ability of an artery to vasodilate/vasoconstrict. In particular, in some embodiments, the system can be configured to analyze and/or utilize as an input non-calcified plaque (NCP), which may cause greater endothelial dysfunction and inability to vasodilate to hyperemia. In some embodiments, the system may utilize one or more arbitrary cutoffs for analyzing NCP, such as binary, trinary, and/or the like for necrotic core, fibrous, and/or fibrofatty plaque. In some embodiments, the system may utilize continuous density measures for NCP. Further, in some embodiments, the system may analyze NCP for dual energy, monochromatic, and/or material basis decomposition. In some embodiments, the system can be configured to analyze and/or identify plaque geometry and/or plaque heterogeneity and/or other radiomics features. In some embodiments, the system can be configured to analyze and/or identify plaque facing the lumen and/or plaque facing epicardial fat. In some embodiments, the system can be configured to derive and/or identify imaging-based information, which can be provided directly to the algorithm for generating the global ischemia index. In some embodiments, the system can be configured to analyze and/or utilize as an input low density NCP, which may cause greater endothelial dysfunction and inability to vasodilate to hyperemia, for example using one or more specific techniques described above in relation to NCP. In some embodiments, the system can be configured to analyze and/or utilize as an input calcified plaque (CP), which may cause more laminar flow, less endothelial dysfunction and less ischemia.
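As a minimal illustration of the caliber metrics just described, percent diameter stenosis and the MLD/MLA ratio can be computed as follows (function names, argument names, and units are illustrative assumptions):

```python
def percent_diameter_stenosis(mld_mm, reference_diameter_mm):
    # Percent diameter stenosis from the minimal luminal diameter (MLD)
    # and a normal reference diameter; higher values suggest ischemia.
    return 100.0 * (1.0 - mld_mm / reference_diameter_mm)

def mld_mla_ratio(mld_mm, mla_mm2):
    # Ratio of minimal luminal diameter to minimal luminal area, one of
    # the candidate caliber inputs named above.
    return mld_mm / mla_mm2
```

For example, an MLD of 0.9 mm against a 3.0 mm reference diameter corresponds to a 70% diameter stenosis, the historical revascularization cut point mentioned earlier.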
In some embodiments, the system may utilize one or more arbitrary cutoffs, such as 1K plaque (plaques >1000 Hounsfield units), and/or continuous density measures for CP. In some embodiments, the system can be configured to analyze and/or utilize as an input the location of plaque. In particular, the system may determine that myocardial facing plaque may be associated with reduced ischemia due to its proximity to myocardium (e.g., myocardial bridging rarely has atherosclerosis). In some embodiments, the system may determine that pericardial facing plaque may be associated with increased ischemia due to its proximity to peri-coronary adipose tissue. In some embodiments, the system may determine that bifurcation and/or trifurcation lesions may be associated with increased ischemia due to disruptions in laminar flow. In some embodiments, visualization of three-dimensional plaques can be generated and/or provided by the system to a user to improve the human observer's understanding of where plaques are in relationship to each other and/or to the myocardium and the pericardium. In a similar vein, for example, the system may be configured to allow the visualization of all the plaques on a single 2D image. As such, in some embodiments, the system can allow for all of the plaques to be visualized in a single view, with color-coded and/or shadowed labels and/or other labels applied to plaques depending on whether they are in the 2D field of view, or whether they are further away from the 2D field of view. This can be analogous to the maximum intensity projection view, which highlights the lumen that is filled with contrast agent, but applies an intensity projection (maximum, minimum, average, ordinal) to plaques of different distances from the field of view or of different densities. In some embodiments, the system can be configured to visualize plaque using maximum intensity projection (MIP) techniques.
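The intensity-projection idea described above can be sketched with NumPy, collapsing a 3D volume of Hounsfield-unit voxels into one 2D view; the function name and the dictionary of projection modes are illustrative assumptions, and an ordinal projection would require additional logic:

```python
import numpy as np

# Projection operators analogous to the maximum/minimum/average
# intensity projections described above.
PROJECTIONS = {
    "maximum": np.max,
    "minimum": np.min,
    "average": np.mean,
}

def intensity_projection(volume_hu, mode="maximum", axis=0):
    # Collapse a 3D CT volume into a single 2D image by projecting
    # along one axis, so plaques at different depths can be inspected
    # in one view.
    return PROJECTIONS[mode](volume_hu, axis=axis)
```

A MIP keeps the brightest voxel along each ray (highlighting contrast-filled lumen and calcium), while a minimum or average projection emphasizes lower-density components such as non-calcified plaque.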
In some embodiments, the system can be configured to visualize plaque in 2D, 3D, and/or 4D, for example using MIP techniques and/or other techniques, such as volume rendering techniques (VRT). More specifically, for 4D, in some embodiments, the system can be configured to visualize progression of plaque in terms of time. In some embodiments, the system can be configured to visualize on an image and/or on a video and/or other digital support the lumen and/or the addition of plaque in 2D, 3D, and/or 4D. In some embodiments, the system can be configured to show changes in time or 4D. In some embodiments, the system can be configured to take multiple scans taken from different points in time and/or integrate all or some of the information with therapeutics. In some embodiments, based on the same, the system can be configured to decide on changes in therapy and/or determine prognostic information, for example assessing for therapy success. Another example contributor or cause of ischemia that can be utilized as input and/or analyzed by the system can include fat. In some embodiments, the system can be configured to analyze and/or utilize as an input peri-coronary adipose tissue, which may cause ischemia due to inflammatory properties that cause endothelial dysfunction. In some embodiments, the system can be configured to analyze and/or utilize as an input epicardial adipose tissue, which may be a cause of overall heart inflammation. In some embodiments, the system can be configured to analyze and/or utilize as input epicardial fat and/or radiomics or imaging-based information provided directly to the algorithm, such as for example heterogeneity, density, density change away from the vessel, volume, and/or the like. As described above, in some embodiments, the system can be configured to utilize one or more consequences or sequelae of ischemia as inputs for generating a global ischemia index.
An example consequence or sequelae of ischemia that can be utilized as input and/or analyzed by the system can be related to the left ventricle. For example, in some embodiments, the system can be configured to analyze the perfusion and/or Hounsfield unit density of the left ventricle, which can be global and/or related to the percentage of fractional myocardial mass. In some embodiments, the system can be configured to analyze the mass of the left ventricle, wherein the greater the mass, the greater the potential mismatch between lumen volume and LV mass, which can be global as well as related to the percentage of fractional myocardial mass. In some embodiments, the system can be configured to analyze the volume of the left ventricle, wherein an increase in the left ventricle volume can be a direct sign of ischemia. In some embodiments, the system can be configured to analyze and/or utilize as input density measurements of the myocardium, which can be absolute and/or relative, for example using a sticker or normalization device. In some embodiments, the system can be configured to analyze and/or use as input regional and/or global changes in densities. In some embodiments, the system can be configured to analyze and/or use as input endo, mid-wall, and/or epicardial changes in densities. In some embodiments, the system can be configured to analyze and/or use as input thickness, presence of fat and/or localization thereof, presence of calcium, heterogeneity, radiomic features, and/or the like. Another example consequence or sequelae of ischemia that can be utilized as input and/or analyzed by the system can be related to the right ventricle. For example, in some embodiments, the system can be configured to analyze the perfusion and/or Hounsfield unit density of the right ventricle, which can be global and/or related to the percentage of fractional myocardial mass.
In some embodiments, the system can be configured to analyze the mass of the right ventricle, wherein the greater the mass, the greater the potential mismatch between lumen volume and RV mass, which can be global as well as related to the percentage of fractional myocardial mass. In some embodiments, the system can be configured to analyze the volume of the right ventricle, wherein an increase in the right ventricle volume can be a direct sign of ischemia. Another example consequence or sequelae of ischemia that can be utilized as input and/or analyzed by the system can be related to the left atrium. For example, in some embodiments, the system can be configured to analyze the volume of the left atrium, in which an increased left atrium volume can occur in patients who become ischemic and go into heart failure. Another example consequence or sequelae of ischemia that can be utilized as input and/or analyzed by the system can be related to the right atrium. For example, in some embodiments, the system can be configured to analyze the volume of the right atrium, in which an increased right atrium volume can occur in patients who become ischemic and go into heart failure. Another example consequence or sequelae of ischemia that can be utilized as input and/or analyzed by the system can be related to one or more aortic dimensions. For example, an increased aortic size, as a consequence of long-standing hypertension, may be associated with the end-organ effects of hypertension on the coronary arteries (resulting in more disease) and the LV mass (resulting in more LV mass-coronary lumen volume mismatch). Another example consequence or sequelae of ischemia that can be utilized as input and/or analyzed by the system can be related to the pulmonary veins. For example, for patients with volume overload, engorgement of the pulmonary veins may be a significant sign of ischemia.
As described above, in some embodiments, the system can be configured to utilize one or more associated factors of ischemia as inputs for generating a global ischemia index. An example associated factor of ischemia that can be utilized as input and/or analyzed by the system can be related to the presence of fatty liver or non-alcoholic steatohepatitis, which is a condition that can be diagnosed by placing regions of interest (ROIs) in the liver to measure Hounsfield unit densities. Another example associated factor of ischemia that can be utilized as input and/or analyzed by the system can be related to emphysema, which is a condition that can be diagnosed by placing regions of interest in the lung to measure Hounsfield unit densities. Another example associated factor of ischemia that can be utilized as input and/or analyzed by the system can be related to osteoporosis, which is a condition that can be diagnosed by placing regions of interest in the spine. Another example associated factor of ischemia that can be utilized as input and/or analyzed by the system can be related to mitral annular calcification, which is a condition that can be diagnosed by identifying calcium (e.g., HU>350, etc.) in the mitral annulus. Another example associated factor of ischemia that can be utilized as input and/or analyzed by the system can be related to aortic valve calcification, which is a condition that can be diagnosed by identifying calcium in the aortic valve. Another example associated factor of ischemia that can be utilized as input and/or analyzed by the system can be related to aortic enlargement, which is often seen in hypertension and can reveal an enlargement of the proximal aorta due to long-standing hypertension. Another example associated factor of ischemia that can be utilized as input and/or analyzed by the system can be related to mitral valve calcification, which can be diagnosed by identifying calcium in the mitral valve.
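The region-of-interest Hounsfield-unit measurements described above can be sketched as follows; the 40 HU fatty-liver cut point shown is a hypothetical illustration, not a value taken from the source:

```python
import numpy as np

def mean_hu_in_roi(image_hu, roi_mask):
    # Mean Hounsfield-unit density inside a region of interest, as used
    # for liver (fatty liver), lung (emphysema), or spine (osteoporosis).
    return float(image_hu[roi_mask].mean())

def suggests_fatty_liver(mean_liver_hu, threshold_hu=40.0):
    # Hypothetical cut point: low liver attenuation can indicate hepatic
    # steatosis; the 40 HU threshold here is illustrative only.
    return mean_liver_hu < threshold_hu
```

The same `mean_hu_in_roi` helper would apply to any of the ROI-based associated factors listed above; only the placement of the mask and the direction of the threshold change.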
As discussed herein, in some embodiments, the system can be configured to utilize one or more inputs or variables for generating a global ischemia index, for example by inputting the same into a regression model or other algorithm. In some embodiments, the system can be configured to use as input one or more radiomics features and/or imaging-based deep learning. In some embodiments, the system can be configured to utilize as input one or more of patient height, weight, sex, ethnicity, body surface, previous medication, genetics, and/or the like. In some embodiments, the system can be configured to analyze and/or utilize as input calcium, separate calcium densities, localization of calcium relative to the lumen, volume of calcium, and/or the like. In some embodiments, the system can be configured to analyze and/or utilize as input contrast vessel attenuation. In particular, in some embodiments, the system can be configured to analyze and/or utilize as input average contrast in the lumen in the beginning of a segment and/or average contrast in the lumen at the end of that segment. In some embodiments, the system can be configured to analyze and/or utilize as input average contrast in the lumen from the beginning of the vessel to the beginning of the distal segment of that vessel, for example because the end can be too small in some instances. In some embodiments, the system can be configured to analyze and/or utilize as input plaque heterogeneity. In particular, in some embodiments, the system can be configured to analyze and/or utilize as input calcified plaque volume and/or non-calcified plaque volume. In some embodiments, the system can be configured to analyze and/or utilize as input the standard deviation of one or more of the 3 different components of plaque. In some embodiments, the system can be configured to analyze and/or utilize as input one or more vasodilation metrics.
In particular, in some embodiments, the system can be configured to analyze and/or utilize as input the highest remodeling index of a plaque. In some embodiments, the system can be configured to analyze and/or utilize as input the highest, average, and/or smallest thickness of plaque, for example for its calcified and/or non-calcified components. In some embodiments, the system can be configured to analyze and/or utilize as input the highest remodeling index and/or lumen area. In some embodiments, the system can be configured to analyze and/or utilize as input the lesion length and/or segment length of plaque. In some embodiments, the system can be configured to analyze and/or utilize as input bifurcation lesion, such as for example the presence or absence thereof. In some embodiments, the system can be configured to analyze and/or utilize as input coronary dominance, for example left dominance, right dominance, and/or co-dominance. In particular, in some embodiments, if left dominance, the system can be configured to disregard and/or weight less one or more right coronary metrics. Similarly, if right dominance, the system can be configured to disregard and/or weight less one or more left coronary metrics. In some embodiments, the system can be configured to analyze and/or utilize as input one or more vascularization metrics. In particular, in some embodiments, the system can be configured to analyze and/or utilize as input the volume of the lumen of one or more, some, or all vessels. In some embodiments, the system can be configured to analyze and/or utilize as input the volume of the lumen of one or more secondary vessels, such as for example, non-right coronary artery (non-RCA), left anterior descending artery (LAD) vessel, circumflex (CX) vessel, and/or the like. In some embodiments, the system can be configured to analyze and/or utilize as input the volume of vessel and/or volume of plaque and/or a ratio thereof.
In some embodiments, the system can be configured to analyze and/or utilize as input one or more inflammation metrics. In particular, in some embodiments, the system can be configured to analyze and/or utilize as input the average density of one or more pixels outside a lesion, such as for example 5 pixels, and/or 3 or 4 of those 5 pixels, disregarding the first 1 or 2 pixels. In some embodiments, the system can be configured to analyze and/or utilize as input the average density of one or more pixels outside a lesion, including the first 2/3 of each vessel that is not a lesion or plaque. In some embodiments, the system can be configured to analyze and/or utilize as input one or more pixels outside a lesion and/or the average of the same pixels on a 3 mm section above the proximal right coronary artery (R1) if there is no plaque in that place. In some embodiments, the system can be configured to analyze and/or utilize as input one or more ratios of any factors and/or variables described herein. As described above, in some embodiments, the system can be configured to utilize one or more machine learning algorithms in identifying, deriving, and/or analyzing one or more inputs for generating the global ischemia index, including for example one or more direct contributors to ischemia, early consequences of ischemia, late consequences of ischemia, associated factors with ischemia, and other test findings in relation to ischemia. In some embodiments, one or more such machine learning algorithms can provide fully automated quantification and/or characterization of such factors. As an example, in some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the inferior vena cava from one or more medical images. Measures of the inferior vena cava can be of high importance in patients with right-sided heart failure and tricuspid regurgitation.
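The perivascular averaging described above (e.g., averaging pixel layers outside a lesion while disregarding the innermost layer) can be sketched with a simple binary dilation; the helper names and the minimal 4-connected dilation are illustrative assumptions:

```python
import numpy as np

def dilate(mask, iterations=1):
    # Minimal 4-connected binary dilation (stand-in for a morphology
    # library routine); grows the mask by one pixel layer per iteration.
    m = mask.copy()
    for _ in range(iterations):
        grown = m.copy()
        grown[1:, :] |= m[:-1, :]
        grown[:-1, :] |= m[1:, :]
        grown[:, 1:] |= m[:, :-1]
        grown[:, :-1] |= m[:, 1:]
        m = grown
    return m

def perivascular_attenuation(image_hu, lesion_mask, skip=1, width=4):
    # Mean HU density in a ring of pixel layers outside the lesion,
    # skipping the innermost `skip` layers (e.g., averaging layers 2-5
    # of the first 5, disregarding layer 1, as described above).
    inner = dilate(lesion_mask, iterations=skip)
    outer = dilate(lesion_mask, iterations=skip + width)
    ring = outer & ~inner
    return float(image_hu[ring].mean())
```

In practice the ring would sample peri-coronary adipose tissue, whose attenuation is taken as an inflammation surrogate in the text above.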
In addition, in some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the interatrial septum from one or more medical images. Interatrial septum dimensions can be vital for patients undergoing left-sided transcatheter procedures. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the descending thoracic aorta from one or more medical images. Measures of the descending thoracic aorta can be of critical importance in patients with aortic aneurysms, and for population-based screening in long-time smokers. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the coronary sinus from one or more medical images. Coronary sinus dimensions can be vital for patients with heart failure who are undergoing biventricular pacing. In some embodiments, by analyzing the coronary sinus, the system can be configured to derive all or some of the myocardial blood flow, which can be related to coronary volume and myocardial mass. In addition, in some embodiments, the system can be configured to analyze, derive, and/or identify hypertrophic cardiomyopathy (HCM), other hypertrophies, ischemia, and/or the like to derive ischemia and/or microvascular ischemia. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the anterior mitral leaflet from one or more medical images. For a patient being considered for surgical or transcatheter mitral valve repair or replacement, no method currently exists to measure anterior mitral leaflet dimensions. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the left atrial appendage from one or more medical images.
Left atrial appendage morphologies are linked to stroke in patients with atrial fibrillation, but no automated characterization solution exists today. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the left atrial free wall mass from one or more medical images. No current method exists to accurately measure left atrial free wall mass, which may be important in patients with atrial fibrillation. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the left ventricular mass from one or more medical images. Certain methods of measuring left ventricular hypertrophy as an adverse consequence of hypertension rely upon echocardiography, which employs a 2D estimated formula that is highly imprecise. 3D imaging by magnetic resonance imaging (MRI) or computed tomography (CT) is much more accurate, but current software tools are time-intensive and imprecise. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the left atrial volume from one or more medical images. Determination of left atrial volume can improve diagnosis and risk stratification in patients with and at risk of atrial fibrillation. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the left ventricular volume from one or more medical images. Left ventricular volume measurements can enable determination of individuals with heart failure or at risk of heart failure. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the left ventricular papillary muscle mass from one or more medical images. No method currently exists to measure left ventricular papillary muscle mass.
In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the posterior mitral leaflet from one or more medical images. For patients being considered for surgical or transcatheter mitral valve repair or replacement, no method currently exists to measure posterior mitral leaflet dimensions. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze pulmonary veins from one or more medical images. Measures of pulmonary vein dimensions can be of critical importance in patients with atrial fibrillation, heart failure and mitral regurgitation. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze pulmonary arteries from one or more medical images. Measures of pulmonary artery dimensions can be of critical importance in patients with pulmonary hypertension, heart failure and pulmonary emboli. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the right atrial free wall mass from one or more medical images. No current method exists to accurately measure right atrial free wall mass, which may be important in patients with atrial fibrillation. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the right ventricular mass from one or more medical images. Methods of measuring right ventricular hypertrophy as an adverse consequence of pulmonary hypertension and/or heart failure do not currently exist. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the proximal ascending aorta from one or more medical images.
Aortic aneurysms can require highly precise measurements of the aorta, which are more accurate by 3D techniques such as CT and MRI. At present, current algorithms do not allow for highly accurate automated measurements. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the right atrial volume from one or more medical images. Determination of right atrial volume can improve diagnosis and risk stratification in patients with and at risk of atrial fibrillation. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the right ventricular papillary muscle mass from one or more medical images. No method currently exists to measure right ventricular papillary muscle mass. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the right ventricular volume from one or more medical images. Right ventricular volume measurements can enable determination of individuals with heart failure or at risk of heart failure. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the superior vena cava from one or more medical images. No reliable method exists to date to measure superior vena cava dimensions, which may be important in patients with tricuspid valve insufficiency and heart failure. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, analyze, segment, and/or quantify one or more cardiac structures from one or more medical images, such as the left and right ventricular volume (LVV, RVV), left and right atrial volume (LAV, RAV), and/or left ventricular myocardial mass (LVM).
Further, in some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, analyze, segment, and/or quantify one or more cardiac structures from one or more medical images, such as the proximal ascending and descending aorta (PAA, DA), superior and inferior venae cavae (SVC, IVC), pulmonary artery (PA), coronary sinus (CS), right ventricular wall (RVW), and left atrial wall (LAW). In addition, in some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, analyze, segment, and/or quantify one or more cardiac structures from one or more medical images, such as the left atrial appendage, left atrial wall, coronary sinus, descending aorta, superior vena cava, inferior vena cava, pulmonary artery, right ventricular wall, sinuses of Valsalva, left ventricular volume, left ventricular wall, right ventricular volume, left atrial volume, right atrial volume, and/or proximal ascending aorta. FIG. 20E is a flowchart illustrating an overview of an example embodiment(s) of a method for generating a global ischemia index for a subject and using the same to assist assessment of risk of ischemia for the subject. As illustrated in FIG. 20E, in some embodiments, the system can be configured to access one or more medical images of a subject at block 202, in any manner and/or in connection with any feature described above in relation to block 202. In some embodiments, the system is configured to identify one or more vessels, plaque, and/or fat in the one or more medical images at block 2002. For example, in some embodiments, the system can be configured to use one or more AI and/or ML algorithms and/or other image processing techniques to identify one or more vessels, plaque, and/or fat.
In some embodiments, the system at block 2004 is configured to analyze and/or access one or more contributors to ischemia of the subject, including any contributors to ischemia described herein, for example based on the accessed one or more medical images and/or other medical data. In some embodiments, the system at block 2006 is configured to analyze and/or access one or more consequences of ischemia of the subject, including any consequences of ischemia described herein, including early and/or late consequences, for example based on the accessed one or more medical images and/or other medical data. In some embodiments, the system at block 2008 is configured to analyze and/or access one or more associated factors to ischemia of the subject, including any associated factors to ischemia described herein, for example based on the accessed one or more medical images and/or other medical data. In some embodiments, the system at block 2010 is configured to analyze and/or access one or more results from other testing, such as for example invasive testing, non-invasive testing, image-based testing, non-image based testing, and/or the like. In some embodiments, the system at block 2012 can be configured to generate a global ischemia index based on one or more parameters, such as for example one or more contributors to ischemia, one or more consequences of ischemia, one or more associated factors to ischemia, one or more other testing results, and/or the like. In some embodiments, the system is configured to generate a global ischemia index for the subject by generating a weighted measure of one or more parameters. For example, in some embodiments, the system is configured to weight one or more parameters differently and/or equally. In some embodiments, the system can be configured to weight one or more parameters logarithmically, algebraically, and/or utilizing another mathematical transform.
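As a minimal sketch of the weighted combination at block 2012, assuming each parameter has already been normalized to [0, 1] (the parameter names and weights below are illustrative assumptions, not from the source):

```python
def global_ischemia_index(parameters, weights=None):
    # Weighted combination of normalized ischemia parameters
    # (contributors, consequences, associated factors, other test
    # results). Equal weighting is used when no weights are given.
    if weights is None:
        weights = {name: 1.0 for name in parameters}
    total = sum(weights[name] for name in parameters)
    return sum(weights[name] * value
               for name, value in parameters.items()) / total
```

A logarithmic or other transform, as mentioned above, would simply be applied to each value before the weighted sum.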
In some embodiments, the system is configured to generate a weighted measure using only some or all of the parameters. In some embodiments, at block 2014, the system is configured to verify the generated global ischemia index. For example, in some embodiments, the system is configured to verify the generated global ischemia index by comparison to one or more blood flow parameters such as those discussed herein. In some embodiments, at block 2016, the system is configured to generate user assistance to help a user determine an assessment of risk of ischemia for the subject based on the generated global ischemia index, for example graphically through a user interface and/or otherwise.

CAD Score(s)

Some embodiments of the systems, devices, and methods described herein are configured to generate one or more coronary artery disease (CAD) scores representative of a risk of CAD for a particular subject. In some embodiments, the risk score can be generated by analyzing and/or combining one or more aspects or characteristics relating to plaque and/or cardiovascular features, such as for example plaque volume, plaque composition, vascular remodeling, high-risk plaque, lumen volume, plaque location (proximal v. middle v. distal), plaque location (myocardial v. pericardial facing), plaque location (at bifurcation or trifurcation v. not at bifurcation or trifurcation), plaque location (in main vessel v. branch vessel), stenosis severity, percentage coronary blood volume, percentage fractional myocardial mass, percentile for age and/or gender, constant or other correction factor to allow for control of within-person, within-vessel, inter-plaque, plaque-myocardial relationships, and/or the like. In some embodiments, a CAD risk score(s) can be generated based on automatic and/or dynamic analysis of one or more medical images, such as for example a CT scan or an image obtained from any other modality mentioned herein.
In some embodiments, data obtained from analyzing one or more medical images of a patient can be normalized in generating a CAD risk score(s) for that patient. In some embodiments, the systems, devices, and methods described herein can be configured to generate a CAD risk score(s) for different vessels, vascular territories, and/or patients. In some embodiments, the systems, devices, and methods described herein can be configured to generate a graphical visualization of risk of CAD of a patient based on a vessel basis, vascular territory basis, and/or patient basis. In some embodiments, based on the generated CAD risk score(s), the systems, methods, and devices described herein can be configured to generate one or more recommended treatments for a patient. In some embodiments, the system can be configured to utilize a normalization device, such as those described herein, to account for differences in scan results (such as for example density values, etc.) between different scanners, scan parameters, and/or the like. In some embodiments, the systems, devices, and methods described herein can be configured to assess patients with suspected coronary artery disease (CAD) by use of one or more of a myriad of different diagnostic and prognostic tools. In particular, in some embodiments, the systems, devices, and methods described herein can be configured to use a risk score for cardiovascular care for patients without known CAD. As a non-limiting example, in some embodiments, the system can be configured to generate an Atherosclerotic Cardiovascular Disease (ASCVD) risk score, which can be based upon a combination of age, gender, race, blood pressure, cholesterol (total, HDL and LDL), diabetes status, tobacco use, hypertension, and/or medical therapy (such as for example, statin and aspirin). 
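Risk scores of the ASCVD family are conventionally computed as a Cox-model-style transform of a weighted sum of (often log-transformed) risk factors. The sketch below shows that general shape only; the coefficients, baseline survival, and cohort mean are hypothetical placeholders, not the published sex- and race-specific Pooled Cohort Equation values.

```python
import math

# Hypothetical coefficients for illustration only; the published Pooled
# Cohort Equations use sex- and race-specific coefficients not shown here.
COEFFS = {"ln_age": 2.0, "ln_total_chol": 1.0, "ln_hdl": -0.9,
          "ln_sbp": 1.8, "smoker": 0.65, "diabetes": 0.57}
BASELINE_SURVIVAL = 0.95      # hypothetical 10-year baseline survival
MEAN_LINEAR_PREDICTOR = 18.0  # hypothetical cohort-mean linear predictor

def ascvd_risk(factors):
    """Return a 10-year event probability in (0, 1) from a weighted sum
    of risk factors passed through a survival transform."""
    lp = sum(COEFFS[k] * v for k, v in factors.items())
    return 1.0 - BASELINE_SURVIVAL ** math.exp(lp - MEAN_LINEAR_PREDICTOR)
```

Because the baseline survival is raised to a positive power, the returned risk always lies between 0 and 1 regardless of the (hypothetical) coefficients.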
As another non-limiting example, in some embodiments, the system can be configured to generate a Coronary Artery Calcium Score (CACS), which can be based upon a non-contrast CT scan wherein coronary arteries are visualized for the presence of calcified plaque. In some embodiments, an Agatston (e.g., a measure of calcium in a coronary CT scan) score may be used to determine the CACS. In particular, in some embodiments, a CACS score can be calculated by: Agatston score=surface area×Hounsfield unit density (with brighter, higher-density plaques receiving a higher score). However, in some embodiments, there may be certain limitations with a CACS score. For example, in some embodiments, because surface area to volume ratio decreases as a function of the overall volume, more spherical plaques can be incorrectly weighted as less contributory to the Agatston score. In addition, in some embodiments, because Hounsfield unit density is inversely proportional to risk of major adverse cardiac events (MACE), weighting the HU density higher can score a lower risk plaque as having a higher score. Moreover, in some embodiments, 2.5-3 mm thick CT “slices” can miss smaller calcified plaques, and/or lack of beta-blocker use can result in significant motion artifact, which can artificially increase the calcium score. In some embodiments, for symptomatic patients undergoing coronary CT angiography, the system can be configured to generate and/or utilize one or more additional risk scores, such as a Segment Stenosis Score, Segment Involvement Score, Segments-at-Risk Score, Duke Prognostic Index, CTA Score, and/or the like. More specifically, in some embodiments, a Segment Stenosis Score weights specific stenoses (0=0%, 1=1−24%, 2=25−49%, 3=50−69%, 4=≥70%) across all 18 coronary segments, resulting in a total possible score of 72. In some embodiments, a Segment Involvement Score counts the number of plaques located in the 18 segments and has a total possible score of 18.
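The Segment Stenosis Score and Segment Involvement Score arithmetic described above can be sketched as follows. The grading thresholds come directly from the text; the example per-segment stenosis values are hypothetical.

```python
def stenosis_grade(pct):
    """Grade one segment: 0=0%, 1=1-24%, 2=25-49%, 3=50-69%, 4=>=70%."""
    if pct == 0:
        return 0
    if pct < 25:
        return 1
    if pct < 50:
        return 2
    if pct < 70:
        return 3
    return 4

def sss_and_sis(segment_stenoses):
    """segment_stenoses: percent stenosis for each of the 18 segments."""
    sss = sum(stenosis_grade(p) for p in segment_stenoses)  # max 4*18 = 72
    sis = sum(1 for p in segment_stenoses if p > 0)         # max 18
    return sss, sis

# Hypothetical patient with plaque in 3 of 18 segments.
sss, sis = sss_and_sis([0] * 15 + [80, 30, 10])  # -> (7, 3)
```

Note how the maxima (72 and 18) fall out of the grading scheme stated in the text.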
In some embodiments, a Segments-at-Risk Score reflects the potential susceptibility of all distal coronary segments subtended by severe proximal plaque. Thus, in some embodiments, all segments subtended by severe proximal plaque can be scored as severe as well, then summated over 18 segments to create a segment-at-risk score. For example, if the proximal portion of the LCx is considered severely obstructive, the segments-at-risk score for the LCx can be proximal circumflex (=3)+mid circumflex (=3)+distal circumflex (=3)+proximal obtuse marginal (=3)+mid obtuse marginal (=3)+distal obtuse marginal (=3), for a total circumflex segments-at-risk score of 18. In this individual, if the LAD exhibits mild plaque in the proximal portion (=1) and moderate plaque in the mid portion (=2), the LAD segments-at-risk score can be 3. If the RCA exhibits moderate plaque in the proximal portion (=2), the RCA segments-at-risk score can be 2. Thus, for this individual, the total segments-at-risk score can be 23 out of a possible 48. In some embodiments, a Duke Prognostic Index can be a reflection of the coronary artery plaque severity considering plaque location. In some embodiments, a modified Duke CAD index can consider overall plaque extent relating it to coexistent plaque in the left main or proximal LAD. In some embodiments, using this scoring system, individuals can be categorized into six distinct groups: no evident coronary artery plaque; >2 mild plaques with proximal plaque in any artery or 1 moderate plaque in any artery; 2 moderate plaques or 1 severe plaque in any artery; 3 moderate coronary artery plaques or 2 severe coronary artery plaques or isolated severe plaque in the proximal LAD; 3 severe coronary artery plaques or 2 severe coronary artery plaques with proximal LAD plaque; moderate or severe left main plaque.
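The segments-at-risk worked example above can be reproduced with a small sketch. For simplicity, this models each vessel as an ordered proximal-to-distal chain of graded segments (0=none, 1=mild, 2=moderate, 3=severe), which is a simplifying assumption since real coronary trees branch.

```python
SEVERE = 3  # severity grades: 0=none, 1=mild, 2=moderate, 3=severe

def segments_at_risk(vessel_segments):
    """vessel_segments: severity grades ordered proximal to distal.
    Once a severe segment is encountered, it and every downstream
    segment is scored as severe, per the rule in the text."""
    scores, downstream_severe = [], False
    for grade in vessel_segments:
        if grade == SEVERE:
            downstream_severe = True
        scores.append(SEVERE if downstream_severe else grade)
    return sum(scores)

# The worked example from the text:
lcx = segments_at_risk([3, 0, 0, 0, 0, 0])  # severe proximal LCx -> 18
lad = segments_at_risk([1, 2, 0])           # mild + moderate -> 3
rca = segments_at_risk([2, 0, 0])           # moderate proximal -> 2
total = lcx + lad + rca                     # 23 of a possible 48
```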
In some embodiments, a CT angiography (CTA) Score can be calculated by determining CAD in each segment, such as for example proximal RCA, mid RCA, distal RCA, R-PDA, R-PLB, left main, proximal LAD, mid LAD, distal LAD, D1, D2, proximal LCX, distal LCX, IM/AL, OM, L-PL, L-PDA, and/or the like. In particular, for each segment, when plaque is absent, the system can be configured to assign a score of 0, and when plaque is present, the system can be configured to assign a score of 1.1, 1.2 or 1.3 according to plaque composition (such as calcified, non-calcified and mixed plaque, respectively). In some embodiments, these scores can be multiplied by a weight factor for the location of the segment in the coronary artery tree (for example, 0.5-6 according to vessel, proximal location and system dominance). In some embodiments, these scores can also be multiplied by a weight factor for stenosis severity (for example, 1.4 for ≥50% stenosis and 1.0 for stenosis <50%). In some embodiments, the final score can be calculated by addition of the individual segment scores. In some embodiments, the systems, devices, and methods described herein can be configured to utilize and/or perform improved quantification and/or characterization of many parameters on CT angiography that were previously very difficult to measure. For example, in some embodiments, the system can be configured to determine stenosis severity leveraging a proximal/distal reference and report on a continuous scale, for example from 0-100%, by diameter, area, and/or volumetric stenosis. 
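The per-segment CTA Score arithmetic described above might be sketched as follows. The 1.1/1.2/1.3 composition scores and the 1.4/1.0 stenosis weights come from the text; the example segment values and location weights are hypothetical (the text only bounds location weights to roughly 0.5-6).

```python
# Composition scores from the text: calcified, non-calcified, mixed.
COMPOSITION = {"calcified": 1.1, "noncalcified": 1.2, "mixed": 1.3}

def cta_segment_score(has_plaque, composition, location_weight, stenosis_pct):
    """Score one coronary segment per the CTA Score scheme."""
    if not has_plaque:
        return 0.0
    stenosis_weight = 1.4 if stenosis_pct >= 50 else 1.0
    return COMPOSITION[composition] * location_weight * stenosis_weight

# Hypothetical three-segment example (real scoring covers all segments).
segments = [
    (True, "mixed", 6.0, 60),      # e.g. left main, >=50% stenosis
    (True, "calcified", 1.0, 30),  # e.g. distal branch, mild stenosis
    (False, None, 2.0, 0),         # segment without plaque
]
cta_score = sum(cta_segment_score(*s) for s in segments)
```

The final score is simply the sum of the individual segment scores, as the text states.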
In some embodiments, the system can be configured to determine total atheroma burden, reported in volumes or as a percent of the overall vessel volume (PAV), including for example non-calcified plaque volume (for example, as a continuous variable, ordinal variable or single variable), calcified plaque volume (for example, as a continuous variable, ordinal variable or single variable), and/or mixed plaque volume (for example, as a continuous variable, ordinal variable or single variable). In some embodiments, the system can be configured to determine low attenuation plaque, for example reported either as yes/no binary or continuous variable based upon HU density. In some embodiments, the system can be configured to determine vascular remodeling, for example reported as ordinal negative, intermediate or positive (for example, <0.90, 0.90-1.10, or >1.10) or continuous. In some embodiments, the system can be configured to determine and/or analyze various locations of plaque, such as for example proximal/mid/distal, myocardial facing vs. pericardial facing, at bifurcation v. not at bifurcation, in main vessel vs. branch vessel, and/or the like. In some embodiments, the system can be configured to determine percentage coronary blood volume, which can report out the volume of the lumen (and downstream subtended vessels in some embodiments) as a function of the entire coronary vessel volume (for example, either measured or calculated as hypothetically normal). In some embodiments, the system can be configured to determine percentage fractional myocardial mass, which can relate the coronary lumen or vessel volume to the percentage downstream subtended myocardial mass. In some embodiments, the system can be configured to determine the relationship of all or some of the above to each other, for example on a plaque-plaque basis to influence vessel behavior/risk or on a vessel-vessel basis to influence patient behavior/risk.
In some embodiments, the system can be configured to utilize one or more comparisons of the same, for example to normal age- and/or gender-based reference values. In some embodiments, one or more of the metrics described herein can be calculated on a per-segment basis. In some embodiments, one or more of the metrics calculated on a per-segment basis can then be summed across a vessel, vascular territory, and/or patient level. In some embodiments, the system can be configured to visualize one or more of such metrics, whether on a per-segment basis and/or on a vessel, vascular territory, and/or patient basis, on a graphical scale. For example, in some embodiments, the system can be configured to visualize one or more such metrics on a graphical scale using 3D and/or 4D histograms. Further, in some embodiments, cardiac CT angiography enables quantitative assessment of a myriad of cardiovascular structures beyond the coronary arteries, which may both contribute to coronary artery disease as well as other cardiovascular diseases.
For example, these measurements can include those of one or more of: (1) left ventricle—e.g., left ventricular mass, left ventricular volume, left ventricle Hounsfield unit density as a surrogate marker of ventricular perfusion; (2) right ventricle—e.g., right ventricular mass, right ventricular volume; (3) left atrium—e.g., volume, size, geometry; (4) right atrium—e.g., volume, size, geometry; (5) left atrial appendage—e.g., morphology (e.g., chicken wing, windsock, etc.), volume, angle, etc.; (6) pulmonary vein—e.g., size, shape, angle of takeoff from the left atrium, etc.; (7) mitral valve—e.g., volume, thickness, shape, length, calcification, anatomic orifice area, etc.; (8) aortic valve—e.g., volume, thickness, shape, length, calcification, anatomic orifice area, etc.; (9) tricuspid valve—e.g., volume, thickness, shape, length, calcification, anatomic orifice area, etc.; (10) pulmonic valve—e.g., volume, thickness, shape, length, calcification, anatomic orifice area, etc.; (11) pericardial and pericoronary fat—e.g., volume, attenuation, etc.; (12) epicardial fat—e.g., volume, attenuation, etc.; (13) pericardium—e.g., thickness, mass, volume; and/or (14) aorta—e.g., dimensions, calcifications, atheroma. Given the multitude of measurements that can help characterize cardiovascular risk, certain existing scores can be limited in their holistic assessment of the patient and may not account for many key parameters that may influence patient outcome. For example, certain existing scores may not take into account the entirety of data that is needed to effectively prognosticate risk. In addition, the data that will precisely predict risk can be multi-dimensional, and certain scores do not consider the relationship of plaques to one another, or vessels to one another, or plaques-vessels-myocardium relationships or all of those relationships to the patient-level risk.
Also, in certain existing scores, the data may categorize plaques, vessels and patients, thus losing the granularity of pixel-wise data that are summarized in these scores. In addition, in certain existing scores, the data may not reflect the normal age- and gender-based reference values as a benchmark for determining risk. Moreover, certain scores may not consider a number of additional items that can be gleaned from quantitative assessment of coronary artery disease, vascular morphology and/or downstream ventricular mass. Further, within-person relationships of plaques, segments, vessels, vascular territories may not be considered within certain risk scores. Furthermore, no risk score to date that utilizes imaging normalizes these risks to a standard that accounts for differences in scanner make/model, contrast type, contrast injection rate, heart rate/cardiac output, patient characteristics, contrast-to-noise ratio, signal-to-noise ratio, and/or image acquisition parameters (for example, single vs. dual vs. spectral energy imaging; retrospective helical vs. prospective axial vs. fast-pitch helical; whole-heart imaging versus non-whole-heart [i.e., non-volumetric] imaging; etc.). In some embodiments described herein, the systems, methods, and devices overcome such technical shortcomings.
In particular, in some embodiments, the systems, devices, and methods described herein can be configured to generate and/or utilize a novel CAD risk score that addresses the aforementioned limitations by considering one or more of: (1) total atheroma burden, normalized for density, such as absolute density or Hounsfield unit (HU) density (e.g., can be categorized as total volume or relative volume, i.e., plaque volume/vessel volume×100%); (2) plaque composition by density or HU density (e.g., can be categorized continuously, ordinally or binarily); (3) low attenuation plaque (e.g., can be reported as yes/no binary or continuous variable based upon density or HU density); (4) vascular remodeling (e.g., can be reported as ordinal negative, intermediate or positive (<0.90, 0.90-1.10, or >1.10) or continuous); (5) plaque location—proximal v. mid v. distal; (6) plaque location—which vessel or vascular territory; (7) plaque location—myocardial facing v. pericardial facing; (8) plaque location—at bifurcation v. not at bifurcation; (9) plaque location—in main vessel v. branch vessel; (10) stenosis severity; (11) percentage coronary blood volume (e.g., this metric can report out the volume of the lumen (and downstream subtended vessels) as a function of the entire coronary vessel volume (e.g., either measured or calculated as hypothetically normal)); (12) percentage fractional myocardial mass (e.g., this metric can relate the coronary lumen or vessel volume to the percentage downstream subtended myocardial mass); (13) consideration of normal age- and/or gender-based reference values; and/or (14) statistical relationships of all or some of the above to each other (e.g., on a plaque-plaque basis to influence vessel behavior/risk or on a vessel-vessel basis to influence patient behavior/risk).
In some embodiments, the system can be configured to determine a baseline clinical assessment(s), including for such factors as one or more of: (1) age; (2) gender; (3) diabetes (e.g., presence, duration, insulin-dependence, history of diabetic ketoacidosis, end-organ complications, which medications, how many medications, and/or the like); (4) hypertension (e.g., presence, duration, severity, end-organ damage, left ventricular hypertrophy, number of medications, which medications, history of hypertensive urgency or emergency, and/or the like); (5) dyslipidemia (e.g., including low-density lipoprotein (LDL), triglycerides, total cholesterol, lipoprotein(a) Lp(a), apolipoprotein B (ApoB), and/or the like); (6) tobacco use (e.g., including what type, for what duration, how much use, and/or the like); (7) family history (e.g., including which relative, at what age, what type of event, and/or the like); (8) peripheral arterial disease (e.g., including what type, duration, severity, end-organ damage, and/or the like); (9) cerebrovascular disease (e.g., including what type, duration, severity, end-organ damage, and/or the like); (10) obesity (e.g., including how obese, how long, is it associated with other metabolic derangements, such as hypertriglyceridemia, centripetal obesity, diabetes, and/or the like); (11) physical activity (e.g., including what type, frequency, duration, exertional level, and/or the like); and/or (12) psychosocial state (e.g., including depression, anxiety, stress, sleep, and/or the like). In some embodiments, a CAD risk score is calculated for each segment, such as for example for segment 1, segment 2, or for some or all segments. 
In some embodiments, the score is calculated by combining (e.g., by multiplying or applying any other mathematical transform or generating a weighted measure of) one or more of: (1) plaque volume (e.g., absolute volume such as in mm3 or PAV; may be weighted); (2) plaque composition (e.g., NCP/CP, Ordinal NCP/Ordinal CP; Continuous; may be weighted); (3) vascular remodeling (e.g., Positive/Intermediate/Negative; Continuous; may be weighted); (4) high-risk plaques (e.g., positive remodeling+low attenuation plaque; may be weighted); (5) lumen volume (e.g., may be absolute volume such as in mm3 or relative to vessel volume or relative to hypothetical vessel volume; may be weighted); (6) location—proximal/mid/distal (may be weighted); (7) location—myocardial vs. pericardial facing (may be weighted); (8) location—at bifurcation/trifurcation vs. not at bifurcation/trifurcation (may be weighted); (9) location—in main vessel vs. branch vessel (may be weighted); (10) stenosis severity (e.g., binary at ≥/<70% or ≥/<50%; ordinal as 1-24%, 25-49%, 50-69%, ≥70% or as 0%, 1-49%, 50-69%, ≥70%; continuous; may use diameter, area or volume; may be weighted); (11) percentage Coronary Blood Volume (may be weighted); (12) percentage fractional myocardial mass (e.g., may include total vessel volume-to-LV mass ratio; lumen volume-to-LV mass ratio; may be weighted); (13) percentile for age and gender; (14) constant/correction factor (e.g., to allow for control of within-person, within-vessel, inter-plaque, and/or plaque-myocardial relationships). As a non-limiting example, if Segment 1 has no plaque, then it can be weighted as 0 in some embodiments.
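One minimal way to express the multiplicative per-segment combination described above is sketched below. The factor names and values are hypothetical; the text permits other transforms and weightings, and a real implementation would include all fourteen factor families.

```python
def segment_cad_score(factors, correction=1.0):
    """Multiply per-segment risk factors together; a segment with no
    plaque is weighted as 0, per the example in the text."""
    if not factors:
        return 0.0
    score = correction
    for value in factors.values():
        score *= value
    return score

seg1 = {}  # Segment 1 has no plaque -> weighted as 0
seg2 = {   # hypothetical weighted factors for a diseased segment
    "plaque_volume_mm3": 120.0,
    "composition_weight": 1.2,
    "remodeling_weight": 1.1,
    "stenosis_weight": 1.4,
}
vessel_score = segment_cad_score(seg1) + segment_cad_score(seg2)
```

Summing the per-segment products on a per-vessel, per-territory, and per-patient basis mirrors how the text rolls segments up to determine risk.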
In some embodiments, to determine risk (which can be defined as risk of future myocardial infarction, major adverse cardiac events, ischemia, rapid progression, insufficient control on medical therapy, progression to angina, and/or progression to need of target vessel revascularization), all or some of the segments are added up on a per-vessel, per-vascular territory and per-patient basis. In some embodiments, by using plots, the system can be configured to visualize and/or quantify risk based on a vessel basis, vascular territory basis, and patient-basis. In some embodiments, the score can be normalized in a patient- and scan-specific manner by considering items such as for example: (1) patient body mass index; (2) patient thorax density; (3) scanner make/model; (4) contrast density along the Z-axis and along vessels and/or cardiovascular structures; (5) contrast-to-noise ratio; (6) signal-to-noise ratio; (7) method of ECG gating (e.g., retrospective helical, prospective axial, fast-pitch helical); (8) energy acquisition (e.g., single, dual, spectral, photon counting); (9) heart rate; (10) use of pre-CT medications that may influence cardiovascular structures (e.g., nitrates, beta blockers, anxiolytics); (11) mA; and/or (12) kVp. In some embodiments, without normalization, cardiovascular structures (coronary arteries and beyond) may have markedly different Hounsfield units for the same structure (e.g., if 100 vs. 120 kVp is used, a single coronary plaque may exhibit very different Hounsfield units). Thus, in some embodiments, this “normalization” step is needed, and can be performed based upon a database of previously acquired images and/or can be performed prospectively using an external normalization device, such as those described herein. In some embodiments, the CAD risk score can be communicated in several ways by the system to a user.
For example, in some embodiments, a generated CAD risk score can be normalized to a scale, such as a 100 point scale in which 90-100 can refer to excellent prognosis, 80-90 for good prognosis, 70-80 for satisfactory prognosis, 60-70 for below average prognosis, <60 for poor prognosis, and/or the like. In some embodiments, the system can be configured to generate and/or report to a user based on the CAD risk score(s) vascular age vs. biological age of the subject. In some embodiments, the system can be configured to characterize risk of CAD of a subject as one or more of normal, mild, moderate, and/or severe. In some embodiments, the system can be configured to generate one or more color heat maps based on a generated CAD risk score, such as red, yellow, green, for example in ordinal or continuous display. In some embodiments, the system can be configured to characterize risk of CAD for a subject as high risk vs. non-high-risk, and/or the like. As a non-limiting example, in some embodiments, the generated CAD risk score for Lesion 1 can be calculated as Vol × Composition (HU) × RI × HRP × Lumen Volume × Location × Stenosis % × %CBV × %FMM × Age-/Gender Normal Value % × Correction Constant × Correction factor for scan- and patient-specific parameters × Normalization factor to communicate severity of findings. Similarly, in some embodiments, the generated CAD risk score for Lesion 2 can be calculated as Vol × Composition (HU) × RI × HRP × Lumen Volume × Location × Stenosis % × %CBV × %FMM × Age-/Gender Normal Value % × Correction Constant × Correction factor for scan- and patient-specific parameters × Normalization factor to communicate severity of findings.
In some embodiments, the generated CAD risk score for Lesion 3 can be calculated as Vol × Composition (HU) × RI × HRP × Lumen Volume × Location × Stenosis % × %CBV × %FMM × Age-/Gender Normal Value % × Correction Constant × Correction factor for scan- and patient-specific parameters × Normalization factor to communicate severity of findings. In some embodiments, the generated CAD risk score for Lesion 4 can be calculated as Vol × Composition (HU) × RI × HRP × Lumen Volume × Location × Stenosis % × %CBV × %FMM × Age-/Gender Normal Value % × Correction Constant × Correction factor for scan- and patient-specific parameters × Normalization factor to communicate severity of findings. In some embodiments, a CAD risk score can similarly be generated for any other lesions. In some embodiments, the CAD risk score can be adapted to other disease states within the cardiovascular system, including for example: (1) coronary artery disease and its downstream risk (e.g., myocardial infarction, acute coronary syndromes, ischemia, rapid progression, progression despite medical therapy, progression to angina, progression to need for target vessel revascularization, and/or the like); (2) heart failure; (3) atrial fibrillation; (4) left ventricular hypertrophy and hypertension; (5) aortic aneurysm and/or dissection; (6) valvular regurgitation or stenosis; (7) sudden coronary artery dissection, and/or the like. FIG. 21 is a flowchart illustrating an overview of an example embodiment(s) of a method for generating a coronary artery disease (CAD) Score(s) for a subject and using the same to assist assessment of risk of CAD for the subject. As illustrated in FIG. 21, in some embodiments, the system is configured to conduct a baseline clinical assessment of a subject at block 2102.
In particular, in some embodiments, the system can be configured to take into account one or more clinical assessment factors associated with the subject, such as for example age, gender, diabetes, hypertension, dyslipidemia, tobacco use, family history, peripheral arterial disease, cerebrovascular disease, obesity, physical activity, psychosocial state, and/or any details of the foregoing described herein. In some embodiments, one or more baseline clinical assessment factors can be accessed by the system from a database and/or derived from non-image-based and/or image-based data. In some embodiments, the system can be configured to access one or more medical images of the subject at block 202, in any manner and/or in connection with any feature described above in relation to block 202. In some embodiments, the system is configured to identify one or more segments, vessels, plaque, and/or fat in the one or more medical images at block 2104. For example, in some embodiments, the system can be configured to use one or more AI and/or ML algorithms and/or other image processing techniques to identify one or more segments, vessels, plaque, and/or fat. In some embodiments, the system at block 2106 is configured to analyze and/or access one or more plaque parameters. For example, in some embodiments, one or more plaque parameters can include plaque volume, plaque composition, plaque attenuation, plaque location, and/or the like. In particular, in some embodiments, plaque volume can be based on absolute volume and/or PAV. In some embodiments, plaque composition can be determined by the system based on density of one or more regions of plaque in a medical image, such as absolute density and/or Hounsfield unit density. In some embodiments, the system can be configured to categorize plaque composition binarily, for example as calcified or non-calcified plaque, and/or continuously based on calcification levels of plaque.
In some embodiments, plaque attenuation can similarly be categorized binarily by the system, for example as high attenuation or low attenuation based on density, or continuously based on attenuation levels of plaque. In some embodiments, plaque location can be categorized by the system as one or more of proximal, mid, or distal along a coronary artery vessel. In some embodiments, the system can analyze plaque location based on the vessel in which the plaque is located. In some embodiments, the system can be configured to categorize plaque location based on whether it is myocardial facing, pericardial facing, located at a bifurcation, located at a trifurcation, not located at a bifurcation, and/or not located at a trifurcation. In some embodiments, the system can be configured to analyze plaque location based on whether it is in a main vessel or in a branch vessel. In some embodiments, the system at block 2108 is configured to analyze and/or access one or more vessel parameters, such as for example stenosis severity, lumen volume, percentage of coronary blood volume, percentage of fractional myocardial mass, and/or the like. In some embodiments, the system is configured to categorize or determine stenosis severity based on one or more predetermined ranges of percentage stenosis, for example based on diameter, area, and/or volume. In some embodiments, the system is configured to determine lumen volume based on absolute volume, volume relative to a vessel volume, volume relative to a hypothetical volume, and/or the like. In some embodiments, the system is configured to determine percentage of coronary blood volume based on determining a volume of lumen as a function of an entire coronary vessel volume. In some embodiments, the system is configured to determine percentage of fractional myocardial mass as a ratio of total vessel volume to left ventricular mass, a ratio of lumen volume to left ventricular mass, and/or the like. 
In some embodiments, the system at block 2110 is configured to analyze and/or access one or more clinical parameters, such as for example percentile condition for age, percentile condition for gender of the subject, and/or any other clinical parameter described herein. In some embodiments, the system at block 2112 is configured to generate a weighted measure of one or more parameters, such as for example one or more plaque parameters, one or more vessel parameters, and/or one or more clinical parameters. In some embodiments, the system is configured to generate a weighted measure of one or more parameters for each segment. In some embodiments, the system can be configured to generate the weighted measure logarithmically, algebraically, and/or utilizing another mathematical transform. In some embodiments, the system can be configured to generate the weighted measure by applying a correction factor or constant, for example to allow for control of within-person, within-vessel, inter-plaque, and/or plaque-myocardial relationships. In some embodiments, the system at block 2114 is configured to generate one or more CAD risk scores for the subject. For example, in some embodiments, the system can be configured to generate a CAD risk score on a per-vessel, per-vascular territory, and/or per-subject basis. In some embodiments, the system is configured to generate one or more CAD risk scores of the subject by combining the generated weighted measure of one or more parameters. In some embodiments, the system at block 2116 can be configured to normalize the generated one or more CAD scores. For example, in some embodiments, the system can be configured to normalize the generated one or more CAD scores to account for differences due to the subject, scanner, and/or scan parameters, including those described herein. 
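The per-segment to per-vessel, per-vascular-territory, and per-subject roll-up described above might look like the following sketch; the segment names and segment-to-vessel/territory mappings are hypothetical placeholders for illustration.

```python
# Hypothetical mappings; a real implementation would cover the full
# 18-segment coronary model referenced earlier in this disclosure.
SEGMENT_TO_VESSEL = {"pLAD": "LAD", "mLAD": "LAD", "pRCA": "RCA"}
VESSEL_TO_TERRITORY = {"LAD": "anterior", "RCA": "inferior"}

def rollup(segment_scores):
    """Aggregate per-segment weighted measures to vessel, vascular
    territory, and patient (subject) level by summation."""
    vessel, territory = {}, {}
    for seg, score in segment_scores.items():
        v = SEGMENT_TO_VESSEL[seg]
        t = VESSEL_TO_TERRITORY[v]
        vessel[v] = vessel.get(v, 0.0) + score
        territory[t] = territory.get(t, 0.0) + score
    patient = sum(segment_scores.values())
    return vessel, territory, patient
```

A normalization pass, as described above, could then be applied at whichever level the score is reported.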
In some embodiments, the system at block 2118 can be configured to generate a graphical plot of the generated one or more per-vessel, per-vascular territory, or per-subject CAD risk scores for visualizing and quantifying risk of CAD for the subject. For example, in some embodiments, the system can be configured to generate a graphical plot of one or more CAD risk scores on a per-vessel, per-vascular territory, and/or per-subject basis. In some embodiments, the graphical plot can include a 2D, 3D, or 4D representation, such as for example a histogram. In some embodiments, the system at block 2120 can be configured to assist a user to generate an assessment of risk of CAD for the subject based on the analysis. For example, in some embodiments, the system can be configured to generate a scaled CAD risk score for the subject. In some embodiments, the system can be configured to determine a vascular age for the subject. In some embodiments, the system can be configured to categorize risk of CAD for the subject, for example as normal, mild, moderate, or severe. In some embodiments, the system can be configured to generate one or more colored heat maps. In some embodiments, the system can be configured to categorize risk of CAD for the subject as high risk or low risk.

Treat to the Image
Regression introduced in 13.0.0: "TypeError: Cannot read property 'range' of null"

What version of this package are you using?
14.3.1

What operating system, Node.js, and npm version?
macOS 10.14.6

$ node --version
v10.17.0
$ npm --version
6.11.3

What happened?
Upgraded all deps to latest versions as of this issue report & got failure:

><EMAIL_ADDRESS>lint /Users/matthewadams/dev/scispike/nodejs-support
> standard --verbose 'src/**/*.js'

standard: Unexpected linter output:

TypeError: Cannot read property 'range' of null
Occurred while linting /Users/matthewadams/dev/scispike/nodejs-support/src/main/entities/DatePeriod.js:27
    at SourceCode.getTokenBefore (/Users/matthewadams/dev/scispike/nodejs-support/node_modules/eslint/lib/source-code/token-store/index.js:303:18)
    at checkSpacingBefore (/Users/matthewadams/dev/scispike/nodejs-support/node_modules/eslint/lib/rules/template-curly-spacing.js:60:42)
    at TemplateElement (/Users/matthewadams/dev/scispike/nodejs-support/node_modules/eslint/lib/rules/template-curly-spacing.js:119:17)
    at listeners.(anonymous function).forEach.listener (/Users/matthewadams/dev/scispike/nodejs-support/node_modules/eslint/lib/linter/safe-emitter.js:45:58)
    at Array.forEach (<anonymous>)
    at Object.emit (/Users/matthewadams/dev/scispike/nodejs-support/node_modules/eslint/lib/linter/safe-emitter.js:45:38)
    at NodeEventGenerator.applySelector (/Users/matthewadams/dev/scispike/nodejs-support/node_modules/eslint/lib/linter/node-event-generator.js:253:26)
    at NodeEventGenerator.applySelectors (/Users/matthewadams/dev/scispike/nodejs-support/node_modules/eslint/lib/linter/node-event-generator.js:282:22)
    at NodeEventGenerator.enterNode (/Users/matthewadams/dev/scispike/nodejs-support/node_modules/eslint/lib/linter/node-event-generator.js:296:14)
    at CodePathAnalyzer.enterNode (/Users/matthewadams/dev/scispike/nodejs-support/node_modules/eslint/lib/linter/code-path-analysis/code-path-analyzer.js:646:23)

package.json dependencies are the following:
"dependencies": {
  "@babel/polyfill": "7.7.0",
  "@scispike/aspectify": "1.0.0",
  "bunyan": "1.8.12",
  "bunyaner": "1.1.0",
  "enumify": "1.0.4",
  "lodash.merge": "4.6.2",
  "moment": "2.24.0",
  "moment-timezone": "0.5.27",
  "mutrait": "1.0.0",
  "uuid": "3.3.3"
},
"devDependencies": {
  "@babel/cli": "7.7.0",
  "@babel/core": "7.7.0",
  "@babel/plugin-proposal-class-properties": "7.7.0",
  "@babel/plugin-proposal-decorators": "7.7.0",
  "@babel/plugin-proposal-optional-chaining": "7.6.0",
  "@babel/plugin-proposal-throw-expressions": "7.2.0",
  "@babel/preset-env": "7.7.1",
  "@babel/register": "7.7.0",
  "acorn": "7.1.0",
  "babel-eslint": "10.0.3",
  "chai": "4.2.0",
  "cls-hooked": "4.2.2",
  "copyfiles": "2.1.1",
  "dirty-chai": "2.0.1",
  "fs-extra": "8.1.0",
  "intercept-stdout": "0.1.2",
  "mocha": "6.2.2",
  "npm-run-all": "4.1.5",
  "nyc": "14.1.1",
  "standard": "14.3.1",
  "zone.js": "0.10.2"
},

Traced it back to<EMAIL_ADDRESS>no error with<EMAIL_ADDRESS>

What did you expect to happen?
Successful linting.

Are you willing to submit a pull request to fix this bug?
Since this is a dev dependency, prolly not. No time. Must focus on prod issues.

More info: strangely, while 12.0.1 worked on my mac, it failed in CI on travis-ci.com: https://travis-ci.com/SciSpike/nodejs-support/builds/135216135

Thank you for the issue, @matthewadams. Whatever it was, it seems to have been resolved. Probably somewhere down the dependency tree. https://github.com/SciSpike/nodejs-support/pull/12

Hi @mightyiam looks like it's still a problem. I've updated dependencies in master & it's still failing, but not on macOS, only on Linux (see docker build output below). It appears that it's related to upgrading babel-eslint. If you look at the build history of the project, my last commit before upgrading babel-eslint, which upgraded everything but babel-eslint, succeeded.
When I upgraded babel-eslint, standard failed: https://travis-ci.com/SciSpike/nodejs-support/builds/136353983 I'll see if I can pin it down to a particular version of babel-eslint and report back. $ docker run --rm -it -v $PWD:/app -w /app node:10.17.0 npm run build ><EMAIL_ADDRESS>build /app > npm install && npm test ><EMAIL_ADDRESS>install /app/node_modules/dtrace-provider > node-gyp rebuild || node suppress-error.js make: Entering directory '/app/node_modules/dtrace-provider/build' TOUCH Release/obj.target/DTraceProviderStub.stamp make: Leaving directory '/app/node_modules/dtrace-provider/build' ><EMAIL_ADDRESS>postinstall /app/node_modules/@babel/polyfill/node_modules/core-js > node postinstall || echo "ignore" Thank you for using core-js ( https://github.com/zloirock/core-js ) for polyfilling JavaScript standard library! The project needs your help! Please consider supporting of core-js on Open Collective or Patreon: > https://opencollective.com/core-js > https://www.patreon.com/zloirock Also, the author of core-js ( https://github.com/zloirock ) is looking for a good job -) ><EMAIL_ADDRESS>postinstall /app/node_modules/@scispike/aspectify/node_modules/core-js > node scripts/postinstall || echo "ignore" Thank you for using core-js ( https://github.com/zloirock/core-js ) for polyfilling JavaScript standard library! The project needs your help! 
Please consider supporting of core-js on Open Collective or Patreon: > https://opencollective.com/core-js > https://www.patreon.com/zloirock Also, the author of core-js ( https://github.com/zloirock ) is looking for a good job -) npm WARN optional SKIPPING OPTIONAL DEPENDENCY<EMAIL_ADDRESS>(node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for<EMAIL_ADDRESS>wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) added 906 packages from 822 contributors and audited 5163 packages in 88.114s found 16 vulnerabilities (1 moderate, 15 high) run `npm audit fix` to fix them, or `npm audit` for details ><EMAIL_ADDRESS>test /app > run-s transpile unit-integration lint ><EMAIL_ADDRESS>transpile /app > run-s transpile-main transpile-test ><EMAIL_ADDRESS>transpile-main /app > babel --delete-dir-on-start --verbose --out-dir lib/main --copy-files src/main src/main/context/ClsHookedContext.js -> lib/main/context/ClsHookedContext.js src/main/context/ZoneJsContext.js -> lib/main/context/ZoneJsContext.js src/main/context/index.js -> lib/main/context/index.js src/main/entities/DatePeriod.js -> lib/main/entities/DatePeriod.js src/main/entities/Identifiable.js -> lib/main/entities/Identifiable.js src/main/entities/Period.js -> lib/main/entities/Period.js src/main/entities/Persistable.js -> lib/main/entities/Persistable.js src/main/entities/Recurrence.js -> lib/main/entities/Recurrence.js src/main/entities/index.js -> lib/main/entities/index.js src/main/enums/DayOfWeek.js -> lib/main/enums/DayOfWeek.js src/main/enums/Enumeration.js -> lib/main/enums/Enumeration.js src/main/enums/ResponseStatus.js -> lib/main/enums/ResponseStatus.js src/main/enums/TimeUnit.js -> lib/main/enums/TimeUnit.js src/main/enums/index.js -> lib/main/enums/index.js src/main/errors/AlreadyInitializedError.js -> lib/main/errors/AlreadyInitializedError.js src/main/errors/ClassNotExtendableError.js -> lib/main/errors/ClassNotExtendableError.js 
src/main/errors/CodedError.js -> lib/main/errors/CodedError.js src/main/errors/IllegalArgumentError.js -> lib/main/errors/IllegalArgumentError.js src/main/errors/IllegalArgumentTypeError.js -> lib/main/errors/IllegalArgumentTypeError.js src/main/errors/IllegalStateError.js -> lib/main/errors/IllegalStateError.js src/main/errors/MethodNotImplementedError.js -> lib/main/errors/MethodNotImplementedError.js src/main/errors/MissingRequiredArgumentError.js -> lib/main/errors/MissingRequiredArgumentError.js src/main/errors/NonuniqueCriteriaError.js -> lib/main/errors/NonuniqueCriteriaError.js src/main/errors/NotYetInitializedError.js -> lib/main/errors/NotYetInitializedError.js src/main/errors/ObjectExistsError.js -> lib/main/errors/ObjectExistsError.js src/main/errors/ObjectNotFoundError.js -> lib/main/errors/ObjectNotFoundError.js src/main/errors/UnknownDiscriminatorError.js -> lib/main/errors/UnknownDiscriminatorError.js src/main/errors/UnknownEnumError.js -> lib/main/errors/UnknownEnumError.js src/main/errors/index.js -> lib/main/errors/index.js src/main/index.js -> lib/main/index.js src/main/logger/index.js -> lib/main/logger/index.js src/main/package/pkg.js -> lib/main/package/pkg.js src/main/require/index.js -> lib/main/require/index.js src/main/services/ServiceSupport.js -> lib/main/services/ServiceSupport.js src/main/services/index.js -> lib/main/services/index.js src/main/services/returnsServiceResponse.js -> lib/main/services/returnsServiceResponse.js src/main/string-utils/index.js -> lib/main/string-utils/index.js Successfully compiled 37 files with Babel. 
><EMAIL_ADDRESS>transpile-test /app > babel --delete-dir-on-start --verbose --out-dir lib/test --copy-files src/test src/test/unit/context/ClsHookedContext.spec.js -> lib/test/unit/context/ClsHookedContext.spec.js src/test/unit/context/ZoneJsContext.spec.js -> lib/test/unit/context/ZoneJsContext.spec.js src/test/unit/context/context-tests.js -> lib/test/unit/context/context-tests.js src/test/unit/entities/DatePeriod.spec.js -> lib/test/unit/entities/DatePeriod.spec.js src/test/unit/entities/Identifiable.spec.js -> lib/test/unit/entities/Identifiable.spec.js src/test/unit/entities/Period.spec.js -> lib/test/unit/entities/Period.spec.js src/test/unit/entities/Recurrence.spec.js -> lib/test/unit/entities/Recurrence.spec.js src/test/unit/entities/_.merge.spec.js -> lib/test/unit/entities/_.merge.spec.js src/test/unit/enums/DayOfWeek.spec.js -> lib/test/unit/enums/DayOfWeek.spec.js src/test/unit/enums/Enumeration.spec.js -> lib/test/unit/enums/Enumeration.spec.js src/test/unit/enums/ResponseStatus.spec.js -> lib/test/unit/enums/ResponseStatus.spec.js src/test/unit/enums/TimeUnit.spec.js -> lib/test/unit/enums/TimeUnit.spec.js src/test/unit/errors/CodedError.spec.js -> lib/test/unit/errors/CodedError.spec.js src/test/unit/require/dirWithDirs/dir0/index.js -> lib/test/unit/require/dirWithDirs/dir0/index.js src/test/unit/require/dirWithDirs/dir1/index.js -> lib/test/unit/require/dirWithDirs/dir1/index.js src/test/unit/require/dirWithDirsWithNamesThatLookLikeFiles/dir0.js/index.js -> lib/test/unit/require/dirWithDirsWithNamesThatLookLikeFiles/dir0.js/index.js src/test/unit/require/dirWithDirsWithNamesThatLookLikeFiles/dir1.json/index.js -> lib/test/unit/require/dirWithDirsWithNamesThatLookLikeFiles/dir1.json/index.js src/test/unit/require/dirWithDirsWithNamesThatLookLikeFiles/index.js/index.js -> lib/test/unit/require/dirWithDirsWithNamesThatLookLikeFiles/index.js/index.js src/test/unit/require/dirWithJsFiles/file0.js -> lib/test/unit/require/dirWithJsFiles/file0.js 
src/test/unit/require/dirWithJsFiles/file1.js -> lib/test/unit/require/dirWithJsFiles/file1.js src/test/unit/require/dirWithJsFiles/index.js -> lib/test/unit/require/dirWithJsFiles/index.js src/test/unit/require/require.spec.js -> lib/test/unit/require/require.spec.js src/test/unit/services/ServiceSupport.spec.js -> lib/test/unit/services/ServiceSupport.spec.js src/test/unit/services/returnsServiceResponse.spec.js -> lib/test/unit/services/returnsServiceResponse.spec.js src/test/unit/string-utils/index.spec.js -> lib/test/unit/string-utils/index.spec.js src/test/unit/test-util.js -> lib/test/unit/test-util.js Successfully compiled 26 files with Babel. ><EMAIL_ADDRESS>unit-integration /app > nyc --check-coverage --statements 86 --branches 77 --functions 83 --lines 91 -x 'lib/test' --exclude-after-remap false mocha 'lib/test/unit/**/*.spec.js' 'lib/test/integration/**/*.spec.js' Warning: Cannot find any files matching pattern "lib/test/integration/**/*.spec.js" ClsHookedTest ✓ should work with sync fn returning a value ✓ should work with awaiting sync fn returning a value ✓ should work with awaiting async fn returning a value ✓ should work with setTimeout ✓ should work with Promise.resolve ✓ should work with Promise.reject ✓ should work with async/await ✓ should give the same context when given the same name ZoneJsTest ✓ should work with sync fn returning a value ✓ should work with awaiting sync fn returning a value ✓ should work with awaiting async fn returning a value ✓ should work with setTimeout ✓ should work with Promise.resolve ✓ should work with Promise.reject ✓ should work with async/await ✓ should give the same context when given the same name _.merge ✓ should invoke inherited property methods correctly ✓ should invoke expressed property methods correctly ✓ should invoke expressed property methods correctly when used in ctors ✓ should invoke mutrait-expressed property methods correctly when used in ctors unit tests of DatePeriod ✓ should disallow invalid 
states ✓ should disallow invalid begin granularity ✓ should disallow invalid end granularity ✓ should contain or not correctly ✓ should overlap or not correctly ✓ should have a DatePeriod type unit tests of Identifiable ✓ should work unit tests of Period ✓ should disallow invalid states ✓ should contain or not correctly ✓ should overlap or not correctly ✓ should test length function ✓ should return false for a non-moment ✓ should have a Period type unit tests of Recurrence ✓ should work unit tests of DayOfWeek ✓ should calculate next & prev correctly ✓ should retrieve enum ✓ should fail to retrieve unknown enum unit tests of Enumeration ✓ should create a new enum ✓ should not allow extensions of enum classes unit tests of ResponseStatus ✓ should retrieve enum ✓ should fail to retrieve unknown enum unit tests of DayOfWeek ✓ should retrieve enum unit tests of CodedError {"name":"CodedError","hostname":"ade26b9ce870","pid":179,"level":50,"err":{"message":"E_MY: boom","name":"MyError","stack":"MyError: E_MY: boom\n at Context.it (/app/lib/test/unit/errors/CodedError.spec.js:25:15)\n at callFn (/app/node_modules/mocha/lib/runnable.js:387:21)\n at Test.Runnable.run (/app/node_modules/mocha/lib/runnable.js:379:7)\n at Runner.runTest (/app/node_modules/mocha/lib/runner.js:535:10)\n at /app/node_modules/mocha/lib/runner.js:653:12\n at next (/app/node_modules/mocha/lib/runner.js:447:14)\n at /app/node_modules/mocha/lib/runner.js:457:7\n at next (/app/node_modules/mocha/lib/runner.js:362:14)\n at Immediate._onImmediate (/app/node_modules/mocha/lib/runner.js:425:5)\n at runCallback (timers.js:705:18)\n at tryOnImmediate (timers.js:676:5)\n at processImmediate (timers.js:658:5)","code":"E_MY"},"msg":"E_MY: boom","time":"2019-11-13T15:44:37.524Z","v":0} ✓ should have code & no name or cause {"name":"CodedError","hostname":"ade26b9ce870","pid":179,"level":50,"err":{"message":"E_MY: boom: E_MY_ERROR_CAUSE: because many badness so high","name":"MyError","stack":"MyError: E_MY: 
boom: E_MY_ERROR_CAUSE: because many badness so high\n at Context.it (/app/lib/test/unit/errors/CodedError.spec.js:50:15)\n at callFn (/app/node_modules/mocha/lib/runnable.js:387:21)\n at Test.Runnable.run (/app/node_modules/mocha/lib/runnable.js:379:7)\n at Runner.runTest (/app/node_modules/mocha/lib/runner.js:535:10)\n at /app/node_modules/mocha/lib/runner.js:653:12\n at next (/app/node_modules/mocha/lib/runner.js:447:14)\n at /app/node_modules/mocha/lib/runner.js:457:7\n at next (/app/node_modules/mocha/lib/runner.js:362:14)\n at Immediate._onImmediate (/app/node_modules/mocha/lib/runner.js:425:5)\n at runCallback (timers.js:705:18)\n at tryOnImmediate (timers.js:676:5)\n at processImmediate (timers.js:658:5)","code":"E_MY"},"msg":"E_MY: boom: E_MY_ERROR_CAUSE: because many badness so high","time":"2019-11-13T15:44:37.527Z","v":0} ✓ should have a cause and code as name {"name":"CodedError","hostname":"ade26b9ce870","pid":179,"level":50,"err":{"message":"E_MY: E_MY_ERROR_CAUSE","name":"MyError","stack":"MyError: E_MY: E_MY_ERROR_CAUSE\n at Context.it (/app/lib/test/unit/errors/CodedError.spec.js:73:15)\n at callFn (/app/node_modules/mocha/lib/runnable.js:387:21)\n at Test.Runnable.run (/app/node_modules/mocha/lib/runnable.js:379:7)\n at Runner.runTest (/app/node_modules/mocha/lib/runner.js:535:10)\n at /app/node_modules/mocha/lib/runner.js:653:12\n at next (/app/node_modules/mocha/lib/runner.js:447:14)\n at /app/node_modules/mocha/lib/runner.js:457:7\n at next (/app/node_modules/mocha/lib/runner.js:362:14)\n at Immediate._onImmediate (/app/node_modules/mocha/lib/runner.js:425:5)\n at runCallback (timers.js:705:18)\n at tryOnImmediate (timers.js:676:5)\n at processImmediate (timers.js:658:5)","code":"E_MY"},"msg":"E_MY: E_MY_ERROR_CAUSE","time":"2019-11-13T15:44:37.528Z","v":0} ✓ should work with no args {"name":"CodedError","hostname":"ade26b9ce870","pid":179,"level":50,"err":{"message":"E_SUB","name":"SubError","stack":"SubError: E_SUB\n at Context.it 
(/app/lib/test/unit/errors/CodedError.spec.js:92:15)\n at callFn (/app/node_modules/mocha/lib/runnable.js:387:21)\n at Test.Runnable.run (/app/node_modules/mocha/lib/runnable.js:379:7)\n at Runner.runTest (/app/node_modules/mocha/lib/runner.js:535:10)\n at /app/node_modules/mocha/lib/runner.js:653:12\n at next (/app/node_modules/mocha/lib/runner.js:447:14)\n at /app/node_modules/mocha/lib/runner.js:457:7\n at next (/app/node_modules/mocha/lib/runner.js:362:14)\n at Immediate._onImmediate (/app/node_modules/mocha/lib/runner.js:425:5)\n at runCallback (timers.js:705:18)\n at tryOnImmediate (timers.js:676:5)\n at processImmediate (timers.js:658:5)","code":"E_SUB"},"msg":"E_SUB","time":"2019-11-13T15:44:37.531Z","v":0} ✓ should work with a supererror & no name {"name":"CodedError","hostname":"ade26b9ce870","pid":179,"level":50,"err":{"message":"E_MY_ERROR: boom","name":"MyError","stack":"MyError: E_MY_ERROR: boom\n at Context.it (/app/lib/test/unit/errors/CodedError.spec.js:115:15)\n at callFn (/app/node_modules/mocha/lib/runnable.js:387:21)\n at Test.Runnable.run (/app/node_modules/mocha/lib/runnable.js:379:7)\n at Runner.runTest (/app/node_modules/mocha/lib/runner.js:535:10)\n at /app/node_modules/mocha/lib/runner.js:653:12\n at next (/app/node_modules/mocha/lib/runner.js:447:14)\n at /app/node_modules/mocha/lib/runner.js:457:7\n at next (/app/node_modules/mocha/lib/runner.js:362:14)\n at Immediate._onImmediate (/app/node_modules/mocha/lib/runner.js:425:5)\n at runCallback (timers.js:705:18)\n at tryOnImmediate (timers.js:676:5)\n at processImmediate (timers.js:658:5)","code":"E_MY_ERROR"},"msg":"E_MY_ERROR: boom","time":"2019-11-13T15:44:37.532Z","v":0} ✓ should have name, code & no cause {"name":"CodedError","hostname":"ade26b9ce870","pid":179,"level":50,"err":{"message":"E_MY: boom","name":"MyError","stack":"MyError: E_MY: boom\n at Context.it (/app/lib/test/unit/errors/CodedError.spec.js:132:15)\n at callFn (/app/node_modules/mocha/lib/runnable.js:387:21)\n at 
Test.Runnable.run (/app/node_modules/mocha/lib/runnable.js:379:7)\n at Runner.runTest (/app/node_modules/mocha/lib/runner.js:535:10)\n at /app/node_modules/mocha/lib/runner.js:653:12\n at next (/app/node_modules/mocha/lib/runner.js:447:14)\n at /app/node_modules/mocha/lib/runner.js:457:7\n at next (/app/node_modules/mocha/lib/runner.js:362:14)\n at Immediate._onImmediate (/app/node_modules/mocha/lib/runner.js:425:5)\n at runCallback (timers.js:705:18)\n at tryOnImmediate (timers.js:676:5)\n at processImmediate (timers.js:658:5)","code":"E_MY"},"msg":"E_MY: boom","time":"2019-11-13T15:44:37.533Z","v":0} ✓ should have name & no cause {"name":"CodedError","hostname":"ade26b9ce870","pid":179,"level":50,"err":{"message":"E_MY: boom: E_MY_CAUSE: because many badness so high","name":"MyError","stack":"MyError: E_MY: boom: E_MY_CAUSE: because many badness so high\n at Context.it (/app/lib/test/unit/errors/CodedError.spec.js:158:15)\n at callFn (/app/node_modules/mocha/lib/runnable.js:387:21)\n at Test.Runnable.run (/app/node_modules/mocha/lib/runnable.js:379:7)\n at Runner.runTest (/app/node_modules/mocha/lib/runner.js:535:10)\n at /app/node_modules/mocha/lib/runner.js:653:12\n at next (/app/node_modules/mocha/lib/runner.js:447:14)\n at /app/node_modules/mocha/lib/runner.js:457:7\n at next (/app/node_modules/mocha/lib/runner.js:362:14)\n at Immediate._onImmediate (/app/node_modules/mocha/lib/runner.js:425:5)\n at runCallback (timers.js:705:18)\n at tryOnImmediate (timers.js:676:5)\n at processImmediate (timers.js:658:5)","code":"E_MY"},"msg":"E_MY: boom: E_MY_CAUSE: because many badness so high","time":"2019-11-13T15:44:37.534Z","v":0} ✓ should have a cause and code as name {"name":"CodedError","hostname":"ade26b9ce870","pid":179,"level":50,"err":{"message":"E_MY: E_MY_CAUSE","name":"MyError","stack":"MyError: E_MY: E_MY_CAUSE\n at Context.it (/app/lib/test/unit/errors/CodedError.spec.js:183:15)\n at callFn (/app/node_modules/mocha/lib/runnable.js:387:21)\n at 
Test.Runnable.run (/app/node_modules/mocha/lib/runnable.js:379:7)\n at Runner.runTest (/app/node_modules/mocha/lib/runner.js:535:10)\n at /app/node_modules/mocha/lib/runner.js:653:12\n at next (/app/node_modules/mocha/lib/runner.js:447:14)\n at /app/node_modules/mocha/lib/runner.js:457:7\n at next (/app/node_modules/mocha/lib/runner.js:362:14)\n at Immediate._onImmediate (/app/node_modules/mocha/lib/runner.js:425:5)\n at runCallback (timers.js:705:18)\n at tryOnImmediate (timers.js:676:5)\n at processImmediate (timers.js:658:5)","code":"E_MY"},"msg":"E_MY: E_MY_CAUSE","time":"2019-11-13T15:44:37.535Z","v":0} ✓ should work with no args {"name":"CodedError","hostname":"ade26b9ce870","pid":179,"level":50,"err":{"message":"E_SUB2","name":"Sub2Error","stack":"Sub2Error: E_SUB2\n at Context.it (/app/lib/test/unit/errors/CodedError.spec.js:206:15)\n at callFn (/app/node_modules/mocha/lib/runnable.js:387:21)\n at Test.Runnable.run (/app/node_modules/mocha/lib/runnable.js:379:7)\n at Runner.runTest (/app/node_modules/mocha/lib/runner.js:535:10)\n at /app/node_modules/mocha/lib/runner.js:653:12\n at next (/app/node_modules/mocha/lib/runner.js:447:14)\n at /app/node_modules/mocha/lib/runner.js:457:7\n at next (/app/node_modules/mocha/lib/runner.js:362:14)\n at Immediate._onImmediate (/app/node_modules/mocha/lib/runner.js:425:5)\n at runCallback (timers.js:705:18)\n at tryOnImmediate (timers.js:676:5)\n at processImmediate (timers.js:658:5)","code":"E_SUB2"},"msg":"E_SUB2","time":"2019-11-13T15:44:37.537Z","v":0} ✓ should work with a supererror, a subclass & a subclass subclass {"name":"CodedError","hostname":"ade26b9ce870","pid":179,"level":50,"err":{"message":"E_SUB","name":"Sub","stack":"Sub: E_SUB\n at Context.it (/app/lib/test/unit/errors/CodedError.spec.js:229:15)\n at callFn (/app/node_modules/mocha/lib/runnable.js:387:21)\n at Test.Runnable.run (/app/node_modules/mocha/lib/runnable.js:379:7)\n at Runner.runTest (/app/node_modules/mocha/lib/runner.js:535:10)\n at 
/app/node_modules/mocha/lib/runner.js:653:12\n at next (/app/node_modules/mocha/lib/runner.js:447:14)\n at /app/node_modules/mocha/lib/runner.js:457:7\n at next (/app/node_modules/mocha/lib/runner.js:362:14)\n at Immediate._onImmediate (/app/node_modules/mocha/lib/runner.js:425:5)\n at runCallback (timers.js:705:18)\n at tryOnImmediate (timers.js:676:5)\n at processImmediate (timers.js:658:5)","code":"E_SUB"},"msg":"E_SUB","time":"2019-11-13T15:44:37.539Z","v":0} ✓ should work with named error & supererror ✓ should return correctly from isInstance req ✓ should require all .js files (42ms) ✓ should require all .js files except index.js ✓ should require all dirs ✓ should require all .json files unit tests of @returnsServiceResponse ✓ should return successful response ✓ should return error response unit tests of ServiceSupport ✓ should create service response from dto ✓ should create service response from error unit tests of string-utils ✓ should convert camel case to snake ✓ should convert snake to camel case 63 passing (316ms) ----------|----------|----------|----------|----------|-------------------| File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s | ----------|----------|----------|----------|----------|-------------------| All files | 0 | 0 | 0 | 0 | | ----------|----------|----------|----------|----------|-------------------| ><EMAIL_ADDRESS>postunit-integration /app > run-s report ><EMAIL_ADDRESS>report /app > nyc report --reporter=html ><EMAIL_ADDRESS>lint /app > standard --verbose 'src/**/*.js' standard: Unexpected linter output: TypeError: Cannot read property 'range' of null Occurred while linting /app/src/main/entities/DatePeriod.js:27 at SourceCode.getTokenBefore (/app/node_modules/eslint/lib/source-code/token-store/index.js:303:18) at checkSpacingBefore (/app/node_modules/eslint/lib/rules/template-curly-spacing.js:60:42) at TemplateElement (/app/node_modules/eslint/lib/rules/template-curly-spacing.js:119:17) at listeners.(anonymous 
function).forEach.listener (/app/node_modules/eslint/lib/linter/safe-emitter.js:45:58)
    at Array.forEach (<anonymous>)
    at Object.emit (/app/node_modules/eslint/lib/linter/safe-emitter.js:45:38)
    at NodeEventGenerator.applySelector (/app/node_modules/eslint/lib/linter/node-event-generator.js:253:26)
    at NodeEventGenerator.applySelectors (/app/node_modules/eslint/lib/linter/node-event-generator.js:282:22)
    at NodeEventGenerator.enterNode (/app/node_modules/eslint/lib/linter/node-event-generator.js:296:14)
    at CodePathAnalyzer.enterNode (/app/node_modules/eslint/lib/linter/code-path-analysis/code-path-analyzer.js:646:23)

If you think this is a bug in `standard`, open an issue: https://github.com/standard/standard/issues

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR<EMAIL_ADDRESS>lint: `standard --verbose 'src/**/*.js'`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the<EMAIL_ADDRESS>lint script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! Test failed. See above for more details.
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR<EMAIL_ADDRESS>build: `npm install && npm test`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the<EMAIL_ADDRESS>build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2019-11-13T15_44_49_232Z-debug.log

@matthewadams could you please see whether it seems like one of the following?
https://github.com/babel/babel-eslint/issues/799
https://github.com/babel/babel-eslint/issues/741

@mightyiam As suggested in babel/babel-eslint#799, running `npm i @babel/types @babel/traverse --save-dev` fixed the problem! So strange that it worked on macOS but failed on Linux. #weird

Happy to learn that. Thank you for informing.
Goldtooth

Goldtooth is a big, gold-toothed level 8 kobold just outside the Fargodeep Mine at Goldtooth's Den.

Loot
* Bernice's Necklace (Quest Item)
* A Gold Tooth (vendor trash)
Mercenary

Appearances
* Darth Maul, Part I
* Star Wars: Jedi of the Republic—Mace Windu
* Catalyst: A Rogue One Novel
* Darth Maul—Son of Dathomir, Part Four
* Adventures in Wild Space: The Nest
* Adventures in Wild Space: The Dark
* Rogue One: A Star Wars Story
* Rogue One: A Star Wars Story novelization
* Rogue One, Part III
* Star Wars: Episode IV A New Hope
* The Weapon of a Jedi: A Luke Skywalker Adventure
* Darth Vader 7: Shadows and Secrets, Part I
* Star Wars: Commander
* Doctor Aphra 1: Aphra, Part I
* Battlefront: Twilight Company
* Aftermath: Life Debt
* Aftermath: Empire's End
* "The Perfect Weapon"
* "The Crimson Corsair and the Lost Treasure of Count Dooku"
* Star Wars: Episode VII The Force Awakens
* Star Wars: The Force Awakens novelization
* Star Wars: The Force Awakens: A Junior Novel
* The Force Awakens, Part III
package net.loomchild.maligna.filter.modifier.modify.split;

import static org.junit.Assert.assertArrayEquals;

import java.util.List;

import org.junit.Test;

/**
 * Represents {@link WordSplitAlgorithm} unit test.
 * @author loomchild
 */
public class WordSplitAlgorithmTest {

    public static final String SPACE = " ab\t 9\net ";

    public static final String[] EXPECTED_SPACE = new String[] {"ab", "9", "et"};

    /**
     * Checks if splitting on whitespace works as expected and that whitespace
     * characters are removed from the output.
     */
    @Test
    public void splitSpace() {
        WordSplitAlgorithm splitter = new WordSplitAlgorithm();
        List<String> wordList = splitter.split(SPACE);
        String[] wordArray = wordList.toArray(new String[wordList.size()]);
        // assertArrayEquals compares element by element; assertEquals on
        // arrays compares references and is deprecated in JUnit 4.
        assertArrayEquals(EXPECTED_SPACE, wordArray);
    }

    public static final String PUNCTUATION = "1. Ja, niżej podpisan(I'm \"batman01\").";

    public static final String[] EXPECTED_PUNCTUATION = new String[] {
        "1", ".", "Ja", ",", "niżej", "podpisan", "(", "I", "'", "m",
        "\"", "batman01", "\"", ")", "."};

    /**
     * Checks if splitting after punctuation characters works as expected.
     */
    @Test
    public void splitPunctuation() {
        WordSplitAlgorithm splitter = new WordSplitAlgorithm();
        List<String> wordList = splitter.split(PUNCTUATION);
        String[] wordArray = wordList.toArray(new String[wordList.size()]);
        assertArrayEquals(EXPECTED_PUNCTUATION, wordArray);
    }
}
import { Resolver, Query, Args, Mutation } from "@nestjs/graphql";
import { NotFoundException } from "@nestjs/common";
import { UserService } from "~/user/services/user.service";
import { LoginDto } from "~/auth/dto/login.dto";
import { RegisterDto } from "~/auth/dto/register.dto";
import { SessionEntity } from "~/auth/entities/session.entity";
import { AuthService } from "~/auth/services/auth.service";
import { UserRoles } from "~/user/enums/user-roles.enum";
import { TokenService } from "~/auth/services/token.service";
import { UpdatePasswordDto } from "../dto/update-password.dto";

@Resolver()
export class AuthResolver {
  constructor(
    private readonly authService: AuthService,
    private readonly userService: UserService,
    private readonly tokenService: TokenService,
  ) {}

  @Query(returns => SessionEntity)
  login(
    @Args({ name: 'loginInput', type: () => LoginDto }) loginInput: LoginDto
  ): Promise<SessionEntity> {
    return this.authService.signin({ role: UserRoles.USER, ...loginInput });
  }

  @Mutation(returns => SessionEntity)
  register(
    @Args({ name: 'registerInput', type: () => RegisterDto }) registerDto: RegisterDto
  ): Promise<SessionEntity> {
    return this.authService.signup({ role: UserRoles.USER, ...registerDto });
  }

  @Query(returns => String)
  async resetPassword(
    @Args({ name: 'email', type: () => String }) email: string,
  ): Promise<string> {
    await this.userService.findOneOrFail({ email });
    return this.authService.sendResetPasswordEmail(email);
  }

  @Mutation(returns => Boolean)
  async updatePassword(
    @Args({ name: 'updatePasswordDto', type: () => UpdatePasswordDto }) updatePasswordDto: UpdatePasswordDto,
  ): Promise<boolean> {
    const payload = this.tokenService.verify(updatePasswordDto.resetToken);
    if (!payload || payload.sub.code !== updatePasswordDto.resetCode) {
      throw new NotFoundException('Token expired');
    }
    const user = await this.userService.findOneOrFail({ email: payload.sub.email });
    await this.userService.updateOneById(user._id, { password: updatePasswordDto.password });
    return true;
  }
}
---
title: Openings & Closings
tags: [ til, infra, intern ]
category: Blog
---

Tuesday is meeting day. And after our meetings, I met up with my younger brother, who has NC State orientation.

## Today I Learned

1. Where Nuage is and where they are headed
2. Tasty 8s, the world's best gourmet hot dog shop, has closed

### Nuage Networks

A few weeks ago, we had a site-wide meeting at Raleigh, and I wrote about my experience--I discovered how I fit into the company. Today, we had a site-wide meeting, and I discovered how the company fits into the world.

There were acronyms flying, people laughing, and a serious show of 'what progress have we made, where do we go from here.' It was, in a word, invigorating. We were challenged to challenge each other, to prioritize the quality of our work, to be more than just siloed teams but functional units working cohesively together. And we were praised, because the Raleigh office has done some amazing things in its short lifespan and with a fraction of the numbers.

### Tasty 8s

There really are no words. I discovered Tasty 8s when I interned with Raleigh Youth Mission last summer. My boss, Reverend Katherine Blankenship, met with us there one afternoon as we were orienting ourselves to the program and our responsibilities. They advertised 'gourmet hot dogs.' And boy were they right.

Unfortunately, they closed in April. I was so looking forward to eating there again; I loved it so much that I had taken my Dad there when he visited last summer. He, too, was disappointed (there was a rather hearty 'dadgummit' on the other end of the line when I called). My brother will never taste of their yummy goodness. But I have, and I will miss you, Tasty 8s.
package com.cybernostics.lib.template;

import java.io.InputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.HashMap;
import java.util.Map;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;
import static org.junit.Assert.*;

/**
 *
 * @author jasonw
 */
public class TemplateProcessorTest {

    public TemplateProcessorTest() {
    }

    @BeforeClass
    public static void setUpClass() throws Exception {
    }

    @AfterClass
    public static void tearDownClass() throws Exception {
    }

    @Before
    public void setUp() {
    }

    @After
    public void tearDown() {
    }

    /**
     * Test of process method, of class TemplateProcessor.
     */
    @Test
    public void testProcess() {
        String endl = System.getProperty( "line.separator" );
        String textInput = "this is a text of the |%item_1%| text." + endl
            + "which is |%item_1%| contained on [%item_2%] several lines" + endl
            + "That are contained in |%item_3%| a file";
        System.out.println( "process" );

        Map< String, String > replacements = new HashMap< String, String >();
        replacements.put( "item_1", "boo" );
        replacements.put( "item_2", "moo" );

        String[] start = { "|%", "[%" };
        String[] end = { "%|", "%]" };

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        InputStream is = new ByteArrayInputStream( textInput.getBytes() );
        TemplateProcessor.process( is, bos, start, end, replacements );

        String outText = bos.toString();
        System.out.println( "output<" + outText + ">" );

        String textOutput = "this is a text of the boo text." + endl
            + "which is boo contained on moo several lines" + endl
            + "That are contained in |%item_3%| a file";
        System.out.println( "expected<" + textOutput + ">" );

        assertEquals( textOutput, outText );
    }
}
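The behavior this test encodes -- replace `|%name%|` / `[%name%]` placeholders when a mapping exists, and leave unknown placeholders untouched -- can be sketched in a few lines. This is a hypothetical re-implementation for illustration only, not the actual TemplateProcessor.

```python
import re

def process(text, starts, ends, replacements):
    """Replace delimited placeholders; unknown keys are left as-is."""
    for start, end in zip(starts, ends):
        pattern = re.escape(start) + r"(\w+)" + re.escape(end)
        # When the key has no replacement, keep the original placeholder
        # text (m.group(0)) instead of substituting anything.
        text = re.sub(
            pattern,
            lambda m: replacements.get(m.group(1), m.group(0)),
            text,
        )
    return text

result = process(
    "the |%item_1%| on [%item_2%] lines in |%item_3%| a file",
    ["|%", "[%"], ["%|", "%]"],
    {"item_1": "boo", "item_2": "moo"},
)
print(result)
```

As in the Java test above, `item_1` and `item_2` are substituted while `|%item_3%|`, which has no mapping, passes through unchanged.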