What's with using C# attributes with the 'Attribute' suffix?
I'm looking at some C# code that applies several LINQ to SQL attributes with the Attribute suffix, e.g. ColumnAttribute, instead of the plain Column that I am used to using. Is there any reason but verbosity to do this?
There is no semantic difference. Probably whoever wrote the code just preferred that notation.
It's also possible that the code was automatically generated using a tool. Code generation tools usually don't bother to strip the Attribute bit from the attribute's type name.
I think it was actually generated. They say it's hand rolled, but I'm sure it was generated, then hand altered. It's for a skills test, and some properties are missing from mapped classes. I think they just broke the generated, mapped classes.
The attribute is called ColumnAttribute. The compiler just provides syntactic sugar to allow you to specify it without the Attribute suffix.
There's no practical difference.
Most attributes end with the word Attribute, including ColumnAttribute, CLSCompliantAttribute, and SerializableAttribute. The compiler allows the last word Attribute to be omitted. It's the programmer's choice whether to add Attribute to such names or not.
The Attribute suffix, however, is merely a convention: it is perfectly valid, albeit unusual, to define an attribute, for example, as follows:
[AttributeUsage(AttributeTargets.All)]
public class Foo : Attribute {
}
just as it is to define an exception named Throwable, for example.
No, it's just a style decision. However, if what you're looking at is code generated using CodeDOM, the presence of the suffix is expected, since the C# code generator will keep the full ___Attribute type name when adding an attribute.
Makes sense. There's no gain in adding logic like: "Oh, this is an attribute applied to a target, let's remove the suffix."
They're syntactically the same. AFAIK there's no reason to do this unless the person in question wasn't familiar with the fact that syntax sugar in C# adds the 'Attribute' suffix automatically when adding an attribute via square brackets.
Column is simply the nickname of ColumnAttribute, and syntactically identical in functionality, and allowed by the compiler in either form.
Some people prefer to use the full name, out of habit of following the Framework Naming Guidelines, which encourage adding Attribute to custom attribute classes, or I to interface types.
You mean Column is actually the nickname of ColumnAttribute. Attribute could apply to any other attribute as well.
If you look at the Philips Healthcare - C# Coding Standard, rule 3@122, you can see they actually want coders to add the suffix "Attribute" to attributes.
http://www.tiobe.com/content/paperinfo/gemrcsharpcs.pdf
The code may be generated but the author(s) of the code generator are probably trying to create code that meets as many standards as possible.
|
/*
* This file is part of the PSL software.
* Copyright 2011-2015 University of Maryland
* Copyright 2013-2018 The Regents of the University of California
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.linqs.psl.utils.textsimilarity;
import org.linqs.psl.database.ReadableDatabase;
import org.linqs.psl.model.function.ExternalFunction;
import org.linqs.psl.model.term.Constant;
import org.linqs.psl.model.term.ConstantType;
import org.linqs.psl.model.term.StringAttribute;
import com.wcohen.ss.BasicStringWrapper;
import com.wcohen.ss.Level2Levenstein;
/**
* Wraps the Level 2 Levenshtein string similarity from the Second String library.
* Level 2 means that tokens are broken up before comparison.
* If the similarity is below a threshold (default=0.5) it returns 0.
*/
public class Level2LevenshteinSimilarity implements ExternalFunction {
    // similarity threshold (default=0.5)
    private double simThresh;

    // constructors
    public Level2LevenshteinSimilarity() {
        this.simThresh = 0.5;
    }

    public Level2LevenshteinSimilarity(double simThresh) {
        this.simThresh = simThresh;
    }

    @Override
    public int getArity() {
        return 2;
    }

    @Override
    public ConstantType[] getArgumentTypes() {
        return new ConstantType[] { ConstantType.String, ConstantType.String };
    }

    @Override
    public double getValue(ReadableDatabase db, Constant... args) {
        String a = ((StringAttribute) args[0]).getValue();
        String b = ((StringAttribute) args[1]).getValue();
        BasicStringWrapper aWrapped = new BasicStringWrapper(a);
        BasicStringWrapper bWrapped = new BasicStringWrapper(b);
        Level2Levenstein leven = new Level2Levenstein();
        double sim = leven.score(aWrapped, bWrapped);

        // Truncate scores below the threshold to zero.
        if (sim < simThresh) {
            return 0.0;
        }
        return sim;
    }
}
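The scoring above is delegated entirely to the SecondString library's Level2Levenstein class. As a rough intuition for the distance underlying that similarity, here is a minimal, self-contained sketch of plain Levenshtein edit distance; note it omits SecondString's Level 2 tokenization and its rescaling of distance into a similarity score, so it is an illustration, not a drop-in replacement.

```java
public class LevenshteinSketch {
    // Classic dynamic-programming edit distance:
    // dp[i][j] = edits needed to turn a[0..i) into b[0..j).
    static int distance(String a, String b) {
        int[][] dp = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) {
            dp[i][0] = i;
        }
        for (int j = 0; j <= b.length(); j++) {
            dp[0][j] = j;
        }
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = (a.charAt(i - 1) == b.charAt(j - 1)) ? 0 : 1;
                dp[i][j] = Math.min(dp[i - 1][j - 1] + cost,
                        Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1));
            }
        }
        return dp[a.length()][b.length()];
    }

    public static void main(String[] args) {
        System.out.println(distance("kitten", "sitting"));  // 3
    }
}
```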
|
affections.
The bacilli are probably first introduced into the diseased tissue by wandering cells which take up and transport the bacilli, which are not themselves capable of independent movement; it is possible that this transporting wandering cell then becomes transformed into an epithelioid cell and afterwards into a giant cell. In many infective experiments wandering cells containing tubercle bacilli can be directly demonstrated in the blood and tissues.
[Figure caption fragment: "… of the skin × 700."]
of the tongue, in tuberculosis of the pelvis of the kidney, of the uterus, of the testicle, &c., the bacilli were constantly found ; likewise in 21 cases of tuberculous glands. Farther, they were present in 13 cases of tuberculosis of the joints and 10 cases of tuberculous affections of bones ; in 4 cases of lupus, where the bacilli were only found in giant cells, and only one organism in each ; in 7 cases of bovine tuberculosis ; finally, in various animals inoculated with tubercular material (273 guinea-pigs, 105 rabbits, 44 field mice, 28 white mice, 19 rats, 13 cats, as well as dogs, marmots, fowls, pigeons, &c.). Further, sputa and organs in a very large number of other non-
|
package james.monochrome.views;
import android.content.Context;
import android.util.AttributeSet;
public class DrawingImageView extends SquareImageView {
    public DrawingImageView(Context context) {
        super(context);
    }

    public DrawingImageView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    public DrawingImageView(Context context, AttributeSet attrs, int defStyleAttr) {
        super(context, attrs, defStyleAttr);
    }

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);
        // Snap the square's side length down to the nearest multiple of 100 pixels.
        int size = Math.min(getMeasuredWidth(), getMeasuredHeight());
        size = (size / 100) * 100;
        setMeasuredDimension(size, size);
    }
}
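The snapping in onMeasure relies on Java integer division truncating toward zero, so `(size / 100) * 100` rounds a positive size down to a multiple of 100. A minimal standalone sketch of just that arithmetic (the class and method names here are illustrative, not part of the original view):

```java
public class SnapDemo {
    // Rounds a non-negative pixel size down to the nearest multiple of 100,
    // mirroring the arithmetic used in DrawingImageView.onMeasure.
    static int snapTo100(int size) {
        return (size / 100) * 100;
    }

    public static void main(String[] args) {
        System.out.println(snapTo100(157));  // 100
        System.out.println(snapTo100(399));  // 300
        System.out.println(snapTo100(400));  // 400
    }
}
```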
|
task and servers not showing
Could be a version mismatch.
What's the version of asynq package and version of asynqmon?
Thank you, it was a version mismatch.
@lululxq thank you for raising this issue. I'll update the README so that it's clear to other people.
|
User:Dvwwyb419
Ugg Outlet Uk
Tiffany And Co Outlet
Canada Goose Outlet Online
Ugg Boots Sale
Canada Goose Sale
Cheap Moncler
Tiffany Outlet Website
Canada Goose Sale
Tiffany Uk
Tiffany And Co Outlet
Tiffany Bracelet
Tiffany Uk
Moncler Jackets Outlet
Tiffany Uk
Tiffany Bracelet
Cheap Canada Goose
Tiffany Outlet
Tiffany Outlet
Canada Goose Outlet
Ugg Outlet Online
|
JOYCE SNYDER v. GLADEVIEW HEALTH CARE CENTER ET AL.
(AC 35474)
Beach, Sheldon and Keller, Js.
Argued February 14—
officially released April 29, 2014
Kenneth J. McDonnell, with whom, on the brief, was William P. Monigan, for the appellant (plaintiff).
Douglas M. Connors, with whom, on the brief, was Samuel I. Reich, for the appellees (defendants).
Opinion
SHELDON, J.
The plaintiff Peter Snyder, as the executor of the estate of his deceased wife, Joyce Snyder (claimant), appeals from the decision of the Workers’ Compensation Review Board (board) affirming the ruling of the Workers’ Compensation Commissioner (commissioner) that a stipulated settlement that was prepared by the defendant Gladeview Health Care Center, but signed only by the claimant, was not enforceable. On appeal, the plaintiff claims that: (1) the defendant was bound by the stipulated settlement; (2) the commissioner was required to approve the stipulated settlement absent evidence of fraud, accident, mistake or surprise; and (3) the board erred in affirming the commissioner’s denial of the plaintiff’s motion to correct. We disagree with the plaintiff, and thus affirm the decision of the board.
The following facts and procedural history are relevant to our resolution of the plaintiff’s claims. The claimant was employed by the defendant as a registered nurse for approximately ten years. On January 22, 1997, she sustained a lower back injury during the course of her employment with the defendant. As a result of that injury, the claimant underwent three surgeries on her spine and was unable to return to work. At the time of the claimant’s third surgery, her treating physician noticed that she had an abnormally low white blood cell count and referred her to an oncologist, who diagnosed her as suffering from acute myelogenous leukemia, of which she informed the defendant.
The claimant and the defendant began discussing the possibility of settling her workers’ compensation case in October, 2003. On or about January 21, 2011, as part of an anticipated final settlement of the case, the United States Department of Health and Human Services approved a proposed Medicare set-aside trust, which was to have paid the claimant $2512 annually, commencing on February 24, 2012, and continuing until the date of her death or February 24, 2022, whichever occurred first. In addition to this Medicare set-aside account, the defendant agreed to make a single final payment to the claimant in the amount of $75,000.
On February 3, 2011, the defendant sent the claimant an electronic version of the full and final stipulation it had drafted for her approval and requested that her counsel schedule an approval hearing before the commissioner. On February 4, 2011, the claimant signed the stipulation in the presence of her attorney, her husband and another witness, and her counsel requested a hearing before the commissioner to approve the stipulation. The following day, on February 5, 2011, the claimant died because of complications from leukemia.
On February 8, 2011, by way of an e-mail, the defendant’s counsel acknowledged receipt of the claimant’s hearing request form. Thereafter, on March 1, 2011, counsel for the claimant and the defendant appeared for the scheduled hearing before the commissioner with the intention of presenting the stipulation for approval. At the hearing, however, the claimant’s counsel informed the commissioner and the defendant’s counsel of the claimant’s recent death. Upon learning that the claimant had died, the defendant, through counsel, withdrew its consent to submitting the proposed stipulation for the commissioner’s approval. The hearing was therefore adjourned without presenting the stipulation to the commissioner for approval.
On October 4, 2011, at the plaintiff’s request, counsel for the claimant and the defendant appeared before the commissioner at a formal hearing where they were permitted to present evidence on the issue of whether the stipulation should be approved. Thereafter, on February 2, 2012, the commissioner issued a written ruling denying the plaintiff’s request to approve the stipulation and dismissing his claim. In paragraph six of the ruling, the commissioner noted that “[t]he settlement was never properly before a commissioner nor was it approved prior to the claimant’s death, nor did the [defendant] sign the settlement documents.” On February 16, 2012, the plaintiff filed a motion to correct this paragraph of the commissioner’s ruling, seeking to strike that paragraph and replace it with the following language: “The settlement was properly before the commissioner on March 1, 2011 when the [defendant] appeared through counsel at the scheduled hearing to approve the parties’ written stipulation and then, after learning that the claimant had passed away, the [defendant’s] counsel refused to physically sign a hard copy of its own written stipulation and withdrew its request that the [c]ommission approve the written stipulation.” The commissioner denied the plaintiff’s motion to correct on February 21, 2012.
The plaintiff subsequently appealed from the commissioner’s rulings to the board, which affirmed the commissioner’s order of dismissal, finding that “when an agreement which is not executed by both parties is presented to the [c]ommission for approval it is axiomatic that both parties must assent to its approval at that hearing. The trial commissioner . . . reviewed the circumstances herein carefully, and provided a clear rationale for his decision not to approve the settlement. As the conclusions of the trial commissioner are consistent with precedent, they are not contrary to law.” This appeal followed.
“As a threshold matter, we set forth the standard of review applicable to workers’ compensation appeals. The principles that govern our standard of review in workers’ compensation appeals are well established. The conclusions drawn by [the commissioner] from the facts found must stand unless they result from an incorrect application of the law to the subordinate facts or from an inference illegally or unreasonably drawn from them. ... It is well established that [a]lthough not dispositive, we accord great weight to the construction given to the workers’ compensation statutes by the commissioner and review board. . . . Statutory construction is a question of law and therefore our review is plenary. . . .
“Although the [Workers’ Compensation Act (act), General Statutes § 31-275 et seq.] does not explicitly provide for [stipulated settlement agreements], we have consistently upheld the ability to compromise a compensation claim as inherent in the power to make a voluntary agreement regarding compensation. . . . [O]nce an agreement is reached, [General Statutes § 31-296 provides that] a commissioner may approve the agreement if it conforms in every regard to the provisions of chapter 568 of the General Statutes. . . . Approval of ... a stipulation by the commissioner is not an automatic process. It is his function and duty to examine all the facts with care before entering an award, and this is particularly true when the stipulation presented provides for a complete release of all claims under the act. . . . Once approved, an Award by Stipulation is a binding award which, on its terms, bars a further claim for compensation unless [General Statutes §] 31-315, which allows for modification, is satisfied.” (Citations omitted; footnote altered; internal quotation marks omitted.) O’Neil v. Honeywell, Inc., 66 Conn. App. 332, 335-37, 784 A.2d 428 (2001), cert. denied, 259 Conn. 914, 792 A.2d 852 (2002).
The plaintiff first argues that the defendant was bound by the written stipulation because it prepared the stipulation for the claimant and the claimant signed and agreed to it before her death. In support of this argument, the plaintiff asserts that the commissioner failed to follow board precedent such as Festa v. Hamden, No. 3052, CRB 3-95-4 (October 16, 1996), and Drozd v. Connecticut/DMR Southbury Training School, No. 5158, CRB 5-06-11 (October 19, 2007). We are not persuaded.
In Festa, the plaintiff and the defendants, through their counsel, reported to the commissioner at a formal hearing that they had reached an agreement to settle the plaintiff’s case arising from his compensable elbow injury. At the hearing, counsel for the parties explained the terms of the proposed settlement, and the plaintiff stated that he understood its terms. At a subsequent hearing on the defendants’ motion to enforce the stipulation, the plaintiff testified before a different commissioner that he is a diabetic and has bouts of hypoglycemia that render him dizzy and confused. That commissioner found that at the time the settlement was being discussed at the prior hearing, the plaintiff was suffering from such an attack and did not fully understand the terms of the settlement. The plaintiff never signed the agreement, and, in denying the defendants’ motion to enforce the agreement, the commissioner declared it to be null and void. The defendants petitioned the board for review of the commissioner’s decision, arguing that the record revealed that the plaintiff understood the nature of the stipulation and that he should not be allowed to change his mind after agreeing to it. The board disagreed with the defendants, ruling that the stipulation could not be enforced against the plaintiff because “there was significant doubt as to the [plaintiff’s] capacity to understand the nature and extent of the stipulation . . . coupled with the fact that the [plaintiff] never actually signed the stipulation . . . ."
The plaintiff misstates the holding of Festa by asserting that the board in that case held that the signatures of both parties were not required before a stipulation could be approved by the commissioner. Although the board in that case noted the fact that the plaintiff before it had never signed the stipulation, that was not the determinative factor upon which the board relied in ruling on the stipulation’s enforceability. The facts presented to the board in Festa are different from those in the present case for several reasons. In Festa, unlike in this case, the plaintiff had not signed the stipulation prior to its submission to the commissioner for approval. Further, the issue in Festa did not center around the fact that the plaintiff had not signed the stipulation, but rather around the plaintiff’s mental capacity to understand the nature and the extent of the stipulation in light of the hypoglycemic attack he had suffered on the morning of the approval hearing. At no point in its decision in Festa did the board hold that a stipulation without the plaintiff’s physical signature still would be a valid and binding agreement on the parties. Thus, the plaintiff’s use of Festa to support the argument that the commissioner in this case acted arbitrarily in holding that the defendant was required to sign the stipulation is unavailing.
The plaintiff also relies on Drozd v. Connecticut/DMR Southbury Training School, supra, No. 5158, CRB 5-06-11, for the proposition that it would not be inequitable under the facts of the present case to enforce the agreement against the defendant. We disagree. In Drozd, counsel for the defendant had advised the commissioner of the terms of a proposed settlement of the plaintiff’s case, which the commissioner had approved subject to its memorialization in writing. On the day of the hearing to approve the settlement, counsel for the defendant reported that he did not have the authority to settle because the defendant had rescinded its offer. The plaintiff moved to enforce the unexecuted agreement, which was denied by the commissioner and upheld by the board on appeal. In affirming the commissioner’s ruling that the settlement agreement was not enforceable, the board held that the agreement had not been reduced to writing and that it would not enforce an oral agreement against one of the parties. The board explained that even if a defendant acts inequitably by not executing a settlement agreement, a commissioner lacks the authority to enforce the agreement against it without its consent. In the present case, the commissioner properly did not enforce the agreement against the defendant because the defendant had not consented to the agreement, as evidenced by its refusal to sign the settlement documents.
The plaintiff next argues that, in the absence of evidence of fraud, accident, mistake or surprise, the commissioner was required to approve the parties’ stipulation. In support of this argument, the plaintiff relies on this court’s decision in O'Neil v. Honeywell, Inc., supra, 66 Conn. App. 338-39. The defendant responds that O’Neil is inapplicable, arguing instead that this case is controlled by Secola v. Connecticut Comptroller’s Office, No. 3102, CRB 5-95-06 (February 26, 1997). We agree with the defendant.
In O’Neil, the administratrix of the decedent’s estate appealed to this court from the board’s ruling affirming the commissioner’s decision to open and set aside the parties’ approved stipulation. The parties had negotiated a full and final settlement of the decedent’s case arising from a compensable spinal injury, and both parties signed a stipulation setting forth the terms of the settlement in September, 1996. On October 4, 1996, the decedent died from an accidental overdose of prescription drugs. Without notifying the defendant second injury fund of the decedent’s death, counsel for the decedent presented the stipulation to the commissioner at a hearing about which the defendant was never provided notice. The stipulation was approved by the commissioner at that hearing. The defendant was later notified of the decedent’s death and moved to open the approved stipulation. The commissioner granted the motion in order to afford the defendant another hearing, at which the defendant would have the opportunity to object to the approval of the stipulation if it chose to do so. The plaintiff appealed from this ruling to the board, which upheld the commissioner’s decision. The plaintiff appealed to this court from the board’s affirmance of the commissioner’s ruling, and this court reversed the decision of the board, holding that the commissioner did not have the authority to open and set aside the approved stipulation absent evidence of fraud, accident, mistake or surprise.
O’Neil is distinguishable from the present case for several reasons. O’Neil involved the opening of a previously approved agreement, not the commissioner’s threshold decision of whether or not to approve a stipulated agreement between the parties, as is the case here. Both the decedent and the defendant in O’Neil had signed the stipulated agreement, and it had been approved by the commissioner without knowledge of the decedent’s death. In the present case, by contrast, only the claimant had signed the stipulated agreement prior to her death, and the commissioner never approved it. This court stated in O’Neil that, “under the recognized grounds for equitable interference . . . neither the court nor the plaintiff had a duty to inform the defendant of the approval hearing and the claimant’s death after the agreement was signed, and their failure to do so could not have affected the defendant’s ability to make a defense because the parties already had reached a ‘full, final and complete’ settlement of all claims arising from the injury.” (Emphasis in original.) O’Neil v. Honeywell, Inc., supra, 66 Conn. App. 339. Because, in the present case, we are not presented with a settlement agreement executed by both parties and approved by the commissioner before the claimant’s death, O’Neil does not govern this claim. Rather, we conclude that Secola, which is more factually similar to the present case, provides us with clear guidance on the issue presented.
In Secola, the board reviewed the commissioner’s refusal to approve a stipulation that both parties had signed and agreed to prior to the claimant’s death from cancer. Secola v. Connecticut Comptroller’s Office, supra, No. 3102, CRB 5-95-06. The commissioner found that the insurer was not informed of the claimant’s cancer diagnosis during settlement negotiations and that her illness had a direct effect on the amount of future benefits for which the defendant might be liable. Id. The board affirmed the commissioner’s decision, stating that “protecting the employee’s rights does not mean ignoring the rights of the employer or insurer. Fairness and equity are two-way streets, and the commissioner is certainly entitled to consider more than the claimant’s position in deciding whether a stipulation should be approved.” Id. The board concluded that, “[j]ust as the commissioner is entitled to reject a stipulation if the claimant has second thoughts, he may exercise his authority to withhold approval of a stipulation if the respondent no longer concords with its terms when it is submitted for ratification.” Id.
The holding of Secola is instructive in the present case. Here, although the defendant had not signed the stipulation, as had the defendant in Secola, the commissioner found that the defendant was no longer in agreement with the stipulation at the time it was submitted to the commissioner for approval. “The commissioner . . . may exercise his authority to withhold approval of a stipulation if the respondent no longer concords with its terms when it is submitted for ratification.” Secola v. Connecticut Comptroller’s Office, supra, No. 3102, CRB 5-95-06. The board applied the findings of Secola to the present case and determined that there were no equitable considerations favoring enforcement of the settlement agreement against the defendant. “Where the commissioner has reason to suspect that the . . . stipulation is no longer agreed to by both parties at the time it is being offered for approval ... he is entitled to reject that agreement.” Considine v. Slotnik, No. 3468, CRB 04-96-11 (May 6, 1998). Simply stated, “no stipulation is binding until it has been approved by the commissioner.” Muldoon v. Homestead Insulation Co., 231 Conn. 469, 480, 650 A.2d 1240 (1994). Without the commissioner’s approval, the stipulation between the parties could not be operative within the confines of the act. The commissioner’s findings support his decision to deny approval of the stipulation. We therefore affirm his decision.
The plaintiff last argues that the board erred in affirming the commissioner’s denial of the plaintiff’s motion to strike and motion to correct because the commissioner’s findings in paragraph six of his ruling were not supported by the evidence and were contrary to material facts that were admitted or undisputed. We disagree.
“[T]he finding of a commissioner cannot be corrected by striking out or adding paragraphs, unless the record discloses that he has found facts without evidence, or failed to include material facts which were admitted or undisputed . . . .” Palumbo v. George A. Fuller Co., 99 Conn. 353, 356-57, 122 A. 63 (1923). The plaintiff, by way of motion, sought to correct and to strike the commissioner’s finding of fact that “[t]he settlement was never properly before a commissioner nor was it approved prior to the claimant’s death, nor did the [defendant] sign the settlement documents.” The plaintiff moved to replace this finding with language stating that the settlement was properly before the commissioner and that, after learning that the claimant had died, the defendant’s counsel withdrew its request that the commissioner approve the written stipulation.
The record in this case does not reveal that the commissioner’s findings were unsupported by the evidence. By contrast, the plaintiff’s suggested substituted findings, had they been accepted by the board, would not have been supported by the evidence or by undisputed facts. There is nothing in the record to suggest that the stipulation was properly before the commissioner at the time of the hearing. Rather, because the defendant had not executed the stipulation and withdrew its consent to submit the stipulation for approval, the stipulation was never properly before the commissioner. For the commissioner to find otherwise, or for the board to have reversed the commissioner’s dismissal of the plaintiff’s motion to correct and motion to strike, would have been contrary to the evidence presented and the undisputed facts of this case. Thus, because the commissioner’s findings in paragraph six were supported by the evidence, we conclude that the board did not err in affirming the commissioner’s denial of the plaintiff’s motion to strike and motion to correct.
The decision of the Workers’ Compensation Review Board is affirmed.
In this opinion the other judges concurred.
Arrowhead Capital Corporation, the workers’ compensation insurer for the defendant Gladeview Health Care Center, also is a defendant and a party on appeal. For convenience, we refer in this opinion to Gladeview Health Care Center as the defendant.
General Statutes § 31-296 (a) provides in relevant part: “If an employer and an injured employee ... at a date not earlier than the expiration of the waiting period, reach an agreement in regard to compensation, such agreement shall be submitted in writing to the commissioner by the employer with a statement of the time, place and nature of the injury upon which it is based; and, if such commissioner finds such agreement to conform to the provisions of this chapter in every regard, the commissioner shall so approve it. A copy of the agreement, with a statement of the commissioner’s approval, shall be delivered to each of the parties and thereafter it shall be as binding upon both parties as an award by the commissioner. . . .”
At the time the stipulation was scheduled for approval by the commissioner, the defendant was unaware that the claimant had died three weeks prior, and counsel for the defendant testified at the formal hearing before the commissioner that, “[h]ad [he] known that there were health issues aside from the workers’ compensation claim, [his] advice to them would have been to consider whether or not they wanted to go forward with a full and final stipulation since a stipulation would necessarily involve future uncertain benefits, and with a person of declining health they may have wished to forgo settling that case since there was no present benefit that was being paid or was being threatened at that time.” The claimant in the present case disclosed her medical records showing her diagnosis of leukemia, but her counsel withheld the information of her death from the defendant prior to the approval hearing before the commissioner. By contrast, the decedent in Secola had withheld her terminal illness diagnosis from the defendant prior to the parties’ execution of the stipulation.
|
Talk:Entity6666/@comment-39515848-20190817124821/@comment-37459810-20190817164313
Oki
The story is fine
I just don't really ever include real jammers for this reason OwO
|
Page:Ivanhoe (1820 Volume 3).pdf/221
for a little space aside by the rock, but fails not to find its course to the ocean. That scroll which warned thee to demand a champion, from whom could'st thou think it came, if not from Bois-Guilbert? In whom else could'st thou have excited such interest?"
"A brief respite from instant death," said Rebecca, "which will little avail me—was this all thou could'st do for one, on whose head thou hast heaped sorrow, and whom thou hast brought near even to the verge of the tomb?"
"No, maiden," said Bois-Guilbert, "this was not all that I purposed. Had it not been for the accursed interference of yon fanatical dotard, and the fool of Goodalricke, who, being a Templar, affects to think and judge according to the ordinary rules of humanity, the office of the Champion Defender had devolved, not on a Preceptor, but on a Companion of the Order. Then I myself—such was my purpose—had, on the sounding of the trumpet, appeared in the lists as thy champion, disguised indeed in the fashion of a roving knight, who seeks adventures to prove his
|
/**
 * Given an array of 2n integers, your task is to group these integers into n pairs of integers, say (a1, b1), (a2, b2), ..., (an, bn), which makes the sum of min(ai, bi) for all i from 1 to n as large as possible.
*
*Example 1:
*Input: [1,4,3,2]
*
*Output: 4
*Explanation: n is 2, and the maximum sum of pairs is 4 = min(1, 2) + min(3, 4).
*Note:
*n is a positive integer, which is in the range of [1, 10000].
*All the integers in the array will be in the range of [-10000, 10000].
*/
import java.util.Arrays;

class Solution {
    /**
     * No need to hand-roll the sort: Arrays.sort already provides an
     * efficient one. After sorting ascending, the answer is simply the
     * sum of the values at the even indices [0, 2, 4, ...].
     */
    public int arrayPairSum(int[] nums) {
        Arrays.sort(nums);
        int size = nums.length;
        int result = 0;
        for (int i = 0; i < size; i = i + 2) {
            result += nums[i];
        }
        return result;
    }

    /**
     * Less efficient variant: hand-written bubble sort plus an explicit
     * min per pair. (Given a distinct name, since two methods with the
     * same signature cannot coexist in one class.)
     */
    public int arrayPairSumSlow(int[] nums) {
        nums = sortAsc(nums);
        int size = nums.length;
        int result = 0;
        for (int i = 0; i < size; i = i + 2) {
            result += min(nums[i], nums[i + 1]);
        }
        return result;
    }

    /**
     * Bubble sort, ascending order.
     */
    private int[] sortAsc(int[] nums) {
        int size = nums.length;
        int median;
        for (int i = 0; i < size; i++) {
            for (int j = i + 1; j < size; j++) {
                if (nums[i] > nums[j]) {
                    median = nums[i];
                    nums[i] = nums[j];
                    nums[j] = median;
                }
            }
        }
        return nums;
    }

    /**
     * Return the smaller of the two values.
     */
    private int min(int x, int y) {
        return x > y ? y : x;
    }
}
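As a quick sanity check of the greedy pairing described above, here is a standalone sketch (the class and method names are illustrative, not part of the original solution): sort ascending and sum the elements at even indices, which pairs each value with its right neighbor.

```java
import java.util.Arrays;

public class ArrayPairSumCheck {
    // Greedy pairing: after an ascending sort, each even-indexed element
    // is the min of its pair, so summing them maximizes the total.
    static int arrayPairSum(int[] nums) {
        Arrays.sort(nums);
        int result = 0;
        for (int i = 0; i < nums.length; i += 2) {
            result += nums[i];
        }
        return result;
    }

    public static void main(String[] args) {
        // Example from the problem statement: min(1,2) + min(3,4) = 1 + 3 = 4
        System.out.println(arrayPairSum(new int[]{1, 4, 3, 2})); // prints 4
        // Negative values are handled the same way: min(-1,0) + min(2,3) = 1
        System.out.println(arrayPairSum(new int[]{-1, 0, 2, 3})); // prints 1
    }
}
```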
|
////////////////////////////////////////////////////////////////////////////
//
// Copyright 2020 Realm Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
////////////////////////////////////////////////////////////////////////////
import { Fetcher } from "../Fetcher";
import routes from "../routes";
/** @inheritdoc */
export class ApiKeyAuth implements Realm.Auth.ApiKeyAuth {
/**
* The fetcher used to send requests to services.
*/
private readonly fetcher: Fetcher;
/**
* Construct an interface to the API-key authentication provider.
* @param fetcher The fetcher used to send requests to services.
*/
constructor(fetcher: Fetcher) {
this.fetcher = fetcher;
}
/** @inheritdoc */
create(name: string): Promise<Realm.Auth.ApiKey> {
return this.fetcher.fetchJSON({
method: "POST",
body: { name },
path: routes.api().auth().apiKeys().path,
tokenType: "refresh",
});
}
/** @inheritdoc */
fetch(keyId: string): Promise<Realm.Auth.ApiKey> {
return this.fetcher.fetchJSON({
method: "GET",
path: routes.api().auth().apiKeys().key(keyId).path,
tokenType: "refresh",
});
}
/** @inheritdoc */
fetchAll(): Promise<Realm.Auth.ApiKey[]> {
return this.fetcher.fetchJSON({
method: "GET",
tokenType: "refresh",
path: routes.api().auth().apiKeys().path,
});
}
/** @inheritdoc */
async delete(keyId: string): Promise<void> {
await this.fetcher.fetchJSON({
method: "DELETE",
path: routes.api().auth().apiKeys().key(keyId).path,
tokenType: "refresh",
});
}
/** @inheritdoc */
async enable(keyId: string): Promise<void> {
await this.fetcher.fetchJSON({
method: "PUT",
path: routes.api().auth().apiKeys().key(keyId).enable().path,
tokenType: "refresh",
});
}
/** @inheritdoc */
async disable(keyId: string): Promise<void> {
await this.fetcher.fetchJSON({
method: "PUT",
path: routes.api().auth().apiKeys().key(keyId).disable().path,
tokenType: "refresh",
});
}
}
|
import { ApiProperty } from "@nestjs/swagger";
import { IsNotEmpty, IsPhoneNumber } from "class-validator";
export class Vaccine {
id?: string;
@IsNotEmpty({ message: 'The user_id is required' })
user_id?: string;
@IsNotEmpty({ message: 'The name is required' })
name?: string;
@IsNotEmpty({ message: 'The amount is required' })
amount?: number;
@IsNotEmpty({ message: 'The email is required' })
email?: string;
@IsPhoneNumber('TH')
@IsNotEmpty({ message: 'The tel is required' })
tel?: string;
@IsNotEmpty({ message: 'The lat is required' })
lat?: number;
@IsNotEmpty({ message: 'The long is required' })
long?: number;
@IsNotEmpty({ message: 'The description is required' })
description?: string;
@IsNotEmpty({ message: 'The createAt is required' })
createAt?: Date;
}
|
Tunisia
Appearance
Tunisia has wavy, dark brown hair and tan skin. She has dark brown eyes.
She usually wears western clothes, but on special occasions she will wear a Blouza and Fouta.
Personality and Interests
Her favorite foods are Couscous and chickpeas (usually baked in cakes).
|
Numerous experimental indications suggest that the Hume-Rothery (HR) mechanism
plays an important role in stabilizing quasicrystalline and amorphous phases.
However, the exponential damping of the conventional Friedel oscillations at
the relevant, elevated temperatures $T$ poses a severe challenge to the HR
stabilization. In order to resolve this problem it is shown using a Feynman
diagram technique that quantum correlations in the electron sea, arising from
the interplay of Coulomb interaction and impurity scattering, can strongly
enhance the Friedel oscillations in these systems even at elevated temperature.
The resulting corrections to the Friedel potential are in agreement with
available experimental results on amorphous HR alloys. It is proposed to
include the enhancement of the Friedel amplitude derived in the present work
into pseudopotentials through the local field factor.
|
Simulated ice crystal formation on substrates
ABSTRACT
A process for producing an article having a three-dimensional optical effect comprising a transparent substrate provided with a translucent simulated ice crystal formation having a controlled amount of fern-like patterns along with articles produced by this process.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention is generally concerned with providing an article having a three-dimensional decorative appearance, particularly in producing an article having a continuous or discontinuous pattern simulating a reproducible natural ice crystal formation for use in the decorative glass industry, and most particularly for use on doors and windows.
2. Description of the Prior Art
Frosting articles, particularly those made of glass or plastic, for aesthetic reasons, has long been known and used in many diverse industries. These industries include architecture/construction, packaging, display systems and furniture making. Conventional frosting of these substrates is achieved either chemically by an etching liquid such as an acid solution, or mechanically by abrading such as by sandblasting. Another method known in the prior art which is utilized to provide a textured surface on glass and plastic articles, comprises the steps of applying animal hide glue to the surface of the glass which has been sandblasted, then heating the glue and allowing it to dry. This causes the glue to pull chips from the surface, thereby producing a random frosting pattern with an overabundance of fern-like surface patterns. For these reasons, the conventional glue chip process does not yield an ice crystal formation which closely simulates a natural pattern. Such a process is discussed in U.S. Pat. No. 5,631,057 to Carter, which is directed to yet another method of providing a decorative appearance that involves adhering appliques to the surface of the glass.
The aforementioned methods produce articles that are primarily decorative or advertising in nature, are frequently unique and are difficult to reproduce in exactly the same pattern on a commercial scale. In attempting to reproduce an intricate pattern such as natural ice formation, the problem is magnified. The use of a natural ice crystal formation pattern for imparting the illusion of a very cold environment for beverage dispensing machines and other refrigerated products that are displayed and merchandised behind glass in various refrigerated commercial equipment is very desirable from a marketing standpoint.
None of the prior art techniques discussed above provides a satisfactory three dimensional ice crystal effect on two dimensional optically transparent substrates, which can be duplicated in high volumes. Therefore, there exists a need in the decorative glass industry to provide a process which produces a translucent three-dimensional optical effect which closely simulates natural ice crystal formation on a suitable substrate, wherein the pattern can be reproduced in large quantities.
SUMMARY OF THE INVENTION
In its broadest aspect, the present invention provides a three-dimensional optical effect simulating the appearance of a natural ice formation on the surface of substrates, such as a glass or a plastic. It is preferred that the substrate is optically transparent, although translucent and opaque substrates can be used. The ice crystal formation produced by the present process closely resembles ice crystal formations found in nature by substantially reducing the fern-like surface patterns associated with the prior art methods discussed. In accordance with the present invention in its broadest aspect, the process for producing an article having a three dimensional optical effect pattern comprises the steps of:
a. applying a screen pattern of the ice crystal formation onto a substrate;
b. screen printing a polymer coating composition on the substrate; and
c. curing the polymer coating composition to provide a three dimensional translucent simulated ice crystal formation having from 0 to 50 percent of the surface area within said formation containing a fern-like surface pattern.
The polymer coating composition is selected from synthetic thermoplastic or thermosetting polymers, which can be cured by thermally initiated polymerization or by photo-initiated polymerization or by a combination of these processes.
The process yields an article of manufacture that meets high standards of quality control regarding the reproducibility of the ice crystal formation.
It is therefore a primary object of this invention to provide a simulated ice crystal formation which has a three dimensional appearance on a substrate, such as a glass or a plastic material.
Another object of this invention is to provide a clear substrate with a simulated ice crystal formation that more closely resembles a natural ice crystal formation by controlling the amount and the configuration of the fern-like pattern formation and that is reproducible in a commercial process.
Another related object of this invention is to provide a decorative ice crystal effect on optically transparent substrates.
A more particular object of this invention is to provide a process which imparts a translucent three-dimensional simulated ice crystal formation on transparent substrates, such as doors and panels of refrigerated units.
An associated particular object of this invention is to provide a process which produces a three-dimensional simulated translucent ice crystal formation on translucent and opaque substrates such as drinking glasses, mugs, panels and bottles.
A further particular object of this invention is to provide a three dimensional appearance simulating a natural ice crystal formation on a suitable substrate wherein the pattern is continuous.
Yet another particular object of this invention is to provide a three dimensional appearance simulating a natural ice crystal formation on a suitable substrate wherein the pattern is discontinuous.
BRIEF DESCRIPTION OF THE DRAWINGS
The characteristics and advantages of the invention emerge from the following description given by way of example only with reference to the appended drawings in which:
FIG. 1 is a door unit of a refrigerated display case having a transparent glass carrying a translucent discontinuous pattern on the surface simulating the appearance of a natural ice crystal formation.
FIG. 2 is a door unit of a refrigerated display case having a translucent continuous pattern of a simulated natural ice crystal formation according to the invention.
FIG. 3 is a drinking glass having a translucent continuous pattern of a simulated natural ice crystal formation according to the invention.
FIG. 4 is a magnified close-up of the fern-like pattern formed within the ice crystal formations.
FIG. 5 is a schematic view of the process according to the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
In accordance with the present invention, a glass or plastic article is produced having a translucent three-dimensional simulated ice crystal formation by controlling the amount and configuration of the fern-like patterns. Referring to the drawings in detail, FIG. 1 shows an optically transparent door panel unit 10 of a refrigerated display case comprising a transparent glass 16 carrying a translucent discontinuous pattern on the surface of the glass simulating the appearance of a natural ice crystal formation 18 within door frame 14 and attached on support frame 12. FIG. 2 is basically the same construction as described in FIG. 1 except that the ice crystal formation is shown as a continuous pattern. FIG. 3 shows a drinking glass 30 having a three-dimensional translucent simulated ice crystal formation 36 on glass surface 34. This example illustrates only one application of the present invention. In FIG. 5 a simplified flow diagram is presented showing the major steps in the process of the invention. In one preferred embodiment, flat glass panels supported on a conveyor are moved through a series of operations. First the glass sheets are moved through a WASHER where detergent solutions and rotating brushes may be used to remove any dirt from the surface of the glass sheets, which are then dried with compressed air.
If desired, a solution of an adhesion promoter or primer may be applied to the exposed surface areas to be coated on the substrate before the silk screening operation. Suitable adhesion promoters or primers include reaction products of epoxy resin-silanes and a poly sulfide—silane system as disclosed in U.S. Pat. No. 4,059,469 to Mattimoe, et al.
Next, the glass sheet is conveyed to a SCREEN PRINTING station where an ice crystal formation pattern is placed on the areas to be coated. A polymer coating composition is applied by screen printing onto the surface of the glass. Upon completion of the printing step, the glass sheet is then moved through a CURING OVEN where the polymer coating composition is cured either thermally or by photo curing to produce translucent simulated ice crystal formations on the substrate. After exiting the CURING OVEN, the sheet is transferred to a COOLING AND CLEANING station where, after cooling, the surface is cleaned. Optionally, a hard coat to prevent scratches, abrasion or other damage may be applied by spraying or brushing at an AFTERTREATMENT station. Suitable coats which have optical clarity are silicone or polysiloxane compounds such as described in U.S. Pat. Nos. 4,027,073 and 4,942,083, which are herein incorporated by reference.
The substrates which may be used in the present invention may be of any size or shape, limited only by the capacity of the machinery available. Preferably plates or sheets of commercial plate, float or sheet glass composition are used. Thermoset polymers are also employed as substrates in the process. Suitable thermosetting polymers include acrylic polymers, amino polymers, polyester polymers, polycarbonate polymers, polyurethane polymers and ionomer polymers.
It is preferred when the ultimate utility involves doors and windows that the sheet is optically transparent, however, either the substrate or the coatings may be tinted if desired with colorants such as with dyes or pigments. For other uses such as in drinking glasses and mugs, bottles, inter alia, opaque or translucent substrates may be employed.
Forming the simulated ice crystal formation on a silk screen pattern involves either computer generated imaging or hand drawing the formation pattern on paper or vellum in black and white. The black and white image is then photographed to develop a full-size film positive of the ice crystal formation image that is properly sized for the substrate. Finally, the photograph is used to develop a silk screen for applying the polymer coating onto the substrate, in the same pattern as the desired ice pattern effect.
The polymer coating compositions which may be applied by screen printing in the practice of this invention are based on thermoplastic and thermosetting resins, which may be cured by thermal or photo initiation or by combinations thereof. The thickness of the polymer coating compositions in forming the ice crystal formation ranges from about 0.020 to about 0.125 inch.
Various conventional photocurable resins include acrylate resins and modified acrylate resins such as a urethane-modified polymethacrylate, a butadiene-modified polymethacrylate, a polyether-modified monomethacrylate and polyene-polythiol polymers. Such photocurable resins are described in U.S. Pat. No. 5,461,086 to Kato, et al., the disclosure of which is incorporated by reference. Photo initiated curing can be accomplished directly or with a photochemical initiator, such as acetophenone and derivatives thereof, benzophenone and derivatives thereof, Michler's ketone, benzyl, benzoin, benzoin alkyl ethers, benzyl alkyl ketals, thioxanthone and derivatives thereof, tetramethyl thiurammonosulfide, 1-hydroxycyclohexyl phenyl ketone, 2-methyl-1-[4-(methylthio)phenyl]-2-morpholino-propene-1 and the like alpha-amino ketone compounds, etc., alone or as a mixture thereof. Among them, benzoin alkyl ethers, thioxanthone derivatives, 2-methyl-1-[4-(methylthio)phenyl]-2-morpholino-propene-1 and the like alpha-amino ketone compounds are preferable for more rapid curing. Further, 2-methyl-1-[4-(methylthio)phenyl]-2-morpholino-propene-1 is more preferable due to rapid curing as well as good storage stability.
It is also possible to use an amine compound together with the photo polymerization initiator to sensitize its action. Photo polymerization initiators decompose into free radicals under ultraviolet light in the 3600 angstrom range. Convenient sources of ultraviolet light are available in the form of mercury-arc lamps or fluorescent lamps with special phosphors.
Thermally cured thermosetting resins useful as polymer coatings to be applied by a screen printing process include epoxy, polyvinyl butyral, polyesters, ionomer, phenolic and polyurethane. For example, the preferred resins are epoxy resins, particularly epoxy resins which have at least two epoxy groups per molecule. Examples of the epoxy resin are polyglycidyl ethers or polyglycidyl esters obtained by reacting a polyhydric phenol such as bisphenol A, halogenated bisphenol A, catechol, resorcinol, etc., or a polyhydric alcohol such as glycerin with epichlorohydrin in the presence of a basic catalyst; epoxy novolaks obtained by condensing a novolak type phenol resin and epichlorohydrin; epoxidized polyolefins obtained by epoxidizing by a peroxide method; epoxidized polybutadienes; oxides derived from dicyclopentadiene; and epoxidized vegetable oils. These epoxy resins can be used alone or as a mixture thereof.
When the thermosetting resin is an epoxy resin about 2 to 30 parts by weight of the curing agent for 100 parts by weight of the epoxy resin is used. Preferably the ratio is about 5 to 15 parts by weight of the curing agent to 100 parts by weight of the resin.
As curing agents for epoxy resins, there can be used latent curing agents which are insoluble in epoxy resins at a resin drying temperature of 176 degrees F. and dissolve at high temperatures to begin curing.
Examples of the curing agent are aromatic polyamines and hydroxyethylated derivatives thereof such as metaphenylenediamine, 4,4′-diaminodiphenylmethane, 4,4′-diaminodiphenylsulfone, 4,4′-diaminodiphenyl oxide, 4,4′-diaminodiphenylimine, biphenylenediamine, etc.; dicyandiamide; imidazole compounds such as 1-methylimidazole, 2-methylimidazole, 1,2-dimethylimidazole, 2-ethyl-4-methylimidazole, 2-undecylimidazole, 2-heptadecylimidazole, 1-benzyl-2-methylimidazole, 2-phenylimidazole, 1-(2-carbamyl)-2-ethyl-4-methylimidazole, 1-cyanoethyl-2-phenyl-4,5-di(cyanoethoxymethyl)imidazole, 2-methylimidazole isocyanuric acid adduct, 1-cyanoethyl-2-methylimidazole, 1-cyanoethyl-2-ethyl-4-methylimidazole, 1-azinethyl-2-ethyl-4-imidazole, 2-methyl-4-ethyl-imidazole trimesic acid adduct, etc.; BF3 amine complex compounds; acid anhydrides such as phthalic anhydride, methyltetrahydrophthalic anhydride, methylnadic anhydride, pyromellitic anhydride, chlorendic anhydride, benzophenonetetracarboxylic anhydride, trimellitic anhydride, polyazelaic anhydride, polysebacic anhydride, etc.; organic acid hydrazides such as adipic dihydrazide, etc.; diaminomaleonitrile and derivatives thereof; melamine and derivatives thereof; amine imides, etc.
It is also possible to use a diaminotriazine modified imidazole compound of the formula:
wherein R is a residue of imidazole compound. Examples of such a compound are 2,4-diamino-6{2′-methylimidazole-(1′)}ethyl-s-triazine, 2,4-diamino-6{2′-undecylimidazole-(1′)}-ethyl-s-triazine, 2,4-diamino-6{2′-methylimidazole (1′)}ethyl-s-triazine, and isocyanuric acid adducts thereof.
Among these curing agents, a combination of diaminotriazine modified imidazole and dicyandiamide is particularly preferable from the viewpoint of adhesiveness to form ice crystal patterns.
Additionally a compound capable of reacting with an epoxy resin and improving adhesiveness to the surface of the substrate when the epoxy resin coating composition is cured by ultraviolet light exposure may be added.
Examples of such a compound are thiazolines such as 2-mercaptothiazoline, thiazoline, L-thiazoline-4-carboxylic acid, 2,4-thiazolinedione, 2-methylthiazoline, etc.; thiazoles such as thiazole, 4-ethylthiazole, etc.; thiadiazoles such as 2-amino-5-mercapto-1,3,4-thiadiazole, 2,5-dimercapto-1,3,4-thiadiazole, etc.; imidazoles such as 2-mercaptobenzoimidazole, 2-mercapto-1-methylimidazole, etc.; imidazolines such as 2-mercaptoimidazoline, etc.; and 1-H-1,2,4-triazole-3-thiol. Among them, 2-mercaptothiazoline, 2-mercaptoimidazoline, thiazoline, 2-amino-5-mercapto-1,3,4-thiadiazole, and 1-H-1,2,4-triazole-3-thiol are more effective for improving the adhesiveness between the polymer coating and the glass or plastic substrate.
As mentioned earlier in this disclosure, the frosting pattern produced by the glue chip process produces an undesirable fern-like surface texture. FIG. 4 shows a magnified close-up of such fern-like surface structures 44 within the ice crystal formation pattern on the glass substrate of the perimeter portion of a door panel 40. An overabundance of these fern-like structures or textures is undesirable, since they diminish the natural effect desired in simulated ice crystal formation patterns. Further, the random configuration of the spine 42 and branches 46 in the fern leaf has a deleterious effect in reproducing a natural ice crystal formation. The process of this invention not only produces a more natural ice crystal formation by controlling the number of fern-like structures, and by controlling the configuration of the spines and branches, but the results are reproducible even in high volume production. The final product may contain as much as fifty percent of the fern-like pattern relative to the total surface area of the ice crystal formation, or may be completely devoid of fern-like patterns but preferably less than twenty percent and most preferably contains less than ten percent.
The present invention will be more fully understood from the descriptions of specific examples which follow:
EXAMPLE 1
A thermosetting polymer coating composition was prepared by mixing a bisphenol A epoxy resin (EPIKOTE-828, marketed by Shell Chemical Co.) and dicyandiamide in a 10:1 parts by weight ratio (resin:catalyst) with about 1 part by weight titanium dioxide (white pigment). A solvent, butyl Cellosolve, was added to the mixed components in amounts sufficient to dissolve the components and to improve the dispersing properties of the coating composition.
Single Coating Process—Thermal Cure
A discontinuous silk screen pattern of a simulated ice crystal formation is positioned on a previously cleaned optically transparent glass sheet. The epoxy coating composition as prepared above is screen printed and selectively deposited only on the exposed regions of the glass substrate. Upon completion of the printing operation, the coated glass sheet passes through a curing oven containing a bank of infra-red lamps which maintains a temperature of between about 310 and 350 degrees F. The workpiece is held in the oven for about 10 minutes. After exiting the curing process station, the coated glass sheet is cooled, then cleaned and transferred for storage and shipping or for further processing or fabrication.
EXAMPLE 2
A photocurable resin coating composition was prepared by mixing diallyl phthalate, trimethylolpropane triacrylate and 2-methyl-1-[4-(methylthio)phenyl]-2-morpholino-propene-1 in a parts by weight ratio of 25:5:1. To this resin mixture was added a photo initiator, 4,4′-bis(N,N-diethylamino)benzophenone, in a parts by weight ratio of 62:1 (resin/photo initiator). About 1 part by weight of titanium dioxide pigment was added. Sufficient solvent, such as butyl Cellosolve, was added to aid in dispersing the coating for screen printing.
Single Coating Process—Photo cure
A continuous silk screen pattern of simulated ice crystal formation is positioned on a previously cleaned optically transparent glass sheet. The photocurable coating composition as prepared above is screen printed onto the glass sheet. The coated glass sheet is transferred to a curing station. The resin composition is selectively photo cured by exposure to ultraviolet light using a 300 W high pressure mercury lamp for a period of 2 to 4 minutes. After exiting the curing process station, the coated glass sheet is then cleaned and transferred for storage and shipping or for further processing or fabrication.
The processes as shown in Examples 1 and 2 produce simulated ice crystal formations which are devoid of fern-like patterns. Example 3 shows a process where the fern-like pattern is added.
EXAMPLE 3 Multiple Coatings—Different Curing Mechanisms
A discontinuous silk screen pattern of a simulated ice crystal formation is positioned on a previously cleaned optically transparent glass sheet. The thermosetting epoxy coating composition as described in Example 1 is screen printed onto the glass sheet. Upon completion of the printing operation, the coated sheet is passed into the curing oven station and thermally cured at about 310 to 340 degrees F. for about 5 minutes. These conditions provide an under cure in order to prevent delamination during subsequent curing operations. After cooling, the first coated glass sheet is transferred to a station where a silk screen having about ten percent surface area of a fern-like pattern is applied to the surface. A photocurable resin coating composition as described in Example 2, but further including about 0.5 part of a blue pigment, is deposited on the glass substrate. Following the second coating, the coated sheet is passed to the curing station and the coating is cured by exposure to ultraviolet light using a 300 W high pressure mercury lamp for about 2 to 5 minutes to completely cure both coatings. After cooling and cleaning, the coated sheet is transferred to an aftertreatment station where a silicone or polysiloxane coating composition is sprayed on the cured ice crystal formation and then cured at about 100 degrees to 300 degrees F. to impart a hard transparent abrasion resistant coating.
The screen printing process and the subsequent curing steps may be repeated if more colors or passes are needed to yield the desired effect. Other patterns may be screen printed on or around the ice crystal formation such as snowflakes or even different fern-like patterns. The important aspect is that the amount of fern-like patterns can be specifically controlled. However, in these cases, the abrasion resistant coating application and cure is the last step before storage or further fabrication.
There is no criticality in the manner the multi-pass processes are conducted. Example 3 shows a polymer coating which is thermally cured followed by a polymer coating system which is photo cured. The various coating systems are compatible and the above sequence may be reversed with the same excellent results. Any combination and sequence can be employed, i.e., photo cure polymer systems on photo cure polymer systems, as well as, thermally cured polymer systems on thermally cured polymer systems.
What is claimed is:
1. A process for producing an article having a three dimensional optical effect consisting essentially of the steps of: a. applying a screen print having a natural ice crystal pattern onto a transparent substrate comprising a flat transparent sheet, a door, a window or a drinking container; b. screen printing a polymer coating composition on said substrate; c. curing said polymer coating composition to provide a translucent simulated ice crystal formation containing from about 0 to 50 percent of fern-like surface patterns within said formation based on the total surface area of said ice crystal formation.
2. The process of claim 1 wherein said substrate is optically transparent.
3. The process of claim 2 wherein said substrate is tinted with a dye or pigment.
4. The process of claim 1 wherein said transparent substrate is selected from glass or thermoset polymers.
5. The process of claim 4 wherein the surface of said article is selected from flat or curved surfaces.
6. The process of claim 1 wherein said thermoset polymer is made from unsaturated resins selected from acrylic resins, amino resins, polyester resins, phenolic resins, polycarbonate resins, polyurethane resins, urethane resins, vinyl resins and ionomer resins or mixtures thereof.
7. The process of claim 1 wherein said thermosetting resin is an epoxy resin.
8. The process of claim 1 wherein said polymer coating composition is selected from a thermoplastic or thermosetting resin.
9. The process of claim 1 wherein said polymer coating composition material contains a colored pigment or dye.
10. The process of claim 1 wherein a transparent hard anti-abrasive coating is applied to the surface of said ice crystal formation.
11. The process of claim 10 wherein said coating is an organosilicone compound.
12. The process of claim 1 wherein said curing temperature ranges from about 300 to 350 degrees F.
13. A process for producing an article having a three dimensional optical effect consisting essentially of the steps of: a. applying a screen print having a natural ice crystal pattern onto a transparent substrate, selected from the group consisting of a flat transparent sheet, a door, a window and a drinking container; b. screen printing a polymer coating composition comprising a thermosetting resin-curing agent mixture; c. thermally curing said polymer coating composition to provide a translucent simulated ice crystal formation containing about 0 to 50 percent of fern-like patterns within said formation.
14. A process for producing an article having a three dimensional optical effect consisting essentially of the steps of: a. applying a screen print having a natural ice crystal pattern onto a transparent substrate selected from the group consisting of a flat transparent sheet, a door, a window and a drinking container; b. screen printing a polymer coating composition comprising a thermosetting resin-curing agent mixture; c. partially thermally curing said polymer coating composition to provide a simulated ice crystal formation devoid of fern-like surface patterns within said formation; d. cooling said coated substrate having ice crystal formation; e. applying a screen print having a fern-like pattern covering less than fifty percent of said partially cured pattern of step c; f. screen printing a polymer coating composition comprising a photocurable resin; g. fully curing both the thermosetting and photocurable polymers and; h. cooling said substrate to provide a translucent simulated ice crystal formation having less than fifty percent fern-like patterns within said formation, based on the total surface area of said ice crystal formation.
15. A process for producing an article having a three dimensional optical effect consisting essentially of the steps of: a. applying a screen print having a natural ice crystal pattern onto a transparent substrate selected from the group consisting of a flat transparent sheet, a door, a window and a drinking container; b. screen printing a polymer coating composition comprising a photocurable resin; c. photocuring said resin to provide a translucent simulated ice crystal formation devoid of fern-like surface patterns within said formation.
16. The process of claim 15 wherein said polymer coating composition comprises a photocurable resin-photo initiator composition mixture.
|
/**
* Created by chensheng on 15/6/17.
*/
(function (ns) {
ns.PubTable = tp.component.SmartTable.extend({
events: _.extend({
'click .copy-button': 'copyButton_clickHandler'
}, tp.component.SmartTable.prototype.events),
copyButton_clickHandler: function (event) {
var button = $(event.currentTarget)
, msg = button.data('msg') || 'Are you sure you want to copy?';
if (!confirm(msg)) {
return;
}
var init = this.$el.data()
, id = button.closest('tr').attr('id')
, url = init.url + id;
tp.service.Manager.call(url, null, {'success': function(response) {
alert('Copy succeeded');
location.href= '#/pub/' + response.pub.id;
}});
}
});
}(Nervenet.createNameSpace('admin.page')));
|
Talk:PP-70 Gujranwala-XII
External links modified
Hello fellow Wikipedians,
I have just modified 2 external links on Constituency PP-97 (Gujranwala-VII). Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
* Added archive https://web.archive.org/web/20141208232634/http://ecp.gov.pk/ERConsolidated2013/AllResultsFull2013.aspx?assemblyid=PP to http://ecp.gov.pk/ERConsolidated2013/AllResultsFull2013.aspx?assemblyid=PP
* Added archive https://web.archive.org/web/20141210145257/http://www.awaztoday.com/result.aspx?pageId=700 to http://www.awaztoday.com/result.aspx?pageId=700
Cheers.— InternetArchiveBot (Report bug) 12:28, 12 August 2017 (UTC)
|
<?php
namespace PhpSchool\PhpWorkshop\Listener;
use PhpSchool\PhpWorkshop\Event\ContainerListenerHelper;
use PhpSchool\PhpWorkshop\Exception\InvalidArgumentException;
use Psr\Container\ContainerInterface;
class LazyContainerListener
{
/**
* @var ContainerInterface
*/
private $container;
/**
* @var ContainerListenerHelper
*/
private $listener;
public function __construct(ContainerInterface $container, ContainerListenerHelper $listener)
{
$this->container = $container;
$this->listener = $listener;
}
/**
* @param mixed ...$args
*/
public function __invoke(...$args): void
{
/** @var object $service */
$service = $this->container->get($this->listener->getService());
if (!method_exists($service, $this->listener->getMethod())) {
throw new InvalidArgumentException(
sprintf('Method "%s" does not exist on "%s"', $this->listener->getMethod(), get_class($service))
);
}
$service->{$this->listener->getMethod()}(...$args);
}
/**
* @return callable
*/
public function getWrapped(): callable
{
/** @var callable $listener */
$listener = [
$this->container->get($this->listener->getService()),
$this->listener->getMethod()
];
return $listener;
}
}
|
import {loadStdlib} from '@reach-sh/stdlib';
import * as backend from './build/index.main.mjs';
import { util } from '@reach-sh/stdlib';
const { thread, Signal } = util;
const stdlib = loadStdlib();
const startingBalance = stdlib.parseCurrency(100);
const conWait = 5000;
const goGo = async (x, ctcAlice, ready) => {
const acc = (await stdlib.newTestAccount(startingBalance)).setDebugLabel('FE_API');
return async () => {
const ctc = acc.contract(backend, ctcAlice.getInfo());
const go = ctc.a.go;
await ready.wait();
const call = async (id, f, exp) => {
let res = undefined;
await new Promise(resolve => setTimeout(resolve, conWait));
try { res = await f(); }
catch (e) { res = [`err`, e]; }
console.log(id, res);
stdlib.assert(res == exp);
}
await call(`i > 5 | i == ${x} :`, () => go(x), x > 5);
}
}
const main = async (x) => {
const [ accAlice ] =
await stdlib.newTestAccounts(1, startingBalance);
accAlice.setGasLimit(5000000);
const ctcAlice = accAlice.contract(backend);
const ready = new Signal();
await Promise.all([
thread(await goGo(x, ctcAlice, ready)),
ctcAlice.p.Alice({
deployed: async (_ctcInfo) => {
console.log(`Deployed`);
ready.notify();
},
log: console.log,
}),
]);
}
await main(1);
await main(6);
|
---
layout: post
title: "Retail Therapy"
description: "Sorry if it is a bit all over the place but the idea is cool"
date: 2020-04-19 19:10:00 +0800
background: '/img/002.jpg'
categories: jekyll update
---
Retail therapy does not work.
無事出街少破財 (stay in and you'll waste less money). The stuff I buy sits in the grey area between want and need, like a my-life-could-go-on-without-it-but-it-would-make-a-great-difference-if-I-have-it. Now I question whether I deserve it or not, and feel the social pressure of being a good child or something. Honestly, I am wasting money and food and all earthly resources. So I don’t know if this would make a difference.
NEVERTHELESS, I have two competitions to look forward to + work on, and this is by no means a flex bc this is some hard work. I still haven’t worked on the late-April one and I have approx. a week to finish up. I am not a competitive person at all. I never won any HK School music/speech festival. I am not a lucky person. I just thought that I could kill some time by joining some essay-writing competitions, and here I am working on two. Plus homework, ah, homework. They have no reason to exist. Out here killing my vibe. Am I worthy? I am not sure. Maybe I am. Maybe I’m not. I am tired. Physically and mentally tired. I am still grateful tho. Grateful for opportunities, life and Oscar Isaac. Peace
|
Dependency 'org.mockito:mockito-inline:3.8.0' not found
I am using a "MockedStatic" in my tests, but when the test method runs, this error appears:
org.mockito.exceptions.base.MockitoException:
The used MockMaker SubclassByteBuddyMockMaker does not support the creation of static mocks
Mockito's inline mock maker supports static mocks based on the Instrumentation API.
You can simply enable this mock mode, by placing the 'mockito-inline' artifact where you are currently using 'mockito-core'.
Note that Mockito's inline mock maker is not supported on Android.
I have added the "mockito-inline" dependency, but it looks like Maven can't resolve it; the error from the title is what Maven reports.
I'm using Java 11 and JUnit 5.
pom.xml:
...<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-inline</artifactId>
<version>3.8.0</version>
<scope>test</scope>
</dependency>...
That artifact exists: https://mvnrepository.com/artifact/org.mockito/mockito-inline/3.8.0 Perhaps more details from the maven output would help.
Those are two different messages. The one you have posted appears while running a test, which means your setup is wrong, not the dependency resolution. The title of your post is misleading: if the dependency really could not be found, you would have gotten an error before your test even compiled and ran.
As per the official documentation, it is turned off by default
This mock maker is turned off by default because it is based on completely different mocking mechanism that requires more feedback from the community. It can be activated explicitly by the mockito extension mechanism, just create in the classpath a file /mockito-extensions/org.mockito.plugins.MockMaker containing the value mock-maker-inline.
So I would suggest you create that file; you can also refer to this section in the
Official Mockito Documentation
Do remember to add the latest version of the mockito-inline Maven dependency:
<!-- https://mvnrepository.com/artifact/org.mockito/mockito-inline -->
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-inline</artifactId>
<version>4.3.1</version>
<scope>test</scope>
</dependency>
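The extension-file approach above can be sketched as a couple of shell commands (the `src/test/resources` path is an assumption based on the standard Maven layout):

```shell
# Create Mockito's extension file so the inline mock maker is activated.
# Path assumes the standard Maven test-resources layout.
mkdir -p src/test/resources/mockito-extensions
printf 'mock-maker-inline\n' > src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker
cat src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker
```

Note that with the `mockito-inline` artifact on the classpath this file is bundled for you; creating it manually is the way to switch plain `mockito-core` over to the inline mock maker.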
|
#!/bin/bash
source activate dask-ml-test
# Accumulate exit codes; a non-zero RET fails the script at the end.
RET=0
MSG='Checking flake8... ' ; echo $MSG
flake8
RET=$(($RET + $?)) ; echo $MSG "DONE"
MSG='Checking black... ' ; echo $MSG
black --check .
RET=$(($RET + $?)) ; echo $MSG "DONE"
MSG='Checking isort... ' ; echo $MSG
isort --recursive --check-only .
RET=$(($RET + $?)) ; echo $MSG "DONE"
MSG='Checking mypy... ' ; echo $MSG
mypy dask_ml/metrics
RET=$(($RET + $?))
mypy dask_ml/preprocessing
RET=$(($RET + $?)) ; echo $MSG "DONE"
exit $RET
|
package com.power.home.di.module;
import com.power.home.data.WithdrawlModel;
import com.power.home.data.http.ApiService;
import com.power.home.presenter.contract.WithdrawlContract;
import dagger.Module;
import dagger.Provides;
/**
* Created by ZHP on 2018/12/27/0027.
*/
@Module
public class WithdrawlModule {
private WithdrawlContract.View mView;
public WithdrawlModule(WithdrawlContract.View mView) {
this.mView = mView;
}
@Provides
public WithdrawlContract.View provideView() {
return mView;
}
@Provides
public WithdrawlContract.IWithdrawlModel provideModel(ApiService apiService) {
return new WithdrawlModel(apiService);
}
}
|
FIX: Test regressions with current core
not ok 2 Chrome 116.0 - [undefined ms] - Global error: Uncaught TypeError: this.send is not a function at http://localhost:7357/assets/plugins/discourse-codebytes-plugin.js, line 222
@janzenisaac can you take a look? The failure was showing up in internal test runs. (The plugin is still using the legacy modal structure, though.)
|
var request = require('request');
var config = require('./config');
module.exports = {
/**
* Returns a request object without authentication headers
*/
urequest : function(){
return request.defaults(
{
json: true,
headers: {
'Version':'1.0',
'Content-Type': 'application/json',
},
});
},
/**
* Returns a request object with the appropriate authentication headers
* @param {String} [token] Account token to use; defaults to config.findToken()
*/
request : function(token){
if( ! token ){
token = config.findToken();
}
return request.defaults(
{
json: true,
headers: {
'Version':'1.0',
'Content-Type': 'application/json',
Authorization: 'Bearer ' + token,
},
});
}
};
|
OSX "Open With" does not work
If you right click on a dataset and "Open With" vaporgui, it does not load the dataset.
While this is a good feature to have, I'm not sure it is something we want to invest time in implementing. If we decide not to, we should close this issue.
|
Kemeria Ahmed BESHIR, Plaintiff, v. Eric H. HOLDER, Jr., Attorney General of the United States, et al., Defendants.
Civil Action No. 10-652(JDB)
United States District Court, District of Columbia.
January 27, 2014
David R. Saffold, Washington, DC, for Plaintiff.
Katherine E.M. Goettel, Kimberly E. Helvey, William Charles Silvis, U.S. Department of Justice, Washington, DC, for Defendants.
MEMORANDUM OPINION
JOHN D. BATES, United States District Judge
Plaintiff Kemeria Ahmed Beshir, an asylee from Ethiopia, brings this lawsuit against the Attorney General, the Secretary of the Department of Homeland Security (“DHS”), the Director of the FBI, the Director of the United States Citizenship and Immigration Services (“USCIS”), and other USCIS officials. Beshir claims that defendants have unreasonably delayed the adjudication of her Form 1-485 application to adjust her immigration status to that of a lawful permanent resident and have unlawfully failed to elevate her application to USCIS headquarters personnel. Before the Court is [37] defendants’ third motion for summary judgment. Upon careful consideration of the motion and the parties’ memoranda, the applicable law, and the entire record, the Court will dismiss Beshir’s complaint for lack of subject-matter jurisdiction and will deny defendants’ motion as moot.
FACTUAL BACKGROUND
The facts and history of this case have been set forth in the Court’s prior opinions and orders. Beshir is an Ethiopian citizen currently residing in the United States pursuant to a grant of asylum decided on March 26, 2003. Defs.’ Statement of Material Facts Not in Dispute (“Defs.’ Stmt.”) [ECF No. 37-1] ¶ 1. In spring 2004, Beshir filed a Form 1-485 application for adjustment of status to become a legal permanent resident. Id. ¶ 2. USCIS denied her application on February 28, 2008, after finding her inadmissible under section 212(a)(3)(B)(i)(I) of the Immigration and Nationality Act (“INA”), codified at 8 U.S.C. § 1182(a)(3)(B)(i)(I), which renders “inadmissible” for permanent residency status any alien who “engaged in a terrorist activity.” Defs.’ Stmt. ¶ 9. USCIS found Beshir inadmissible under this provision because of statements she made in her asylum application indicating that she supported the Oromo Liberation Front (“OLF”), an organization that USCIS has determined falls within the definition of a Tier III terrorist organization. Id. ¶¶ 7-9.
In spring 2008, Beshir filed a motion to reopen her adjustment application. Id. ¶ 11. USCIS granted her request, reopened her application on or about April 30, 2008, and then placed it on hold pursuant to a new USCIS policy. Id. ¶¶ 11, 12. The new policy stemmed from USCIS’s March 26, 2008 Memorandum (the “2008 USCIS Memorandum”), which “instruct[ed] the withholding of adjudication of cases ... that could potentially benefit” from an expanded authority to exempt Tier III groups from terrorism-related inadmissibility grounds. Id. ¶¶ 6, 10. The referenced exemption authority is found at 8 U.S.C. § 1182(d)(3)(B)(i), which permits the Secretary of State or the Secretary of Homeland Security, “in such Secretary’s sole unreviewable discretion,” to exempt certain aliens who otherwise fall within the terrorism-related inadmissibility provisions of section 1182(a)(3)(B). 8 U.S.C. § 1182(d)(3)(B)(i). This discretionary exemption authority was broadened by the Consolidated Appropriations Act of 2008 to allow “the Secretary to not apply the definition of a Tier III ... terrorist organization ... to a group that falls within the scope of that definition,” and to allow “the Secretary to exempt most of the terrorist-related inadmissibility grounds delineated at ... 8 U.S.C. § 1182(a)(3)(B) as they apply to individual aliens.” Defs.’ Stmt. ¶¶ 5, 6. Due to this expanded exemption authority, the 2008 USCIS Memorandum instructed USCIS personnel to place on hold certain adjustment applications that could potentially benefit from future exemptions:
Because new exemptions may be issued by the Secretary in the future, until further notice[,] adjudicators are to withhold adjudication of cases in which the only ground(s) for referral or denial is a terrorist-related inadmissibility provision(s) and the applicant falls within one or more of the below categories ... (2) Applicants who are inadmissible under the terrorist-related provisions of the INA based on any activity or association that was not under duress relating to any other Tier III organization[.]
Ex. Q to Am. Compl., 2008 USCIS Memorandum [ECF No. 17, ECF No. 1-1]. Pursuant to this policy, USCIS determined that Beshir may benefit from a future exemption, and her adjustment application was placed on hold. Defs.’ Stmt. ¶¶ 10-12.
On February 13, 2009, USCIS issued revised policies on the adjudication of cases involving terrorist-related inadmissibility grounds (the “2009 USCIS Memorandum”). Id. ¶10. The 2009 USCIS Memorandum did not lift the hold on the adjudication of Beshir’s case. Id. ¶ 12. It did, however, provide additional instructions regarding cases placed on hold:
If the adjudicating office receives a request from the beneficiary and/or attorney of record to adjudicate a case on hold per this policy (including the filing of a mandamus action in federal court) ... the case should be elevated through the chain of command to appropriate Headquarters personnel. Guidance will be provided by USCIS headquarters on whether or not the case should be adjudicated.
Ex. P to Am. Compl., 2009 USCIS Memorandum [ECF No. 17, ECF No. 1-1]; Defs.’ 3d Mot. for Summ. J. (“Defs.’ 3d MSJ”) [ECF No. 37] at 15; Pl.’s Opp’n [ECF No. 38] at 6, 12. Beshir’s attorney of record sent a letter on January 31, 2010 to the USCIS Director of the Nebraska Service Center requesting that “further action be taken” in Beshir’s case, Ex. N to Am. Compl., Jan. 31, 2010 Letter [ECF No. 17; ECF No. 1-1], but USCIS appears not to have “elevated” Beshir’s application “through the chain of command,” Pl.’s Opp’n at 6-7, 12.
Over the last several years, the Secretary of Homeland Security has exercised her exemption authority and exempted from terrorist-related inadmissibility qualifying aliens who provided “material support to the All India Sikh Students’ Federation — Bittu Faction”; took part in “activities or associations relating to the All Burma Students’ Democratic Front”; and “received military training under duress or ... solicited funds or membership under duress.” Defs.’ Stmt. ¶¶ 13-15. Defendants have determined that no exemptions currently apply to Beshir and, consequently, the adjudication of her reopened application remains on hold pursuant to USCIS policy. Id. ¶¶ 12, 17.
PROCEDURAL BACKGROUND
After waiting approximately two years for a decision on her adjustment application, Beshir filed her initial complaint in this Court on April 27, 2010. See Compl. [ECF No. 1]. Shortly thereafter, defendants sought dismissal of Beshir’s complaint for lack of jurisdiction and, in the alternative, moved for summary judgment. See Defs.’ Mot. to Dismiss or for Summ. J. [ECF No. 2]. Judge Urbina denied the motion to dismiss for lack of jurisdiction and denied without prejudice defendants’ alternative motion for summary judgment. See Jan. 24, 2011 Order [ECF No. 5]; Jan. 24, 2011 Mem. Op. [ECF No. 6]. On March 23, 2011, USCIS interviewed Beshir in conjunction with her adjustment application. Defs.’ Stmt. ¶ 16. Defendants then filed a second motion for summary judgment. See Defs.’ 2d Mot. for Summ. J. [ECF No. 11]. Judge Urbina denied the motion without prejudice and granted Beshir leave to file an amended complaint to show that she had standing. See Mar. 9, 2012 Order [ECF No. 14]; Mar. 9, 2012 Mem. Op. [ECF No. 15]. Beshir filed an amended complaint on April 9, 2012. See Am. Compl. [ECF No. 17].
On August 17, 2012, DHS published a notice announcing a recent exercise of the Secretary’s exemption authority under 8 U.S.C. § 1182(d)(3)(B)(i). See 77 Fed.Reg. 49,821 (Aug. 17, 2012). The Court granted the parties’ requests to stay proceedings for several weeks to allow time for USCIS to determine if the exemption applied to Beshir. See Aug. 30, 2012 Stip. [ECF No. 25]; Nov. 5, 2012 Stip. [ECF No. 29]. In a joint status report filed at the conclusion of the stay, defendants stated that “[d]uring the review process ... USCIS discovered information which suggests that the OLF [the association with which Beshir is associated] may not be eligible for the exemption.” Nov. 19, 2012 Status Report [ECF No. 30] at 2. Defendants asked the Court to extend the stay in Beshir’s case because USCIS was “still conducting its review process,” but was “unable to provide an estimate of when” it would be finished. Id. Beshir opposed defendants’ request to continue the stay, see id., and the Court let the stay expire. As of the date of this Opinion, Beshir has not received a decision on her adjustment application.
In her amended complaint, Beshir asks the Court to compel defendants to adjudicate her adjustment application within ninety days. See Am. Compl. ¶¶ 36-41. She does not argue that a terrorist-related inadmissibility exemption currently applies to her or that defendants have failed to complete some administrative task necessary to process her application. Rather, her claim is simply that defendants are taking an unreasonable amount of time to adjudicate her application. See id. She also claims that defendants have unlawfully failed to elevate her case “through the chain of command to appropriate Headquarters personnel” pursuant to the 2009 USCIS Memorandum, and argues that they should be compelled by the Court to apply that policy. Pl.’s Opp’n at 7 (“[Beshir] at the very least has a clear right to have the hold on her case reviewed by USCIS Headquarters.”).
Defendants assert that the adjudication of Beshir’s case has been delayed because it “is subject to extended processing due to the potential for high-level decision making that could affect her case” and argue that Beshir lacks a judicially enforceable right to demand that the government “prematurely conclude” this process. Defs.’ 3d MSJ. at 1-2. Moreover, defendants argue, the adjudication of Beshir’s adjustment application has not been unreasonably delayed. Id. Defendants also contend that the 2009 USCIS Memorandum represents internal policy guidance, not a binding regulation that the Court can compel them to apply. Id. at 14-16.
Because this Court has not yet determined whether an affirmative basis for subject-matter jurisdiction exists over Beshir’s claims, it will do so now. In so doing, the Court will amend the earlier opinion in this case holding that the INA’s jurisdiction-stripping provision, 8 U.S.C. § 1252(a)(2)(B)(ii), does not preclude judicial review. See Jan. 24, 2011 Mem. Op. Because the Court finds that it lacks subject-matter jurisdiction over Beshir’s claims, it will not reach the parties’ summary judgment arguments on the reasonableness of the delay in the adjudication of Beshir’s adjustment application.
STANDARD OF REVIEW
This Court is of limited jurisdiction, possessing “only that power authorized by Constitution and statute.” Kokkonen v. Guardian Life Ins. Co., 511 U.S. 375, 377, 114 S.Ct. 1673, 128 L.Ed.2d 391 (1994). The Court can dismiss a complaint sua sponte for lack of jurisdiction at any time. Fed.R.Civ.P. 12(h)(3); see, e.g., Jerez v. Republic of Cuba, 777 F.Supp.2d 6, 15 (D.D.C.2011). Although the Court must construe the complaint liberally, a plaintiff bears the burden of establishing the elements of federal jurisdiction. Lujan v. Defenders of Wildlife, 504 U.S. 555, 561, 112 S.Ct. 2130, 119 L.Ed.2d 351 (1992). “[W]here necessary, the court may consider the complaint supplemented by undisputed facts evidenced in the record, or the complaint supplemented by undisputed facts plus the court’s resolution of disputed facts.” Herbert v. Nat’l Acad. of Scis., 974 F.2d 192, 197 (D.C.Cir.1992). Additionally, although there is a “strong presumption in favor of judicial review of administrative action,” INS v. St. Cyr, 533 U.S. 289, 298, 121 S.Ct. 2271, 150 L.Ed.2d 347 (2001), there is also a heightened need for “judicial deference to the Executive Branch ... in the immigration context where officials exercise especially sensitive political functions,” INS v. Aguirre-Aguirre, 526 U.S. 415, 425, 119 S.Ct. 1439, 143 L.Ed.2d 590 (1999) (internal quotation marks and citation omitted).
ANALYSIS
Beshir asserts that this Court has subject-matter jurisdiction over her claims under 28 U.S.C. § 2201 et seq. (the Declaratory Judgment Act); 28 U.S.C. § 1331 (the federal question statute); 5 U.S.C. §§ 555(b), 701 et seq., 706 (the Administrative Procedure Act); and 28 U.S.C. § 1361 (the Mandamus Act). Am. Compl. ¶ 1(a). The Declaratory Judgment Act, however, “is not an independent source of federal jurisdiction.” Schilling v. Rogers, 363 U.S. 666, 678, 80 S.Ct. 1288, 4 L.Ed.2d 1478 (1960) (internal citation omitted); accord C & E Servs., Inc. of Washington v. D.C. Water and Sewer Auth., 310 F.3d 197, 201 (D.C.Cir.2002). To consider a claim under the Declaratory Judgment Act, a federal court must have jurisdiction under another federal statute. Schilling, 363 U.S. at 678, 80 S.Ct. 1288. The federal question statute also does not stand alone. It provides federal courts with subject-matter jurisdiction only in cases arising under some other source of federal law. 28 U.S.C. § 1331. A federal statute, like the Administrative Procedure Act (“APA”), can provide the basis for federal question jurisdiction.
The APA provides that federal courts shall “compel agency action unlawfully withheld or unreasonably delayed.” 5 U.S.C. § 706(1). However, the APA does not apply where “agency action is committed to agency discretion by law.” 5 U.S.C. § 701(a). The Supreme Court has clarified that “the only agency action that can be compelled under the APA is action legally required .... Thus, a claim under § 706(1) can proceed only where a plaintiff asserts that an agency failed to take a discrete agency action that it is required to take.” Norton v. S. Utah Wilderness Alliance, 542 U.S. 55, 63-64, 124 S.Ct. 2373, 159 L.Ed.2d 137 (2004); accord Kaufman v. Mukasey, 524 F.3d 1334, 1338 (D.C.Cir.2008). The APA therefore does not provide a basis for jurisdiction over a claim that an agency failed to take a discretionary action.
The APA also does not apply, and thus does not provide a basis for federal question jurisdiction, where a statute at issue “precludes judicial review.” 5 U.S.C. § 701(a). Relevant here, the INA’s jurisdiction-stripping provision, 8 U.S.C. § 1252, provides that, “[n]otwithstanding any other provision of law,” no court shall have jurisdiction to review “any ... decision or action of the Attorney General or the Secretary of Homeland Security the authority for which is specified under this subchapter to be in the discretion of the Attorney General or the Secretary of Homeland Security, other than the [decision whether to grant asylum].” 8 U.S.C. § 1252(a)(2)(B). This jurisdiction-stripping provision dovetails with the jurisdictional limitations of the APA: the APA does not provide a basis for jurisdiction over discretionary agency action, and the INA prohibits jurisdiction over discretionary agency action.
Another potential basis for jurisdiction here is the Mandamus Act, which independently provides federal courts with jurisdiction to “compel an officer or employee of the U.S. or any agency thereof to perform a duty owed to the plaintiff.” 28 U.S.C. § 1361. Like the APA, however, mandamus is appropriate only where “a clear nondiscretionary duty” is at issue. Pittston Coal Grp. v. Sebben, 488 U.S. 105, 121, 109 S.Ct. 414, 102 L.Ed.2d 408 (1988) (quoting Heckler v. Ringer, 466 U.S. 602, 616, 104 S.Ct. 2013, 80 L.Ed.2d 622 (1984)). “[T]he standards for obtaining relief [through mandamus and through the APA] are essentially the same.” Viet. Veterans of Am. v. Shinseki, 599 F.3d 654, 659 n. 6 (D.C.Cir.2010) (citing In re Core Commc’ns Inc., 531 F.3d 849, 855 (D.C.Cir.2008)).
As discussed below, because the pace of the adjudication of Beshir’s application is discretionary, the Court lacks jurisdiction over Beshir’s claim that defendants have unreasonably delayed adjudication. Similarly, because the 2009 USCIS Memorandum is internal policy guidance and not a binding regulation, the Court lacks jurisdiction over Beshir’s claim that defendants failed to apply the memorandum.
I. Jurisdiction Over Beshir’s Unreasonable Delay Claim
District courts are divided on the question whether the APA or the Mandamus Act provides a basis for jurisdiction, and whether the INA precludes jurisdiction, over claims that USCIS unreasonably delayed the adjudication of an adjustment application. Compare Senbeta v. Mayorkas, 2013 WL 2936316 (D.Minn. Jun. 14, 2013) (finding subject-matter jurisdiction exists); Bemba v. Holder, 930 F.Supp.2d 1022 (E.D.Mo.2013) (same); Irshad v. Napolitano, 2012 WL 4593391 (D.Neb. Oct. 2, 2012) (same), with Namarra v. Mayorkas, 924 F.Supp.2d 1058 (D.Minn.2013) (finding a lack of subject-matter jurisdiction); Seydi v. USCIS, 779 F.Supp.2d 714 (E.D.Mich.2011) (same). The courts of this district are similarly split. Compare Geneme v. Holder, 935 F.Supp.2d 184 (D.D.C.2013) (finding subject-matter jurisdiction exists); Liu v. Novak, 509 F.Supp.2d 1 (D.D.C.2007) (same), with Singh v. Napolitano, 710 F.Supp.2d 123 (D.D.C.2010) (finding a lack of subject-matter jurisdiction); Orlov v. Howard, 523 F.Supp.2d 30 (D.D.C.2007) (same); Tao Luo v. Keisler, 521 F.Supp.2d 72 (D.D.C.2007) (same). What divides these courts on the issue of jurisdiction is whether the pace of processing adjustment applications is discretionary.
The two federal appellate courts that have addressed the issue have not created any consensus. The Eighth Circuit in Debba v. Heinauer, 366 Fed.Appx. 696 (8th Cir.2010), affirmed a district court decision that “apparently concluded that it had subject-matter jurisdiction ... [and] granted summary judgment for [defendants], holding that it would ‘refrain from imposing. its own judicially constructed deadline’ on the processing of [plaintiffs] adjustment application.” Id. at 698-99 (quoting Debba v. Heinauer, 2009 WL 146039, at *4 (D.Neb., Jan. 20, 2009)). That court did not conclude whether jurisdiction was proper because it found that, whether or not there was jurisdiction, the plaintiff had not established that the delay in adjudication was unreasonable. Id. at 699. The Fifth Circuit, on the other hand, found that neither the APA nor the Mandamus Act provided jurisdiction over a claim of unreasonable delay in adjudication and that the INA foreclosed any such jurisdiction. Bian v. Clinton, 605 F.3d 249, 255 (5th Cir.2010), vacated as moot, 2010 WL 3633770 (5th Cir. Sept. 16, 2010) (concluding that Congress “expressly precluded judicial review of the USCIS’s pace of adjudication when the agency acts within its discretion and pursuant to the regulations that the agency deems necessary for carrying out its statutory grant of authority”). The D.C. Circuit has not opined on the issue.
As discussed below, the plain language of the relevant federal statutes, the absence of a congressionally mandated timeline, and the national security considerations implicated by the adjudication process all support the conclusion that the pace of adjudicating Beshir’s adjustment application is discretionary. Moreover, to the extent that Beshir argues that the delay represents a “refusal” to adjudicate her application, thus proffering a possibly nondiscretionary action over which the Court could have jurisdiction, her argument is unavailing.
A. Plain Language of 8 U.S.C. §§ 1159(b) and 1255(a)
Two analogous statutes, 8 U.S.C. § 1159(b) and 8 U.S.C. § 1255(a), are relevant to the adjudication of adjustment applications, and their plain language supports the conclusion that the pace of adjudication is discretionary. Section 1159(b) provides that “[t]he Secretary of Homeland Security or the Attorney General, in the Secretary’s or the Attorney General’s discretion and under such regulations as the Secretary or the Attorney General may prescribe, may adjust ... the status of any alien granted asylum.” Similarly, section 1255(a) declares that “[t]he status of an alien who was inspected and admitted or paroled into the United States ... may be adjusted by the Attorney General, in his discretion and under such regulations as he may prescribe, to that of an alien lawfully admitted for permanent residence.” Hence, according to these statutes, an alien’s status may be adjusted “in the Secretary’s or the Attorney General’s discretion and under such regulations as the Secretary or the Attorney General may prescribe,” 8 U.S.C. § 1159(b) (emphasis added), and by “the Attorney General, in his discretion and under such regulations as he may prescribe,” 8 U.S.C. § 1255(a) (emphasis added). These provisions make clear that the statutes grant discretion not only over the decision to adjust an alien’s status but also over the promulgation of regulations to create the process by which an alien’s status may be adjusted. See Labaneya v. USCIS, 965 F.Supp.2d 823, 832, 2013 WL 4582203, at *8 (E.D.Mich. Aug. 
29, 2013) (finding that section 1255(a) “constitutes a grant of discretion over the process by which applications for adjustment of status are adjudicated”); Orlov, 523 F.Supp.2d at 34 (finding that “[t]he plain meaning of [section 1255(a)] therefore is to grant USCIS the power and the discretion to promulgate regulations governing how (and when) adjustment decisions are made”); Singh, 710 F.Supp.2d at 129-30 (finding that section 1159(b) granted the Secretary “discretion to promulgate regulations that she feels are necessary to exercise her authority to grant permanent resident status to an asylee”).
Notwithstanding the clear statements in sections 1159(b) and 1255(a) that the Attorney General and the Secretary have the authority to prescribe regulations governing the process of adjudication, some courts have concluded that because the pace of adjudication is not specifically mentioned, it is therefore not discretionary. See, e.g., Mohammed v. Frazier, 2008 WL 360778, at *6 (D.Minn. Feb. 8, 2008) (finding that “[t]here is no explicit provision” granting discretion over the pace of adjudication; thus pace is not discretionary); Liu, 509 F.Supp.2d at 7-9 (finding that the INA does not specifically address the pace of application processing; thus pace is not discretionary). This Court is not persuaded. Granting the Attorney General and the Secretary the discretion to promulgate regulations governing the process of adjudication necessarily includes a grant of discretion over the pace of adjudication. “Otherwise, the grant of discretion would be illusory, given that courts could drastically alter the regulations prescribed by dictating what pace of adjudication the regulations must permit.” Labaneya, 965 F.Supp.2d at 829, 2013 WL 4582203, at *8; see also Namarra, 924 F.Supp.2d at 1064 (“[C]ommit[ting] the adjustment decision itself, as well as the authority to promulgate regulations governing the adjudication process, to the Secretary’s discretion, but exclud[ing] from the Secretary’s discretion the time required to arrive at the adjustment decision, merely puts form over substance.”). And because the pace of adjudication is discretionary, neither the APA nor the Mandamus Act provides a basis for this Court to assert jurisdiction over Beshir’s claim of unreasonable delay. See, e.g., S. Utah Wilderness Alliance, 542 U.S. at 63-64, 124 S.Ct. 2373 (holding that a court cannot, under the APA, compel an agency to act unless there is a nondiscretionary, specific act — i.e., a discrete action that the agency is required to take); Pittston Coal Group, 488 U.S. 
at 121, 109 S.Ct. 414 (holding that mandamus is only appropriate where defendant owes petitioner “a clear nondiscretionary duty”).
Moreover, the INA’s jurisdiction-stripping provision, which precludes judicial review of any “decision or action” for which the authority “is specified under this sub-chapter” to be “in the discretion of the Attorney General or the Secretary of Homeland Security,” applies to the pace of adjudication of adjustment applications. 8 U.S.C. § 1252(a)(2)(B)(ii). Sections 1159(b) and 1255(a) specify that the process of adjudication, and hence the pace of adjudication, is discretionary. And these statutes clearly fall within the relevant “subchapter,” i.e., 8 U.S.C. §§ 1151-1381. The remaining question then is whether the pace of adjudication is an applicable “decision or action.” Plainly it is. The term “action” must encompass the discretionary pace at which the adjustment process proceeds because it encompasses the various other discretionary acts that constitute the process as a whole and that direct the pace of the process. See Safadi v. Howard, 466 F.Supp.2d 696, 699 (E.D.Va.2006) (finding that the term “action” in section 1252(a)(2)(B) “encompasses the entire process of reviewing an adjustment application, including the completion of background and security checks and the pace at which the process proceeds”). To hold otherwise would be inconsistent with “Congress’ intent to confer on USCIS discretion over not just the adjustment of status decision but also the process employed to reach that result, and to exclude from judicial review the exercise of all that discretion.” Id. Because the INA’s jurisdiction-stripping provision applies to the pace of adjudication, then, it provides a barrier to any basis for judicial review over Beshir’s claim.
A recent decision in this district relied on Kucana v. Holder, 558 U.S. 233, 130 S.Ct. 827, 175 L.Ed.2d 694 (2010), in reaching the opposite conclusion. See Geneme, 985 F.Supp.2d at 191-92 (holding that the INA’s jurisdiction-stripping provision did not preclude judicial review over a claim of unreasonable delay in the adjudication of an adjustment application). For the following reasons, this Court is not persuaded by that court’s reasoning. The issue in Kucana was whether a federal court had the authority to review the BIA’s decision to deny a motion to reopen removal proceedings. 558 U.S. at 237-39, 130 S.Ct. 827. The authority for the BIA’s decision was a regulation created by the Attorney General that placed “[t]he decision to grant or deny a motion to reopen ... within the discretion of the [BIA].” Id. at 242, 130 S.Ct. 827 (citing 8 C.F.R. § 1003.2(a)). This authority was not codified in the INA or any other federal statute. Id. at 242-43, 130 S.Ct. 827. The Supreme Court “granted certiorari to decide whether the [INA’s jurisdiction-stripping provision] applies not only to Attorney General determinations made discretionary by statute, but also to determinations declared discretionary by the Attorney General himself through regulation.” Id. at 237, 130 S.Ct. 827. In deciding this issue, the Supreme Court explained that “Congress barred court review of discretionary decisions only when Congress itself set out the Attorney General’s discretionary authority in statute.” Id. at 247, 130 S.Ct. 827 (emphasis added). Accordingly, the Supreme Court concluded that, because the BIA’s authority to grant or deny a motion to reopen was provided only within a regulation, not a statute, it was not covered by the INA’s jurisdiction-stripping provision. Id. at 252-53, 130 S.Ct. 827. Hence, the INA did not preclude judicial review of decisions on motions to reopen.
In stark contrast here, the discretionary process of adjusting the status of aliens is statutorily codified in sections 1159(b) and 1255(a). Geneme overlooks this essential point, and instead focuses on the Supreme Court’s discussion of how an “adjunct ruling” that does “not direct the Executive to afford the alien substantive relief” is different from a “substantive decision,” see Kucana, 558 U.S. at 247-48, 130 S.Ct. 827, concluding that “the Supreme Court held that decisions on ... motions [requesting an adjunct ruling] were subject to judicial review.” Geneme, 985 F.Supp.2d at 191-92 (explaining that the court had jurisdiction over a claim of unreasonable delay in adjudication because “an order that USCIS adjudicate [plaintiff’s] application would not afford her substantive relief, but only ensure that she got a fair chance to have her claims heard in a timely manner”). The Supreme Court’s discussion of substantive decisions versus adjunct rulings, however, was not the determining factor in its analysis. Rather, the Supreme Court clearly stated that its “paramount” consideration was the fact that Congress had not codified the regulation at issue. Kucana, 558 U.S. at 252, 130 S.Ct. 827 (“Finally, we stress a paramount factor in the decision we render today. By defining the various jurisdictional bars by reference to other provisions in the INA itself, Congress ensured that it, and only it, would limit the federal courts’ jurisdiction. To read [section] 1252(a)(2)(B)(ii) to apply to matters where discretion is conferred on the Board by regulation, rather than on the Attorney General by statute, would ignore that congressional design.”).
Kucana therefore does not stand for the proposition that a federal court has jurisdiction over any claim that would result in an adjunct ruling rather than a substantive decision. Hence, although ordering defendants to adjudicate Beshir’s application would be an adjunct ruling, that does not mean that Beshir’s claim of unreasonable delay is subject to judicial review. Kucana more clearly stands for the proposition that, in the context of the INA’s jurisdiction-stripping provision, only statutes can confer grants of discretion that are shielded from judicial review. Here, Congress has set out the Secretary’s and the Attorney General’s discretionary authority over the process of adjudication—and hence the pace of adjudication—in statutes, shielding it from judicial review. Kucana does not hold otherwise and hence Geneme is not persuasive.
B. Absence of a Congressionally Mandated Timeline
The absence of a congressionally-imposed deadline or timeframe to complete the adjudication of adjustment applications also supports the conclusion that the pace of adjudication is discretionary and thus not reviewable by this Court. Judicial review “is not to be had if the statute is drawn so that a court would have no meaningful standard against which to judge the agency’s exercise of discretion.” Heckler v. Chaney, 470 U.S. 821, 830, 105 S.Ct. 1649, 84 L.Ed.2d 714 (1985); accord Sierra Club v. Jackson, 648 F.3d 848, 855 (D.C.Cir.2011). Here, sections 1159(b) and 1255(a) provide no deadline by or timeframe within which the Attorney General or the Secretary of Homeland Security must complete the review of an application for adjustment of status. “With no specific deadline in the statute, Congress has left to ... administrative discretion the time in which [to] complete [the] review of such applications.” Debba v. Heinauer, 2009 WL 146039, at *4 (D.Neb. Jan. 20, 2009), aff'd, 366 Fed.Appx. 696 (8th Cir.2010) (holding that the court “is not at liberty to construct and impose its own deadline”).
As this Court has previously noted, “[i]f Congress intended to constrain the USCIS to adjudicate an application within a specific amount of time, this Court believes it would have provided a time limitation as it did in 8 U.S.C. § 1447(b), which provides that a determination on a naturalization application must be made within 120 days after an examination is conducted.” Orlov, 523 F.Supp.2d at 34. “In the absence of statutorily prescribed time limitations or statutory factors to guide USCIS in crafting regulations for the adjustment process, it is difficult to determine how the pace of processing an application could be anything other than discretionary.” Id. at 35 (citing Mahaveer, Inc. v. Bushey, 2006 WL 1716723, at *3 (D.D.C. June 19, 2006) (concluding that, “by not providing any specific factors to guide the Attorney General in crafting such regulations [to govern the conditions of nonimmigrants’ entry into the United States], it can fairly be said that Congress intended the Attorney General to have full discretion in his or her decision making”)); see also Zhang v. Chertoff, 491 F.Supp.2d 590, 594 (W.D.Va.2007) (explaining that “[i]f Congress had intended for the pace of adjudication of adjustment applications to be subject to judicial review, it could have expressly offered a standard with which to measure the lapse of time”).
Beshir argues that 8 C.F.R. § 103.2(b)(18) creates a timeframe for the review and adjudication of adjustment applications placed on hold for terrorist-related inadmissibility under 8 U.S.C. § 1182(d)(3)(B)(i). Pl.’s Opp’n at 8-9. The Court disagrees. By its plain language, section 103.2(b)(18) applies only to applications where adjudication has been withheld because
USCIS [has] determine[d] that an investigation has been undertaken involving a matter relating to eligibility or the exercise of discretion, where applicable, in connection with the benefit request, and that the disclosure of information to the applicant or petitioner in connection with the adjudication of the benefit request would prejudice the ongoing investigation.
8 C.F.R. § 103.2(b)(18) (emphasis added). Hence, this regulation is only relevant where there is an ongoing investigation that would be prejudiced by disclosures to the applicant. Neither party has alleged that situation exists here. Moreover, in support of this plain reading, the agency published the final version of the rule after the notice-and-comment period and included the following clarification in the supplementary information section:
The purpose of this rule is to prevent the use of visa petition regulations to obtain information regarding criminal investigations which would not be discoverable in the normal course of an ongoing criminal investigation and to protect confidential informants, witnesses, and undercover agents connected with civil and criminal investigations.
Powers and Duties, 53 Fed.Reg. 26034 (July 11, 1988). Neither party has alleged that there is an ongoing criminal investigation or that confidential informants, witnesses, or undercover agents are in any way involved here. Thus, the stated purpose of this rule does not encompass the situation faced by Beshir, and 8 C.F.R. § 103.2(b)(18) does not provide a timeframe for the adjudication of her application. The parties have not suggested, and the Court has not found, any other possibly applicable timeframe in federal statutes or regulations.
Because no guidelines compel USCIS to adjudicate adjustment applications by or within a certain time, “plaintiff plainly cannot assert that USCIS has failed to adjudicate [her] application within a time period in which it was required to do so.” Orlov, 523 F.Supp.2d at 37. The absence of an applicable timeframe for the adjudication of adjustment applications supports the conclusion that the pace of adjudication is discretionary and that the Court lacks jurisdiction to hear Beshir’s claim of unreasonable delay.
C. National Security Considerations
The national security considerations implicated by the adjudication of an adjustment application placed on hold because of terrorist-related inadmissibility further support the conclusion that the pace of adjudication is discretionary and thus not subject to judicial review. It is undisputed that USCIS has placed Beshir’s application on hold because it found that she was ineligible for an adjustment of status pursuant to 8 U.S.C. § 1182(a)(3)(B)(i)(I) for providing material support to a terrorist organization. It is also undisputed that the authority given to the Secretary of State and the Secretary of Homeland Security to exempt individuals from terrorist-related inadmissibility is discretionary. See 8 U.S.C. § 1182(d)(3)(B)(i) (providing that the decision is within the “Secretary’s sole unreviewable discretion”). It would be “incongruous to, on the one hand, insist that Defendants act promptly to adjudicate an application that can only succeed through the exercise of the Secretary’s discretionary authority, while at the same time advancing a theory of subject-matter jurisdiction that disavows the notion that the Secretary has been called upon to exercise her discretionary authority.” Seydi v. USCIS, 779 F.Supp.2d 714, 720 (E.D.Mich.2011) (finding no jurisdiction).
Moreover, as appropriately noted by another court in this district, “[g]iven the national security implications of immigration regulation, the broad discretion afforded the Attorney General permits the agency to adjudicate applications only after conducting a careful and thorough investigation .... [I]n this context, the Court’s insertion into that process would be inappropriate and could be detrimental to national security.” Tao Luo, 521 F.Supp.2d at 74 (citing Safadi, 466 F.Supp.2d at 701); see also Aguirre-Aguirre, 526 U.S. at 425, 119 S.Ct. 1439 (explaining that “judicial deference to the Executive Branch is especially appropriate in the immigration context where officials exercise especially sensitive political functions” (internal quotation and citation omitted)).
D. No Refusal to Adjudicate Beshir’s Application
To the extent Beshir claims that defendants have “refused” to adjudicate her application, see Pl.’s Opp’n at 27 (“Defendants have willfully and unreasonably delayed in, and have refused to, adjudicate Plaintiff’s reopened I-485”), and have thereby failed to perform an arguably non-discretionary duty, her argument is unavailing because the facts and the record demonstrate otherwise. It is undisputed that Beshir’s current adjustment application has been on hold since April 30, 2008 for the same reasons it was initially denied—terrorist-related inadmissibility—and that, as it currently stands, no exemptions apply. It is likewise undisputed that defendants have completed the ordinary steps necessary to process Beshir’s adjustment application, including processing several fingerprint checks, completing preliminary background checks, and conducting an in-person interview. See Ex. 1 to Defs.’ 3d MSJ, Gareth R. Cannan Decl. [ECF No. 37-1] ¶¶ 5, 10-12. And defendants have explained that they are “withholding final adjudication due to the implementation of a legislatively enacted, high-level exemption process for which Beshir (or [OLF]) may potentially qualify,” Defs.’ Mot. at 30-31, and are “waiting for issuance of additional formal guidance that would affect [Beshir’s] case and release it for adjudication,” Defs.’ Reply [ECF No. 39] at 6.
Unfortunately for Beshir and others similarly situated, it appears that Congress designed the exemption process to be deliberately time-consuming. “[I]t requires consultation between the Secretary of State, the U.S. Attorney General, and the Secretary of Homeland Security.... It also requires research by law enforcement and intelligence agencies and various levels of vetting that precede the required coordination among the three Cabinet officials.” Defs.’ Mot. at 23. A district court in Minnesota aptly represented the situation when it described how
[s]ufficient time is required for information-gathering and research by law enforcement and intelligence agencies. Further, the status and activities of a foreign organization are dynamic and fluid, and thus the time required for an inquiry into that organization must surely vary tremendously. For example, the Secretary’s assessment of a particular Tier III terrorist organization may depend on, among other things: the changing political environment of the organization’s country or region; shifting policies or leadership within the organization; whether the organization continues to be active and if so, in what sort of activities it engages; and ongoing concerns and goals of the United States with respect to its foreign policy.
Namarra, 924 F.Supp.2d at 1065-66 (finding no jurisdiction over a claim of unreasonable delay in adjudication). Defendants have demonstrated that they are actively considering any current exemptions that could be beneficial to Beshir. Defs.’ Reply at 3, 6. For example, upon joint motion of the parties, this case was stayed for several weeks while defendants considered whether a recent exercise of exemption authority applied to Beshir. See Aug. 30, 2012 Stip.; Nov. 5, 2012 Stip.
Defendants have also demonstrated that they are actively exempting applicable adjustment applications from terrorism inadmissibility when appropriate. Qualifying aliens who provided “material support to the All India Sikh Students’ Federation—Bittu Faction”; took part in “activities or associations relating to the All Burma Students’ Democratic Front”; and “received military training under duress or ... solicited funds or membership under duress” have all been exempted from terrorist-related inadmissibility over the last several years. Defs.’ Stmt. ¶¶ 13-15.
The Court agrees that Beshir has been subjected to a very long period of waiting — nearly six years since the initial hold was placed on her case on April 30, 2008. That is far from ideal. But defendants’ actions do not indicate that they have refused to adjudicate Beshir’s application.
II. Jurisdiction Over Beshir’s Claim that Defendants Failed to Apply the 2009 USCIS Memorandum.
In addition to the claim that adjudication of her adjustment application has been unreasonably delayed, Beshir also asserts that USCIS unlawfully failed to elevate her application to USCIS headquarters pursuant to the following text from the 2009 USCIS Memorandum:
If the adjudicating office receives a request from the beneficiary and/or attorney of record to adjudicate a case on hold per this policy (including the filing of a mandamus action in federal court) ... the case should be elevated through the chain of command to appropriate Headquarters personnel. Guidance will be provided by USCIS headquarters on whether or not the case should be adjudicated.
Pl.’s Opp’n at 12 (citing the 2009 USCIS Memorandum). Beshir’s attorney of record sent a letter to the USCIS Director of the Nebraska Service Center requesting that “further action be taken in [Beshir’s] case,” but Beshir asserts that her application does not appear to have been “elevated through the chain of command” at USCIS. Pl.’s Opp’n at 9-10. Defendants respond that Beshir has no legal basis to demand that consideration of her adjustment application be “elevated” because the 2009 USCIS Memorandum “simply provides internal policy guidance” and is not binding on the agency. Defs.’ 3d MSJ at 14-16. As discussed above, “the only agency action that can be compelled under the APA is that which is legally required.” S. Utah Wilderness Alliance, 542 U.S. at 63, 124 S.Ct. 2373. Similarly, mandamus is only appropriate where defendant owes petitioner “a clear nondiscretionary duty.” Pittston Coal Group, 488 U.S. at 121, 109 S.Ct. 414. The question, then, is whether the 2009 USCIS Memorandum represents a legally required, i.e., nondiscretionary, duty over which this Court may have jurisdiction, or simply a non-binding policy statement.
To determine whether an agency has issued a binding regulation or simply a statement of policy, courts in this Circuit are guided by two lines of inquiry. See Wilderness Soc’y v. Norton, 434 F.3d 584, 595-96 (D.C.Cir.2006) (finding that the Management Policies of the National Park Service was a statement of internal policy rather than a binding rule). The first line of analysis focuses on the effects of the agency action, “asking whether the agency has ‘(1) impose[d] any rights and obligations,’ or (2) ‘genuinely [left] the agency and its decisionmakers free to exercise discretion.’ ” Id. (quoting CropLife Am. v. EPA, 329 F.3d 876, 883 (D.C.Cir. 2003)). “ ‘[T]he language actually used by the agency’ is often central to making such determinations.” Id. (quoting Cmty. Nutrition Inst. v. Young, 818 F.2d 943, 946 (D.C.Cir.1987)). The second line of analysis “ ‘focuses on the agency’s expressed intentions.’ ” Id. (quoting CropLife Am., 329 F.3d at 883). “The analysis under this line of cases ‘look[s] to three factors: (1) the [a]gency’s own characterization of the action; (2) whether the action was published in the Federal Register or the Code of Federal Regulations; and (3) whether the action has binding effects on private parties or on the agency.’ ” Id. (citing Molycorp, Inc. v. EPA, 197 F.3d 543, 545 (D.C.Cir.1999)).
For many of the same reasons highlighted by the D.C. Circuit when it found that the Management Policies of the National Park Service was simply a policy statement rather than a binding rule, see id. at 595-96, this Court concludes that the 2009 USCIS Memorandum is a nonbinding policy statement. For example, the text does not use mandatory language, “such as ‘will’ and ‘must,’ ” id. at 595, but instead uses the word “should.” Nor was the text issued through notice-and-comment rule-making under 5 U.S.C. § 553 of the APA. “ ‘Failure to publish in the Federal Register is [an] indication that the statement in question was not meant to be a regulation since the [APA] requires regulations to be so published.’ ” Id. at 596 (quoting Brock v. Cathedral Bluffs Shale Oil Co., 796 F.2d 533, 538-39 (D.C.Cir.1986)). Moreover, the 2009 USCIS Memorandum was never published in the Code of Federal Regulations. Id. (explaining that “[t]he real dividing point between regulations and general statements of policy is publication in the Code of Federal Regulations” (internal quotation marks and citation omitted)). USCIS’s characterization of the 2009 USCIS Memorandum is also telling: the subject line indicates that the contents of the memorandum are “[r]evised guidance,” not “revised rules” or “revised regulations.”
In combination, these factors support the conclusion that the 2009 USCIS Memorandum is merely a statement of internal policy intended for use by USCIS field offices. Therefore, the 2009 USCIS Memorandum is not a binding rule or regulation, and the Court lacks jurisdiction over Beshir’s claim that defendants failed to apply it.
CONCLUSION
For the foregoing reasons, Beshir’s complaint will be dismissed, and defendants’ motion for summary judgment will be denied as moot. A separate Order accompanies this Memorandum Opinion.
. Judge Urbina entered the Memorandum Opinions and Orders denying defendants’ two previous motions for summary judgment. The case was reassigned to Judge Bates in April 2012, after Judge Urbina's retirement.
. The INA's jurisdiction-stripping provision is slightly narrower, however, because it specifies that the discretionary agency action in question must also be "specified under this subchapter." 8 U.S.C. § 1252(a)(2)(B). The subchapter referred to is “Title 8, Chapter 12, Subchapter II, of the United States Code, codified at 8 U.S.C. §§ 1151-1381 and titled 'Immigration.' ” Kucana v. Holder, 558 U.S. 233, 239 n. 3, 130 S.Ct. 827, 175 L.Ed.2d 694 (2010).
. Defendants cite only to section 1159(b), whereas Beshir cites only to section 1255(a). See Defs.’ 3d MSJ at 10; Am. Compl. ¶¶ 4-5. Section 1159(b) appears to be more applicable to this case because it specifically applies to "any alien granted asylum.” 8 U.S.C. § 1159(b). Nonetheless, the Court will consider both statutes because the relevant language is analogous.
. Judge Urbina’s decision in this case denying defendants' motion to dismiss for lack of subject-matter jurisdiction also referenced Kucana when it discussed whether the pace of adjudication was "specified under this sub-chapter” for the purposes of the INA’s jurisdiction-stripping provision. See Jan. 24, 2011 Mem. Op. at 9-10. This Court has resolved that issue for itself here.
. Defendants have stated that they adhere to this interpretation, and the Court accordingly gives weight to it. See Auer v. Robbins, 519 U.S. 452, 461, 117 S.Ct. 905, 137 L.Ed.2d 79 (1997) (explaining that an agency’s interpretation of its own regulations is controlling unless plainly erroneous or inconsistent with the regulation). Deference is appropriate even where the interpretation is advanced in the form of a legal brief, as long as there is "no reason to suspect that the interpretation does not reflect the agency’s fair and considered judgment on the matter.” See id. at 462, 117 S.Ct. 905 (deferring to agency interpretation presented in a legal brief); accord Blackmon-Malloy v. U.S. Capitol Police Bd., 575 F.3d 699, 704-710 (D.C.Cir.2009).
COLUMBIA BROADCASTING SYSTEM, INC., Plaintiff, v. AMERICAN SOCIETY OF COMPOSERS et al., Defendants.
No. 69 Civ. 5740.
United States District Court, S. D. New York.
Sept. 22, 1975.
Cravath, Swaine & Moore, New York City, for plaintiff; Alan J. Hruska, Robert K. Baker, J. Barclay Collins, II, Robert M. Sondak, Kenneth M. Kramer, New York City, of counsel.
Paul, Weiss, Rifkind, Wharton & Garrison, New York City and Bernard Korman, New York City, for ASCAP; Herman Finkelstein, Jay H. Topkis, Allan Blumstein, Max Gitter, Richard Reimer, New York City, of counsel.
Hughes, Hubbard & Reed, New York City, for Broadcast Music, Inc.; Amalya L. Kearse, George A. Davidson, New York City, of counsel.
Stroock & Stroock & Lavan, New York City, for Warner Bros. Inc.
Simpson, Thacher & Bartlett, New York City, for MCA, Inc. and Duchess Music Corp.
Linden & Deutsch, New York City, for G. Schirmer, Inc. and Associated Music Publishers, Inc.
Hofheimer, Gartlir, Gottlieb & Gross, New York City, for Essex Music, Inc. and Hollis Music, Inc.
LASKER, District Judge.
In this age of change the quality of life has been fundamentally altered and influenced by the development of the automobile, the computer and television.
Millions of viewers spend untold hours weekly viewing television. During the larger part of that time the viewer is a listener to programs which utilize music, whether as background, as theme or as a feature. This case relates to the method by which networks are licensed to use copyrighted music on television.
The Columbia Broadcasting System (CBS) brings this antitrust action against the American Society of Composers, Authors and Publishers (ASCAP), Broadcast Music, Inc. (BMI) and their members and affiliates. It complains that the present system by which ASCAP and BMI issue blanket licenses for the right to perform any or all of the compositions in their repertories over the CBS network in exchange for a flat annual fee violates the Sherman Act, 15 U.S.C. §§ 1 and 2. The complaint seeks an injunction under § 16 of the Clayton Act, 15 U.S.C. § 26, directing ASCAP and BMI to offer CBS performance right licenses on terms which reflect the nature and amount of CBS’ actual use of music, or in the alternative, enjoining them from offering blanket licenses to any television network. CBS also seeks a declaration of copyright misuse under the Declaratory Judgment Act, 28 U.S.C. §§ 2201, 2202.
I.
Introduction
A. The Parties
Prior to ASCAP’s formation in 1914 there was no effective method by which composers and publishers of music could secure payment for the performance for profit of their copyrighted works. The users of music, such as theaters, dance halls and bars, were so numerous and widespread, and each performance so fleeting an occurrence, that no individual copyright owner could negotiate licenses with users of his music, or detect unauthorized uses. On the other side of the coin, those who wished to perform compositions without infringing the copyright were, as a practical matter, unable to obtain licenses from the owners of the works they wished to perform. ASCAP was organized as a “clearinghouse” for copyright owners and users to solve these problems. The world of music has changed radically since 1914. Radio and television broadcasters are the largest users of music today; they “perform” copyrighted music before audiences of millions. In 1975 ASCAP and BMI licensed these large users, including CBS and the other networks as well as smaller ones such as concert halls and background music services.
Because of the multitude of performances of music they generate each year, virtually all radio stations and television networks secure the rights to perform the music they use by a “blanket” license. An ASCAP blanket license gives the user the right to perform all of the compositions owned by its members as often as the user desires for a stated term, usually a year. Convenience is the prime virtue of the blanket license: it provides comprehensive protection against infringement, that is, access to a large pool of music without the need for the thousands of individual licenses which otherwise would be necessary to perform the copyrighted music used on radio stations and television networks in the course of a year. Moreover, it gives the user unlimited flexibility in planning programs, because any music it chooses is “automatically” covered by the blanket license.
ASCAP’s current membership includes some 6,000 music publishing companies and 16,000 composers. Its members have granted ASCAP, as their licensing agent, the nonexclusive right to license users to perform the compositions owned by them. ASCAP provides its members with a wide range of services. It maintains a surveillance system of radio and television broadcasts to detect unlicensed uses, institutes infringement actions, collects revenues from licensees and distributes royalties to copyright owners in accordance with a schedule which reflects the nature and amount of the use of their music and other factors.
BMI, a non-profit corporation, was organized in 1939 by members of the radio broadcasting industry, including CBS. It is affiliated with approximately 10,000 publishing companies and 20,000 writers and functions in essentially the same manner as ASCAP. Although CBS sold back its BMI stock to the corporation in 1959, BMI is still owned entirely by broadcasters.
As a practical matter virtually every domestic copyrighted composition is in the repertory of either ASCAP, which has over three million compositions in its pool, or BMI, which has over one million. Like ASCAP, BMI offers blanket licenses to broadcasters for unlimited use of the music owned by its “affiliates.” Almost all broadcasters hold blanket licenses from both ASCAP and BMI.
As is generally known, CBS operates one of three national television networks, as well as AM and FM radio stations in seven major cities. It has held blanket licenses from ASCAP for its radio broadcast operations since 1928, and from BMI since soon after that organization was founded in 1939. It has held ASCAP and BMI blanket licenses for its television network on a continuous basis since the late 1940’s.
CBS supplies television programs to approximately two hundred affiliated television stations throughout the country, and telecasts about 7,500 programs per year. Many of these programs make use of copyrighted music which is recorded on the soundtrack. However, CBS does not produce most of the programs seen on its network. Instead it purchases the right to broadcast programs produced by independent television production companies, known as “program packagers.” Most of the popular prime-time serials fall into this category. In addition CBS itself produces a television serial (“Gunsmoke”), two day-time serials, a number of “specials,” usually variety shows, as well as news, public affairs and sports programs.
Agreements between program packagers and CBS normally stipulate the price at which the packager will produce a program in a series and furnish it to CBS for broadcast. Pursuant to the agreements, packagers are responsible for obtaining and furnishing to CBS most rights necessary for the use of copyrighted music by the network, such as the right to record a copyrighted song in synchronization with the film or video tape (“synch” rights). However, program packagers do not, in the present scheme of things, furnish to CBS the right to perform the copyrighted music for profit as part of a television broadcast. Ever since television became commercially practicable in the late 1940’s, CBS has obtained such “performance” rights for packaged programs, as well as for the programs it produces itself, from ASCAP and BMI by purchasing blanket licenses. From time to time it has renewed its licenses after negotiations with ASCAP and BMI. In the history of the parties the fee for the blanket license has been expressed in terms of a percentage of CBS’ advertising revenues. For example, for many years prior to the institution of suit, the BMI blanket license fee remained at 1.09% of net receipts from sponsors after certain deductions. This resulted in payment to BMI of about $1.6 million in 1969. For access to ASCAP’s considerably larger repertory, CBS paid about $5.7 million in 1969. Averaging the total of $7.3 million paid by CBS in that year over 7,500 programs, its cost for ASCAP or BMI music runs about $1,000 per program. Of course, as detailed later, many of CBS’ programs, such as news and public affairs shows, use no music at all, while others, such as variety shows, use a great deal. $1,000 is a small fraction of the total cost of the program. CBS pays about $200,000 for each episode of a one-hour variety show or dramatic serial, and as much as $750,000 for a made-for-TV movie.
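The per-program average the court cites follows from simple division; a minimal check of the arithmetic, using the approximate 1969 figures quoted above:

```python
# Check of the per-program music cost quoted in the opinion.
# Dollar figures are the approximate 1969 amounts stated in the text.
bmi_fee = 1_600_000      # paid to BMI in 1969
ascap_fee = 5_700_000    # paid to ASCAP in 1969
programs = 7_500         # programs telecast per year

total = bmi_fee + ascap_fee        # the $7.3 million total
per_program = total / programs     # about $973, i.e. "about $1,000 per program"
```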
Since the commencement of this action, CBS has held interim blanket licenses from ASCAP and BMI at a total annual cost of some $6 million.
B. The Consent Decrees
Neither ASCAP nor BMI is a stranger to antitrust litigation. In 1941 the government sued ASCAP for antitrust violations. The action resulted in a consent decree which largely governs ASCAP’s relationships with licensees such as CBS and other users. As amended in 1950, the decree requires ASCAP to offer a “per program” license to broadcasters in addition to the blanket license it has traditionally offered. Both forms of license grant the right to use any or all of the works in ASCAP’s repertory. However, the blanket license allows use of the entire inventory for a designated period of time, usually a year, for which the user pays a flat fee, while the per program license permits use of the entire repertory but requires payment only with respect to programs which actually make use of copyrighted music. The 1950 decree mandatorily enjoins ASCAP to set its fees for these licenses in a manner which gives the user a genuine choice between them, and prohibits it from requiring or influencing the prospective licensee to negotiate for a blanket license before negotiating for a per program license. If ASCAP and the licensee are unable to agree on a fee, the latter may apply to the United States District Court for the Southern District of New York for determination of a “reasonable fee.” In such proceedings, ASCAP bears the burden of establishing the reasonableness of the fee it requests.
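The decree’s requirement of a genuine choice between the two license forms turns on how each one prices music use. A toy comparison makes the structural difference concrete; the fees and program counts below are illustrative assumptions only, not figures from the decree or the record:

```python
def blanket_cost(flat_fee: float) -> float:
    """Blanket license: one flat fee covers unlimited use of the
    entire repertory for the license term."""
    return flat_fee

def per_program_cost(program_fee: float, programs_with_music: int) -> float:
    """Per-program license: access to the entire repertory, but a fee
    is owed only for programs that actually perform licensed music."""
    return program_fee * programs_with_music

# Hypothetical user: 100 programs per term, only 40 of which use music.
flat = 50_000.0       # assumed blanket fee
per_prog = 900.0      # assumed fee per music-using program

light_use = per_program_cost(per_prog, 40)   # 36,000: cheaper than the blanket fee
heavy_use = per_program_cost(per_prog, 100)  # 90,000: dearer than the blanket fee
```

Under these assumed numbers a light music user would prefer the per-program form and a heavy user the blanket form, which is why the decree obliges ASCAP to price the two so that the choice between them is real.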
Finally, ASCAP’s licensing authority is not exclusive. The 1950 decree provides that music users may bypass ASCAP entirely, and negotiate for a license directly with the composer or publisher holding the copyright.
Under the terms of a consent decree entered in 1966 in United States v. BMI (S.D.N.Y.), BMI is required to offer a per-program license in addition to a blanket license. The difference in the terms of these licenses must be justified by “applicable business factors.” Although the form of the BMI decree differs from that of the ASCAP decree, the parties have stipulated that CBS could secure direct licenses from BMI affiliates with the same ease or difficulty, as the case may be, as from ASCAP members. (CX 3)
C. CBS’ Complaint
CBS does not allege that ASCAP and BMI have violated the terms of the consent decrees. It claims, rather, that the licensing alternatives which the decrees specify are not flexible enough to meet its needs, and are not realistically available to it. Thus, CBS’ complaint charges that the blanket license “compels” it to pay performance royalties with respect to television programs which use no music and that the per-program license requires it to pay the same royalty for a program which uses a single copyrighted composition as for one which uses many. (Complaint ¶¶ 14, 19). In other words, CBS asserts that defendants are “using the leverage inherent in [their] copyright pool to insist that plaintiff pay royalties on a basis which does not bear any relationship to the amount of music performed.” (Complaint ¶ 19) As to the third alternative specified in the consent decrees—the possibility of bypassing ASCAP and BMI entirely and seeking licenses for the specific compositions it wishes to perform directly from the copyright proprietors—CBS alleges that any attempt by it “to acquire such a large body of rights from the [individual copyright proprietors] . . . would be wholly impracticable . . . ” (¶ 15)
CBS’ disenchantment with the blanket licensing system takes form in several legal claims: first, that the writer and publisher members of ASCAP and BMI have combined through their common licensing organizations to eliminate price competition among themselves and, by pooling the grant of their respective licenses through ASCAP and BMI, to fix the price which a television network must pay to secure the rights; second, that ASCAP and BMI insist on granting only blanket licenses and have therefore imposed an unlawful tie-in, in that CBS is required to purchase the rights to music it does not want to buy in order to secure the rights to music it does want; third, that by forming pools of music and requiring CBS to deal with the common licensing agent of the pools, the writer and publisher members and affiliates of ASCAP and BMI are engaging in a concerted refusal to deal directly with CBS; fourth, that through ASCAP and BMI the writers and publishers are guilty of monopolization, both attempted and achieved; and fifth, that the activities described constitute copyright misuse.
Despite this rather imposing line-up of charges, the central issue in the case is not complex. The essence of CBS’ claim is that ASCAP and BMI are illegal combinations whose purpose and effect is to exact royalties from CBS for music it does not wish to license. The validity of the claim turns on whether CBS is in fact compelled to take a blanket license from the licensing organizations in order to secure the performance rights it needs. ASCAP and BMI contend that CBS is not compelled to do so, but has, in common with the ABC and NBC television networks and virtually all radio broadcasters, found it most convenient to license music by the blanket method. Defendants argue that if CBS no longer wishes to secure performance rights through centralized agents such as ASCAP and BMI, it can obtain the necessary rights directly from the individual members and affiliates of ASCAP and BMI by negotiating with them for performance rights to the particular compositions it wants. As defendants view the case, if CBS is to prevail it must prove that direct licensing with members of the alleged combination is an unfeasible alternative to the blanket license. Proof that licenses could not be obtained directly from copyright proprietors, despite the fact that ASCAP and BMI are required by consent decrees to permit their members and affiliates to license their compositions to users directly, would support the inference that defendants have formed illegal combinations in order to foreclose competition in the market for performance rights to music for network use. Conversely, proof that direct licensing is a feasible alternative method by which CBS could satisfy its music needs would undercut its claims that copyright proprietors have combined to monopolize the market for performance rights and have used their leverage to fix prices and impose unlawful tie-ins.
CBS vigorously disagrees with this view of the case. It argues, as though it could have moved for summary judgment years ago, that ASCAP and BMI are guilty of per se violations of the antitrust laws because the blanket licensing system, which is the only method by which CBS and the other networks have ever licensed performance rights, has “thoroughly eliminated” price competition among copyright owners as a matter of historical fact. (CBS Post-Trial Brief at 15) CBS views the question of the feasibility of direct licensing as irrelevant to the issue whether defendants have restrained trade. It argues that the sole questions to be determined are (1) whether defendants’ restraint is justified or reasonable in view of the unique economic setting of the music licensing market; and (2) whether licensing can be accomplished on a more competitive basis. We find CBS’ analysis unpersuasive. Nevertheless, we set forth our views on the questions CBS raises because of their central importance to the case. To do this, we must retrace some of the steps taken to define the issues prior to the trial.
II.
The Issue Presented for Decision
A. ASCAP’s Motion for Summary Judgment
At an earlier stage in the litigation ASCAP moved for summary judgment, relying principally on the decision in K-91, Inc. v. Gershwin Publishing Corp., 372 F.2d 1 (9th Cir. 1967), cert. denied, 389 U.S. 1045, 88 S.Ct. 761, 19 L.Ed.2d 838 (1968). In K-91 several members of ASCAP sued a radio broadcaster for infringement. The broadcaster admitted the infringement but defended on grounds similar to those asserted by CBS: that ASCAP is an unlawful combination engaged in price-fixing and block-booking of its members’ compositions. In rejecting the claims, the Ninth Circuit observed that ASCAP does not fix prices because, under the 1950 consent decree, the United States District Court for the Southern District of New York is the ultimate price-fixing authority in the event of disagreement as to the reasonableness of ASCAP’s fees. As to the other claims, the court observed:
“No contention is made here that ASCAP’s actual activities do not comply with the decree. In short, we think that as a potential combination in restraint of trade, ASCAP has been ‘disinfected’ by the decree.
There is an additional reason why the activities disclosed by this record do not violate the antitrust laws. ASCAP’s licensing authority is not exclusive. The right of the individual composer, author or publisher to make his own arrangements with prospective licensees, and the right of such prospective licensees to seek individual arrangements, are fully preserved [by the 1950 decree].” 372 F.2d at 4.
Although we agreed with the K-91 court, and continue to agree, that the activities of ASCAP and BMI are not illegal per se, we denied ASCAP’s motion for summary judgment, because of a critical difference between the case presented in K-91 and the one at hand. In K-91 the parties stipulated that it would be virtually impossible for broadcasters and copyright proprietors to arrange separate licenses and payments for each radio performance of a copyrighted composition, and no proposal was made to the court of a practicable alternative to blanket and per-program licenses. In contrast to K-91, CBS’ claims are premised on the practicability of alternatives to the system now in effect. As noted earlier, CBS seeks an injunction either enjoining ASCAP and BMI even from offering blanket licenses or, in the alternative, and preferably, establishing what CBS calls a “per-use” system, by which ASCAP and BMI (rather than individual copyright owners) would be required to license individual compositions in accordance with a schedule of fees under court supervision. Moreover, far from being a stipulated fact, the impracticability of CBS’ “bypassing” ASCAP and BMI to secure licenses directly from copyright proprietors is the key factual issue in the case. Accordingly, we held that the feasibility of less restrictive alternatives to the blanket licensing system presented a genuine issue as to a material fact in the case and denied summary judgment to ASCAP.
Subsequent to determination of ASCAP’s motion and in accordance with our holding, we ordered trial of the following specified issues:
“(i) Whether defendants’ conduct constitutes an actionable restraint of trade and compels the plaintiff as alleged in the complaint;
(ii) Whether, if such restraint or compulsion exists, it is reasonable and justified or whether it may be achieved by less anticompetitive means.”
B. CBS’ “Per Se” Contention
Despite our earlier holding that the activities of ASCAP and BMI are to be judged by the rule of reason and the specification of the issues to be tried in light of that holding, CBS now takes the position that the primary question presented for determination is whether the present system can be amended to operate on a more competitive basis. As noted earlier, it argues as to the first issue, that it has established an illegal restraint of trade as a matter of law because the blanket licensing arrangement has “thoroughly eliminated” price competition among copyright owners as a matter of historical fact. (CBS Post-Trial Brief at 15) Coming after an eight week trial and the accumulation of a bulky factual record, the timing of this contention is unusual. For the reasons stated below, we find it to be unmeritorious as well.
In support of its contention that ASCAP and BMI are illegal combinations merely because they offer blanket licenses, CBS cites cases in which sellers agreed among themselves as to the prices to be charged buyers for their products. See, e. g., United States v. Socony-Vacuum Oil Co., Inc., 310 U.S. 150, 60 S.Ct. 811, 84 L.Ed. 1129 (1940); United States v. Trenton Potteries Co., 273 U.S. 392, 47 S.Ct. 377, 71 L.Ed. 700 (1927). The cases are inapposite. Unlike the plaintiffs in the cited cases, CBS does not claim that the individual members and affiliates (“sellers”) of ASCAP and BMI have agreed among themselves as to the prices to be charged for the particular “products” (compositions) offered by each of them. It makes the very different claim that a combination of individual sellers offering the entire pool of their products through a common sales agent at a negotiated package price is per se illegal, regardless whether the sellers are willing to sell their products on an individual basis.
The claim fails as a matter of law. In Automatic Radio Co., Inc. v. Hazeltine Research, Inc., 339 U.S. 827, 70 S.Ct. 894, 94 L.Ed. 1312 (1950), the parties entered into an agreement by which Automatic Radio acquired a license for a ten year term to incorporate into its products any or all of several hundred patents held by Hazeltine. Automatic Radio was not obligated to use any of the patents in the manufacture of its products, but agreed in any event to pay Hazeltine royalties based on a percentage of its total sales. Automatic argued that the terms of the license constituted per se patent misuse and an illegal tying arrangement because the agreement exacted payment of a royalty on all sales whether or not its products used the patents, and in effect required it to purchase licenses for products for which it needed no license as well as for those which did. In rejecting the argument, the Court stated:
“We cannot say that payment of royalties according to an agreed percentage of the licensee’s sales is unreasonable. Sound business judgment could indicate that such payment represents the most convenient method of fixing the business value of the privileges granted by the licensing agreement. We are not unmindful that convenience cannot justify an extension of the monopoly of the patent. But as we have already indicated, there is in this royalty provision no inherent extension of the monopoly of the patent. Petitioner cannot complain because it must pay royalties whether it uses Hazeltine patents or not. What it acquired by the agreement into which it entered was the privilege to use any or all of the patents and developments as it desired to use them. If it chooses to use none of them, it has nevertheless contracted to pay for the privilege of using existing patents plus any developments resulting from respondent’s continuous research. We hold that in licensing the use of patents to one engaged in a related enterprise, it is not per se a misuse of patents to measure the consideration by a percentage of the licensee’s sales.” 339 U.S. at 834, 70 S.Ct. at 898 (citations omitted).
In Zenith Radio Corp. v. Hazeltine Research, Inc., 395 U.S. 100, 89 S.Ct. 1562, 23 L.Ed.2d 129 (1969) the Court refined the standards by which the validity of package licenses is to be judged. At issue in that case was the propriety of an injunction entered by the district court enjoining Hazeltine from:
“A. Conditioning directly or indirectly the grant of a license to . . . [Zenith] . . . under any domestic patent upon the taking of a license under any other patent or upon the paying of royalties on the manufacture, use or sale of apparatus not covered by such patent.” 395 U.S. at 133-34, 89 S.Ct. at 1582 (emphasis in original).
The quoted provision was directed at Hazeltine’s proven policy of insisting upon acceptance of its standard five-year package license agreement covering some 500 patents, and reserving royalties based on Zenith’s total radio and television sales whether or not the licensed patents were actually used in the products manufactured. The Court of Appeals had stricken the last clause of the quoted paragraph, relying on Automatic Radio for the proposition that conditioning the license upon payment of royalties on unpatented products was not misuse of the patent. The Supreme Court disapproved this construction of its earlier decision. It distinguished between the situation presented in Automatic Radio, in which the parties agreed on a package license “as a convenient method designed by the parties to avoid determining whether each radio receiver embodied [a Hazeltine] patent,” and the situation in Zenith, where the patent holder compelled the licensee to choose between a package license conditioned on the payment of royalties on unpatented products, or no license at all. 395 U.S. at 135-37, 89 S.Ct. 1562. In other words, the critical difference between an illegal licensing arrangement and a legal one is the fact of coercion or compulsion by the licensor.
We disagree with CBS that such compulsion inheres in the present licensing system as regulated by the consent decrees and that defendants are therefore guilty of per se violations. As noted earlier, CBS makes no claim that either ASCAP or BMI has violated any provision of the consent decrees. The terms of the decrees do not by any construction suggest that CBS is in fact compelled to take a blanket license. To the contrary, ASCAP and BMI are required to offer per program licenses under which a fee is charged only with respect to programs in which a composition within the repertory has been performed; and to structure the fees for blanket and per program licenses so that the user has a genuine choice between them. Apart from the licenses available from ASCAP and BMI, the decrees leave a music user free to obtain licenses directly from copyright owners. This factor alone markedly distinguishes the present case from Zenith, in which Hazeltine, as the sole supplier of the patents in issue, had, for all practical purposes, unlimited leverage in bargaining the terms of any license to them.
C. CBS’ Theory of the Burden of Proof

Taking a different tack, CBS also argues that “it is clear that [ASCAP and BMI] insist on licensing exclusively on a blanket basis” and that because they insist on such an “inherently restrictive” method of sale, they have the burden of proving the availability in the market place of acceptable substitutes, i. e., that CBS could obtain direct licenses sufficient to meet its needs from copyright proprietors. (CBS Post-Trial Brief at 15, 26-27) Neither the facts nor the law support the argument. As outlined later in this opinion, the evidence does not establish that ASCAP and BMI insist or have ever insisted on licensing on a blanket basis; and, of course, if they did, they would flatly violate the terms of the consent decrees.
In any event, the argument fails as a matter of law. CBS cites a number of cases for the proposition that a defendant who argues that the plaintiff can avoid injury by obtaining a substitute product bears the burden of proving such an assertion. See, TV Signal Co. of Aberdeen v. American Telephone & Telegraph Co., 462 F.2d 1256 (8th Cir. 1972); Fontana Aviation, Inc. v. Beech Aircraft Corp., 432 F.2d 1080 (7th Cir. 1970), cert. denied, 401 U.S. 923, 91 S.Ct. 872, 27 L.Ed.2d 826 (1971); Gamco, Inc. v. Providence Fruit & Produce Bldg., Inc., 194 F.2d 484 (1st Cir.), cert. denied sub nom., Providence Fruit & Produce Bldg., Inc. v. Gamco, Inc., 344 U.S. 817, 73 S.Ct. 11, 97 L.Ed. 636 (1952); Stanton v. Texaco, Inc., 289 F.Supp. 884 (D.R.I.1968).
To secure injunctive relief in a private antitrust suit, the plaintiff must prove an actual violation of the antitrust laws or that such violation is impending and that as a result the plaintiff is threatened with loss or injury. Zenith Radio Corp. v. Hazeltine Research, Inc., supra, and Credit Bureau Reports, Inc. v. Retail Credit Co., 476 F.2d 989 (5th Cir. 1973). In the cases on which CBS relies, the plaintiff had indisputably established the first element, i. e., that the defendant had illegally denied him something he wished to purchase, for example, space in a fruit market, access to telephone poles for a cable TV installation, or an aircraft dealership. The defendant in those cases argued that the plaintiff had failed to establish the second element of its claim—injury or the threat of injury—because he had not proven that he could not avoid injury simply by purchasing a substitute product elsewhere in the market. The court in each case held that a plaintiff does not have the burden of proving the nonexistence of suitable alternatives in order to prove injury or the threat of injury, particularly when it is clear that no substitute will have the unique attributes of the product which the defendant denied the plaintiff. However, in none of the cases did the court suggest that the plaintiff does not have the burden of proving the first element: the restraint of trade itself.
Accordingly, the validity of CBS’ argument that it does not bear the burden of proving that direct licensing is not a feasible alternative to the blanket license turns on whether the issue of “alternatives” relates to the element of restraint, or the element of injury. We believe that it relates to the first factor: that is whether ASCAP and BMI have restrained trade. In the cases just discussed, plaintiff alleged that the defendant would not sell him something which he wanted to purchase, and the defendant argued that the plaintiff was not injured by the refusal because market substitutes were available. The present case poses an entirely different claim. The alleged restraint of trade is not that CBS is excluded from purchasing the services offered by ASCAP and BMI, and told to find substitutes elsewhere; but that (1) they allegedly offer only blanket licenses, which CBS says it does not want; (2) have combined to make any effort to obtain an alternative form of license (such as direct licensing) unfeasible; and (3) thereby compel CBS to continue to take a blanket license and to pay for music which it does not want to buy. Unlike the situations in the cases on which it relies, CBS does not want the organizational defendant-seller's product at all. Far from spurning “substitute” products, CBS claims that the lack of a substitute constitutes an alleged restraint of trade. So much is clear when one considers the nature of the direct licensing alternative. It is not at all a “substitute” in the sense used in the “injury-avoidance” cases; it is another means of licensing (on an individual basis) the use of precisely the same music which CBS would perform if it purchased a blanket license. If direct licensing is realistically available, it would enable CBS to pay only for the music it uses and for no other music, and would demonstrate that CBS’ complaint in this action is unjustified.
In sum, we adhere to our earlier conclusion, as embodied in the pre-trial order, that to prevail here CBS must prove that defendants’ conduct in combining into ASCAP and BMI compels CBS to take a blanket license as alleged in the complaint. Proof that direct licensing is not a feasible alternative to the blanket license is an essential element of CBS’ claim, on which it accordingly bears the burden of proof. Conversely, proof that CBS could obtain the necessary performance licenses directly from copyright proprietors would be fatal to its claim that they have pooled the rights to perform their music in a manner which illegally restrains trade in those rights. If the restraint is proven, only then do defendants have the burden of proving that the restraint is justified by the economic context in which music licensing for network television use takes place, and cannot be achieved by less anti-competitive means.
III.
The Stipulation as to Competitive Disadvantage
Prior to trial, the parties executed a stipulation which states in part:
“ . . . There is a portion of the performance rights to ASCAP music appearing on [CBS] programs as to which it would be impracticable for [CBS] or such producers to negotiate for licenses directly with the owners of the performance rights of said music. [Without limiting the parties’ rights to adduce and offer additional proof with respect to any subject, both parties specifically reserve the right to adduce and offer proof regarding the reasons for such impracticability.]” (CX 2, ¶ 13; bracketed portion in original.)
“If [CBS] chose not to have an AS-CAP license, the producers of [CBS] programs did not obtain such licenses, and [NBC] and [ABC] had such licenses, to the extent that [CBS] or the producers of [CBS] programs did not otherwise obtain the performance rights to the ASCAP music which they desired to use on [CBS] programming, [CBS] would be at a competitive disadvantage vis-a-vis [NBC] and [ABC] .” (Id. ¶ 15)
CBS argues that, putting aside its proof at trial as to the impracticality of the direct licensing alternative, ASCAP and BMI have ceded the primary issue in the case by stipulating that CBS could not obtain direct licenses for all its music needs and that consequently, if it dropped its blanket license, it would be at a competitive disadvantage vis-a-vis networks which continued to hold such licenses.
We disagree with the contention that defendants have stipulated the case away. Paragraph 13 does not specify the “portion” of the compositions in the ASCAP repertory as to which it would be “impracticable” for CBS to license directly; and the extent of “impracticability” is critical to the feasibility of direct licensing. As detailed later in this opinion, the evidence establishes that musical compositions are substantially interchangeable and that for any proposed use there are several, if not scores, of compositions which are equally suitable. Accordingly, even if CBS had access to far less than all of the compositions in the ASCAP and BMI repertories, that would not in itself render direct licensing unfeasible.
Because a fair reading of Paragraph 13 does not indicate that ASCAP and BMI have admitted the unfeasibility of direct licensing, Paragraph 15 loses the dispositive force which CBS attributes to it. It is obvious that CBS might be at a competitive disadvantage vis-a-vis other networks if it held no music license. But that fact only raises, but does not settle, the question of what licensing methods are available to CBS. We regard the stipulation merely as an aid to the definition of the issues of the case. The extent of CBS’ use of music, the kinds of compositions it needs, and the persons with whom it must deal to negotiate licenses for them are factors whose relevance to the feasibility of direct licensing is only suggested by the stipulation, on which the parties reserved the right to offer proof. The decision in this case rests on the evidence as to those factors, not the stipulation itself. Accordingly, we turn to the question whether CBS is in fact “compelled” as alleged in the complaint.
IV.
Compulsion: The Quality of the Evidence
Defendants argue that CBS’ case, which alleges the refusal of the defendants to license on terms which require CBS to pay only for the music it uses, falters at the threshold because CBS has not shown that it ever made a clear demand on defendants which they have rebuffed. It is true that several courts have imposed such a requirement in treble damage cases based on a conspiracy to deprive the plaintiff of a particular product. See, e. g., Royster Drive-In Theaters, Inc. v. American Broadcasting-Paramount Theaters, Inc., 268 F.2d 246, 251 (2d Cir.), cert. denied, 361 U.S. 885, 80 S.Ct. 156, 4 L.Ed.2d 121 (1959); Webster Rosewood Corp. v. Schine Chain Theaters, Inc., 263 F.2d 533, 536 (2d Cir.), cert. denied, 360 U.S. 912, 79 S.Ct. 1296, 3 L.Ed.2d 1261 (1959); Milwaukee Towne Corp. v. Loew’s, Inc., 190 F.2d 561 (7th Cir. 1951), cert. denied, 342 U.S. 909, 72 S.Ct. 303, 96 L.Ed. 680 (1952). However, the requirement has not been imposed in any case of which we are aware, when the relief sought is an injunction rather than damages. Cf. Zenith Radio Corp. v. Hazeltine, supra; Credit Bureau Reports, Inc. v. Retail Credit Co., supra.
Although we agree with CBS that it is not required as a condition to suit to have been unequivocally refused the kind of license it now seeks, defendants’ argument highlights the unusual nature of CBS’ claim and the kind of evidence on which it relies. CBS does not claim that it is compelled to take a blanket license because ASCAP and BMI, or individual copyright proprietors, have actually refused or threatened to refuse to negotiate with it for alternative methods of licensing. Instead, its position is that ASCAP and BMI would refuse to negotiate new forms of licenses whose fees are based on actual music use; and that individual copyright proprietors would refuse to deal with it on a direct licensing basis, or at least make it such a difficult proposition that CBS would be forced to resume its blanket license. Although proof of what might or might not occur under hypothetical circumstances in the future is customary when the plaintiff in a private antitrust action seeks to establish a threat of injury, CBS relies heavily on hypothetical proof in order to establish the existence of the restraint itself—the nonavailability of direct licensing.
The other side of the coin just described is that CBS has made no effort to obtain the kinds of licenses it now complains defendants are unwilling to grant. Although the absence of such evidence does not establish that CBS is not compelled to take a blanket license, we nevertheless regard it as highly relevant to that issue.
V.
The Break-Up of an Amicable Marriage
Until the institution of the present suit CBS appears to have lived quite happily with the blanket arrangement which it now disavows. Since 1929 it has obtained ASCAP blanket licenses for its various broadcast operations, the earliest one purchased on behalf of a radio station; and when CBS and other broadcasters established BMI in 1939, they agreed to take blanket licenses. Since its establishment in 1946, the CBS television network (CTN) has continuously held blanket licenses from ASCAP and BMI. Since 1950, CBS’ negotiations with ASCAP for licenses for its television network have of course been conducted within the framework of the amended consent decree. Although, as noted earlier, the terms of the 1950 decree prohibit ASCAP from negotiating a blanket license prior to determining whether the user would prefer a per-program license, CBS has never applied for relief under the decree complaining that ASCAP insisted on blanket licenses. Nor has the court ever been required to set a “reasonable fee” for the blanket licenses negotiated by the parties from time to time. CBS has never negotiated or held a per-program license from ASCAP or BMI for its television network and has never attempted to fulfill its music requirements by bypassing either organization and securing performance rights directly from copyright owners.
This suit did not follow a breakdown in negotiations for a new form of license, but for a renewal of CBS’ blanket license from BMI. In April, 1969, CBS and ASCAP submitted for court approval agreements providing for final license fees as adjusted for 1969 and several prior years. Because the payments provided for in the agreements would have had the effect of sharply widening the historical ratio between BMI and ASCAP fees from CBS, BMI’s President, Edward Cramer, protested to Donald Sipes, CBS’ Vice President in charge of business affairs for the network, that BMI would insist on maintaining parity with ASCAP. After several meetings between Sipes and Cramer in 1969 during which the latter was unable to negotiate higher fees, BMI gave notice on October 29, 1969 that it was exercising its right under the consent decree to terminate CBS’ license, effective January 1, 1970.
CBS did not apply for relief under the decree. Instead, on December 19, 1969, more than a month and a half after BMI’s notice of termination, and less than two weeks before termination would become effective, the President of the CBS television network, Robert D. Wood, wrote to ASCAP and BMI requesting each of them to “promptly submit to us the terms upon which you would be willing to grant a new performance rights license which will provide, effective January 1, 1970, for payments measured by the actual use of your music.” This was the first such demand CBS had made. By letter dated December 23, 1969, Herman Finkelstein, ASCAP’s general counsel, replied that ASCAP would consider the proposal at its next Board of Directors meeting on January 29, 1970; that it regarded CBS’ letter as an application for a license in accordance with the consent decree; that CBS would in the meantime have an interim license for 60 days “at rates and terms to be negotiated, or determined ultimately by the court;” and that representatives of ASCAP would meet with CBS counsel on January 12, 1970 to discuss the application further. (PX 201)
By letter dated December 23, 1969 Cramer replied to CBS’ request on behalf of BMI and stated that “The BMI Consent Decree provides for several alternative licenses and we are ready to explore any of these with you.” (PX 202) CBS did not, however, pursue the matter further. Instead it commenced this lawsuit a week later, on December 31, 1969.
Neither the history of the relationship between the parties nor the events leading to this action remotely suggest that CBS has been compelled to take a blanket license it did not want. Indeed, CBS does not even appear to have seriously considered available alternatives to the blanket license prior to the commencement of suit. CBS’ Vice President in charge of business affairs and planning for the network, Donald Sipes, was its principal witness as to the undesirability of blanket and per program licenses, and the need for a license under which the fee would be based strictly on actual use. Sipes testified that he first decided to explore alternatives to the blanket license sometime in 1968 or 1969. Although he was almost completely unacquainted with the intricacies of music licensing, he spoke to only three people in the course of his exploration. Two of these, Robert Evans and John Appel, were house counsel for CBS. Sipes spoke to them only in their capacity as counsel, and did not seek their advice on the business aspects of licensing. The third person Sipes consulted was Emil Poklitar, the CBS employee in charge of the clerical personnel who process music logs and cue sheets submitted by program producers to be sure the necessary rights have been cleared. Poklitar is not a business man and his duties involve a narrow portion of the music licensing spectrum.
Despite Sipes’ lack of expertise, neither he nor his colleagues at CBS consulted any music writers, publishers, television producers or any other expert in the field about possible alternatives to the blanket license. (Tr. 151, 204, 358, 371) No one at CBS ever conducted a feasibility study about presently available or proposed methods of licensing the music to be performed on its television network. (Tr. 156-57) Indeed, Sipes testified that he did not even speak to other CBS executives about alternatives to the blanket license; he considered the alternatives entirely on his own initiative. (Tr. 180, 369) In sum, CBS thought very little indeed about revising its licensing practices prior to Robert Wood’s “demand” letter to ASCAP and BMI just prior to the commencement of this suit. The evidence described hardly supports CBS’ contention that it has been compelled to take a blanket license. To the contrary, it suggests that CBS did not even view music licensing as a business problem until immediately prior to suit.
VI.
The Claim That the Structure of the Market Bars Direct Licensing
In the absence of direct evidence that ASCAP and BMI and their members and affiliates have refused to negotiate licenses which reflect actual music use, CBS’ claim that it is compelled to take a blanket license hinges on proof that the direct licensing alternative which exists in theory under the consent decrees is not a viable method for securing the necessary performance rights.
CBS claims that it established at trial that the defendants have structured the market in such a way as to lock it into a blanket licensing arrangement and to make any attempt to license its music needs directly so prohibitively risky as to preclude it even from trying. The basic elements of this claim are illustrated by the following syllogism: First, it would be uneconomic for CBS to attempt direct dealing while it still holds a blanket license, because it would then be paying twice for the same music: that is, since the blanket license fee covers unlimited use of the ASCAP or BMI repertory, direct licensing transactions would involve the purchase of additional licenses for music already covered under the blanket arrangement. Second, because copyright proprietors and television networks have never engaged in direct dealing, the transactional machinery necessary to negotiate and clear direct licenses between CBS program producers and the large number of individual copyright proprietors has not been developed; and the absence of such machinery creates a “barrier” to direct licensing. Third, because the blanket license system insulates copyright proprietors from price competition among themselves, they have no incentive to create the necessary machinery, and indeed would refuse to deal with CBS if it attempted to license its needs directly. Fourth, the risk of a refusal to deal is particularly acute in relation to CBS’ present inventory of programs and films, which contains a large number of performances of copyrighted music whose initial runs on television were licensed under a blanket license. If CBS dropped its blanket license, it would need to seek direct licenses for the music contained in any programs which it plans to rerun because a rerun constitutes a performance for profit. Accordingly, the CBS inventory would be vulnerable to “hold-ups” by copyright proprietors who could either refuse to license their music at all, or exact a premium price for it.
In sum, CBS claims to have established that because there is at present (1) no market machinery for direct dealing; (2) no expectation that it will be created; and (3) reason to believe that proprietors would refuse to deal with CBS, particularly with regard to programs in its existing inventory which it wishes to rerun, direct licensing is not a feasible alternative and defendants illegally compel CBS to continue to take a blanket license. To understand the evidence relating to these claims, it is necessary first to describe the nature and extent of CBS’ use of music.
VII.
CBS’ Use of Music
Music is used on network television in three principal ways: as theme, background or feature music. Theme music is the music used to introduce and close a program. Background music is used to complement action on the screen. Feature music is music used as “the main focus of audience attention” (PX 469); for example, a performer singing a song on a variety show. Occasionally, however, well-known compositions suitable for feature use may be used as background music, for example, “Tea for Two” as background to a tea party scene. CBS concedes that it would be a simple matter for it to obtain direct licenses for most of the theme and background music it uses, and that the key to the feasibility of the direct licensing method is whether it can obtain licenses for the feature music and some of the background music it needs. To understand why this is so, some familiarity with the manner in which television programs are produced is necessary.
As noted earlier, CBS itself produces virtually none of its “entertainment” programming. Apart from the news, public affairs, sports and special events programs—which CBS does produce and which make little use of music—the bulk of the programs broadcast over the network are acquired from independent program production companies, or “packagers.” Some of the packagers are well-known Hollywood “majors,” such as MGM, Universal and Paramount. Variety shows and some of the filmed serials are produced by smaller production companies, which are sometimes owned by the star of the show. For example, the “Mary Tyler Moore Show” is produced by MTM Company, Ms. Moore’s own; and “The Carol Burnett Show” is produced by her husband’s company.
Ordinarily, the music used on entertainment serials is almost exclusively theme and background music composed especially for the program. For example, after the program has been filmed or taped, the producer typically hires a background composer to view the film, decide which action requires musical background, score the music and arrange and conduct the music scored. The producer pays the writer a fee for this work and acquires the copyright from him, as an “employee for hire.” Theme music is created the same way, but the same music is of course used from week to week over the life of the series.
The producers of most of CBS’ regular programs own publishing subsidiaries which acquire the copyrights for the music which has been specially composed for the program. For example, CBS itself owns April Music, which in turn owns the rights to the background music used in “Gunsmoke.” The producer of “The Carol Burnett Show” owns Burngood Music and Jocar, which acquire the music specially created for that show, such as background music for comedy sketches. Major studios, such as Universal, own major publishing houses, such as Leeds Music, which in turn own the rights to music created for Universal’s television programs. The publishing subsidiaries receive royalty distributions from ASCAP or BMI for performance of music on the shows created by their parent company. The royalties are of course a small fraction of the amount the producer receives from CBS for the program package itself. CBS may pay upwards of $200,000 for a one-hour episode of a dramatic serial; the publisher’s performance royalties for that program may amount only to about $1,500.
This description of the process by which theme and background music is created makes clear that CBS can easily acquire performance rights for such music as part of the same transaction by which it acquires the program itself. Because the program production company, or its publishing subsidiary controls the rights to music specially created for the program, CBS could license the right to perform that music at the same time and place as the overall right to televise the program.
In contrast to theme and background music, feature music is not usually composed especially for the program. Rather, it is music which has been previously composed, and is controlled by a publisher who is not connected with the program production company. Feature music, and theme and background music whose copyrights are controlled by an “outside” publisher, cannot of course be licensed as a part of the overall transaction by which CBS acquires the program. Instead, in order to obtain rights to such music, it would be necessary for CBS or the program producer to approach the publisher who owns the rights to the music in question. As noted earlier, it is the feasibility of obtaining the licenses to this “outside” music on which the viability of the direct licensing alternative substantially depends.
In order to establish how much of CBS’ music needs would require “outside” direct licensing transactions (as opposed to “inside” transactions with the program producer or his publishing affiliate whose feasibility CBS generally concedes) both sides have introduced into evidence computer runs which they claim establish the extent of CBS’ music use in the three basic categories. In general the computer runs and the testimony relating to music use verify what the average television viewer would assume. CBS’ news and public affairs programs use virtually no music; the staple situation comedy, crime and drama series use almost exclusively theme and background music specially composed for the program; and feature and background music controlled by outside publishers not connected with the program producer is used regularly on a small group of programs: variety shows and variety specials, sports shows (e. g., football halftime shows), late night talk shows, and the “Captain Kangaroo Show.”
Although the parties are in agreement as to the general pattern of CBS’ music use, they differ in their claims as to precisely how much music CBS uses in each category and how evenly its use of music is distributed over the program schedule.
We believe it fruitless and unnecessary to determine the question whether CBS or defendants have more accurately interpreted the data as to CBS’ use of music. It is fruitless because, as both sides concede, the data of record do not permit complete analysis. It is unnecessary because the validity of the conclusions which the parties seek to draw does not at all hinge on the few percentage points which separate the parties. Thus, defendants argue that the available data show that some 85-90% of CBS’ programs use only “inside” music which could be conveniently licensed through the program packager, or no music at all; and that the music for another 5% of the programs could be licensed by seeking performance rights from only one “outside” publisher. According to defendants, only 3-4% of CBS’ schedule is made up of programs (such as variety shows) which make heavy use of outside music, requiring licenses from several outside publishers. Accordingly, defendants argue, CBS could acquire the necessary performance rights for nearly all of its programming without the creation of the “machinery” which CBS claims (as discussed below) is required to facilitate transactions between producers and publishers. This assertion is not inconsistent with CBS’ argument that, even adopting defendants’ figures, direct licensing for the few programs which do make heavy use of “outside” music would be impracticable in the absence of “machinery” to service the large number of transactions which would be required. In short, no matter whose figures are closer to the truth, the question would remain whether the lack of “machinery” destroys the feasibility of direct licensing as an alternative to the blanket license and constitutes an illegal restraint of trade.
VIII.
Are There “Mechanical” Obstacles to Direct Licensing?
A. The Legal Significance of “Machinery”
Prior to trial the parties stipulated that ASCAP members and BMI affiliates “have not established facilities or procedures” for processing requests by music users for direct licenses for performance rights. (CX 2, CX 3) CBS argues that the fact that the individual defendants have not established such “machinery” constitutes a “barrier” to direct licensing which compels it to take a blanket license. (CBS Post-Trial Reply Brief at 29) Putting aside the question of the kind of machinery CBS claims to be necessary and whether its absence does in fact make direct licensing of outside music unfeasible, we disagree with CBS that defendants’ mere failure to have created machinery amounts, without more, to an illegal refusal to deal. (CBS Post-Trial Reply Brief at 27)
As outlined above, CBS has not, in the many years it has held blanket licenses, indicated a wish to fill its music needs by means of direct licensing. There is no evidence of substance that before bringing this suit it ever considered such an alternative in its own business planning. The only expression of its dissatisfaction with the blanket system was the “demand” letter sent by the network President, Robert Wood, two weeks before the commencement of this suit. That letter did not even refer to direct licensing, nor of course to obstacles, such as the lack of “machinery,” which arguably prevented CBS from engaging in direct dealing with copyright proprietors. Rather, the letter related only to CBS’ request for alternative methods of licensing through ASCAP and BMI. In short, there is no evidence that CBS gave any thought to the need for machinery, or noticed its absence, prior to this litigation.
It is simplistic, in view of these facts, to argue that “by virtue of [defendants’] preemption of the field, there are absolutely no facilities in existence for direct licensing . . . ” (CBS Proposed Findings at 37). The “field” consists of buyers as well as sellers, and by taking a blanket license for twenty years, CBS (as well as other broadcasters) has “preempted” any need for the machinery whose absence is now claimed to constitute an antitrust violation. We are unable to accept the proposition that defendants have had the obligation to create the framework for a direct licensing system, particularly in the absence of any indication that CBS would ever wish to use it. There is no evidence, and indeed CBS does not claim, that defendants have refrained from creating the necessary machinery for the purpose of injuring CBS. In these circumstances, the fact that defendants have so far done nothing to facilitate direct licensing does not support the conclusion that they are illegally restraining it.
B. Problems Allegedly Created by the Lack of “Machinery”
Putting aside the question whether the mere absence of machinery illegally restrains trade in the market for performance rights, CBS has failed to prove that there are substantial mechanical obstacles to direct licensing. CBS postulates that under its new proposed licensing system it would pass on the job of licensing “outside” music to the production companies. In such a case, inside music would be conveniently licensed through the program packager and its publishing subsidiary. However, the producer would take on the additional job of obtaining rights to the outside music to be used on the program by contacting the publishers in question (or their agents, as described below) and dealing for the performance rights.
CBS’ principal witnesses as to the need for machinery were Robert Wright, associate producer of “The Carol Burnett Show,” and Edward Vincent, a former staff member of several network programs. Wright and Vincent are in the position of those who would use whatever machinery is required for direct licensing. In general the testimony of these witnesses was not persuasive, and their views on machinery were vague and abstract. (e. g., Tr. 482-83, 500, 687) Three basic claims emerge from their testimony. First, CBS asserts that the producer in a direct licensing world would sometimes have difficulty in identifying the publisher of a given composition in order to approach him for a license. The argument is based on the fact of life in the industry that publishers’ catalogs shift as they buy and sell copyrights, so that the publisher listed on the sheet music or record label may no longer own the composition when the producer wants to license it. CBS further claims that even assuming the producer can locate the publisher, the negotiations will be beset by confusion because, as defendants concede, music publishers have no established procedures for dealing with requests for performance licenses. Accordingly, the argument goes, direct licensing would be impracticable until settled ways of negotiating licenses are developed and publishers train their staffs to handle licensing. Finally, even assuming the producer can speak to the publisher in a language he can understand, CBS claims that difficulty would be caused by a provision in certain contracts between writers and publishers requiring that the writer’s consent be obtained prior to the grant of a direct license by the publisher. A provision requiring such consent appears in the form contract of the American Guild of Authors and Composers (AGAC).
CBS claims that ASCAP’s computer runs of CBS’ music use show that some 40% of the outside music it uses is written by AGAC composers, a figure we adopt arguendo; and that the need to obtain writer consents for the use of that music would delay direct licensing transactions and disrupt the tight production schedules under which some programs are produced, particularly variety shows.
For the reasons stated below, we find that CBS’ claims as to the effect of lack of machinery are without merit. There are two basic flaws in CBS’ general approach. First, CBS’ premise that it would abruptly cancel its blanket license and seek to fulfill all its music needs by direct licensing on the next day (see, e. g., Tr. 463, 633, 1874, 1912-13) is utterly unrealistic. If CBS took such a course there might well be problems of the kind just described. But this would not be proof that defendants have created obstacles which render direct licensing unfeasible. As noted earlier, nothing in the antitrust laws requires defendants to maintain well-oiled machinery for direct licensing for the benefit of CBS. Indeed, there is no support in the record for the proposition that CBS could, even as a matter of internal business planning, switch over to direct licensing without a long period of advance preparation. Accordingly, to presuppose, as CBS does, that the feasibility of the direct licensing alternative is to be judged literally as of “tomorrow” miscasts the issue. The proper question, we believe, is whether such mechanical obstacles as exist could be remedied within a reasonable period prior to cancellation of the blanket license.
The second flaw in CBS’ approach is that it postulates that new direct licensing machinery would of necessity be an edifice entirely distinct from the machinery which now exists for the purpose of licensing other kinds of rights in music, in which ASCAP and BMI do not deal. As outlined earlier, the program packager is responsible for obtaining all rights necessary to televise the program except performance rights, which CBS obtains from ASCAP and BMI. These rights include “synch” rights, that is, the rights required for any program which is to be rerun. Program producers now obtain “direct” licenses for synch rights from publishers through “machinery” created for that purpose. Similarly, movie producers obtain from publishers the rights to record and perform the music they use, (i. e., “mechanical rights” or “mechanicals”) and there is “machinery” for this purpose as well.
Thus, although CBS is literally correct that “there are absolutely no facilities in existence for the direct licensing of [performance] rights by music publishers or other proprietors” (CBS Proposed Findings at 37), it overstates the issue to assume that such facilities would have to be created “from scratch” (CBS Post-Trial Brief at 40). The narrow question in the first instance is whether, as ASCAP and BMI contend, the machinery which publishers and producers now use to license other kinds of music could be adapted to facilitate the licensing of performance rights as well.
C. A Look at Other Kinds of Machinery
Apart from television performance rights, other rights in copyrighted music include motion picture synchronization and performance rights, and television synchronization rights. While ASCAP and BMI do not deal in these rights, facilities to license these rights directly from copyright owner to user do exist.
The television synchronization right is the right to record copyrighted music on the soundtrack of a filmed or taped program. Such rights are required for programs which are to be rerun, as distinguished from those (such as sports events or certain “one-run” taped programs) which are regarded as “live” performances. The grant of TV “synch” rights is almost exclusively brokered through the facilities of the Harry Fox Agency, Inc., which represents virtually every major publisher, about 3,500 in all. As outlined by Fox’s Managing Director, Albert Berman, and by Robert Wright and Edward Vincent, who are members of producers’ staffs, the typical “synch” rights transaction starts with a telephone call to Fox from the producer or from Bernard Brody or Mary Williams, synch rights agents located in Los Angeles who represent producers in their dealings with Fox. Because Fox has instructions regarding each publisher’s fee structure, (or, more often, is familiar with it on the basis of past experience) it is usually able to quote prices over the telephone for the compositions which interest the producer. The entire transaction, including actual issuance of the license, is completed within two to three days at most. Fox issues several thousand television synchronization licenses annually, using a basic staff of only two employees.
A “movie rights” transaction consists of the licensing of the performance right and synch right in one package for use in a theatrical (as distinguished from television) motion picture. The versatile Fox Agency also represents publishers in the licensing of these rights. The negotiation is similar in form to the TV “synch” rights transaction. As described by Marion Mingle, the Fox employee who handles movie rights, producers call or write to Fox requesting price quotations on a number of compositions. Mingle or her assistant then telephones the publisher and outlines the nature of the film and the kind of use which is to be made of the composition in question, so that he can quote the price for the right. Generally, the producer either accepts or rejects the various quotations on the spot; sometimes, however, he may make a counter-offer which Mingle passes on to the publisher. In general, Mingle can quote prices to the producer within two days. She and her assistant license several hundred movies each year.
D. Could Other Kinds of Machinery Help CBS?
As noted earlier, CBS claims that the non-existence of direct licensing machinery in the television performance rights field would practically bar direct dealing in several critical respects: the producer would have difficulty in identifying the copyright owner of a song which had been sold to another publisher; the AGAC writer-consent requirement would delay the licensing transaction and disrupt production schedules; and publishers would be unable to handle requests for licenses because they have no staffs or procedures for direct dealing in performance rights and have not created a central facility such as the Fox Agency to facilitate contact between producers and publishers. These claims dissolve in view of the evidence as to the licensing of other rights in music.
1. Finding Copyright Proprietors
In most cases the producer of a CBS show would have no difficulty identifying the “outside” publisher of a song for which he wants a performance license because he or his agent already deals directly with that publisher to obtain a synch license for the same song. As Michael Dann, former CBS Vice President in charge of programming, noted, any program on tape or film is likely to be rerun, and program packagers usually obtain synch rights at the time the program is produced. Wright, who is on the staff of “The Carol Burnett Show,” testified that problems in clearing synch rights are “rare.” Edward Vincent, a former staff member of “The Jim Nabors Variety Hour,” testified that the Bernard Brody Agency would have no difficulty in giving him the name and address of any copyright owner.
Even if lines of communication to obtain synch rights were not already established, there are several other ways in which a producer could identify the publisher of music he plans to use. Emil Poklitar, who works in CBS’ music clearance department, stated that CBS maintains a file containing the relevant information on over 100,000 compositions. Indeed, as Wright testified, publishers regularly barrage television producers with catalogs and brochures to promote the use of their music. Where they have not done so, there appears to be no reason why CBS could not simply request the catalogs of the major publishers. Finally, it should be stressed that in the vast majority of cases, the copyright owner listed on the sheet music or phonograph record is still the owner of the composition in question.
2. The AGAC Writer-Consent Requirement
Nor has CBS proven that the writer consent requirement in the AGAC form contract would cause significant delay in direct negotiations for performance rights. At present, publishers are required to obtain an AGAC writer’s consent for television synch licenses for songs over ten years old, and movie synch licenses for vocal use of a composition. (3M PX 31) Although Leon Brettler, Vice President of Shapiro, Bernstein & Co., a major publisher, testified that he has occasional difficulty contacting a writer who is on vacation or has just changed residence, the record establishes that meeting the consent requirement rarely causes delay in the issuance of a license. In addition to Brettler, several publisher witnesses were asked if they had trouble getting in touch with their writers. For example, Edwin H. Morris, who owns a company bearing his name, testified he has had no difficulties in doing so. Salvatore Chiantia, of MCA Music, testified that he is routinely able to locate his writers and obtain their consent. The reason why publishers have little difficulty in contacting an AGAC writer and obtaining his consent promptly is not hard to fathom: writers are intensely eager to have their work performed on television. Many AGAC writers have simply given advance blanket consent to their publishers to avoid the risk that the producer will, because of time pressures, substitute a different song. As Chiantia stated in a letter to AGAC concerning consents for background uses (PX 84):
“ . . . if we are not able to give licenses to TV film producers at $25. per film, or if we have to obtain the written consent of each writer for each individual use, we would for all practical purposes never get the compositions from our catalogs into current TV films other than in a few rare instances. The writers who have given us their approval are aware of the competitive situation which exists in connection with the use of music in the filmed TV programs produced in Hollywood. It is because of that very reason that they gave us their okay to go ahead.” (PX 84)
In the same vein, Louis Bernstein of Shapiro, Bernstein & Co. wrote AGAC:
“ . . . Please bear in mind that the authors and composers who come into our office are desperately hungry for performances, which means money from ASCAP. A good number of writers have urged us to get them these performances and stop worrying about the AGAC technicalities. Even so, 19 out of 20 writers would gladly give us any authorization in writing, but since we are dealing with several thousand writers, it becomes a difficult job to be so technical.” (PX 162)
There is every reason to believe that most writers would either give their publishers blanket consent for performance licenses, or give it promptly on a use-by-use basis, just as they presently do regarding synch rights. As Chiantia testified:
“It is no different than the situation with respect to synchronization rights. Why do you have any greater difficulty in this matter than you have in synchronization rights? There are certain synchronization rights that you need that I have to get my writer’s permission on and you get them. Why suddenly do you have such a great problem in respect of getting performance licenses where you don’t have that same problem in getting synchronization licenses.” (Tr. 2970)
3. The Need For Centralized “Machinery”
Although CBS has failed to prove that producers seeking performance licenses could not identify copyright owners, or that the writer consent requirement would significantly delay direct licensing of performance rights, we agree that direct licensing on any major scale would require some central clearing machinery through which transactions could be brokered. Without such machinery, direct licensing might be mechanically feasible, but would be a bulky and inefficient system: for a program producer (or an agent such as the Brody Agency) to contact each of the publishers whose compositions interest him, for every program, would of course be distinctly time-consuming and expensive.
In the past, the Fox Agency has responded to publishers’ needs for central “direct-licensing” machinery for new kinds of music rights by expanding its long roster of services. Defendants argue, accordingly, that music publishers would turn over the job of clearing television performance rights licensing to Fox as well. CBS replies that defendants oversimplify the problem of creating machinery because there can be no assurance that the Fox Agency will agree to take on the job of brokering performance rights. Of course, it is possible that Fox would refuse the opportunity to expand its business. But the lack of hard evidence on the point is chargeable to CBS, not defendants. Never having explored the feasibility of direct licensing, CBS has not given Fox any occasion to consider the possibility of brokering such licenses. In any event, there is no substantial basis for concluding that the Fox Agency would not expand its services to include television performance rights, just as it has expanded in the past to meet the need of publishers for a central agency for movie performance rights and television “synch” rights. However, even if Fox were unwilling to take on the job of brokering performance rights, the creation of a new agency modeled along the same lines need not be the imposing project CBS makes it out to be. Albert Berman, Fox's Managing Director, testified as follows:
“Q Mr. Berman, you were asked by Mr. Hruska on direct something about suppose publishers ask you to take over licensing of public performances on network television. Do you remember that question?
A Yes.
Q And as I understood it, you said that you wanted to make a study before giving a detailed answer.
A Yes.
Q Let me ask this, sir: Would the task be significantly different from the task you now have when licensing TV sync rights?
A Only in numbers. It is certainly much more formidable merely because the uses would be so much greater. But the job could be done, I assume, with enough people and enough physical equipment.” (Tr. 973-74)
Berman did not testify, and CBS did not offer proof, as to how many people and how much physical equipment would be required. According to CBS’ projections, which we adopt arguendo, the number of direct licensing transactions required each year from outside publishers would range from approximately four thousand to eight thousand. (The low figure is projected from a period in 1971 when CBS had three night-time musical variety series; the high figure is based on a period in 1970 when it had seven such programs. CBS does not make a projection for the season which began in the Fall of 1974, during which it offered only one variety series, “The Carol Burnett Show.”) CBS’ posttrial submissions strain to give the impression that each time a producer wished to use a certain type of composition, Fox or its newly created equivalent in the performance rights field would have to contact several different publishers, who in turn would have to check whether the AGAC writers (if any) whose music is involved would give their consent to the grant of a license, and then begin active price negotiations for the song or songs in question. (CBS Post-Trial Brief at 37-44, Reply Brief at 39-40)
It is unrealistic to assume that such cumbersome procedures would be involved in a direct licensing world; indeed CBS’ own papers offer the key to streamlining the job. As noted early in this opinion, CBS seeks as one form of relief in this suit the establishment, under court supervision, of what it calls a “per-use” system. Under the per-use system, as outlined by CBS, musical compositions would continue to be licensed through ASCAP and BMI, but instead of taking a blanket license, CBS would license individual compositions, for which it would pay a specified fee for each use of music from the per-use “reservoir.” The fee for each license would be fixed in a schedule reflecting the nature of the use (e. g., theme, feature or background) and other appropriate factors, such as duration of use. CBS suggests that one convenient way to set a fee schedule is to adopt the present formula by which ASCAP and BMI give royalty “credits” to their members and affiliates. (CBS Post-Trial Reply Brief at 71) The question which naturally arises, and CBS does not answer, is why publishers would not readily adopt the same concept of a fee schedule under a direct licensing system, in which case a centralized computer would store information as to prices as well as other necessary information for each publisher’s catalog.
Of course, it is not for the court to propose a system for direct licensing. Nevertheless, on studious review of the record, we are left with the belief that careful planning would go far to remove any significant “mechanical” obstacles to direct licensing for performance rights. It is true, as CBS points out, that new personnel would have to be trained to handle the task. But the only evidence on the point indicates that new central machinery could be staffed primarily by clerical personnel, as it is at Fox. The period required to train such personnel is presumably measured in weeks or months, rather than years.
Such a finding is supported by CBS’ own scenario as to how things would go in the event it prevailed in this suit: either of its proposed forms of relief—the establishment of its “per-use” system under ongoing court supervision; or a mandatory injunction against the issuance of blanket licenses to any network by ASCAP and BMI—would require the development of “machinery” at least as extensive and very much of the same pattern as that involved in a direct licensing system under the consent decrees. For example, if CBS won an injunction against the issuance of blanket licenses, it would of course be faced with the very same mechanical “barriers” to direct dealing of which it now complains so strenuously. Nevertheless, Donald Sipes, CBS’ Vice President in charge of business affairs and planning for the network, freely conjectured that in such an event the lack of “machinery” would pose no problem because ABC-TV and NBC-TV would be “in the same boat”:
“A Under that assumption, all three networks are in the same boat. In other words, neither one of the three—that’s bad English— but none of them have a competitive advantage, you see, over the others.
Q Assuming that, then what?
A Assuming that with a lot of struggle and some chaos up front, I think again that the machinery necessary to broker deals between the sellers and the buyers in this situation will spring up to fill that gap. There is a need, there is money to be made, and people will spring into that breach to fill that need and make it happen. And deals, direct deals will be made for musical compositions between buyers and sellers.
Now, I do believe, of course, that it will take some time for that machinery to develop up front, but of course all three networks in that situation would have that same problem.
Q Suppose, Mr. Sipes, the injunctive order, in other words the order prohibiting ASCAP and BMI from licensing television networks, the effective date of that order was deferred for a period of, let’s say, a year. Would that remove the struggle you mentioned earlier in your answer? Would that solve that problem, in your mind?
A I think that during that time the machinery would develop, yes, sir.” (Tr. 79-80)
As we view the matter, CBS is not entitled to relief in this suit simply for the purpose of insulating it from the risk of competitive disadvantage vis-à-vis other networks if it makes the business decision to experiment with a new method of music licensing. If CBS’ Vice President in charge of the very subject at hand concedes that within one year suitable machinery would “spring up,” no reason appears on this record why it could not in any event plan to change over to direct licensing, effective one year hence, without a court order to spur the effort.
CBS’ sole response is that copyright proprietors would not of their own accord leave the safe haven of ASCAP and BMI and expend their resources to set up the machinery for direct dealing because they are afraid to engage in price competition for their works; and that only a court order could provide the “signal” that they must do so. The argument is unpersuasive. Assuming that copyright proprietors would in fact be willing to deal with CBS producers—a conclusion we reach in the next section —they would logically create an efficient mechanism to facilitate it (as they have in the case of other music rights), if only to hold down their own costs. In any event, the cost of creating new machinery would be passed on to music users, just as it is at present through ASCAP, BMI and the Fox Agency.
In sum, as stated earlier, CBS might well have “machinery” problems if it cancelled its blanket license “tomorrow.” But this is not proof that defendants have created “barriers” to direct licensing in order to compel CBS to take a blanket license; it is just as consistent with the fact, which the evidence establishes, that no one, including CBS, imagined that the blanket license would lose its charms until shortly before this suit. Because CBS does not claim that it would commence direct licensing tomorrow (although its counsel often questioned witnesses on the assumption that it would), the relevant question is whether the relatively modest machinery required could be developed during a reasonable planning period. The evidence establishes beyond doubt that it could.
IX.
Would Copyright Owners Attempt to Thwart Direct Licensing?
A. The Nature of CBS’ Proof
In the absence of proof that direct licensing is unfeasible because of mechanical obstacles, CBS’ case rests primarily on its claim that copyright proprietors would refuse to deal directly if CBS asked, or at least make it such an arduous and expensive proposition that CBS would be forced to resume the blanket arrangement. (CBS Post-Trial Reply Brief at 29) Indeed, in a substantial sense, the “disinclination issue,” as it has come to be called in the course of the lawsuit, is the major factual issue in the case. As CBS’ post-trial papers recognize, even questions such as mechanical feasibility hinge almost exclusively on the willingness or unwillingness of the defendants to smooth CBS’ course or obstruct it, as the case may be. (See, e. g., CBS Proposed Findings at 45-47)
Such a claim is difficult to prove even in the best of cases, and the present suit is no exception. CBS’ Vice President Donald Sipes testified that CBS has never sought a direct license. The three CBS witnesses who predicted that writers and publishers would refuse to deal with producers were Sipes, and producers Robert Wright and Edward Vincent. Their testimony on the point was unimpressive, particularly inasmuch as none of them had ever spoken to a publisher or a writer in relation to performance rights licensing. Vincent’s direct testimony is representative:
“Q Let’s go back to the delays you said you anticipated getting a secretary on the telephone, et cetera. Don’t you think that copyright proprietors are going to see to it that all those delays are removed in this world in which the CBS Television Network has cancelled its ASCAP-BMI licenses?
A No, I don’t believe that because I am assuming on the basis of this particular lawsuit, that ASCAP and BMI like things the way they are.
If you are asking me to assume that they are going to have a parade for me if I tell them in front I am going to run an end run around their entire organization and attempt to deal direct and circumvent the ASCAP and BMI—
THE COURT: I don’t think that is the question Mr. Hruska asked you. I think he asked you whether you wouldn’t expect the copyright owners to make quick arrangements to deal with you if CBS didn’t have a—
A No, sir, I don’t, and that’s the reason I don’t. I don’t believe that they would want to see that particular system succeed.
THE COURT: Do you have any basis for saying that?
THE WITNESS: Well, as I began to state before, your Honor, there is a system under which they are operating now, which I assume for them is a very good system and that they like—” (Tr. 636-37)
******
“THE WITNESS: Your Honor, it is my opinion that we are talking about members of a group, of a group that has banded together for a very specific reason, and they have sought the shelter of this group for good and reasonable reasons, again, I assume.
You are also asking me if I think that I am going to expect them to want to deal with me?
THE COURT: Yes, I am.
THE WITNESS: To come toward me and say, ‘Yes, let’s make a deal,’ after I have circumvented their group.
No, your Honor, I don’t believe that prudent business sense dictates that I should believe that.
THE COURT: I mean that’s just on your general experience, you are saying that?
THE WITNESS: Yes, sir.”
(Tr. 639-40)
Despite the testimony of Sipes, Wright and Vincent that many if not most ASCAP members and BMI affiliates would be “disinclined” to deal directly with producers for performance rights, none of them could give the name of even one actual publisher or writer whom they thought would fall into that category. Sipes has never spoken to a copyright owner. (Tr. 204, 358). Wright, who is associated with “The Carol Burnett Show,” was “confident” that ASCAP members would be reluctant to deal with him, but was certain that Joe Hamilton, who wrote the theme music for the show, would be inclined to deal (Tr. 489). Edward Vincent, of the Jim Nabors show, stated that the two writers with whom he had actually worked would deal with him (Tr. 726). CBS’ economist Franklin Fisher also expressed the view that ASCAP members would be reluctant to deal but, like Sipes, he has never spoken to a writer or publisher (Tr. 4853).
In the absence of evidence that any ASCAP member or BMI affiliate has ever refused or even threatened to refuse to grant CBS direct performance rights for any composition, CBS offered evidence as to (1) the strong economic incentives which would deter copyright proprietors from direct dealing, (2) the experience of the Minnesota Mining and Manufacturing Company (the “3M” incident) in direct licensing its music needs for a background music tape and tape-player which it marketed in the mid-1960’s, and (3) the ease with which defendants could thwart a direct licensing attempt by CBS, by exacting premiums for the licensing of music already taped or filmed (music “in the can”), which has until now been covered by a blanket license.
B. The Alleged Incentives to Refuse to Deal
CBS’ Post-Trial papers postulate the fact, which we adopt arguendo, that participants in the market for performance rights are rational businessmen motivated by the wish to maximize profits. It excuses its failure to pursue alternatives to the blanket license by arguing that “no reasonably prudent manager of a television network” would subject his company to the risks involved. (CBS Proposed Findings at 33, 36) By a similar line of reasoning CBS contends that no prudent copyright proprietor would voluntarily relinquish the bargaining leverage and shield against price competition which ASCAP and BMI provide. According to CBS’ theory of the case, these essentially hypothetical facts establish both the restraint (i. e., the unavailability of the direct licensing alternative) and the threat of loss (i. e., the economic risk involved in the attempt). We are skeptical of the validity of this general approach, for the issue is not what CBS or copyright proprietors perceive their respective risks to be, but whether CBS has established that its fear that copyright proprietors would in fact attempt to thwart a direct licensing attempt is justified. With that caveat, we turn to the relevant evidence.
It is true, as CBS relentlessly emphasizes, that most of the writer and publisher witnesses who testified, by deposition or at trial, expressed a strong preference for the blanket licensing system. The preference is no surprise in view of the fact that the blanket license is the only way in which performance rights have been marketed to television networks for nearly thirty years. Moreover, the writer-publisher testimony establishes that, from their standpoint, the system is trouble-free and self-executing and the financial rewards are satisfactory. None of them expressed the wish to exchange their present, relatively uncomplicated way of doing business for what they viewed as a new mode involving unfamiliar procedures and possible financial uncertainty.
CBS stresses selected portions of the deposition testimony of several publisher witnesses who expressed themselves vigorously when asked to comment on such questions as the possible prohibition of the blanket licensing system. Several common themes pervade the portions of their testimony which CBS stresses: the recognition that there is a large number of publishers and writers who would be competing for exposure on the CBS network; that many of them might face financial difficulties as a result of possible intense competition in a very limited market; and the strong preference for licensing through ASCAP and BMI, which have a measure of bargaining clout in dealing with CBS, the world’s largest “consumer” of music.
We may agree that the cited testimony proves that writers and publishers prefer the present system and are apprehensive of dealing directly with CBS. However, this by no means proves the obverse: that copyright owners would refuse to deal with CBS if it discontinued its blanket license and insisted upon dealing on a direct licensing basis. Indeed, the snippets of testimony on which CBS relies are replete with the Darwinian imagery of cutthroat competition among hungry publishers and writers seeking network exposure. The colorful deposition testimony of Leon Brettler, an officer of Shapiro, Bernstein, Inc., is an example:
“Q Do you think that there would be a good deal of price cutting by publishers in the licensing of performance rights to television networks ?
A In this case I haven’t got the slightest hesitation of saying not that I know, but I am virtually positive there would be a deluge of price cutting bordering on the cutthroat nature that would lead to mutual self-annihilation.
“I mean among us competitors who would be so desperate and jockeying for position, none of us having any strength, dealing with one huge user or an industry that is a huge user, consisting of three main entities and we only have those three doors open to us and all 4000 of us converging through that door, I think there would be tremendous amounts of concessions and price cutting and deals.” (Dep. 295-98)
* * * * * *
“I think that it would have substantial impact across the board, even to the big companies . the largest music publisher is still David compared to the Goliath of the television industry. The so-called top ten are still Davids compared to Goliath and the only time that I have ever heard of David whipping Goliath was in the Bible. Usually Goliath swamps David.” (Dep. 302-03)
We do not view this testimony as aiding CBS’ case. It tends rather to establish that copyright owners would line up at CBS’ door if direct dealing were the only avenue to fame and fortune.
More significant, however, is the fact that, when read in their entirety, the depositions take on an entirely different hue. For example, Brettler testified that “there is no question of the fact that we would negotiate something” if a producer requested performance rights (Dep. 184-85), and that “hordes” of other publishers would do the same. (Dep. 304-05) Edwin H. Morris, who operates another publishing company, expressed anxiety similar to Brettler’s. However, far from stating that he would not deal with CBS, he testified:
“Q Let us assume that telephone does ring, that you are approached by producers and/or network people who are interested in obtaining direct licenses to the compositions or various of the compositions in the Morris catalog. Do you talk to these people?
A Yes.
Q Do you invite them to come into your office?
A I will even go to theirs.” (Dep. 211-12)
The conclusion that CBS has failed to prove that the “disinclination” of writers and publishers to leave the blanket system would ripen into a refusal to deal directly is fortified by the trial testimony. Although most of the writers and publishers who testified expressed concern similar to those of CBS’ deposition witnesses, all but one of them testified that he or his company would negotiate directly with CBS for performance rights, and most of them believed that their attitudes were representative of others in their position. For example, Arnold Broido, President of the Theodore Presser Company, testified:
“Q * * *
“Let us suppose there came a time when CBS no longer held licenses from ASCAP and BMI and came to you to negotiate or seeking to negotiate direct licenses with you for public performance of compositions in your repertory.
What would be your reaction?
A We would deal with them, of course.
Q Would you tell us why?
A Well, there is really very little else that we could do. We would have no choice in the matter. We would regret it because, obviously, it would be an inconvenience to us and we would regret the breaking of the relationship but we would deal with them.” (Tr. 3492-93).
Broido also stated that:
“the publishers would by and large talk with CBS or anyone else who came to them.” (Tr. 3498)
Salvatore Chiantia, President of the Music Division of MCA, Inc., testified:
“My primary responsibility is to get my music played. To get it exposed. And if I have to go to CBS in a direct licensing scheme, I am going to go. I am not going to sit back and say, I hope you fail. I want you to use my music and I am going to try to make it work.” (Tr. 2957)
“There are only three games in town. I have to play one of three games. If we are talking about television, there are only three games in town. If I am effectively cut out from one, I only have two more to play with.” (Tr. 2947)
The response of the composers who testified was similar to that of the publisher witnesses. For example, the dean of American composers, Aaron Copland, expressed reluctance to change the blanket arrangement, but testified that he would engage in direct dealing if necessary:
“Q Now, it has been suggested in this lawsuit that in the event that the Columbia Broadcasting System Television Network for some reason or another no longer held a license from ASCAP and BMI it might come to you, as an individual copyright proprietor, as an individual composer, and seek a license from you for the right to perform your copyrighted work or works.
I would like to ask you, sir, whether if someone from Columbia came to see you you would be inclined or disinclined to deal with them or what would your reaction be?
A Well, I think I would be rather regretful about the need to individually concern myself with the licensing of a particular work, since the present arrangement takes care of a great many of those chores, as I would think of them, and it seems a comfortable arrangement as it now exists, from our standpoint at any rate.
Q If, however, the question were squarely put to you, will you deal with CBS or will you refuse to deal with CBS, what would your answer be?
A Well, I think my answer would be that of most composers. If they want to get a performance, they will do what is necessary to get the performance and if they have to deal with CBS, they would, I suppose, agree to deal with them.” (Tr. 3485-86)
Composer John Green testified to the same effect:
“Q Mr. Green, suppose the following hypothesis. That CBS canceled its ASCAP blanket license and NBC and ABC continued to hold blanket licenses from ASCAP and suppose that either CBS or a producer of a CBS film series show came to you and sought to engage you to write the background and theme music for the show.
Would you be inclined or disinclined to negotiate with him for the writing of that music?
A I would be inclined to negotiate with him.
Q Do you have an opinion as to whether other background writers and composers would be inclined or disinclined to negotiate with CBS or its producers in that situation?
* * * * * *
Q Do you have an opinion?
A I know how I would like to answer that question. I don’t have an opinion but I would be surprised if they didn’t feel exactly as I do.
******
Q Mr. Green, can you tell us why you would be inclined to negotiate with CBS or the producer of the CBS show in that situation?
A For the following reasons. I like to think that part of my motivation is aesthetic and artistic, but I am also a fellow who earns his living by the making of music in various forms. I am also an artist who derives only secondary pleasure from thinking how great my music is when I hear it in my head.
I like to hear it performed and I like to get paid for hearing it performed and you referred to [CBS]— would I be inclined to negotiate with [CBS] for the performance? Well, they are one of the principal outlets in the world for the performance of music and I want my music to be performed, I want the public to hear it, I want to get paid for it and I would be totally inclined to negotiate with anybody who would like to use it.
Q Mr. Green, you have also told us that in addition to your work as a background composer you have written songs in your music career.
Suppose in that situation we posited just a moment ago, CBS canceling its ASCAP license, suppose one of your publishers called you up and told you that the producer of a CBS variety show is interested in using one of your compositions, one of your songs, on a show, and he asked you for your opinion or view, would you be inclined to recommend that he license or negotiate with CBS or the producer or would you recommend that he not negotiate with CBS or its producer?
A I would recommend that he negotiate.
Q And why?
A Because I would want my song to be exposed and I would also want to derive the revenue that would come from such a source as [CBS] for that exposure.” (Tr. 3457-60)
Perhaps it is not surprising that Walter Dean, who was called by CBS, was the only witness to testify that the publishers for whom he worked would probably refuse to grant licenses to CBS. The publishers are April and Blackwood, companies which CBS owns, and whose catalogs consist mostly of copyrights owned by CBS as well.
Although the testimony of the writer and publisher witnesses persuasively suggested that they would deal directly with CBS, at least ex necessitate, our conclusion that CBS has not proven that they would refuse to do so does not rest solely on their testimony. The extensive evidence on the nature of the music industry amply confirms the proposition.
The two most salient features of the television music market are the enormous value to copyright proprietors of network exposure and the markedly limited opportunities for securing it. Copyright proprietors are eager to have their music performed on television not simply to earn performance royalties distributed through ASCAP and BMI, but because a television performance before millions of viewers is the most effective way to sell phonograph records and sheet music, and to generate performances by other music users. No less than eleven witnesses testified to the compelling desire of writers and publishers to gain television exposure for their music. To that end, publishers regularly direct extensive promotional efforts toward the networks, including the mailing of advance copies of sheet music or recordings to producers, performers and musical directors of television shows. Some publishers use promotional brochures or other written materials; others solicit by telephone. The record establishes beyond doubt that even in what CBS characterizes as a comfortable, blanket license world devoid of price competition, television network performances are highly sought by copyright owners.
The eagerness, and occasional desperation, of copyright proprietors is heightened by the fact that there are so few opportunities to win the prize. There are only three television networks and, as noted earlier, few programs which make appreciable use of previously published music; most programs either use no music at all or “inside” theme and background music published by the producer. The testimony of Alan Shulman, Vice President of the Belwin Mills Publishing Corp., aptly summarizes the situation:
“Q Are any of your promotional activities aimed at getting your music played on network television?
A Yes. But there are very, very limited number of opportunities to do this. For example, because of the number—well, they are limited pretty much to the variety shows that appear on the television networks and these are fewer in number of recent years than they were previously.
Q Why are your opportunities limited to the variety shows?
A Well, because the other shows are basically prerecorded. Prefilmed, et cetera. The series, et cetera, are done and produced beforehand and the music that is performed and synchronized and used in those programs are pretty much controlled by the producers of the particular program.
And these producers, in fact, very often are publishers themselves who control the publishing rights and, naturally, are not too happy to use other people’s music unless they absolutely have to because there is income from it.” (Tr. 3083-84)
In fact, apart from variety programs, producers seldom “have to” use “other people’s music.” With rare exceptions, a considerable number of copyrighted songs are suitable for any use a producer might have in mind. Although every copyrighted composition is philosophically or aesthetically “unique” and its uniqueness is dignified by copyright, virtually none of the four million compositions in the ASCAP and BMI repertories is unique in the mind of a television producer. CBS’ producer witnesses, Wright and Vincent, testified that “any number” of songs would fit a producer’s intended use and that “there would always be, obviously, alternates.” (Tr. 419, 586) Copyright proprietors are keenly aware that their compositions are substantially interchangeable with the compositions of other writers and publishers, a factor which could well be expected to dissipate any “disinclination” to deal with CBS, which might otherwise exist.
Moreover, CBS’ enormous power within the music industry supports the testimony of the writers and publishers who said they would engage in direct licensing. CBS is far more than a television network. It is, as Michael Dann, former Senior Vice President of CBS in charge of programming testified, “The No. 1 outlet in the history of entertainment” and “the giant of the world in the use of music rights.” (Tr. 3374) CBS is the largest manufacturer and seller of records and tapes in the world (Tr. 4615); it owns radio and television stations in a number of major metropolitan areas. On CBS’ own theory that composers and publishers belong to the race of economic men, it is doubtful that any copyright owner would refuse the opportunity to have his music performed on CBS, much less wish to incur CBS’ displeasure. It would risk not only the loss of CBS performance royalties, but royalties from the sale of records sold by CBS subsidiaries and from radio plays of their records on fourteen radio stations operated by CBS in the seven largest cities in the nation.
Moreover, many of the largest publishers are, like April and Blackwood, subsidiaries of large entertainment companies. A number of the named defendants or their parent corporations are program packagers or movie distributors who compete to sell their products to CBS. For example, Famous Music is owned by Paramount and Leeds Music is owned by Universal. The royalties received by these publishing subsidiaries are a small fraction of the amount CBS pays for the program or film. It would be a rhetorical question to ask whether such producers would risk the sale of a program package to CBS because of the disinclination of its publishing subsidiary to engage in direct dealing for the music performance rights.
C. The “3M Incident”
In support of its disinclination theory, CBS also relies heavily on the experience of the Minnesota Mining and Manufacturing Company (3M) which sought direct licenses from publishers in connection with its marketing, in the mid-1960s, of a tape and tape player (the M-700 project) designed to provide 24 hours of background music. The M-700 was designed for use in small commercial establishments, such as restaurants, stores, and doctors’ and dentists’ offices. To this extent, the M-700 was similar to the packages offered by other vendors of background music services such as the well-known Muzak. In one important respect, however, it was different: 3M planned to sell its tape outright, while other vendors leased theirs for a limited term.
3M retained Allen Arrow, an attorney, to negotiate performance licenses for the M-700. Initially, Arrow approached ASCAP for licenses but during the ensuing discussions several problems arose. For example, ASCAP wanted, and 3M did not want, different rates for different classes of users. According to Arrow, ASCAP took the position that the consent decree, which prohibits it from discriminating between similarly situated licensees, barred it from offering a “one class” license because it would discriminate against vendors of background tape systems for larger establishments, such as Muzak and Seeburg. In addition, ASCAP found serious difficulty in 3M’s proposal to sell its tape outright because ASCAP would be required, after the expiration of an initial three year license term, either to try to relicense 3M’s customers for a renewal term or, if they refused, to fight a losing battle trying to police possible infringement in a number of small establishments whose individual royalty payments would amount to some $10 per year. Although 3M, as a potential user, was entitled to invoke the licensing and rate-fixing procedures available to it under the consent decree, Arrow testified that 3M chose not to take that course because of time pressures in assembling the project and other such factors. In the circumstances, ASCAP suggested to 3M that it deal directly with copyright proprietors.
There is considerable evidence that the publishers 3M approached viewed the M-700 proposal with some reservations. Some of them were concerned about the novel form of license 3M sought; others about the policing problem; others about the amount of money involved; and still others about the implications of such a proposal for the traditional ASCAP structure and their customary way of doing business. Nevertheless, 3M signed contracts with 27 of the 35 publishers it approached. It obtained all of the music it needed within its time schedule at a cost of about three quarters of the amount of its first offer to ASCAP. Shortly thereafter, 3M began another successful program to obtain licenses for a second series of M-700 tapes.
Although the statistics do not favor its case, CBS stresses the fact that 3M was able to reach agreement with “only” 27 of the 35 publishers it approached. It argues that a large music user like CBS could be expected to meet considerably greater resistance in a direct licensing effort than 3M, whose rather modest needs for direct licenses could not have threatened to topple the ASCAP structure. Defendants argue, on the other hand, that the 3M incident demonstrates that direct licensing is feasible. They point to the fact that, in the last analysis, 80% of the publishers 3M approached put aside whatever “disinclination” they might otherwise have had to direct dealing, and engaged in business negotiations over a business proposition.
We hesitate to give important weight to the 3M incident as evidence of the likelihood that copyright proprietors would attempt to frustrate CBS’ efforts to engage in direct licensing. In the first place, 3M is hardly as large a music user as CBS. Moreover, it sought licenses for a highly fragmented group of small users, rather than for a huge television network. Finally, the form of license which 3M solicited, involving a package of compositions from a publisher for a three-year term, is not comparable to a license for one performance of a single composition before an audience of millions. In short, CBS and copyright proprietors would be trading for a very different horse than did 3M and the publishers which it approached. Moreover, the evidence as to the 3M incident was developed in large part through documents and deposition testimony; and included considerable hearsay in the testimony of Allen Arrow. We cannot give significant weight to arguments as to the state of mind (e.g., disinclination) of the publishers approached by 3M based on such evidence. Nevertheless, CBS has made the 3M incident a central part of its disinclination claim. For the reasons set forth below we find that the incident, if it proves anything at all, establishes that copyright proprietors would deal with CBS for direct licenses.
In our view, the bare fact that 8 of 35 publishers were unwilling to sign a contract with 3M has no legal significance; the only relevant question is why they did not. To the extent that the motivations of the unwilling publishers can be gleaned from this record, it appears that general opposition to direct licensing and loyalty to ASCAP played a very small part; and that in general where 3M's proposals were rejected there were legitimate business objections to them. We detail some of these below.
1. 3M’s Negotiations With Publishers
Chappell & Co., Inc. was the first publisher 3M approached. Initially, it found 3M’s offer very attractive, but ultimately refused it because of concern that the sale of the M-700 tape would, in effect, put a perpetual free performing right in the hands of 3M’s purchasers unless ASCAP licensed and policed those rights. Chappell’s concerns appear to have been justified because, as matters turned out, only about a third of 3M’s purchasers agreed to pay for licenses to use their tapes after expiration of the initial three year term.
3M next approached MPHC, a publishing company owned by Warner Brothers. MPHC’s initial response was favorable, but the MPHC official authorized to make a final agreement was ill, and died during the course of negotiations. By the time he had been replaced, 3M had met its licensing needs. In 1967, 3M negotiated and reached oral agreement with MPHC for licenses covering a subsequent series of tapes, but 3M then decided not to consummate the transaction.
Like Chappell & Co., Famous Music Corp. expressed serious reservations about the possibility of policing the renewal term performance rights, and after consulting ASCAP it rejected 3M’s proposal. In 1967, Famous resumed negotiations with 3M for a second tape series and a tentative agreement was reached, but 3M dropped the proposition because at that point it expected to make a bulk licensing agreement with ASCAP covering the second series, which as matters turned out, did not materialize.
The Edwin H. Morris Company did not accept the 3M proposal because it thought, in Morris’ words, that 3M was “trying to get something for virtually nothing.” (Dep. 131) The evidence establishes that Morris had completely misunderstood the amount of money 3M was actually offering; and that he would have negotiated if there had not been a failure in communication. (Dep. 129-31, 137-41)
Irving Berlin Music, Inc., initially turned down the 3M offer for reasons which are not clear. It appears to have been reluctant to license except through ASCAP, largely because 3M proposed to use too small a number of Berlin songs to make a different decision worthwhile. Berlin changed its mind as to the second tape series but again 3M backed out in anticipation of licensing these later tapes through ASCAP.
By the time 3M approached The Richmond Organization it had made sizeable deals with other publishers. Accordingly, it was unwilling to guarantee the use of a sufficiently large number of Richmond songs to make the transaction attractive. Negotiations broke down on that issue. For Richmond, as for other publishers, policing was a problem directly related to the number of songs to be used: above a certain threshold, the cost of relicensing and policing might have become economically worthwhile. For Richmond, that threshold was not reached. Subsequently, Richmond contacted 3M’s representative to express its interest in the 3M program, but because 3M wished to use only about 12 Richmond songs, negotiations again fell apart.
In addition to these publishers, 3M failed to conclude agreements with Robbins, Feist & Miller, Bregman, Vocco & Conn, and Frank Music. The first two publishers were the only ones whose reluctance to deal appears to have been motivated from a sense of devotion to ASCAP. Frank Music refused 3M’s offer for the initial series of tapes because, like Richmond, it was concerned about the related problems of policing and the number of tapes to be used. However, it approached 3M to negotiate a license for the second series of tapes and agreement was reached.
We conclude that, at best, the 3M incident does not favor CBS’ case. The publishers which 3M contacted were offered varying proposals and responded as they thought appropriate to their respective legitimate business interests. Four fifths of them accepted the proposal, the remainder rejected it; and some rejected it the first time around but sought to be included in 3M’s second series. The evidence contains no breath of parallel conduct. Those who had fears relating to the problem of relicensing and policing proved to be justified in their fears. Virtually all the publishers responded to 3M’s unusual proposal as essentially a clean-cut business proposition; none of them refused entirely to negotiate with 3M. On such a record, no general inference of unwillingness to engage in direct dealing with 3M can be drawn. Even if it could be, it would be unwarranted to impute any such inference to the very different circumstances prevailing in the market for performance rights to music used on CBS.
2. The AGAC Ploy
CBS also stressed the role played by The American Guild of Authors & Composers (AGAC) in the 3M incident in voicing opposition to the issuance of direct licenses by their publishers. AGAC, which is not a party to this action, is a trade association of some 2,400 composers whose traditional concern has been the problems of composers in dealing with their publishers, such as the proper calculation of royalties. About 8% of the writer members of ASCAP and writer-affiliates of BMI are AGAC members. During its 45-year history AGAC has on occasion complained to publishers or to ASCAP and BMI that the interests of its members were not being protected. However, it has never brought suit against anyone; its principal technique appears to be the enthusiastic use of rhetoric.
AGAC’s role in the 3M incident fits this general pattern. Although AGAC responded to the news that publishers had granted licenses to 3M in its typically vociferous way, the actions it took in opposition to the 3M program were untimely and ineffective. 3M started to negotiate with publishers in October, 1964, and by June of 1965 had already licensed the first series of tapes. It was not until November of 1965 that AGAC sent a form letter to publishers protesting that they had failed to secure writer consent for certain songs covered by AGAC contracts and suggesting that publishers refrain from licensing 3M. However, the AGAC letter clearly had no impact on 3M’s efforts: it was circulated after licenses had been obtained for the first series, and after the eight publishers who declined the 3M proposal had told 3M of their decision. To this day 3M has continued to obtain licenses for its M-700 series directly from publishers. AGAC has not even attempted to lobby against the practice since 1966.
As CBS points out, a “radical” wing of AGAC which styled itself the West Coast Committee, criticized the AGAC Council for its mild-mannered response to the 3M project, and one of its strident letters appears to advocate a conspiracy by publishers to refuse to deal. However, the central AGAC Council in New York rejected the proposals on the advice of its lawyers.
In sum, CBS has seriously overreacted to the role of AGAC and the West Coast Committee in the 3M incident. Although there is testimony which supports CBS’ view that the feelings of some writers run high when talk of direct licensing is in the air, the significant facts are that AGAC took no effective action against the 3M project and refused to adopt the suggestion of the West Coast Committee that radical action was appropriate. These facts, together with the evidence of AGAC’s declining influence over the past ten years, and the strong desire of writers to gain exposure for their work do not support any inference that AGAC would or could take effective action against a direct licensing effort by CBS.
D. The “Music in the Can” Problem
1. The Significance of “Music in the Can”
Television programs or movies which have been filmed or taped are said to be “in the can.” Music recorded on the soundtrack of such films or tapes is called “music in the can.” At any given time, CBS has a large inventory of programs or feature films, much of which it will rerun over the network. CBS argues that if it cancelled its blanket license, the proprietors of compositions in the can, knowing that the music could not practicably be removed from the soundtrack, would exact premium prices for performance licenses: CBS would be forced to pay these premiums or risk infringement litigation.
CBS claims that by virtue of their leverage, copyright owners of music in the can could easily thwart any direct licensing attempt; and this fact accounts for the “business judgment of the [CBS] management that an attempted by-pass of ASCAP is not a realistic alternative . . .” (CBS Proposed Findings at 60-61). We disagree with CBS’ analysis. Putting aside the fact, noted earlier, that CBS does not appear to have considered the feasibility of direct licensing prior to this suit, evidence of the ease or difficulty with which the antitrust laws may be violated cannot be equated to proof that the violation will occur.
Indeed, even if CBS had proven that some copyright owners would attempt to extract a premium price for their music in the can, that fact alone would not, absent proof of parallel conduct, tend to establish that defendants have violated the antitrust laws. At any point in the normal course of its business, CBS has a sizeable inventory whose make-up is continuously shifting from one season to the next as old programs and films are “retired” and new ones replace them. CBS obviously knew, when it accumulated its current inventory, that some form of performance license would ultimately be necessary for the second runs of the programs and films within it. These circumstances, however, do not result from the fact that CBS has continuously taken a blanket license, by “compulsion” or otherwise. To the contrary, they flow from the networks’ practice of rerunning films and programs. Regardless of what system of licensing CBS uses, its inventory would be vulnerable to “hold-ups” every time CBS puts music in the can without obtaining performance rights for future runs of the program in question. The fact that CBS has failed to secure such rights for reruns of the present inventory is hardly the defendants’ fault. No defendant has ever refused CBS a license for any music in the can or out, if for no other reason than that CBS has never asked.
In sum, any changeover to direct licensing, even in a world of complaisant composers and publishers equipped with sturdy “machinery” may subject the CBS inventory to hold-ups by the greedy ones among them. However, simple greed, independently expressed, does not constitute a restraint of trade.
In any event, CBS has not proven that its fears of a “hold-up” by copyright proprietors are justified. CBS’ principal witnesses in support of its “in the can” theory were its Vice President in charge of business affairs and planning, Donald Sipes, and its economist, Franklin Fisher. Neither of them has ever met a copyright proprietor, nor is either more than cursorily acquainted with the music licensing field. As was true of their testimony on “disinclination,” the basis for their conclusion that copyright owners would “hold-up” CBS for rights to music in the can was that it would be economically rational for them to do so (see, e. g., Tr. 1686-87). Even taken alone this is not persuasive evidence, and it is clearly outweighed by the more concrete proof offered by defendants. For example, Albert Berman of the Harry Fox Agency testified that television producers often prepare programs without synchronization licenses and negotiate such licenses after the programs are in the can without being “held up.”
“A. There is a certain confidence on the part of producers that they will not be held up by publishers when they want to use a song after the fact.
Q Do you have any opinion as to what the basis is for that confidence?
A The basis is that the users and providers of music have to live together. Nobody wants to lock himself up in a closet and not have them used. The producers are aware of that. It is a common interest that would prompt this type of action.” (Tr. 981a-82)
Moreover, although we need not reiterate at length the basis of our earlier conclusion that no copyright proprietors would wish to fall into CBS’ disfavor, the publishers who testified as to the “in the can” problem confirmed that view. Their statements on the point were highly persuasive. For example, Salvatore Chiantia of MCA Music, Inc. testified as follows:
“A. . . . I believe the question here is whether a music publisher would take advantage—let us discard the term holding up—whether a music publisher would take advantage of a situation in which something has already been recorded and a license is subsequently sought.
There have been any number of occasions when that has arisen in the licensing of mechanical reproduction and in the licensing of motion pictures, theatrical motion pictures.
Very often, a theatrical motion picture will be made and the song recorded and the synchronization license is subsequently sought. In those cases, speaking for MCA, we have never held up anybody. We have never been unreasonable. We have licensed.
In the cases of mechanical reproductions, there are cases in which recording companies actually record a song before ever asking for a license. In those cases we would certainly hold them up if we wanted to, but we don’t because it would be bad business practice.
There are some people in our business who make a habit of being unreasonable. There are a number of people. You mentioned Happy Birthday before. There is another very famous person with whom I have done a great deal of business, who lives in Paris, who makes my life miserable because she refuses to allow me to license under certain circumstances.
Well, those people are generally identified and people watch for them. Certainly in my business of being a music publisher I don’t know of any company that has ever had to say, ‘Watch out for Chiantia or MCA; they’ll get you.’
We know that we are in this business and we intend to stay in this business and the way we stay in the business is by establishing some kind of a rapport and good will with our users and customers.
* * * * *
We want the door to remain open for us and if you start holding up record companies or the people with whom you do business, you are in real trouble.” (Tr. 2895-98)
Even assuming, contrary to the evidence, that some publishers would be drunk with power at having CBS on the spot, CBS has overstated the dimensions of the problem. Most of its inventory is comprised of theatrical motion pictures (i.e., movies), most of whose music is theme and background music controlled by the film producer’s affiliated publishing company (PX 994, AX 287, BX 167). The “problem” of licensing this music is similar to the problem of licensing the “inside” theme and background music for CBS’ regular television programming: it can be obtained from the producer itself. Of course, the producer could hold out for a large premium, but only on pain of losing large sales of his principal product, movies, to one of only three potential network buyers. It does not need to be repeated that the price of music performance rights is a tiny fraction of the price of a program or film.
Most of the remainder of CBS’ in the can inventory is regular programming which is often rerun in the course of a season. As discussed earlier, most of CBS’ serials use only theme and background music owned by an inside publisher. The factors just described as to movies in the can apply equally to other programs in the can.
Unfortunately, CBS has not offered evidence regarding the average life span of a given inventory of programs and films (or of categories of such programs and films), or the portion of the inventory which it actually intends to rerun. The absence of such evidence renders the “in the can” argument even less acceptable. For example, it may be that some television serials have a basic life of one year; if so, that portion of the inventory would be “consumed” during the inevitable interval between CBS’ notice of termination of its blanket license and the date on which it actually commenced direct licensing for all its needs. For these programs, as well as the programs or films which CBS does not intend to rerun, there would be no “in the can” problem because there would be no need for performance licenses.
2. Commercials “in the Can”
|
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
namespace Microsoft.ML.CommandLine
{
[BestFriend]
internal static class SpecialPurpose
{
/// <summary>
/// This is used to specify a column mapping of a data transform.
/// </summary>
public const string ColumnSelector = "ColumnSelector";
/// <summary>
        /// This is meant to be a large text (like a C# code block, for example).
/// </summary>
public const string MultilineText = "MultilineText";
/// <summary>
/// This is used to specify a column mapping of a data transform.
/// </summary>
public const string ColumnName = "ColumnName";
}
}
|
Eduardo Navarro (footballer)
Eduardo Navarro Soriano (28 January 1979 – 29 September 2022) was a Spanish footballer who played as a goalkeeper.
Career
Navarro was born in Zaragoza, Aragon. He played 150 Segunda División matches over six seasons, representing CD Numancia (three years), UE Lleida (two) and SD Huesca (one) in the competition; his debut as a professional took place on 28 August 2004, as he appeared for the second club in a 1–2 home loss against Racing de Ferrol. He was a key figure for Huesca in their first-ever promotion to the second tier in 2007–08.
After retiring in 2014 at amateurs Utebo FC, Navarro became a goalkeeper coach.
Death
Navarro died on 29 September 2022 at the age of 43, following a long illness.
|
Requests for Comment/CVT policy
Discussion
* While I will be completely ignoring this RFC. <IP_ADDRESS> 06:47, 17 August 2019 (UTC)
* You are not respecting this RFC znotch190711 (temporary signature) 06:51, 17 August 2019 (UTC)
* Currently 4 as of 17-8-2019. --znotch190711 (temporary signature) 00:11, 18 August 2019 (UTC)
Need for the proposal
* Wow, Spike! --znotch190711 (temporary signature) 22:48, 18 August 2019 (UTC)
Proposal 1
* CVTs should only act in their CVT capacity if:
* a request to remove spam is made by a member of the local community
a) Support
* 8) --DeeM28 (talk) 08:05, 24 August 2019 (UTC)
d) Comments
Proposal 2
a) Support
b) Oppose
Proposal 3
a) Support
* 2) per proposer. Reception123 (talk) (C) 07:12, 17 August 2019 (UTC)
* 4) --DeeM28 (talk) 08:05, 24 August 2019 (UTC)
b) Oppose
d) Comments
How many wikis? znotch190711 (temporary signature) 07:13, 17 August 2019 (UTC)
Proposal 1
b) Oppose
* 4) users should decide --DeeM28 (talk) 08:05, 24 August 2019 (UTC)
c) Abstain
Proposal 2
* CVTs will be elected by a community vote (on Requests for global rights) where:
* at least 10 users share their view;
* there is a support ratio of at least 80%.
* a period of one week has passed since it started.
a) Support
* 3) A great rate Gustave London (talk) 14:56, 17 August 2019 (UTC)
* 5) good number. --DeeM28 (talk) 08:05, 24 August 2019 (UTC)
b) Oppose
Proposal 3
* CVTs will be elected by a community vote (on Requests for global rights) where:
* at least 5 users share their view;
* there is a support ratio of at least 80%.
* a period of one week has passed since it started.
a) Support
b) Oppose
d) Comments
Removal of rights
Proposal 1 (Revocation)
* receive at least the minimum number of votes needed for appointing;
* have 50% or more support for removal of rights
a) Support
* 1) makes sense. Reception123 (talk) (C) 09:31, 16 August 2019 (UTC)
* 2) per reception ~ RhinosF1 - (chat) · CA · contribs · Rights) 15:17, 16 August 2019 (UTC)
* 6) per Robkelk. --DeeM28 (talk) 08:05, 24 August 2019 (UTC)
Proposal 2 (Revocation)
* receive at least the minimum number of votes needed for appointing;
* have 75% or more support for removal of rights
a) Support
b) Oppose
* 2) too high ~ RhinosF1 - (chat) · CA · contribs · Rights) 15:17, 16 August 2019 (UTC)
* 5) too much to remove a CVT --DeeM28 (talk) 08:05, 24 August 2019 (UTC)
Proposal 3 (Revocation)
a) Support
* 1) makes sense. Reception123 (talk) (C) 09:31, 16 August 2019 (UTC)
* 4) Kind of a no-brainer.--Wedhro (talk) 14:45, 17 August 2019 (UTC)
* 5) Per RhinosF1 Gustave London (talk) 14:59, 17 August 2019 (UTC)
* 7) --DeeM28 (talk) 08:05, 24 August 2019 (UTC)
c) Abstain
Proposal 4 (Inactivity)
a) Support
* 1) -znotch190711 (temporary signature) 07:06, 17 August 2019 (UTC)
b) Oppose
* 2) too long. ~ RhinosF1 - (chat) · CA · contribs · Rights) 15:19, 16 August 2019 (UTC)
Proposal 5 (Inactivity)
a) Support
* 8) per others. --DeeM28 (talk) 08:05, 24 August 2019 (UTC)
Proposal 6 (Readdition)
a) Support
* 4) per too much Gustave London (talk) 15:08, 17 August 2019 (UTC)
* 5) as per comment by Reception123. --Robkelk (talk) 12:45, 18 August 2019 (UTC)
* 7) --DeeM28 (talk) 08:05, 24 August 2019 (UTC)
c) Abstain
Proposal 7 (Readdition)
b) Oppose
* 2) ^^ ~ RhinosF1 - (chat) · CA · contribs · Rights) 15:25, 16 August 2019 (UTC)
* 6) not enough time and what I said above. --DeeM28 (talk) 08:05, 24 August 2019 (UTC)
c) Abstain
Proposal 1
a) Support
* 2) per reception znotch190711 (temporary signature) 22:47, 22 August 2019 (UTC)
* 4) Of course.The wikis need of the autonomy. Gustave London (talk) 13:35, 24 August 2019 (UTC)
d) Comments
|
[House Hearing, 115 Congress]
COMPOSITE MATERIALS:
STRENGTHENING INFRASTRUCTURE DEVELOPMENT
=======================================================================
HEARING
BEFORE THE
SUBCOMMITTEE ON RESEARCH AND TECHNOLOGY
COMMITTEE ON SCIENCE, SPACE, AND TECHNOLOGY
HOUSE OF REPRESENTATIVES
ONE HUNDRED FIFTEENTH CONGRESS
SECOND SESSION
__________
APRIL 18, 2018
__________
Serial No. 115-55
__________
Printed for the use of the Committee on Science, Space, and Technology
Available via the World Wide Web: http://science.house.gov
______
U.S. GOVERNMENT PUBLISHING OFFICE
29-782 PDF WASHINGTON : 2018
COMMITTEE ON SCIENCE, SPACE, AND TECHNOLOGY
HON. LAMAR S. SMITH, Texas, Chair
FRANK D. LUCAS, Oklahoma EDDIE BERNICE JOHNSON, Texas
DANA ROHRABACHER, California ZOE LOFGREN, California
MO BROOKS, Alabama DANIEL LIPINSKI, Illinois
RANDY HULTGREN, Illinois SUZANNE BONAMICI, Oregon
BILL POSEY, Florida AMI BERA, California
THOMAS MASSIE, Kentucky ELIZABETH H. ESTY, Connecticut
JIM BRIDENSTINE, Oklahoma MARC A. VEASEY, Texas
RANDY K. WEBER, Texas DONALD S. BEYER, JR., Virginia
STEPHEN KNIGHT, California JACKY ROSEN, Nevada
BRIAN BABIN, Texas JERRY McNERNEY, California
BARBARA COMSTOCK, Virginia ED PERLMUTTER, Colorado
BARRY LOUDERMILK, Georgia PAUL TONKO, New York
RALPH LEE ABRAHAM, Louisiana BILL FOSTER, Illinois
DANIEL WEBSTER, Florida MARK TAKANO, California
JIM BANKS, Indiana COLLEEN HANABUSA, Hawaii
ANDY BIGGS, Arizona CHARLIE CRIST, Florida
ROGER W. MARSHALL, Kansas
NEAL P. DUNN, Florida
CLAY HIGGINS, Louisiana
RALPH NORMAN, South Carolina
------
Subcommittee on Oversight
RALPH LEE ABRAHAM, Louisiana, Chair
FRANK D. LUCAS, Oklahoma DONALD S. BEYER, Jr., Virginia
BILL POSEY, Florida JERRY McNERNEY, California
THOMAS MASSIE, Kentucky ED PERLMUTTER, Colorado
BARRY LOUDERMILK, Georgia EDDIE BERNICE JOHNSON, Texas
ROGER W. MARSHALL, Kansas
CLAY HIGGINS, Louisiana
RALPH NORMAN, South Carolina
LAMAR S. SMITH, Texas
------
Subcommittee on Research and Technology
HON. BARBARA COMSTOCK, Virginia, Chair
FRANK D. LUCAS, Oklahoma DANIEL LIPINSKI, Illinois
RANDY HULTGREN, Illinois ELIZABETH H. ESTY, Connecticut
STEPHEN KNIGHT, California JACKY ROSEN, Nevada
RALPH LEE ABRAHAM, Louisiana SUZANNE BONAMICI, Oregon
DANIEL WEBSTER, Florida AMI BERA, California
JIM BANKS, Indiana DONALD S. BEYER, JR., Virginia
ROGER W. MARSHALL, Kansas EDDIE BERNICE JOHNSON, Texas
LAMAR S. SMITH, Texas
C O N T E N T S
April 18, 2018
Witness List
Hearing Charter
Opening Statements
Statement by Representative Daniel Webster, Subcommittee on
  Research and Technology, Committee on Science, Space, and
  Technology, U.S. House of Representatives
    Written Statement
Statement by Representative Daniel Lipinski, Ranking Member,
  Subcommittee on Research and Technology, Committee on Science,
  Space, and Technology, U.S. House of Representatives
    Written Statement
Statement by Representative Eddie Bernice Johnson, Ranking
  Member, Committee on Science, Space, and Technology, U.S. House
  of Representatives
    Written Statement
Witnesses:
Dr. Joannie Chin, Deputy Director, Engineering Laboratory, NIST
    Oral Statement
    Written Statement
Dr. Hota V. GangaRao, Wadsworth Distinguished Professor, Statler
  College of Engineering, West Virginia University
    Oral Statement
    Written Statement
Dr. David Lange, Professor, Department of Civil and Environmental
  Engineering, University of Illinois at Urbana-Champaign
    Oral Statement
    Written Statement
Mr. Shane E. Weyant, President and CEO, Creative Pultrusions, Inc.
    Oral Statement
    Written Statement
Discussion
Appendix I: Answers to Post-Hearing Questions
Dr. Joannie Chin, Deputy Director, Engineering Laboratory, NIST
Dr. Hota V. GangaRao, Wadsworth Distinguished Professor, Statler
  College of Engineering, West Virginia University
Dr. David Lange, Professor, Department of Civil and Environmental
  Engineering, University of Illinois at Urbana-Champaign
Mr. Shane E. Weyant, President and CEO, Creative Pultrusions, Inc.
COMPOSITE MATERIALS:
STRENGTHENING INFRASTRUCTURE DEVELOPMENT
----------
WEDNESDAY, APRIL 18, 2018
House of Representatives,
Subcommittee on Research and Technology
Committee on Science, Space, and Technology,
Washington, D.C.
The Subcommittee met, pursuant to call, at 10:08 a.m., in
Room 2318 of the Rayburn House Office Building, Hon. Daniel
Webster presiding.
Mr. Webster. The Committee on Science, Space, and
Technology will come to order. Without objection, the Chair is
authorized to declare recesses of the Committee at any time.
Good morning. Everyone's here. Welcome to today's hearing
entitled, ``Composite Materials: Strengthening Infrastructure
Development.'' I recognize myself for five minutes for an
opening statement.
The purpose of this morning's hearing is to review a
National Institute of Standards and Technology (NIST) report on
overcoming barriers to the adoption of composites in
sustainable infrastructure and discuss the value of developing
composite standards for infrastructure applications.
While not widely adopted yet, composites have been used in
select construction projects across the country. As we will
hear from our experts today, fiber-reinforced polymer
composites produced in the United States offer durable,
sustainable, and cost-effective solutions in a variety of
infrastructure applications as diverse as dams, levees,
highways, bridges, tunnels, railroads, harbors, utility poles
and buildings. However, without proper design guidelines and
data tables to harmonize standards and create a uniform
guidance, the practical use of composites to build durable and
cost-effective infrastructure will continue to lag.
The National Institute of Standards and Technology is well-
poised to lead research to provide the evidence and data needed
to set industry standards and design guidelines. NIST has a
deep and varied expertise in advanced composites, which I look
forward to hearing more about in the hearing. It is my
understanding that there are over a dozen projects across NIST
that work to measure, model, and predict the performance of
advanced composites for a variety of applications.
I'm well aware of the challenges our nation's
infrastructure is facing and the anticipated cost of its
restoration. I look forward to learning more about the
potential value of using composites in infrastructure and the
economic case for composites as an alternative or supplement to
conventional materials in infrastructure projects.
I appreciate you all for taking the time to join me for
this hearing. As the Administration and Congress begin to
consider how to tackle the nation's infrastructure challenges,
it is important to understand what role composites can play.
[The prepared statement of Mr. Webster follows:]
Mr. Webster. I now recognize the Ranking Member from
Illinois, Mr. Lipinski, for an opening statement.
Mr. Lipinski. Thank you. I want to thank Chairwoman
Comstock in her absence today for holding the hearing on this
important topic, and I want to thank the witnesses for being
here to share your thoughts on the use of advanced composite
materials for major infrastructure.
Much of the nation's major infrastructure is nearing or has
passed the end of its design lifespan. The American Society of
Civil Engineers' 2017 Infrastructure Report Card gave our
nation's infrastructure a grade of D-plus based on assessments
of capacity, condition, resilience, innovation, and other
criteria. And our current infrastructure is under increased
strain year after year as our population grows. We must find a
way to ensure the safety of our nation's expanding population
as demands on our roads, bridges, utilities, and other
essential infrastructure increase.
I sit on the House Transportation Infrastructure Committee,
and I understand that the status quo is clearly not acceptable.
In addition, we need to examine our approach to rebuilding
infrastructure as climate change and other factors drive
increases in the intensity of wildfires, hurricanes, and other
extreme events wreaking havoc on dams, bridges, above- and
below-ground utilities, and other essential structures. These
are long-term challenges that require long-term solutions. But
right now, we don't have the funding necessary to close
investment gaps and build the infrastructure we know that we
need.
As we make plans to shore up our infrastructure and build
for the future, we must take advantage of all the tools at our
disposal. This includes using innovative technologies and
emerging materials where they offer the best value for a
project. Materials such as fiber-reinforced polymer composites
or advanced composites which are--which we are examining in
today's hearing, they play a key role in how the nation
addresses its challenges under constrained resources.
Decades of federal and private sector research and
development and investment in advanced composites has resulted
in a significant use of these materials in some sectors such as
defense, aerospace, automobile, and energy industries. While
composites have also been used in some construction and
infrastructure applications such as strengthening concrete,
making bridge repairs, and building bridge decks, they haven't
been used as widely for infrastructure as they have been in
other sectors.
I commend NIST for producing the report we are reviewing in
today's hearing. They brought together federal, private, and
university partners to identify and examine how to overcome
barriers to adoption of composites in sustainable
infrastructure, including challenges to developing a skilled
workforce.
I look forward to hearing from Dr. Lange and others about
ways we can incorporate advanced composites into our
engineering education and training programs to make sure that
all those involved in designing and building our infrastructure
have the knowledge and skills to use whichever material is best
for the job. This will require updates for undergraduate and
graduate engineering curriculum, training programs for the
construction trades, and professional development plans in a
wide range of industries. Doing this successfully necessitates
the cooperation of governments, educational institutions, and
industry. I'm glad we have representatives from all these
sectors here today.
As we examine ways to increase the use of advanced
composites, it is important that we don't lose sight of the
strength of traditional materials like concrete and steel. Both
repair and upgrades of existing infrastructure and for new
projects, we need to have safety and design standards in place
to allow engineers to choose the best material for the job and
allow novel and traditional materials to work together. Finding
smart ways to improve our roads, bridges, pipelines, and other
infrastructure is a major priority of mine. I look forward to
your testimony today. Thank you, and I yield back.
[The prepared statement of Mr. Lipinski follows:]
Mr. Webster. All right. Now, I'll introduce our witnesses
for today. First, Dr. Joannie Chin, our first witness today, is
the Deputy Director of the Engineering Laboratory at NIST,
one of the seven resource labs within NIST. As Deputy Director,
Dr. Chin provides programmatic and operational guidance for the
Engineering Lab, which includes nearly 500 federal employees and
guest researchers from industry, universities, and research
institutes. It is the Engineering Lab's mission to promote the
development and dissemination of advanced manufacturing and
construction technology guidelines and services to the U.S.
manufacturing and construction industry.
Prior to being Deputy Director, Dr. Chin previously served
as a leader of the Polymeric Materials Group. Dr. Chin received
a Bachelor of Science in polymer science and engineering from
Case Western Reserve University. She received a Master of
Science in chemistry, as well as a Ph.D. in materials
engineering science from Virginia Polytechnic Institute and
State University.
Our second witness is Dr. Hota GangaRao, a Wadsworth
Distinguished Professor in the Statler College of Engineering
at West Virginia University. He also serves as the Director of
the Constructed Facilities Center and Director of the National
Science Foundation's Industry-University Cooperative Research
Center for composites infrastructure at West Virginia
University.
Dr. GangaRao specializes in fiber-reinforced polymer
composites, bridge structures, advanced materials research,
composites for blasting, fire resistance, and others. Dr.
GangaRao received his Ph.D. in civil engineering from North
Carolina State University and is a registered professional
engineer.
Mr. Lipinski, do you want to introduce Dr. Lange?
Mr. Lipinski. Thank you. It is my pleasure to introduce Dr.
David Lange, Professor of Civil and Environmental Engineering
and Director of the Center for--of Excellence for Airport
Technology at the University of Illinois at Urbana-Champaign.
Dr. Lange also serves as President of the American Concrete
Institute, a technical society and standards developing
organization.
Dr. Lange holds a B.S. in civil engineering from Valparaiso
University, an MBA from Wichita State University, and a Ph.D.
in civil engineering from my alma mater, Northwestern
University. And I almost majored in civil engineering but I
went with mechanical there as an undergrad, so--he's--Dr. Lange
has been a member of the faculty at the University of Illinois
for the past 25 years and has earned numerous awards and
honors, including the prestigious NSF Career Award, a Fulbright
Award, and several accolades for his publications and teaching.
Dr. Lange's research focuses on the interface between
structural engineering and materials science of concrete and
includes topics such as airport pavement, recycled concrete,
and fiber reinforcement of concrete. His research group has
played an important role in the O'Hare Airport Modernization
Program, coming up with design concepts that save the Chicago
Department of Aviation millions of dollars. I also understand
that when he's not in the lab, Dr. Lange enjoys spending time
with his five-month-old granddaughter and is looking forward to
another granddaughter on the way, and congratulations. And I
want to thank you for being with us today, Dr. Lange, and I
look forward to your testimony.
Mr. Webster. Our final witness today is Mr. Shane Weyant,
President and CEO of Creative Pultrusions, Inc. located in Alum
Bank, Pennsylvania. Creative Pultrusions is a subsidiary of
Hill & Smith Holdings, PLC, an international group with leading
positions in the design, manufacture, and supply of
infrastructure products and galvanizing services. Creative
Pultrusions is a leader in the manufacture of fiberglass-
reinforced polymer pultrusion products. Mr. Weyant has been
with Creative Pultrusions for nearly 30 years. He received a
Bachelor of Science in economics from Frostburg State
University, where he graduated magna cum laude.
And now, Dr. Chin, you have five minutes to present your
testimony.
TESTIMONY OF DR. JOANNIE CHIN,
DEPUTY DIRECTOR,
ENGINEERING LABORATORY, NIST
Dr. Chin. Chairman Webster, Ranking Member Lipinski, and
Members of the Subcommittee, thank you for this opportunity to
discuss NIST's role in promoting the adoption of advanced
composites to renew our infrastructure and to increase its
resilience in communities prone to or recovering from
disasters.
At NIST, our world-class experts use unique facilities to
measure materials with increasing precision and characterize
new materials for the first time. We help American industries
develop, test, and manufacture products with features that
outperform previous generations. Our broad program in advanced
materials includes advanced composites; that is, polymers
reinforced with fibers or other additives.
Advanced composites can play a significant role in renewing
our nation's crumbling infrastructure and help existing
infrastructure be more resilient to both usual wear and natural
disasters. Compared to traditional materials, advanced
composites are often stronger, lighter, and longer-lasting,
thereby offering many cost savings, including fewer days lost
to repair and maintenance. That means fewer hours stuck in
traffic detoured around bridges, roads, and levees under
repair, fewer days in the dark due to broken utility poles, and
more efficient movement of the goods and services that underpin
our economy and quality of life.
The American advanced composites industry contributes about
$22 billion to the economy each year, and although we currently
lead the world in advanced composite technology, adoption of
these materials for infrastructure has been slower in the
United States than in Canada and Europe. To understand the
barriers to using these materials in the United States, NIST
convened a workshop in February 2017 with infrastructure
engineers, designers, and owners, in partnership with the
American Composites Manufacturers Association. This May, we
will hold a similar workshop with stakeholders interested in
using advanced composites to reinforce existing structures to
make them more resilient to seismic events.
So from the NIST ACMA workshop, we learned that many owners
and design professionals don't yet have enough confidence in
the reliability and long-term durability of advanced composites
to specify their use in new structures, as well as to repair
damaged ones. We also learned that designers and engineers need
data and design guidance so they can provide appropriate safety
margins, while maximizing the weight and cost savings of these
materials.
NIST has the expertise to address these needs. We have been
studying advanced composites since the 1980s and are a leader
in characterizing the performance and properties of advanced
composites on all scales from nano to macro. For example, to
study durability, we have developed sensors that visualize the
molecular nature of damage in composites. We also have a unique
device that accelerates the effects of weathering on materials
and large-scale testing facilities that evaluate the effects of
strong loads on advanced composite structures.
Our experience providing a data infrastructure for the
Materials Genome Initiative is now helping members of the
advanced composites community capture and share information on
material properties. We will assist the advanced composites
community as they establish a clearinghouse of curated existing
design guides and data from completed projects, which will
inform additional science-based codes and standards.
Our Community Resilience Program provides guidance to
architects, design engineers, and community leaders to enable
critical decisions about which materials help communities
recover rapidly and build back better. While NIST is not a
regulatory agency, we have long provided strong scientific
foundations for the consensus standards developed by industry.
NIST staff members provide leadership and technical expertise
to more than 1,800 positions on committees for ASTM
International, the International Organization for
Standardization, and other standards development organizations.
So we greatly appreciate the Members of this Committee and
others in Congress for their support of federal acceleration of
the adoption of advanced composites for infrastructure, helping
to keep our nation globally competitive and economically secure
and contributing to our quality of life. I am happy to answer
any questions you may have.
[The prepared statement of Dr. Chin follows:]
Mr. Webster. I recognize Dr. GangaRao for his five minutes.
TESTIMONY OF DR. HOTA V. GANGARAO,
WADSWORTH DISTINGUISHED PROFESSOR,
STATLER COLLEGE OF ENGINEERING,
WEST VIRGINIA UNIVERSITY
Dr. GangaRao. Honorable Congressmen, Chairman Webster,
Members of Research and Technology Committee, I'm immensely
grateful for your invitation to speak on my theme today, which
is the infrastructure renovation through smart composites
manufacturing and construction, coupled with testing standards
and enforcement.
As all of you know in this room, our aging, perhaps aged
infrastructure is rapidly deteriorating, certainly not
collapsing. The bulk of our infrastructure problems can be
attributed to a $1.5 trillion funding gap between the revenue
the infrastructure needs for 2016 to 2025. This is costing
$3,400 per year per family and leading to 2.5 million fewer
jobs and, even more importantly, $7 trillion loss to
businesses.
How to bridge this need versus a revenue gap? The--do we
need more debt? Do we need to increase the gas tax? A couple of
these will have adverse effects on our economy, as you all
know. Today, I want to present an alternative to this august
body that is about instead of replacing crumbling
infrastructure, as our Congressman Lipinski pointed out, we
should provide resources to renovate our infrastructure to get
the biggest bang for the buck using advanced composite
materials.
Currently, composites account for less than one percent of
the structural materials by volume in spite of their many
advantages such as high strength, corrosion resistance, lighter
weight, and better performance per unit weight.
What are the challenges ahead and what are the economic
advantages? Producers of steel and concrete should not view
composites as a competitive product or as a threat to their
markets. Composites will never fully replace traditional
materials, but they are another tool in a toolbox, and they
would be hybridized well with steel and concrete.
Through our National Science Foundation-funded center, the
Center for Integration of Composites into Infrastructure, we
have shown composite wraps have been used to renovate several
deteriorated structures at five to ten percent of the
replacement cost by repairing some of the concrete piers, steel
piles, and the list goes on.
At West Virginia University, we worked on lighter bridge
decks weighing only about 1/4 of a typical concrete deck. We
worked on sheet piles with other industry folks to protect
hostile erosions using composites. We developed utility poles
that cost half the cost of steel transmission towers, and we
also are developing high-pressure gas pipes to push more gas at
a faster rate. We are involved heavily in navigational
structures such as the lock gates, and the list goes on.
Efforts are underway to develop composite modular housing
subsystems that are multifunctional, multimodal, mold free, and
durable. Using smart manufacturing and construction methods,
housing costs can come down dramatically, as it has been done
by Henry Ford's assembly line-type operations.
To be at the cutting edge of research, development, and
innovation of composites and infrastructure, NIST workshop--as
alluded now a few minutes ago--of 2017 identified five critical
areas to be overcome. One of them we can do here is to help the
industry develop smart manufacturing and construction tools
with composites and also develop uniform codes and project
qualification through third-party certification. We need to
require future projects to consider composites as alternate
designs. We need to invest in 3.2 million workers dealing with
the designs, contracts, maintenance, and management of
composites.
In conclusion, composites are cost-effective and durable.
Large-scale applications of composites will create huge markets
and open new opportunities, including the smart rehab methods
and educating 3.2 million American workers dealing with the
construction-related industry. To enhance American productivity
of workers, we must invest in the composites in terms of
research development and implementation.
Finally, to maintain public safety, investment in
infrastructure restoration through composites and hybridization
with conventional construction materials have to be made in
tandem with standardization of products and quality control.
Thank you very much.
[The prepared statement of Dr. GangaRao follows:]
Mr. Webster. Dr. Lange, you're recognized for five minutes.
TESTIMONY OF DR. DAVID LANGE,
PROFESSOR, DEPARTMENT OF CIVIL
AND ENVIRONMENTAL ENGINEERING,
UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN
Dr. Lange. Chairman Smith, Ranking Member Lipinski, and
other Committee Members, I appreciate this kind introduction and the
opportunity to share my ideas today.
I wear two hats today, one as Professor of Civil
Engineering at the University of Illinois, the second as
President of the American Concrete Institute, an organization
of 20,000 members from the construction industry, the design
profession, and academia.
FRP is a class of high-strength, low-weight, and durable
materials that can be fabricated in a wide array of shapes and
properties. The attractive aspects of FRP have motivated
significant investment in research and many funded
demonstration projects over the years.
Despite attractive attributes and a successful track record
in field demos, we do not see a widespread adoption of FRP in
construction today. Certainly, one explanation is the presence
of two dominant design paradigms in commercial construction:
reinforced concrete and structural steel. These tried-and-true
systems have a 100-year head start on FRP.
Furthermore, concrete and steel technologies are not
standing still. Large organizations like the American Concrete
Institute work tirelessly to advance these technologies. A
century of commitment at ACI assures that today's concrete is
not your father's concrete.
The adoption of FRP depends on a wider effort to harmonize
material systems. The two dominant silos--concrete and steel--
need effective crosstalk and openness to new material such as
FRP. It can be done. As an example, ACI has opened a path for
use of FRP rebar, and ASTM has released specification language
for those bars.
Market penetration of FRP should be driven by authentic
advantages: durability, low weight, organic shapes,
flexibility, high-strength capacity. Those are among the
competitive advantages of FRP.
Indeed, FRP has excelled in certain applications. The
aircraft and marine industries and more recently the market for
wind turbine blades and cooling towers have embraced FRP. In
construction, FRP products have found a place in market niches
such as corrosion-proof rebar and as a material for repair of
concrete structures.
Despite seemingly high potential for FRP and
infrastructure, the topic is almost nonexistent in civil
engineering education. Courses dedicated to FRP and structural
repair are rare among the 220 civil engineering programs in the
United States. Engineering education has not functioned as a
change agent.
There are opportunities to affect civil engineering
education. Like other professions, civil engineering is moving
toward requiring more than a bachelor's degree to practice in
the profession. As master's degrees grow, the curriculum can
better accommodate specialty topics like FRP if the need from
industry were to drive it. Beyond that, we need courses that
harmonize concrete, steel, masonry, wood, and FRP. The future
is a world with better integration of material systems.
Now, a few words about the NIST roadmap. I think the
roadmap has attractive elements. In particular, I'm drawn to
one of the recommendations related to the design data
clearinghouse barrier. The idea is to charge NIST as a neutral
party to compile durability data and define limits using codes
and standards. Indeed, we can see how codes and standards can
spur adoption of FRP. The 2017 release of ASTM D7957 for FRP
rebar has already had impact on the ability for that product to
be specified and designed. Just days ago, an industry
representative shared with me his positive outlook that is
based on an upswing in FRP bridge deck projects in recent
months.
I also endorse the roadmap plan for its emphasis of FRP
curriculum for civil engineers. Given the large body of
existing research, it is reasonable that federal funding could
foster a modernization movement for civil engineering
curriculum that bolsters design of FRP and harmonized material
systems.
Lastly, I want to encourage use of a proven mechanism
available to the Federal Government. That is research centers
that incubate partnership between academia and industry. My own
experience as Director of the Center for Excellence for Airport
Technology has persuaded me that large infrastructure programs
can benefit from sustained partnership with universities. Since
2005, CEAT has received funding from the O'Hare International
Airport and the Chicago Department of Aviation. Every year, we
select our research projects to inform the decision-making
process, reduce risks, and save money. Our 12-year track record
with O'Hare suggests this has been a successful model. Thank
you.
[The prepared statement of Dr. Lange follows:]
Chairman Smith. [Presiding] Thank you, Dr. Lange. And Mr.
Weyant?
TESTIMONY OF MR. SHANE E. WEYANT,
PRESIDENT AND CEO,
CREATIVE PULTRUSIONS, INC.
Mr. Weyant. Good morning. Chairman Smith, Ranking Member
Lipinski, and the Members of the Subcommittee, on behalf of
Creative Pultrusions and my fellow members of the American
Composite Manufacturers Association, I appreciate the
opportunity today to testify before you on an issue that is
vital to our industry involving the essential role NIST plays
in materials standards. I am happy to be here to explain the
value that composites offer consumers, communities, and
industries across the nation. With manufacturers in each of
your districts, we're a great example of made-in-America
manufacturing, whose potential has only begun to be realized.
Composites are stronger than other materials such as steel,
concrete, and wood. They are lighter and more energy-efficient
and easier to transfer and install. They offer greater
durability and, most importantly, are resistant to corrosion
and structural degradation. Many of you are already familiar
with fiberglass boats. Saltwater destroys traditional metal and
wood hulls, but fiberglass remains unscathed after decades of
service and has come to dominate that sector due to its
performance.
Using the same material system, we and other composite
manufacturers provide infrastructural solutions with
performance and other benefits that can far exceed traditional
materials of construction. Let me highlight a few examples:
composite bridges that can be manufactured offsite, installed
in less than one day with less traffic disruption, and that
require minimal maintenance throughout their service life;
composite rebar that can replace steel rebar in traditional
concrete construction and is resistant to rust so it won't
degrade; composite utility poles and cross arms that are easier
to install, are more durable against extreme weather and fire,
require less maintenance, and last significantly longer. Only
eight utility poles were left standing in the Virgin Islands
this past year after the hurricanes. Those eight poles were
composite poles.
Despite these benefits, barriers to deployment of
composites remain. Fortunately, some of these obstacles can be
cleared with the help of sensible government and industrial
participation. A great first step was the 2017 workshop that
brought folks from NIST together with a wide range of private
and public stakeholders to work towards solutions. I felt the
workshop was a great example of positive engagement between
industry, academia, and government because it produced
actionable results.
What we know from experience is that the lack of awareness
of--and, importantly, standards for--composites is our
threshold problem. NIST can aggregate existing standards and
design data for composites and validate them for broader
dissemination and use. This will help all stakeholders to see
the totality of data on composites and understand the further
research needed. Their world-class laboratories also can help
develop durability and performance testing for composite
infrastructure products. This data can support further
development of standards of composites and better arm engineers
with the performance knowledge to make them more comfortable
with using composites.
Given NIST's role in standards and research, the agency has
a unique capacity to assemble a broad swath of stakeholders and
ensure that this work is impactful. We believe all materials,
techniques, and designs should stand on their own merit. Our
experience with builders and project engineers shows that there
is a limited knowledge about composites as a structural
material throughout the design community. Additional research
and data that can contribute to standards development will help
raise the knowledge about composites.
Likewise, bringing together the various agencies
responsible for infrastructure investment to participate in
this effort can help diffuse knowledge to the asset owners and
designers. An existing example of similar collaboration is what
is going on with the Institute for Advanced Composite
Manufacturing and Innovation. Part of the Manufacturing USA
network, IACMI, working with academia and industry and federal
agencies, has developed an exciting new technology to recycle
composites. Productive collaboration demonstrates that federal
investment in composites pays huge dividends and, coupled with
further structural research by NIST we discovered today, will
help composites contribute more to the overall sustainability
of our infrastructure network.
The demands placed on America's infrastructure have never
been greater. To build a network to support the 21st century
population and economy, there needs to be greater availability
of 21st century technologies. With some smart investment and
hard work together, we can make bridge, water system, and grid
failures a thing of the past. The ability to build structures
that last centuries instead of years is here. We look to
Congress for support to help make this happen. Thank you.
[The prepared statement of Mr. Weyant follows:]
Mr. Hultgren. [Presiding] Thank you all so much. I
appreciate your testimony. I appreciate you being here.
I'm going to wait with my questions and recognize the
gentleman from Indiana first for five minutes.
Mr. Banks. Thank you, Mr. Chairman. And thanks to each of
you for being here this morning.
We all recognize the need to improve our nation's
infrastructure, but we also recognize the precarious fiscal
situation that we find ourselves in today. The CBO estimates
that we are on track to run $2 trillion annual deficits by
2028. The CBO also found that we will run $82 trillion in total
deficits over the next 30 years. We need to focus on reducing
government spending wherever we can.
So from what I understand, the main benefit to using
composite materials as opposed to steel or concrete is the
reduction in maintenance costs over the long term. So my first
question for each of you, is there any data on what kind of
cost savings can be expected over 20 or 30 years by using
composite materials for various infrastructure projects? Dr.
Chin?
Dr. Chin. My colleagues from the industry would have more
specific figures on the actual cost savings, but we're very
much aware of studies and existing installations that have
demonstrated great reductions in installation costs, in impacts
on the economy from road blockages and delays, and in
maintenance, repair, and replacements over the lifetime of the
structure.
Mr. Banks. Okay.
Dr. GangaRao. Thank you. As I stated in my testimony, we
have rehabilitated over 100 structures across the country from
West Virginia University's Constructed Facilities Center. I'll
give you two examples and I'll shut up. One of them is the East
Lyn Viaduct. We rehabilitated it for about 20 percent of the
cost of replacement in Parkersburg, West Virginia. When I took
that job, they said if it survives five years, back in 1999,
they said they would be happy. Last year, we collected the
data, and it looks brand new.
The second example I'd like to quote, which I have done the
rehabilitation renovation part, was for Army Corps of
Engineers. Again, we were able to rehab that complex bridge
system for $120,000 when in fact it would have cost $4
million to replace it. So the list goes on. I'm not going to
stand here and talk about it anymore. But I would be very happy
to supply you with all the cost data and also the durability
data if you need.
Mr. Banks. Okay.
Dr. Lange. You remarked that the main benefit of FRP is
reducing maintenance costs. I think there's truth in that
because FRP is a very effective repair material. We're seeing
FRP used in sheet products that are put onto reinforced
concrete structures. It's one of the least-expensive ways to
add strengthening in many cases.
But I'm not sure I would say it's the main benefit of FRP.
I think having a landscape for design--multiple materials being
used, a real portfolio of materials--is where we could get even
more benefit in the future. I think there's been some
limitation to have civil engineering organized in silos where
you have the reinforced concrete community, the structural
steel community working somewhat independently and FRP
wondering how do we fit into this situation.
And I think there's probably a higher calling to try to
figure out how to give all materials sort of equal access. In
some respects engineers should be material agnostic. I don't
really care what particular material is used, I want to get a
result. And having more materials available will be the best
benefit of having FRP in the game.
Mr. Banks. Okay. And, Mr. Weyant, before you answer that
question, perhaps with the time left as well you can answer the
question of what would the cost-benefits of replacing or
restoring electric lines with FRP composite poles be?
Mr. Weyant. On the electric line, it's more in the
reliability, how they withstand a lot of the storms. We see
that a lot with a lot of the electric companies. They're
understanding that value now by investing in composites for
that reliability.
As far as the lifecycle, I look at it a couple ways, not
only on the maintenance side, it's also the installation side.
We have seen, in cooling towers, in marine markets with sheet
piling, and also in the utility industry, probably 30
percent overall lifecycle cost savings when using composites.
Mr. Banks. Thank you. My time is expired.
Mr. Hultgren. The gentleman from Indiana yields back.
I recognize the gentleman from Illinois, the Ranking
Member, Mr. Lipinski for five minutes.
Mr. Lipinski. Thank you. I wanted to say, first of all,
that as Mr. Banks was talking about the savings for government
and for taxpayers, which I think is critically important, the
other part that I wanted to ask about is the--what can we do as
policymakers here in Washington to make sure that the United
States maintains a strong position in producing in these
materials? Obviously, FRP, when we're talking about even things
as large as bridges can be, you know, put together elsewhere
and brought over to the United States to be put in place. We've
seen that with concrete and steel bridges. So what can we do to
try to make sure we have the right incentives in place for the
United States to really--our economy and jobs to thrive in
this--with FRP? So let's start with Dr. Lange.
Dr. Lange. Well, one thing that I would like to emphasize
is that there is opportunity when we have very large
infrastructure programs. O'Hare just announced another $8.5
billion program that will add a terminal to the west side of
O'Hare, and these kind of major infrastructure programs extend
for many years.
The opportunity to partner with university researchers to
help answer questions about what is going on in that project
and how new materials might come into it, how new technologies
might benefit the project, that I think is a great opportunity.
The relationship we've experienced in working directly with a
major infrastructure program is not terribly common. It's a
little bit unusual that we have that kind of a partnership. But
I believe it could be a very good policy moving forward that we
have these major programs to pay attention to the research
landscape.
Mr. Lipinski. Anybody else? Dr. GangaRao?
Dr. GangaRao. Thank you. Thank you. I have indicated six
different approaches of how we can keep the lead in terms of
our high-quality products based on composites in my writeup.
And I'll talk about a couple of them. One of them is that we do
not want to be a dumping ground for some inferior product from
outside. Therefore, we need to maintain very high standards and
also enforce these standards of the materials that we are going
to be introducing as composites or for that matter as a
hybridized material, including the conventional materials like
steel and concrete. That's one. I can elaborate on that much
more later.
The second important thing is we need to come up with smart
manufacturing from an infrastructure point of view in terms of
creating as large a subsystem as possible under the
manufacturing settings so that we gain certain degrees of
efficiencies and be able to reduce any form of waste that we
have right now. We have 40 percent waste in the construction
industry. So these are the two I would like to focus on. I have
four other items I mentioned in my writeup. Thank you.
Mr. Lipinski. Thank you. Mr. Weyant, do you have anything
to add?
Mr. Weyant. Yes, I echo Dr. Lange and Dr. GangaRao's
position. I think government needs to take a strong position in
two areas. We need to invest to enhance the development of the
technologies to keep us on the forefront and the materials, you
know, to be produced in the United States. Also, we need to
rebuild America with the right materials. Part of why we're
facing these problems of large spending on rebuilding the
infrastructure is that these materials are not lasting. We've
got products here that can have a 50-year-plus design service
life, so down the road, the payback is, as I said earlier, on
the lifecycle. So we need to make that choice today to rebuild
America the right way and put people back to work.
Mr. Lipinski. And Mr. Weyant, it may surprise you
that I have driven through Pleasantville many times on my way
from here to Johnstown, so I wanted to ask you about--do you
have issues with labor force getting workers who are capable?
Mr. Weyant. That is a big demand nowadays, but we reach out
to a lot of the local high schools and a lot of the trade
schools, very aggressive on recruiting. But, you know, to train
people, too, you know, that is a concern. And in the rural
area, as you know, Mr. Lipinski, that does put a big demand
because we have a lot of expansion in our areas with a lot of
different manufacturers.
Mr. Lipinski. Thank you. I'm out of time. I yield back.
Mr. Hultgren. The gentleman from Illinois yields back.
I'll now yield myself five minutes. First, again, I want to
thank you all for being here, for your testimony. For me this
is an especially important hearing today. The State of
Illinois, as my colleague and friend from Illinois has already
stated, leads in materials science research conducted at our
wonderful universities and national labs. I want to hear what
we're doing nationally, but I always like to see how Illinois
universities are testifying before this Committee. I'm grateful
for that.
Infrastructure is also a key priority with every local
official I meet with, and it's why I work to preserve key tools
for municipal finance in the tax reform bill that we had, such
as the tax-exempt status for municipal bonds. Local officials
understand the importance of both construction and maintenance,
and they see the long-term impact of more resilient
infrastructure. So thank you for your work.
Dr. GangaRao, if I could address my first question to you.
How would research at NIST be integrated in its standards
development and used by standards development organizations?
Dr. GangaRao. NIST has excellent facilities in trying to
promote any kind of test methodologies, develop the test
methodologies, and also enforce the testing systems. That's one
way they can do it. The second way they can do it is by
providing an excellent platform in terms of educational aspects.
There are half a dozen educational aspects that I can talk
about. They can be the lead nuclei in developing some of these
educational aspects.
And thirdly, they have a great amount of technical know-how
through their full-time employees, and they can certainly
interact with not only the university types but also with the
industry types to promote some of these kinds of advances in a
most systematic fashion. Thank you.
Mr. Hultgren. Thank you. Mr. Weyant, in your testimony you
say that there is limited awareness by engineers and asset
owners about composites as a structural material for
infrastructure. I wonder if you could describe in more detail
what you encounter?
Mr. Weyant. A lot of times when we approach the design
community to introduce a composite material, a
lot of the traditional materials have design codes, okay? They
have their own handbooks. When you buy a steel beam from XYZ
company versus ABC, you know you're getting the same steel
beam. Those standards need to be developed, you know.
Composites being fairly new in the construction market, you
know, really came about in the mid-80s to '90s. Those
standards, a lot of the engineers do not understand them. So we
have to educate them. And a lot of the companies are a lot
smaller and don't have those resources to really put, you know,
in the technical design capabilities to help educate the
engineering community.
Mr. Hultgren. Thanks. Dr. Chin, it's been cited in numerous
reports, including one in 2014 by the President's Council of
Advisors on Science and Technology that composites are a
crosscutting enabler for the manufacturing technology of the
future supporting not only infrastructure but also automotive,
aerospace, energy, and other key sectors. I wonder if you could
elaborate on the strategic importance of composites to the
national economy?
Dr. Chin. In regards to the more general application of
composites in the sectors that you mentioned, the weight
reductions through the use of composite materials enable energy
savings. That's the primary driver in the aerospace, marine,
and automotive industries.
In infrastructure, it's not a matter of designing based on
weight constraints, but the availability of composite materials
that can be prefabricated, premanufactured offsite, brought to
the construction sites, and installed much more quickly. The
weight savings in this particular case also lends itself to
much more rapid installation, which mitigates the delays,
obstacles, roadblocks, all of the issues involved with
construction projects that reroute people and goods around the
points where the construction is taking place. Those have an
impact that may not be as measurable in terms of economic
return on investments, but you can definitely see the impacts
on the lost time. And just in terms of the process of getting
people and goods from point A to point B, there is definitely a
dollar value associated with those benefits of composites as
well.
Mr. Hultgren. Thank you. I'm just about out of time. I may
follow up if that's all right with you. I had a question just
in regards to opportunities for students and graduates to
obtain hands-on experience with composites with internships and
research, so I may follow up to see if I can see if you have
suggestions or ideas from that.
With that, my time is expired, and I will recognize the
Ranking Member of the full Committee, Ms. Johnson from Texas,
for five minutes.
Ms. Johnson. Thank you very much, Mr. Chairman and Ranking
Member Lipinski, for holding this hearing. And thanks to all
the witnesses for being here.
In addition to this Committee, I serve as a senior member
of Transportation and Infrastructure. And I really do
understand the challenges that we face in crumbling
infrastructure. My home district of Dallas, Texas, was recently
named the fastest-growing metropolitan area in the country by
the U.S. Census. It was also rated as the 10th worst city in the
nation for traffic congestion in another recent report. And
though there has been great improvement from last year's
position, which was number five, commuters still face a daily
battle with bottlenecks, wasting time and fuel, and this is a
struggle for many communities, I'm sure.
And while it is an example of perhaps reaching the stars,
I'd like you to explain to me what your feelings are about what
type of emerging technologies that we will be looking at for
our infrastructure needs, and also, how would we go about
preparing our workforce? I'm particularly interested in the
emphasis on resilience and materials that we use and the talent
that's needed. We're already looking at aerial transportation,
drones, and all kind of alternative things. What seems to be
realistic? And I'd like to hear from each of you.
[The prepared statement of Ms. Johnson follows:]
Dr. Lange. Well, let me chime in with one idea here. One
thing that I would like to add about this discussion about
durability is that if you want durable infrastructure, you need
to ask for durable infrastructure. And kind of an old saying,
you get what you ask for. Too often our contracting mechanism
is based on a low bid when people are asked to, say, build a
road or build infrastructure, the winner of that competition is
the one who prices it the lowest.
And when you look at the specifications, they don't
emphasize durability like they should. They don't emphasize
lifecycle, as they should. The choice is made on initial cost
rather than by lifecycle cost where you take into account the
full length of service life of the structure and its
maintenance cost. So one issue that is a policy issue is how
can we move more toward performance specification and looking
at lifecycle cost.
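Dr. Lange's point about lifecycle cost versus initial cost can be sketched as a small calculation. This is an illustrative example only: the dollar figures, maintenance rates, and service lives below are hypothetical assumptions, not data from the hearing.

```python
# Hypothetical illustration of low-bid (initial cost) vs. lifecycle-cost
# procurement. All figures are made up for illustration.

def lifecycle_cost(initial, annual_maintenance, service_life_years):
    """Total cost of ownership over the structure's service life."""
    return initial + annual_maintenance * service_life_years

# Conventional design: cheaper up front, but higher recurring maintenance.
conventional = lifecycle_cost(initial=1_000_000,
                              annual_maintenance=40_000,
                              service_life_years=30)

# Composite design: higher up front, but lower recurring maintenance.
composite = lifecycle_cost(initial=1_300_000,
                           annual_maintenance=10_000,
                           service_life_years=30)

# A low-bid selection picks the conventional design on initial cost alone,
# while accounting for the full service life favors the composite here.
print(conventional)  # 2200000
print(composite)     # 1600000
```

The design choice flips depending on which number the specification asks bidders to minimize, which is the policy issue Dr. Lange raises.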
Ms. Johnson. Thank you. Yes?
Dr. GangaRao. I'd like to start out by stating certain
issues with regards to resilient infrastructure. With my center
that is the NSF-sponsored one dealing with the composites for
infrastructure, University of Texas at Arlington is a member of
our center, and they have been using composites to try to
minimize your expansive shale problems for your foundations and
the roads, so there again, we need to use some of these
advanced materials that would help enhance the service life of
each and every one of these infrastructures. That's just one of
the many other parts.
The other part is we need to marry these advanced materials
with the conventional materials so that the longevity can be
improved, the traffic jams can be cut down, and what have you.
And there are many other transportation systems, including some
of the electronics that are going to be built into it coming
into vogue that will greatly enhance the efficiency of movement
from point A to point B. Thank you.
Ms. Johnson. Thank you very much. Anybody else?
Dr. Chin. So one of the big national multiagency programs
that NIST is involved with is the Materials Genome Initiative.
And through that program, we seek to accelerate the development
of these innovative materials that can be used in
infrastructure, as well as many other industry sectors. But
this type of program would enable materials scientists and
engineers and designers to be able to receive the benefits of
materials developed at a much faster rate, which could
potentially be used in infrastructure and making it more
resilient to natural disasters and other types of high impacts.
We also have a Community Resilience Program, which also seeks
to develop more resilient materials for use in infrastructure.
Ms. Johnson. Thank you. My time is expired.
Mr. Hultgren. The gentlewoman from Texas yields back.
The gentlewoman from Connecticut, Congresswoman Esty, is
recognized for five minutes.
Ms. Esty. Thank you very much. And again, I want to thank
the Chairman and Ranking Member for holding today's hearing.
You'll find I think all of us are on the Transportation
Committee, and there's a reason that we're also on this
Committee, because we recognize the important challenges facing
the country on resiliency in our infrastructure, the aging
infrastructure laid out so well by you.
I've also been working on this, and I want to make sure to
get copies of this for each of you. There's a bipartisan group
of Democrats and Republicans in the House called the Problem
Solvers Caucus. And I was the Co-Chair of this report, which we
released in January, making several of the points that you've
underscored, Dr. Lange. You just recently talked about the
importance of lifecycle costs. We're specifically calling for
that. My father and grandfather were both civil engineers. I
know exactly what you're talking about, and it is the low-bid
problem that's always been a problem but never more acute than
now when we really need to be looking at the entire cycle of
the cost, better from day one and lasting much longer.
I'm also Co-Chair of--and Co-Founder of the Corrosion
Caucus, so we've been looking at these issues in the Resiliency
Caucus, the importance of upgrading those requirements.
So I wanted to also flag--again, so you know, that a number
of us have been working on this in multiple committees. We've
called for the creation--in the report we called for the
creation of something like an ARPA H2O to look at the water
infrastructure, which is often not included in the civil
engineers' report because that alone is, you know, approaching
$1 trillion of unmet needs to replace and upgrade the nation's
water infrastructure. So when I get to questions, I'd ask for
your thoughts of whether you think something like an ARPA H2O
makes sense for basic research, especially given that water is
delivered at the local level, and local systems cannot possibly
have the research facilities to figure out, if you're Detroit
and you need to reduce the size of your mains by 3/4 to keep the
flows in place, how to do that. They can't be paying for that
research. It's just not reasonable. We need to have a federal
role in that.
Chairwoman Comstock and I, who chairs this Subcommittee,
are getting ready to introduce a bill in the coming weeks on
this basic issue of composites, on the importance of
highlighting the need to include this as innovation and to
include this with new standards. One of the pieces we've looked
at is calling for--and it's the IMAGINE Act, the Innovative
Materials in American Grid and Infrastructure Newly Expanded--
you can tell that was put together to make out IMAGINE--but the
IMAGINE Act calls for the creation of an interagency innovative
materials task force to assist in some of these issues we've
talked about this morning for assessing existing standards and
test methods and then compare them against these new materials
and how they compare.
The interagency task force would work to identify key
barriers in the current standards that inhibit market
adaptation and adoption and develop new methods of protocols,
as necessary, to encourage incorporations. This interagency
task force would be chaired by NIST, by the National Institute
of Standards and Technology, bringing together the Federal
Highway Administration, the Army Corps of Engineers, and EPA,
and other standard regulatory agencies.
So, Dr. Chin, can you comment on whether you think that
would be helpful to have a coordinated effort across the
agencies which otherwise are siloed, as we know, which is a
huge problem. Thank you.
Dr. Chin. Yes, NIST has had a very long history of
collaborating with other federal agencies and other primary
stakeholders in big national initiatives such as the one that
you're describing. We are absolutely committed to working in
the area of water. That is definitely seen as an area of great
importance to the nation.
Ms. Esty. And what are your thoughts on something--or any of
you--on something on the basic R&D side, something like an ARPA
H2O? Is that--do we think we're at a point that there should be
basic research, or is it more a function of standards and
dissemination of best practices?
Dr. Lange. Well, I think on the subject of basic research,
you're touching on one of the biggest challenges that we have,
and that is the durability and interaction of materials with
their environment. Dr. Chin talked about how NIST has a long
history of looking at durability issues. I think that the
durability topics are more challenging and more necessary than,
say, looking at mechanical properties of materials. And so I
would encourage that kind of direction of looking at durability
first.
Ms. Esty. Thank you. Go ahead.
Dr. GangaRao. Basic research is always extremely important,
no question about it. However, to get the biggest bang for your
buck, a good amount of money has to be invested in field
implementations, experimentation, and evaluations as soon as
possible so that we establish a protocol of how to do some of
these in the field and are able to disseminate this knowledge base
in a widescale manner. Thank you.
Ms. Esty. Thank you very much, and I see I'm out of time.
Thank you.
Mr. Hultgren. Thank you, the Gentlewoman from Connecticut
yields back.
The gentlewoman from Oregon, Ms. Bonamici, Congresswoman
Bonamici is recognized for five minutes.
Ms. Bonamici. Thank you very much, Chairman Hultgren and
Ranking Member Lipinski. And thank you to all of our witnesses
for being here today. I'm very glad that we're discussing
infrastructure. And listening to my colleague talk about things
like the Corrosion Caucus, you know that we're all interested
in this issue.
We know that making long-term investments in our nation's
infrastructure stimulates the economy, creates jobs, and drives
commerce. And as we restore our roads and bridges and build
affordable housing and invest in public transit and upgrade our
schools and ports and water systems, we need to be responsive
to environmental concerns but also creative in the use of
emerging materials.
And I am the Co-Chair of the Oceans Caucus, and marine
debris is one of our priorities. And recently, I've been
reading about projects that integrate plastic bottles and
materials salvaged from debris in the ocean into asphalt to
create more durable roads. And this is the kind of ingenuity we
need as we develop an infrastructure proposal. And I know the
Chairman of the full committee has gone, but I know that Texas
is working on a pilot project on this as well.
At Oregon State University in my home state, the Kiewit
Materials Performance Lab has been one of the leaders in
innovative efforts to test composite materials. The lab is
conducting sensitive electrochemical investigations to study
both corrosion phenomena in metals and alloys and the
performance and durability of coatings and composite materials.
And I visited there, and they're doing some great work.
Dr. Lange, I wanted to ask you how federally funded
researchers at universities can best partner with engineers in
the private sector to support continued advanced research,
testing, and standards development?
Dr. Lange. I would say that one of the themes that I have
hit on, this idea of partnering with major infrastructure
programs. This is something I would put back on the table. I
think that when you're spending, as O'Hare is going to spend $8
billion on the next phase of expansion of the airport, there
should be a piece of that investment used for looking toward
the state-of-the-art. Engineers working on everyday tasks may
not have time to see that state-of-the-art very clearly, but in
partnership with universities, perhaps they can.
With respect to recycled materials, I think that's a great
theme to continue to hit. One thing I would encourage is that,
as you think about recycling materials, try to have some
integrity about what you're trying to do with these materials.
Sometimes uses of recycled materials are almost using concrete
as a trash can. How many things can we throw into concrete or
asphalt without caring about the degradation of properties that
happens when we do it? We really want to find synergy where we
get not only the use of recycled material but an improvement of
properties, not a degradation of properties.
Ms. Bonamici. Right. Absolutely. Well, I'm from Oregon; we
recycle everything. So in northwest Oregon, it's not a question
of if but when a tsunami triggered by an earthquake happens. We
have the Cascadia Subduction Zone, which is going to hit our
state.
are overdue. So we've been having many conversations about
rebuilding our infrastructure to withstand these natural
disasters. And in the district I represent, the Newberg Dundee
Bypass has just been built to withstand a 9.0 earthquake.
But an earthquake is not the only threat facing our
Nation's infrastructure. We also need to be resilient to the
effects of climate change. And of course with the ocean, we're
seeing acidification, we're seeing more extreme weather events.
What is the current state of our understanding of how climate
change affects infrastructure, and how has that understanding
shaped the composites research agenda and standards development
to make sure that resiliency is a factor? And anybody who wants
to weigh in on that.
Dr. GangaRao. I want to answer a couple of things along
those lines. Before I do that, I want to talk a little bit
about the recycling aspect of it. At West Virginia University,
we have been doing a lot of recycling of composites. For
example, we can talk in terms of low-grade material recycling,
as well as very high-grade material recycling, and we have
recycled polymers to create core materials of low value while
in fact creating a very high-grade material as a shell for a
given system--
Ms. Bonamici. Interesting.
Dr. GangaRao. --and that helped a great deal. And also, we
are partnering now with Mexico. CONACYT is an equivalent of NSF
of ours where they want to recycle a lot of their high-end
composites coming out of aerospace and other places.
There are three or four different ways of recycling it. One
is just simply burn it. That's not the best approach. There are
a few other chemical ways of recycling, and we are looking at
those kinds of things as well to enhance our productivity
levels in the area of composites as opposed to dumping in the
oceans like you're referring to.
Ms. Bonamici. Right. Right. Thank you. And just--I know I'm
out of time, but with the Chairman's indulgence, would you
address the climate change issue?
Dr. GangaRao. Well, I don't know a whole lot about the
climate change. As Dr. Chin pointed out, I think the amount of
energy required to produce a unit pound of a composite per unit
workability and the efficiency of a composite is much less than
steel or concrete.
Ms. Bonamici. Thank you. I yield back, Mr. Chairman. Thank
you.
Mr. Hultgren. Thank you. The gentlewoman from Oregon yields
back.
I want to thank all of our witnesses for your testimony and
all the members for their questions today. I also do want to
send regards from Chairwoman Comstock, who really wanted to be
here but was not feeling well today, so she sends her regards
and gratitude for each of you being here.
The record will remain open for two weeks for additional
written comments and written questions from Members.
Mr. Hultgren. With that, the hearing is adjourned. Thank
you so much.
Dr. GangaRao. Thank you very much.
[Whereupon, at 11:10 a.m., the Subcommittee was adjourned.]
Appendix I
----------
Answers to Post-Hearing Questions
|
Cosmetic: Update "Automatic export from google code"
Change it something a bit more meaningful?
Perhaps something along the lines of:
C++ Multipurpose Hash based containers
Maybe C++ associative containers?
Go for it.
Done :-)
|
Talk:Mega Man X VS Metal Sonic/@comment-25969827-20151117032300/@comment-30757253-20170520083607
Mega Man X is always my favorite.
He always wins. He's always stronger than Sonic and Metal Sonic.
|
Baumann Fiord
Baumann Fiord is a natural inlet in the south-west of Ellesmere Island, Qikiqtaaluk Region, Nunavut in the Arctic Archipelago. To the west, it opens into Norwegian Bay. Hoved Island lies in the fiord.
|
Microsoft KB Archive/234516
= FIX: OLAP:DSO Role Update May Return "Key Already Associated" Error or Cause System Hang =
Article ID: 234516
Article Last Modified on 8/7/2006
-
APPLIES TO
* Microsoft SQL Server OLAP Services
-
This article was previously published under Q234516
BUG #: 1351 (PLATO7X)
SYMPTOMS
Updating a role through the DSO interface may return the following error message, or may cause the system to hang:
This key is already associated with an element of this collection.
WORKAROUND
STATUS
http://www.microsoft.com/downloads/details.aspx?familyid=F62F45E9-24ED-4FA6-BD74-8A26606F96D8
For more information, contact your primary support provider.
Keywords: kbbug kbfix KB234516
-
© Microsoft Corporation. All rights reserved.
|
Thread:MeiLing13/@comment-32769624-20190512205040
Hi, I'm an admin for the community. Welcome and thank you for your edit to SirGawain8!
Enjoy your time at !
|
Wikipedia:Wiki Ed/Alverno College/Natural History of North America (Fall 2017)
Using concepts from botany, zoology, earth science, ecology, and environmental science, students explore the diversity of living and non-living systems. Students undertake a meaningful comparison of regions across North America. Specific emphasis will be placed on historical and humanistic understandings of water access as an exemplar of how these approaches inform scientific knowledge of place.
In this assignment, students will write an article (or add substantively to an existing article) that describes key features of the natural history of a selected place. The place must have meaningful natural character, be of significant interest, and be publicly accessible. The description will contain new information about what the writer has learned from credible sources, which will be cited in the article. The writer will include information from direct observations, where that information has been verified by an expert. The writer will include links to other articles, such as individual species and relevant topics like invasive plants. Where appropriate, it should also contain links to interest groups and government agencies that provide resources to maintain the place.
Week 1
Welcome to your Wikipedia project's course timeline. This page will guide you through the Wikipedia project for your course. Be sure to check with your instructor to see if there are other pages you should be following as well.
This page breaks down writing a Wikipedia article into a series of steps, or milestones. These steps include online trainings to help you get started on Wikipedia.
Your course has also been assigned a Wikipedia Expert. Check your Talk page for notes from them. You can also reach them through the "Get Help" button on this page.
To get started, please review the following handouts:
* Editing Wikipedia pages 1–5
* Evaluating Wikipedia
* Create an account and join this course page, using the enrollment link your instructor sent you. (To avoid hitting Wikipedia's account creation limits, this is best done outside of class. Only 6 new accounts may be created per day from the same IP address.)
* It's time to dive into Wikipedia. Below, you'll find the first set of online trainings you'll need to take. New modules will appear on this timeline as you get to new milestones. Be sure to check back and complete them! Incomplete trainings will be reflected in your grade.
* When you finish the trainings, practice by introducing yourself to a classmate on that classmate’s Talk page.
Participate in discussion about your experiences. You can use discussion questions to frame your comments, or reflect on the research and writing process.
This week, everyone should have a Wikipedia account.
Week 2
It's time to think critically about Wikipedia articles. You'll evaluate a Wikipedia article related to the course and leave suggestions for improving it on the article's Talk page.
* Complete the "Evaluating Articles and Sources" training (linked below).
* Create a section in your sandbox titled "Article evaluation" where you'll leave notes about your observations and learnings.
* Choose an article on Wikipedia related to your course to read and evaluate. As you read, consider the following questions (but don't feel limited to these):
* Is everything in the article relevant to the article topic? Is there anything that distracted you?
* Is the article neutral? Are there any claims, or frames, that appear heavily biased toward a particular position?
* Are there viewpoints that are overrepresented, or underrepresented?
* Check a few citations. Do the links work? Does the source support the claims in the article?
* Is each fact referenced with an appropriate, reliable reference? Where does the information come from? Are these neutral sources? If biased, is that bias noted?
* Is any information out of date? Is anything missing that could be added?
* Check out the Talk page of the article. What kinds of conversations, if any, are going on behind the scenes about how to represent this topic?
* How is the article rated? Is it a part of any WikiProjects?
* How does the way Wikipedia discusses this topic differ from the way we've talked about it in class?
* Optional: Choose at least 1 question relevant to the article you're evaluating and leave your evaluation on the article's Talk page. Be sure to sign your feedback with four tildes: ~~~~.
Now that you're thinking about what makes a "good" Wikipedia article, consider some additional questions.
* Wikipedians often talk about "content gaps." What do you think a content gap is, and what are some possible ways to identify them?
* What are some reasons a content gap might arise? What are some ways to remedy them?
* Does it matter who writes Wikipedia?
* What does it mean to be "unbiased" on Wikipedia? How is that different, or similar, to your own definition of "bias"?
Week 3
Familiarize yourself with editing Wikipedia by adding a citation to an article. There are two ways you can do this:
* Add 1-2 sentences to a course-related article, and cite that statement to a reliable source, as you learned in the online training.
* The Citation Hunt tool shows unreferenced statements from articles. First, evaluate whether the statement in question is true! An uncited statement could just be lacking a reference or it could be inaccurate or misleading. Reliable sources on the subject will help you choose whether to add it or correct the statement.
Week 4
* Blog posts and press releases are considered poor sources of reliable information. Why?
* What are some reasons you might not want to use a company's website as the main source of information about that company?
* What is the difference between a copyright violation and plagiarism?
* What are some good techniques to avoid close paraphrasing and plagiarism?
It's time to choose an article and assign it to yourself.
* Review page 6 of your Editing Wikipedia guidebook.
* Find an article from the list of "Available Articles" on the Articles tab on this course page. When you find the one you want to work on, click Select to assign it to yourself.
* In your sandbox, write a few sentences about what you plan to contribute to the selected article.
* Think back to when you did an article critique. What can you add? Post some of your ideas to the article's talk page.
* Compile a list of relevant, reliable books, journal articles, or other sources. Post that bibliography to the talk page of the article you'll be working on, and in your sandbox. Make sure to check in on the Talk page to see if anyone has advice on your bibliography.
Week 5
You've picked a topic and found your sources. Now it's time to start writing.
Creating a new article?
* Write an outline of that topic in the form of a standard Wikipedia article's "lead section." Write it in your sandbox.
* A "lead" section is not a traditional introduction. It should summarize, very briefly, what the rest of the article will say in detail. The first paragraph should include important, broad facts about the subject. A good example is Ada Lovelace. See Editing Wikipedia page 9 for more ideas.
Improving an existing article?
* Identify what's missing from the current form of the article. Think back to the skills you learned while critiquing an article. Make notes for improvement in your sandbox.
-
Keep reading your sources, too, as you prepare to write the body of the article.
Resources: Editing Wikipedia pages 7–9
Everyone has begun writing their article drafts.
Week 6
* What do you think of Wikipedia's definition of "neutrality"?
* What are the impacts and limits of Wikipedia as a source of information?
* On Wikipedia, all material must be attributable to reliable, published sources. What kinds of sources does this exclude? Can you think of any problems that might create?
* If Wikipedia was written 100 years ago, how might its content (and contributors) be different? What about 100 years from now?
* Keep working on transforming your article into a complete first draft. Get draft ready for peer-review.
* If you'd like a Wikipedia Expert to review your draft, now is the time! Click the "Get Help" button in your sandbox to request notes.
* First, take the "Peer Review" online training.
* Select two classmates’ articles that you will peer review and copyedit. On the Articles tab, find the articles that you want to review. Then in the "My Articles" section of the Home tab, assign them to yourself to review.
* Peer review your classmates' drafts. Leave suggestions on the Talk page of the article, or sandbox, that your fellow student is working on. Other editors may be reviewing your work, so look for their comments! Be sure to acknowledge feedback from other Wikipedians.
* As you review, make spelling, grammar, and other adjustments. Pay attention to the tone of the article. Is it encyclopedic?
Every student has finished reviewing their assigned articles, making sure that every article has been reviewed.
You probably have some feedback from other students and possibly other Wikipedians. It's time to work with that feedback to improve your article!
* Read Editing Wikipedia pages 12 and 14.
* Return to your draft or article and think about the suggestions. Decide which ones to start implementing. Reach out to your instructor or your Wikipedia Expert if you have any questions.
Week 7
Once you've made improvements to your article based on peer review feedback, it's time to move your work to Wikipedia proper - the "mainspace."
Editing an existing article?
* NEVER copy and paste your draft of an article over the entire article. Instead, edit small sections at a time.
* Copy your edits into the article. Make many small edits, saving each time, and leaving an edit summary. Never replace more than one to two sentences without saving!
* Be sure to copy text from your sandbox while the sandbox page is in 'Edit' mode. This ensures that the formatting is transferred correctly.
Creating a new article?
* Read Editing Wikipedia page 13, and follow those steps to move your article from your Sandbox to Mainspace.
* You can also review the Sandboxes and Mainspace online training.
Week 8
Do additional research and writing to make further improvements to your article, based on suggestions and your own critique.
* Read Editing Wikipedia page 12 to see how to create links from your article to others, and from other articles to your own. Try to link to 3–5 articles, and link to your article from 2–3 other articles.
* Consider adding an image to your article. Wikipedia has strict rules about what media can be added, so make sure to take Contributing Images and Media Files training before you upload an image.
Continue to expand and improve your work, and format your article to match Wikipedia's tone and standards. Remember to contact your Wikipedia Expert at any time if you need further help!
Week 9
It's the final week to develop your article.
* Read Editing Wikipedia page 15 to review a final check-list before completing your assignment.
* Don't forget that you can ask for help from your Wikipedia Expert at any time!
Write a reflective essay (2–5 pages) on your Wikipedia contributions.
Consider the following questions as you reflect on your Wikipedia assignment:
* Critiquing articles: What did you learn about Wikipedia during the article evaluation? How did you approach critiquing the article you selected for this assignment? How did you decide what to add to your chosen article?
* Summarizing your contributions: include a summary of your edits and why you felt they were a valuable addition to the article. How does your article compare to earlier versions?
* Peer Review: If your class did peer review, include information about the peer review process. What did you contribute in your review of your peers' articles? What did your peers recommend you change in your article?
* Feedback: Did you receive feedback from other Wikipedia editors, and if so, how did you respond to and handle that feedback?
* Wikipedia generally: What did you learn from contributing to Wikipedia? How does a Wikipedia assignment compare to other assignments you've done in the past? How can Wikipedia be used to improve public understanding of our field/your topic? Why is this important?
Week 10
Everyone should have finished all of the work they'll do on Wikipedia, and be ready for grading.
|
Many complex real-world tasks are composed of several levels of sub-tasks.
Humans leverage these hierarchical structures to accelerate the learning
process and achieve better generalization. In this work, we study the inductive
bias and propose Ordered Memory Policy Network (OMPN) to discover subtask
hierarchy by learning from demonstration. The discovered subtask hierarchy
could be used to perform task decomposition, recovering the subtask boundaries
in an unstructured demonstration. Experiments on Craft and Dial demonstrate
that our model can achieve higher task decomposition performance under both
unsupervised and weakly supervised settings, compared with strong baselines.
OMPN can also be directly applied to partially observable environments and still
achieve higher task decomposition performance. Our visualization further
confirms that the subtask hierarchy can emerge in our model.
|
Thread:Ryantransformer017/@comment-25498563-20160809225309/@comment-26216332-20160811142827
What will Mal say to his Mixle Drama bandmates in Ryan's TMNT adventure?
|
E. Med. — On the Dover, Ryde, sparingly. On an earthen bank in Sandown bay, near the turning off to Brading and Ryde, 1848. A few small but perfect specimens on a bank by the sea, between Sandown and the fort, Sandown bay, Dr. Martin!!! [On St. Helen's spit abundantly. Dr. Bell-Salter, Edrs.]
Root annual, whitish, tapering and fibrous. Stems several, quite prostrate, irregularly though not much branched, round, solid, smooth, whitish or purplish, usually spreading in a circular form, from 3 or 4 to 6 or 8 inches in length. Leaves bright green with a pale spot in the centre of each leaflet, quite glabrous like the whole plant; leaflets small, very shortly stalked, with numerous straight parallel veins, roundly obovate, edged with purple, minutely sharply and unequally denticulato-serrulate, lowermost serratures larger and more distant, the base of the leaflets entire. Petioles of variable length, semiterete, canaliculate above. Stipules sheathing, coloured, ribbed, roundish or oblong, with subulate sometimes reflexed points. Flowers minute, light pink or purplish, in small, dense, axillary and terminal sessile heads or clusters, of a globular form and about the size of peas. Bracts solitary under each flower, small, white, scariose and pointed. Calyx sessile, campanulate, with 10 purplish prominent ribs, and 5 green, short, ovato-triangular teeth, which are 3-nerved, very acutely pointed and spreading, though finally reflexed and rigid. Corolla longer than the calyx, the standard very flatly conduplicate, curved upwards and acute, without claws, not striated except when beginning to wither, when it becomes very evidently streaked, nearly or quite concealing the very minute whitish keel and wings. Legume about as long as the tube of the calyx, roundish, compressed, very thin and membranous, tipped with the long persistent style, bursting along its upper suture and rupturing irregularly besides. Seeds 2, very often but 1 by abortion, very globular, pale yellow, often greenish.
heads of flowers, are all sufficient distinctions between this species and T. striatum.
8. T. suffocatum, L. Suffocated Trefoil. " Heads sessile roundish, petals shorter than the membranaceous faintly striated calyx whose teeth are broadly subulate falcate recurved." — Br. Fl. p. 108. E. B. t. 1049.
E. Med. — On the Dover, Ryde, Rev. G. E. Smith!!! and where I have since picked it myself. Red ells', just the spot on which the new lighthouse (St. Catherine's) is erected, George Kirkpatrick, Esq. [In great abundance on St. Helen's spit, especially in the less worn parts of the road, A. G. More, Esq., Edrs.]
Root whitish, tapering, branched and fibrous, sometimes with a few granulations. Stems numerous, prostrate, simple or slightly branched at the base, mostly spreading in a circular form, and half buried in the sand, from about 1 to 3 or 4 inches in length, round, smooth, solid, leafy and glabrous, as is the whole plant. Leaves bright green, alternate, on very long, slender, semiterete petioles that are flattened or very slightly grooved above, sometimes nearly 3 inches in length; leaflets very shortly stalked (the 2 lateral almost sessile), obcordate or obovate, distantly and sharply denticulato-serrate in their upper half, the serratures spinulose, wedge-shaped and quite entire in their lower half, somewhat shining beneath, with few, distant, filiform, not at all prominent ribs running direct to the marginal points. Stipules broad, scariose, not coloured, with long, subulate, green points, ribless except 3 strong green nerves beneath (the centre one continued into the
|
Regional economic integration via detection of circular flow in international value-added network
Abstract
Global value chains are formed through value-added trade, and some regions promote economic integration by concluding regional trade agreements to promote these chains. However, no established method quantitatively assesses the scope and extent of economic integration involving various sectors in multiple countries. In this study, we used the World Input–Output Database to create a cross-border, sector-wise network of trade in value-added (the international value-added network) covering the period 2000–2014 and evaluated it using network science methods. By applying Infomap to the international value-added network, we confirmed two regional communities: Europe and the Pacific Rim. We applied Helmholtz–Hodge decomposition to separate the value-added flows within each region into potential and circular flows, and clarified the annual evolution of the potential and circular relationships between countries and sectors. The circular flow component of the decomposition was used to define an economic integration index. Findings confirmed that the degree of economic integration in Europe declined sharply after the economic crisis in 2009 to a level lower than that in the Pacific Rim. The European economic integration index recovered in 2011 but again fell below that of the Pacific Rim in 2013. Moreover, sectoral economic integration indices suggest that Europe's dependence on Russia for natural resources makes the European economic integration index unstable. On the other hand, the steady economic integration index of the Pacific Rim captures the stable global value chains running from natural resources to construction and the manufacture of motor vehicles and high-tech products.
Introduction
It is not easy to grow a country’s economy without establishing economic relations with other countries. Every year, the amount of international trade increases along with the world’s GDP. Free trade agreements (FTAs) and regional trade agreements (RTAs) have been created to support this trend, and countries are working to stabilize trade.
How, then, do countries become interdependent through trade? Classically, Ricardo advocated comparative advantage, which stated that countries specializing in different industries trade by taking advantage of their respective strengths [1]. However, in recent years, trade in intermediate goods, which did not exist at that time, has begun and flourished. In other words, there are forms of trade specializing in different industries and global value chains (GVCs) in the same industry to produce complex and sophisticated products. In addition, distance (as represented by the gravity model) and economic integration (such as that promoted by the EU and NAFTA) shape international trade.
Many studies have been conducted on measuring international production structures. In the 21st century, the measurement of GVCs became a widely discussed topic after Hummels et al. [2] proposed the vertical specialization index. Moreover, the VAX and FVAiX indices were proposed by Johnson and Noguera [3], and Amador et al. [4], respectively. Koopman et al. [5] decomposed and classified the trade prices into nine terms and clarified double-counted terms.
The abovementioned progress has led to the development of international input–output (IO) tables. After Dietzenbacher et al. [6] and Timmer et al. [7, 8] publicized the World Input–Output Database (WIOD), Cerina et al. [9] and Zhu et al. [10] analyzed the data to measure GVCs with network analysis methods. Los et al. [11] showed that regional value chains had been fragmented more rapidly than global ones in 1995–2011. According to Johnson and Noguera [12], over 40 years, the ratio of value-added to gross exports in non-manufacturing sectors rose, while that in manufacturing sectors fell by 20 percentage points; their study showed that RTAs led to this decline.
Because GVCs raise problems of international economic relationships, researchers in international organizations have been analyzing them. Recently, comprehensive summary reports of GVC research have been published by the World Bank [13, 14]. According to these reports, GVC research aims to address the misunderstanding of trade data and to give developing countries international trade opportunities and export diversity for economic growth [15]. As shown by Degain et al., GVCs had been expanding until the global financial crisis, but they stopped growing in 2011, when they recovered from a decline in equipment. The GVC expansion seen before the financial crisis has been attributed to China's accession to the WTO and participation in GVCs [15]. These developments in GVC research were described in detail by Amador and Cabral [16], and Inomata [17]. However, economics studies mainly focus on the relationship between nations; few works show the complicated relationships between industries in different countries.
By contrast, there are many studies on international trade in network science because of recent developments in analytical methods and data availability. Initially, the scale-free characteristics of international trade networks were clarified [18–20]; then, the virtual nodes in these networks were examined using centrality indices and other measures, and changes over time were analyzed [21–23].
Ikeda et al. [24, 25] studied the international trade network of the WIOD using community analysis of modularity maximization, which combines the synchronization aspect of the G7 production network [26, 27]; then, they found stable sectoral communities that had emerged in 1995–2011.
From an institutional point of view, a study examining interregional agreements and geographic factors in international trade found no evidence that the WTO has contributed to the increase in international trade and indicated that geographic factors are more significant than interregional agreements [28]. By contrast, other studies show that trade policies contribute to shaping trade networks [29] and that the impact of distance on trade is decreasing in industrial sectors [30].
However, there has been no way to quantitatively assess the scope and extent of economic integration involving various sectors in multiple countries. It is not easy to quantitatively evaluate economic phenomena in which multiple countries and sectors are intertwined, and the trade balance between two countries is still used as a basis for foreign economic policy. In this study, we analyze the economic linkages through trade that have formed in recent years, using international IO tables and network science methods, and clarify how the world economy is being integrated from the perspective of value-added trade.
In this context, this study offers the following essential contributions. First, rather than using trade figures, which can mislead international economic relations, we used the value-added calculations used in an IO analysis to construct a sector-wise network. Next, from the value-added linkage, we confirmed the regional characteristics of Europe and the Pacific Rim. Findings showed that the behavior of this value-added relationship is highly different from that of the sectoral community in the international trade network shown in previous studies. The dense economic relationships within these regions were decomposed into potential and circular relationships, and the countries and sectors that contribute to the strong regional ties were identified. Finally, we propose an economic integration index based on the circular flow component, and the results showed that economic integration was higher in the Pacific Rim than in Europe in 2010, 2013, and 2014.
The remainder of this paper is organized into three parts. The following section describes the IO table and the computational method used to create the network and the community analysis, explains the decomposition of the potential and circular flow components, and presents the proposed method for measuring economic integration. Then, the characteristics and structure of the network, the potential and circular relationships, and the results of the integration index are presented, and the international economic linkages are examined based on these results. Finally, we conclude this study.
Data and methods
This section consists of four subsections. First, we describe the WIOD and the computational methods used to build the cross-border, sector-wise trade in value-added network (international value-added network, IVAN). Second, we briefly describe Infomap, a community analysis algorithm, which was used to determine the extent to which economic integration is occurring. Third, we explain the use of Helmholtz–Hodge decomposition to extract the potential and circular nature of IVANs. Finally, the economic integration index is defined.
Data
We used the WIOD released in 2016, which covers 43 countries plus the rest of the world (RoW), each classified into 56 sectors. These countries and sectors are listed in S1 and S2 Tables. The country codes are the same as those of the WIOD, but we simplified the sectoral codes. The WIOD has IO tables for 2000–2014. Table 1 shows a simplified example of the table. The IO table was developed by Leontief [31] and is now used globally to estimate economic and environmental conditions. In this paper, we used the WIOD without RoW because we aim to analyze the relationships of specific countries.
Table 1. World input–output table with two sectors in three countries.
https://doi.org/10.1371/journal.pone.0255698.t001
Calculation for adjacency matrix of IVAN
Let nc be the number of countries and ns be the number of sectors in the IO table. Then, we use the nc ns × nc ns intermediate matrix Z, the nc ns × nc final demand matrix F, the 1 × nc ns value-added vector V, and the 1 × nc ns total output vector T from the WIOD. We calculate the value-added coefficient vector C, the final demand vector D, and the coefficient matrix A as Ci = Vi/Ti, Dj = ∑i Fij, and A = Z T̂⁻¹, respectively. Then, we calculate the induced value-added vector as the vector LD left-hand multiplied by the nc ns × nc ns diagonal matrix Ĉ, whose diagonal components are the vector C, where L is the well-known Leontief inverse matrix calculated as (I − A)⁻¹ and I is the identity matrix. In this paper, we use ˆ as the symbol for a diagonal matrix.
Then, the adjacency matrix of the global value-added network (GVAN), G, is calculated as G = Ĉ L D̂, where D̂ is the diagonal matrix whose diagonal components are the vector D. This operation calculates the value called trade in value-added (TiVA) [32]. Setting the 43 on-diagonal 56 × 56 blocks of G (the domestic components) to zero, we obtained the IVAN's adjacency matrix Y. This matrix represents the sum of all the ripple effects of value-added induced by final demand in foreign sectors. The GVAN and IVAN nodes comprise 56 sectors in 43 countries. S1 and S2 Tables list these countries and sectors.
In summary, IVANs are directed and weighted networks constructed by the adjacency matrix Y, which does not have domestic links. By contrast, GVANs have domestic and international links. In the example of Table 1, Y is written as (1)
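The construction above (value-added coefficients, Leontief inverse, the GVAN product, and zeroing the domestic blocks) can be sketched in NumPy. The function name and toy dimensions below are illustrative, not from the paper:

```python
import numpy as np

def ivan_adjacency(Z, F, V, T, nc, ns):
    """Sketch of the GVAN/IVAN adjacency-matrix construction.

    Z : (nc*ns, nc*ns) intermediate matrix
    F : (nc*ns, nc)    final demand matrix
    V : (nc*ns,)       value-added vector
    T : (nc*ns,)       total output vector
    """
    T_safe = np.where(T > 0, T, 1.0)        # guard against zero total output
    C = V / T_safe                          # value-added coefficient vector
    D = F.sum(axis=1)                       # final demand vector D_j = sum_i F_ij
    A = Z / T_safe                          # coefficient matrix: column j of Z divided by T_j
    L = np.linalg.inv(np.eye(len(T)) - A)   # Leontief inverse (I - A)^-1
    G = C[:, None] * L * D[None, :]         # GVAN adjacency: diag(C) @ L @ diag(D)
    Y = G.copy()
    for k in range(nc):                     # zero the on-diagonal (domestic) blocks
        s = slice(k * ns, (k + 1) * ns)
        Y[s, s] = 0.0
    return G, Y
```

On a full WIOD table, nc = 43 and ns = 56, so the matrices are 2408 × 2408; the toy dimensions in a test run can be much smaller.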
Community detection
To determine how each network has developed a community structure, we applied community analysis to the IVANs. Several community analysis methods have been developed, and their features were summarized by Fortunato and Hric [33] and Barabási [34]. We used Infomap, an application of random walks and Huffman coding [35], to analyze the communities in the studied networks [36]. Let a random walker run in the network to be analyzed; the random walker transitions with a probability dependent on the path weights (plus a constant teleportation probability) [37]. The map equation method identifies the community partition for which the code length of the random walk's track is minimized. In comparison, the well-known method of modularity maximization clusters a network by counting nodes' link weights, inflows, and outflows, whereas the map equation method clusters a network by the time the random walker remains within groups of nodes; this difference leads to variations in results [36, 38].
In the Infomap method, the optimization problem of community segmentation in a network is replaced with the problem of minimizing the code length of the segmented network. Consider the segmentation of a network composed of n nodes into m communities with a community partition M. Let the mean code length of the index codebook (between communities) be H(Q) and the mean code length of the nodes within a community i be H(Pi); then, with use of Shannon's source coding theorem [39], the average description length of a single step of the random walk, L(M), is calculated as follows: (2) L(M) = q↷ H(Q) + ∑i=1…m p↻i H(Pi), where q↷ is the probability of the random walker moving to another community and p↻i is the probability of the random walker moving within a community i = 1, 2, …, m plus the exit probability from i. These probabilities are defined as q↷ = ∑i=1…m qi and p↻i = ∑α∈i pα + qi, where pα is the probability of visiting node α = 1, 2, …, n and qi is the exit probability from community i. Eq (2) is called the map equation [38].
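As an illustration, the map equation can be evaluated directly for a given partition once the node visit probabilities and community exit probabilities are known. This sketch (hypothetical helper names, entropies in bits) assumes those probabilities have already been estimated from the random walk:

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a normalized probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def map_equation(p_visit, module_of, q_exit):
    """Average per-step description length L(M) for a community partition M.

    p_visit  : {node: visit probability p_alpha}
    module_of: {node: community index}
    q_exit   : {community: exit probability q_i}
    """
    modules = set(module_of.values())
    q_total = sum(q_exit[i] for i in modules)
    # index codebook: used at rate q_total, over the normalized exit probabilities
    L = q_total * entropy([q_exit[i] / q_total for i in modules]) if q_total > 0 else 0.0
    for i in modules:
        p_in = sum(p for a, p in p_visit.items() if module_of[a] == i)
        p_circ = p_in + q_exit[i]           # within-community rate plus exit rate
        events = [q_exit[i] / p_circ] + [p_visit[a] / p_circ
                                         for a in p_visit if module_of[a] == i]
        L += p_circ * entropy(events)
    return L
```

With all nodes in one community and no exits, L(M) reduces to the entropy of the visit probabilities; splitting tightly connected nodes into separate communities raises the code length, which is why Infomap keeps them together.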
Structural characteristics of networks
Specific indices reveal network characteristics. We used eleven indices: (1) density, reciprocity, diameter, average path length, average betweenness, assortativity, average in-degree, and average out-degree are calculated for the unweighted, directed network; (2) in-strength and out-strength are for the weighted, directed network; (3) the clustering coefficient is calculated for the unweighted, undirected network [34].
The density of a network is the proportion of the number of links to the number of maximum possible links in the network. Thus, it is calculated as l/n(n − 1), where n and l are the numbers of nodes and links in the network, respectively.
The reciprocity of a network is the proportion of the reciprocal links to the total number of links in the network. It is calculated as lm/l, where lm is the number of reciprocal links (two directed links between two nodes in the network).
The clustering coefficient indicates how densely each node's neighbors are connected to one another. The clustering coefficient ci of a node i is calculated as ci = 2li/(ki(ki − 1)), where li is the number of links among the ki neighbors of node i. Thus, the clustering coefficient of a network is ∑i ci/n.
The diameter of a network is the maximum shortest-path length over all pairs of nodes in the network.
The average path length of a network is the mean of the shortest path lengths in the network: ∑i,j; i≠j dij/(n(n − 1)), where dij is the shortest path length between nodes i and j.
The betweenness is calculated from the number of shortest paths passing through a node. The betweenness bi of node i is ∑j≠k≠i sjk(i)/sjk, where sjk is the number of shortest paths between nodes j and k, and sjk(i) is the number of those paths that pass through node i. Therefore, the average betweenness of a network is the mean of bi.
The degree of a node means the sum of the number of links in the node. There are two kinds of degrees in a directed network, namely, in-degree and out-degree, which are the sums of the links to the node from the other nodes (inflow) and links from the node to other nodes (outflow), respectively. Thus, the in-degree and out-degree are represented as: ∑j lj,i and ∑j li,j, respectively, where li,j is a link (flow) from node i to node j.
The assortativity of a network scores the similarity of connections in the network from -1 to 1 and is calculated as (∑jk jk ejk − μ²)/σ², where ejk is the probability that the two nodes at the ends of a randomly chosen link have excess degrees j and k, and μ and σ are the mean and standard deviation, respectively, of the excess degree distribution. Assortativity close to -1 (1) indicates that high-degree nodes in the network tend to connect to low-degree (high-degree) nodes.
The strength of a node is the total weight of its links. A directed network has two kinds of strength, namely, in-strength and out-strength, which are the summed weights of the inflows and outflows, respectively. Thus, the in-strength and out-strength of node i are represented as ∑j wj,i and ∑j wi,j, respectively, where wi,j is the flow amount from node i to node j.
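As a sanity check, the directed-network measures defined above can be computed with standard tools. The following is a minimal sketch using networkx on a toy weighted digraph; the nodes, links, and weights are illustrative and not WIOD data.

```python
# Sketch: computing reciprocity, clustering, and in-/out-strength on a toy
# directed, weighted graph. Illustrative data only, not the actual IVAN.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("A", "B", 3.0), ("B", "A", 1.0),   # a reciprocal pair of links
    ("B", "C", 2.0), ("C", "A", 5.0),
])

# Reciprocity: reciprocal links / total links.
print(nx.reciprocity(G))            # 2 reciprocal links out of 4 -> 0.5

# Average clustering coefficient (directed variant).
print(nx.average_clustering(G))

# In-strength and out-strength: weighted in-/out-degree of each node.
in_strength = dict(G.in_degree(weight="weight"))
out_strength = dict(G.out_degree(weight="weight"))
print(in_strength["A"], out_strength["A"])  # 6.0 inflow, 3.0 outflow
```

On the full IVAN the same calls apply after loading the 2,408-node graph; the weighted `in_degree`/`out_degree` correspond directly to the in- and out-strength definitions above.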
Decomposition to potential and circular flows
Firstly, we define flows as directed and weighted links, and call the flow of IVANs value flow. We used Helmholtz–Hodge decomposition [40] to extract potential and circular relationships from IVANs. This method was also used by Kichikawa et al. [41] to clarify the potential relationships in corporate transaction networks. With Helmholtz–Hodge decomposition, the flow Fij from node i to node j can be separated into a circular flow Fij(c) and a potential flow Fij(p): Fij = Fij(p) + Fij(c). Here the potential flow is given by Fij(p) = wij(ϕi − ϕj), where ϕi is the Helmholtz–Hodge potential of node i and wij is a positive weight for the link between node i and node j. The circular flow satisfies ∑j Fij(c) = 0, that is, its inflows and outflows are balanced at each node. The potential flow represents the difference in potentials between the nodes, whereas the circular flow represents the feedback loops among the nodes.
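Numerically, the decomposition amounts to solving a graph-Laplacian system for the potentials ϕ. The sketch below assumes unit weights wij and a small antisymmetric net-flow matrix; both are illustrative assumptions, not the actual IVAN data.

```python
# Sketch of Helmholtz-Hodge decomposition on a toy 3-node flow network.
import numpy as np

# Antisymmetric net-flow matrix: F[i][j] = flow i -> j minus flow j -> i.
F = np.array([[ 0.0,  2.0, -1.0],
              [-2.0,  0.0,  3.0],
              [ 1.0, -3.0,  0.0]])
n = F.shape[0]
w = np.ones((n, n)) - np.eye(n)       # unit link weights (assumption)

# The potentials solve the Laplacian system
#   sum_j w_ij (phi_i - phi_j) = sum_j F_ij   (net outflow of node i).
L = np.diag(w.sum(axis=1)) - w
b = F.sum(axis=1)
phi, *_ = np.linalg.lstsq(L, b, rcond=None)   # min-norm solution fixes the gauge
phi -= phi.mean()                             # convention: potentials sum to 0

F_pot = w * (phi[:, None] - phi[None, :])     # potential (gradient) flow
F_circ = F - F_pot                            # circular flow

# The circular part is divergence-free: inflow equals outflow at every node.
print(np.allclose(F_circ.sum(axis=1), 0.0))   # True
```

The least-squares solve handles the gauge freedom of the potentials (they are defined only up to an additive constant); the final check confirms that the circular component is balanced at each node, as required by the definition above.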
An IVAN captures the value-added chain of various international sectors to their respective final goods. The potential of a sector is the difference between the value added by the sector to the final production of foreign sectors and the value added by foreign sectors to the sector's own final production. In other words, if the value-added potential is positive, the sector contributes more to the production of the final goods of foreign sectors; if it is negative, foreign sectors contribute more to the sector's own final production. This can indicate the degree of asymmetric dependence or co-dependence. Moreover, the circular flow component indicates the amount of contribution to (from) the foreign sector, that is, the degree of interdependence.
Economic integration index
When several economies proceed with economic integration through factors such as establishing FTAs or RTAs, the value added in each country becomes increasingly induced by other countries, and vice versa. In other words, higher economic integration produces a larger feedback loop within an IVAN. Based on this assumption, we define the economic integration index E as the aggregate amount of circular flow divided by the total flow of the GVAN in a specific community. (3)
This index applies to the communities detected by Infomap, which itself detects communities using flow. The detected communities can be interpreted as the circulation observed when random walkers rotate within a community. The degree of circulation is quantified and divided by the economic scale of the community; economic integration is thus measured as the amount of value-added circulation per unit of economic scale.
The index defined here indicates how much of the domestic and international value-added chain toward final demand can be extracted as international interdependence. The higher the value of the index, the greater the value added induced in the community across national borders (i.e., by economic activities of sectors in other countries); this value measures economic integration in this study.
This economic integration index can be decomposed into sectoral indices arbitrarily. For sector k, the sectoral economic integration index is given by (4), where Sk is the set of nodes in sector k.
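A minimal sketch of how E (Eq 3) and a sectoral variant (Eq 4) could be computed once each link's circular component is known. The flow values, node labels, and the exact form of the sectoral restriction are assumptions for illustration only.

```python
# Sketch: economic integration index as circular flow over total flow
# inside a community. Toy numbers, not WIOD data.
def integration_index(total, circular, community, sector=None):
    """total, circular: dicts (source, target) -> flow value.

    If `sector` is given (a set of nodes, Sk in Eq 4), the numerator is
    restricted to links touching that sector; this restriction is an
    assumed reading of the sectoral decomposition."""
    in_comm = [e for e in total if e[0] in community and e[1] in community]
    denom = sum(abs(total[e]) for e in in_comm)
    if sector is not None:
        in_comm = [e for e in in_comm if e[0] in sector or e[1] in sector]
    return sum(abs(circular[e]) for e in in_comm) / denom

# Toy flows and their circular parts from a prior decomposition.
total    = {("a", "b"): 4.0, ("b", "a"): 2.0, ("b", "c"): 1.0}
circular = {("a", "b"): 2.0, ("b", "a"): 2.0, ("b", "c"): 0.0}
E = integration_index(total, circular, community={"a", "b"})
print(E)   # (2 + 2) / (4 + 2) = 0.666...
```

With the restriction applied only to the numerator, the sectoral indices share the community-wide denominator, so they can be compared on a common scale.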
In summary, the economic integration index compares the magnitude of circular, cross-border value-added contributions (economic activities in one country toward the final production of sectors in other countries, and vice versa) with the magnitude of all circular value-added contributions in the community, including circulation within a country.
Results
This section is composed of four subsections. The first shows the community structure of the IVANs, the threshold set for community detection, and the resulting 15-year change in regional communities. The second shows the essential characteristics of the original IVAN, the cut IVAN, and the IVAN of the regional communities in selected years. The third confirms the value-added potential and circular relationships revealed by Helmholtz–Hodge decomposition for countries and sectors in Europe and the Pacific Rim. The last shows the evolution of economic integration in the regional communities and the results of decomposing the economic integration indices into key sectoral components.
Communities of IVANs
Infomap detected only a single giant community in the IVAN, without thresholds, in each of the 15 years. This result differs from the communities of international trade networks (see S3 Table and S1 Fig), for which a previous study [24] reported 7–12 communities. Next, thresholds were set to exclude the branches and leaves of the IVANs and reveal any concealed community structure. In the range of 6,500–11,000 remaining links, several large communities appear (Fig 1). Here, the thresholds are set by the number of largest links to be retained, because a threshold in USD is inappropriate for the weights of the IVAN links, which fluctuate with the economic growth and crises of those 15 years. In Fig 1, each cell is colored by the number of communities with more than 240 nodes (10% of the total nodes in the IVANs); we call these major communities. The number of remaining links in 2014 is almost half of that in 2004–2009.
Fig 1. Heat map of the number of major communities in the IVANs cut by each threshold, in 2000–2014.
This research uses the thresholds represented as red-bordered cells in each year. Colors indicate the number of major communities.
https://doi.org/10.1371/journal.pone.0255698.g001
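The thresholding step, retaining a fixed number of heaviest links rather than applying a USD cutoff, can be sketched as follows. The toy graph and the value of n_keep are assumptions; in the paper, community detection with Infomap is then run on the cut network.

```python
# Sketch: keep only the n_keep largest-weight links before community detection.
import networkx as nx

def cut_top_links(G, n_keep):
    """Return a copy of G retaining only the n_keep heaviest links."""
    edges = sorted(G.edges(data="weight"), key=lambda e: e[2], reverse=True)
    H = nx.DiGraph()
    H.add_nodes_from(G.nodes)                # keep isolated nodes too
    H.add_weighted_edges_from(edges[:n_keep])
    return H

G = nx.DiGraph()
G.add_weighted_edges_from([(0, 1, 9.0), (1, 2, 5.0), (2, 0, 1.0), (0, 2, 0.5)])
H = cut_top_links(G, 2)
print(sorted(H.edges))   # [(0, 1), (1, 2)]
```

Ranking by weight rather than by an absolute cutoff keeps the number of remaining links comparable across years even as link weights drift with economic growth, which is the rationale given above.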
Using the abovementioned thresholds, we can detect communities in the IVANs, as shown in Fig 2, which presents heat maps for four years with each cell representing a node of the IVAN. Fig 2 illustrates the results for the red-bordered cells of Fig 1. In the maps, the separated communities are indicated by different colors; the gray cells are nodes that belong to neither of the two communities. The figure shows orange horizontal stripes covering most sectors in countries such as Australia, Brazil, and Canada. The green nodes include sectors A01: agriculture; C29: manufacture of motor vehicles; and C30: manufacture of other transport equipment, mainly of European countries. The gray area includes many other sectors in Europe, especially in small countries. We define the two major communities, which Infomap automatically detects and which are mainly formed by nodes from Europe and the Pacific Rim, as the European and Pacific Rim communities, respectively.
Fig 2. Community maps in 2000, 2004, 2009, and 2014.
Each cell is a node of the IVAN; the gray cells are nodes that do not belong to either of the two communities, while the other colors indicate community membership. Green nodes belong to the community dominated by sectors of European countries, and orange nodes to the community dominated by sectors of Pacific Rim countries. The regional classification is in S1 Table.
https://doi.org/10.1371/journal.pone.0255698.g002
The results of community detection over the 15 years are shown in Fig 3 as a Sankey diagram, where orange represents Pacific Rim sectors and green represents European sectors (nodes); the details of the classification are in S1 Table. Fig 1 illustrates the number of major communities, which include more than 10% of all 2,408 nodes of the IVAN, whereas Fig 3 also represents minor communities that include fewer than 10% of all nodes. The two relatively large communities at the bottom are the regional communities.
Fig 3. Sankey diagram of communities colored by actual regions where nodes are situated.
Communities in each year are ordered by size, from largest (bottom) to smallest (top). Green and orange do not denote regional communities but the regions in which the nodes are situated. The crossings of the major communities in 2001–2002 and 2009–2010 indicate changes in their size order. The classification is in S1 Table. The two bottom communities are the major regional communities, but there were also other small European communities over the years.
https://doi.org/10.1371/journal.pone.0255698.g003
Observed structural characteristics
Table 2 shows the structural features for 2000, 2008, and 2014. The densities and clustering coefficients show that the IVANs are dense, and the reciprocity shows that more than 97% of the links are mutual. The density of the IVANs cut at the thresholds in Fig 1 is reduced by about 82%. The reciprocity of the cut IVANs is only about 20% of the remaining links. This indicates that the IVAN has many asymmetric mutual links with large values in only one direction. In the two regional communities, the clustering coefficients and reciprocities are at the same level as those of the IVANs, although the densities are about 10% higher than those of the entire IVANs.
Table 2. Structural characteristics of IVANs, IVANs cut with the threshold, IVANs in the regional communities: Europe and the Pacific Rim.
https://doi.org/10.1371/journal.pone.0255698.t002
Each IVAN has a diameter of 2, which means that any sector in any country is connected to all other sectors through at most one intermediate sector in another country. The minimum diameter is two because links between sectors in the same country are not included; since the average path length is close to 1, most sectors are directly connected. In the cut IVANs, the diameter is about 5, and the average betweenness is more than 50 times larger than that of the original IVANs, because the diameter and average path length are longer and particular nodes mediate many shortest paths. In the regional communities, the average path length is slightly longer, and the betweenness is roughly half that of the original IVANs.
The number of nodes in the IVANs is 2,408, and the average in-degree and out-degree are over 2,100. Since most sectors are connected, the assortativity is close to zero. In the cut IVANs, the degrees and assortativity are around 70 and −0.43, respectively, indicating that sectors with different degrees are connected by high-value flows. In the regional communities, the degree varies with the size of the detected community, and the assortativity is negative and close to zero.
Table 2 makes three points: 1) there is little difference in the characteristics of the IVANs between around 2006 (the lowest-threshold year) and 2014 (the highest-threshold year); 2) the threshold influences the IVAN in each year; and 3) the regional communities have similar characteristics, except for average betweenness and degrees, which are affected by community size in each year. For the first point, the features of the IVANs in 2014 and 2000 are almost the same, and the IVANs in 2008 have relatively higher density and assortativity. For the second point, the differences between the original and cut IVANs were described above. For the third point, most characteristics of the regional communities were similar, but there were significant differences in degrees reflecting community size, especially in 2014.
Fig 4 shows the wide strength distribution of the IVANs. The distribution in Fig 4 partly fits a log-normal distribution, especially in the right tail. The probability density function of a log-normal distribution for x > 0 is p(x) = (1/(xσ√(2π))) exp(−(ln x − μ)²/(2σ²)) (5), where μ and σ are the mean and standard deviation of the logarithm, respectively. As seen in Fig 4, for the 2014 strength distribution, μ = 5.959 and σ = 2.129 for in-strength, and μ = 6.146 and σ = 2.001 for out-strength.
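The fit in Eq (5) can be checked by estimating μ and σ as the mean and standard deviation of log-strength. The sketch below uses synthetic log-normal samples rather than the actual strength data, so the generating parameters (6.0, 2.0) are an assumption chosen to resemble the reported 2014 values.

```python
# Sketch: estimating the log-normal parameters of a strength distribution.
import math
import random

def lognorm_pdf(x, mu, sigma):
    """Probability density of the log-normal distribution, Eq (5)."""
    return (math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2))
            / (x * sigma * math.sqrt(2 * math.pi)))

# Synthetic "strengths": exp of Gaussian draws, so log-normal by construction.
random.seed(0)
strengths = [math.exp(random.gauss(6.0, 2.0)) for _ in range(5000)]

logs = [math.log(s) for s in strengths]
mu = sum(logs) / len(logs)
sigma = (sum((x - mu) ** 2 for x in logs) / len(logs)) ** 0.5
print(round(mu, 2), round(sigma, 2))   # close to the generating values (6.0, 2.0)
```

For the real distributions, a comparison of cumulative curves (as in Fig 4) matters more than the point estimates, since only the right tail fits well.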
Fig 4. Cumulative probability of strength of IVAN in 2000, 2008, and 2014.
The black lines plot the cumulative density function of the log-normal distribution whose μ and σ equal the mean and standard deviation, respectively, of the logarithm of the 2014 strength distribution. The right tails fit well, but the left tails do not, especially at the bottom.
https://doi.org/10.1371/journal.pone.0255698.g004
We examined whether the difference in threshold values before and after the economic crisis in Fig 1 was due to a difference in network features. Fig 4 shows that the strength distributions in 2008 and 2014 are almost equal. This indicates that there is no significant change in the IVAN's network structure, at least none that appears in the features used in this study.
Potential and circular relationships in two regional communities
We applied Helmholtz–Hodge decomposition to the two regional communities detected by Infomap. The size of the circular flow component and the value of the IVAN link (value flow) are well correlated in the range where the value flow is greater than 100 million USD, as shown in Fig 5. In the economic integration index, the amount of circular flow is divided by the GVAN weights; thus, the economic scale does not appear directly.
Fig 5. Scatter plot of correlation between circular flow and value flow of two regional communities in 2000.
The larger circular flows are proportional to the original value flows, but the smaller flows are scattered.
https://doi.org/10.1371/journal.pone.0255698.g005
The potential and circular relationships in the European and Pacific Rim communities can be analyzed from international and intersectoral perspectives by aggregating nodes into countries and sectors. As Fig 2 illustrates, some countries were present in each community with only a small number of sectors.
European community.
Tables 3–8 show the breakdown of the potential and circular relationships in Europe obtained from the Helmholtz–Hodge decomposition. In terms of countries, the value potential is mainly high in Russia, Norway, and Germany (Table 3) and low in France, Spain, and Italy (Table 4). From 2003 to 2012, Russia and Norway mainly have the first and second highest potential, respectively. In 2001, 2002, and 2010, with changes in the ranking of the communities in Fig 3, Germany shows the highest potential. In 2000, 2001, 2010, and 2013, when the European community was relatively small (Fig 3), Russia was not in Table 3. In addition, Belgium, Switzerland, and the Netherlands were other high-potential countries ranked repeatedly in Table 3. Non-European countries were also ranked because of specific industries in the European community: the United States was ranked as a high-potential country in 2002–2005, before the economic crisis, and China in 2009–2010, during the economic crisis. As for low-potential countries (Table 4), France generally had the lowest potential, with Spain lowest in 2002 and 2003. Table 4 also lists Italy, Portugal, and Turkey as low-potential countries in half of the years, and the United Kingdom after 2007. Tables 3 and 4 show that Germany was both a high-potential (2001–2007, 2009, 2010, 2014) and a low-potential country (2008, 2011–2013).
Table 3. Five highest-potential countries in European community.
https://doi.org/10.1371/journal.pone.0255698.t003
Table 4. Five lowest-potential countries in European community.
https://doi.org/10.1371/journal.pone.0255698.t004
Table 5. Five highest-potential sectors in European community.
https://doi.org/10.1371/journal.pone.0255698.t005
Table 6. Five lowest-potential sectors in European community.
https://doi.org/10.1371/journal.pone.0255698.t006
Table 7. Five-highest circulated countries in European community.
https://doi.org/10.1371/journal.pone.0255698.t007
Table 8. Five-highest circulated sectors in European community.
https://doi.org/10.1371/journal.pone.0255698.t008
In general, Russia was ranked first in the high-potential table (Table 3) when sector B: mining and quarrying was ranked first (Table 5). The other highest-potential sectors in Europe were M69: legal, accounting, and consultancy activities in 2000 and 2010; C24: manufacture of basic metals in 2001; C25: manufacture of fabricated metal products, except machinery and equipment in 2013; and G46: wholesale trade except that of motor vehicles and motorcycles in 2014. In addition, sectors N: administrative and support service activities and C20: manufacture of chemicals and chemical products were often ranked high. In terms of low potential (Tables 4 and 6), sector F: construction had been ranked first throughout the 15 years, followed by sectors C28: manufacture of machinery and equipment, C19: manufacture of coke and refined petroleum products, C29: manufacture of motor vehicles, which occupy the second and subsequent positions; O84: public administration and defense, and compulsory social security, which appears in the third and subsequent positions; and L68: real estate activities, which often appears in the fifth position.
According to the amount of circulation (Table 7), Germany was the top country throughout the 15 years, followed by France, the United Kingdom, Italy, and Russia. In the sectoral ranking (Table 8), sector F: construction is consistently first until 2010. In 2011 and 2012, sector B: mining and quarrying, which often ranked second, was first, and in 2013, sector C29: manufacture of motor vehicles was first. Russia was ranked second in the years when sector B: mining and quarrying was first. Across the whole period, F: construction was first, followed by B: mining and quarrying; C28: manufacture of machinery and equipment; C19: manufacture of coke and refined petroleum products; and G46: wholesale trade except for that of motor vehicles and motorcycles.
For both countries and sectors, those ranked higher or lower in terms of potential are also greater in terms of circular strength, which is the strength of the circular network decomposed from an IVAN. The relationship between the circular strength and the Helmholtz–Hodge potential is shown in Fig 6. The relationship is V-shaped; in other words, the larger the absolute value of the potential, the higher the circular strength.
Fig 6. Scatter plot of correlation between circular strength and Helmholtz–Hodge potential in European community in 2000, 2008, and 2014.
Each year shows a V-shaped pattern, but the inclination changes (sharpest in 2008 and gentlest in 2014).
https://doi.org/10.1371/journal.pone.0255698.g006
The Pacific Rim community.
The characteristics of the potential and circular flows of the IVANs in the Pacific Rim are shown in Tables 9–14. Japan, which had the maximum potential in 2000 and 2002, dropped out of the ranking from 2012; Canada showed the maximum potential from 2003 to 2007, and Australia has been the highest-potential country since then (Table 9). Indonesia, Taiwan, and India were also among the five highest-potential countries. In terms of low potential, Mexico was the top-ranked country until 2002, the United States from 2004 to 2006, and China in 2003 and from 2007 to 2013. In addition, India, the United Kingdom, and France appeared in Table 10 numerous times.
Table 9. Five highest-potential countries in the Pacific Rim community.
https://doi.org/10.1371/journal.pone.0255698.t009
Table 10. Five lowest-potential countries in the Pacific Rim community.
https://doi.org/10.1371/journal.pone.0255698.t010
Table 11. Five highest-potential sectors in the Pacific Rim community.
https://doi.org/10.1371/journal.pone.0255698.t011
Table 12. Five lowest-potential sectors in the Pacific Rim community.
https://doi.org/10.1371/journal.pone.0255698.t012
Table 13. Five highest-circulated countries in the Pacific Rim community.
https://doi.org/10.1371/journal.pone.0255698.t013
Table 14. Five highest-circulated sectors in the Pacific Rim community.
https://doi.org/10.1371/journal.pone.0255698.t014
In terms of sectoral potential (Tables 11 and 12), as in Europe, B: mining and quarrying was the highest-potential sector, and F: construction was the lowest-potential sector. This relationship was more stable than in Europe and remained unchanged over the 15 years. In addition, as shown in Table 11, sectors B: mining and quarrying, C20: manufacture of chemicals and chemical products, C24: manufacture of basic metals, G46: wholesale trade, K64: financial service activities, and N: administrative and support service activities were among the high-potential sectors. Sectors F: construction; O84: public administration and defense, and compulsory social security; C29: manufacture of motor vehicles; C10: manufacture of food products, beverages, and tobacco products; and Q: human health and social work activities were among the lowest-potential sectors until 2009, except in 2001. Since 2010, Q: human health and social work activities rose to third place or higher.
In terms of circulation (Tables 13 and 14), the United States was consistently in first place, followed by Japan and Canada, and by China since 2007. Since 2005, Japan had been in fourth place, followed by Mexico. By contrast, the sector rankings in Table 14 were not as consistent as those of the countries. In 2000 and 2007, C26: manufacture of computer, electronic, and optical products was ranked first; in 2008 and 2011, B: mining and quarrying was ranked first. These sectors remained in the top three positions over the 15 years. They were followed by C29: manufacture of motor vehicles and O84: public administration and defense, and compulsory social security until 2009, and by O84: public administration and defense, and compulsory social security and Q: human health and social work activities since 2010.
As in the European community, the high- and low-potential countries and sectors in the Pacific Rim community were high in circular strength. The relationship between the circular strength and the Helmholtz–Hodge potential is shown in Fig 7. The V-shaped relationship is the same as in the European community. However, there is a difference in Tables 2–14: sector C26: manufacture of computer, electronic, and optical products did not appear in Tables 11 and 12, which indicates a large value-added circulation in the midstream of the Pacific Rim's value flow, whereas all high-circular-strength sectors in Europe also appeared in the value-added potential ranking.
Fig 7. Scatter plot of correlation between circular strength and Helmholtz-Hodge potential in the Pacific Rim community in 2000, 2008, and 2014.
The years show similar inclinations of the V shape.
https://doi.org/10.1371/journal.pone.0255698.g007
Economic integration index
The results of applying the economic integration index to the two regional communities are shown in Fig 8. The Pacific Rim community showed a stable, upward trend in economic integration, while the European community showed a higher but more unstable level of integration. In particular, there was a large decline in the level of integration in 2009 and 2010, after the economic crisis, while the Pacific Rim community was only slightly affected in 2009.
Fig 8. Economic integration estimation of the two regional communities in 2000–2014.
https://doi.org/10.1371/journal.pone.0255698.g008
Figs 9–11 show the sector-wise economic integration indices, focusing on the sectors with high circulation in Tables 8 and 14. As illustrated in Fig 9, the European community exhibited a fivefold increase relative to 2000 in 2007 and 2008 (just before the economic crisis) in sectors B: mining and quarrying and F: construction (the top-ranking sectors in potential and circular flows), followed by a sharp decline from 2009 to 2010 (below the 2000 level). The low values in 2010, 2013, and 2014 were partly due to a decline in F: construction, but the circulation within B: mining and quarrying was almost zero. By contrast, in the Pacific Rim, these values tripled between 2007 and 2008; they declined beginning in 2011 but remained above 2007 levels in 2014.
Fig 9. Sectoral economic integration index of mining and quarrying, and construction in two regional communities.
B: mining and quarrying, F: construction.
https://doi.org/10.1371/journal.pone.0255698.g009
Fig 10. Sectoral economic integration index of manufacture in two regional communities.
C19: manufacture of coke and refined petroleum products; C20: manufacture of chemicals and chemical products; C24: manufacture of basic metals, C25: manufacture of fabricated metal products except for machinery and equipment; C26: manufacture of computer, electronic, and optical products; C28: manufacture of machinery and equipment n.e.c.; C29: manufacture of motor vehicles, and trailers and semi-trailers.
https://doi.org/10.1371/journal.pone.0255698.g010
Fig 11. Sectoral economic integration index of other important sectors in two regional communities.
G46: wholesale trade except motor vehicles and motorcycles; K64: financial service activities except for insurance and pension funding; N: administrative and support service activities; O84: public administration and defense, and compulsory social security; Q: human health and social work activities.
https://doi.org/10.1371/journal.pone.0255698.g011
Fig 10 shows the sectoral economic integration indices of the manufacturing sectors ranked in Tables 8 and 14. The figure illustrates that the increase in circulation in the manufacturing sectors played a major role in Europe having its largest economic integration index in 2008: C29: manufacture of motor vehicles made a significant contribution in 2007 and 2008, while C24: manufacture of basic metals, C25: manufacture of fabricated metal products, and C29: manufacture of motor vehicles showed their highest values of the 15 years in 2008. Among the manufacturing sectors, C19: manufacture of coke and refined petroleum products disappeared from the European community when the economic integration index for B: mining and quarrying was nearly zero. In contrast, in the Pacific Rim, the manufacturing sector steadily increased its contribution to economic integration; in years with large values of the economic integration index, C19: manufacture of coke and refined petroleum products, C20: manufacture of chemicals and chemical products, and C29: manufacture of motor vehicles had large sectoral indices.
Finally, Fig 11 shows the other sectors that are ranked three or more times in Tables 8 and 14. When European community showed a high degree of economic integration, the contributions of G46: wholesale trade, K64: financial service activities, and N: administrative and support service activities were large. The contributions of K64: financial service activities and N: administrative and support service activities became so small after 2013, when the level of integration was low, that they are barely visible. In the Pacific Rim, the contributions of G46: wholesale trade; O84: public administration and defense, and compulsory social security; and Q: human health and social work activities are large, although they tend to peak and decline every three years except for 2009 when the economic crisis occurred.
Discussion
In the international trade networks used by Ikeda et al. [25], the communities are divided by industry (see S3 Table and S1 Fig); in the original IVANs, the entire world is connected. Two buried communities, Europe and the Pacific Rim, were identified in this paper by eliminating low flows with a threshold. However, since the IVAN includes only 43 countries (excluding RoW), a more extensive international economic network including other countries may reveal more than two regional communities.
The sensitivity of the community detection results to the threshold is also important. In this study, we did not set the threshold as a specific weight when examining the change over the 15 studied years, because the weights of IVAN links increase or decrease with economic growth or crisis, so the same value could not serve as the threshold throughout the period. In addition, as seen in Fig 1, the detected communities are unstable even for close threshold values. The threshold did not simply determine whether a single large community or two communities appeared; rather, the trend of the distribution changed. We used a resolution of 500 links, but more detailed research is needed on the relationship between the number of remaining links and the number of detected communities. For this analysis, we therefore used the threshold with the most remaining links. The impact of this approach on the results is limited because the threshold was applied only for detecting concealed communities; it was not used for calculating economic integration. With the adopted thresholds, two communities with more than 240 nodes appeared (around 10% of the 2,408 nodes in the IVAN), and these were analyzed as regional communities. Therefore, the sector-specific European communities, shown as gray nodes in Fig 2 and as small green communities in Fig 3, were not treated as the regional community of Europe. In other words, the European community does not include the sector-specific European communities, because Europe's substantial value flows mainly link sectors of the same type rather than different types.
In addition, the relationship between the threshold and regional community detection shows a large difference before and after the economic crisis, but neither Table 2 nor Fig 4 reflects such a difference. More research is needed on how the threshold at which regional characteristics appear in the IVAN can be interpreted in terms of international trade, and whether it is related to the economic crisis.
In this study, we set a threshold for detecting major communities and calculated the degree of economic integration within the range found by the Infomap algorithm. We therefore interpreted the economic integration of the European community as unstable, because the industries contributing most to the level of economic integration are not continuously detected as part of the European community. However, since the results depend on the threshold value and the community detection algorithm, how to set an appropriate threshold when Infomap must detect communities in a real weighted directed network such as an IVAN remains an open question.
Originally, the smile curve [42] was viewed in terms of the production stage of a particular product or its relative position from a specific sector. In a broad sense, the smile curve in this study appears as a V-shaped curve when its relative position to the final goods of various sectors is considered as a whole. Not surprisingly, Helmholtz–Hodge decomposition shows a cross-country potential flow with high (low) potential in B: mining and quarrying (F: construction), which is in the upstream (downstream) stage of production. As seen from the circular relationship, the contributions of B: mining and quarrying and F: construction to international economic integration also indicate that the manufacture of each country depends on the resources and development demands (such as construction) of other countries.
To examine the implications of the economic integration index, we focused on the period 2008–2011, which exhibited substantial changes in economic integration (Fig 8); we also analyzed 2000 and 2014, the first and last years of the analyzed WIOD. Fig 12 shows the changes in the international and intersectoral relationships of circular flow in Europe. Circulation in 2008, the year with the highest integration index in Europe over the 15 years, is higher between Germany, France, Italy, and Russia than in 2000. However, in 2011 and 2014, which have the lowest economic integration, Russia is absent from Europe, as shown in Tables 3 and 7. From a sectoral point of view, Fig 13 shows sector B: mining and quarrying as a hub for sectors F: construction; C19: manufacture of coke and refined petroleum products; and D35: electricity, gas, steam, and air conditioning supply. Therefore, the roles of Russia and sector B: mining and quarrying are important for the high economic integration index in Europe.
Fig 12. International value-added circulation of European community in 2000, 2008–2011, and 2014.
These graphs illustrate the top 20 links. Their node sizes and link widths are in proportion to the square root of the degree and the amount of circular flow, respectively. S1 Table shows a detailed list of the sectors.
https://doi.org/10.1371/journal.pone.0255698.g012
Fig 13. Intersectoral value-added circulation of European community in 2000, 2008–2011, and 2014.
These graphs illustrate the top 20 links. Their node sizes and link widths are in proportion to the square root of the degree and the amount of circular flow, respectively. S2 Table shows a detailed list of the sectors.
https://doi.org/10.1371/journal.pone.0255698.g013
Figs 14 and 15 show the changes in international and intersectoral relationships in the Pacific Rim. According to Fig 14, the center of value-added circulation in the Pacific Rim shifted from the United States and Japan to the United States and China. This also means that the Pacific Rim was detected stably as a regional community of the IVANs because of the strong value-added circulation around the United States and China, despite the economic crisis around 2009. From the sectoral perspective, Fig 15 shows three crucial points in the circular structure of the Pacific Rim: the strongest stable circulations for 2000–2014 are those from B: mining and quarrying to F: construction, those within C26: manufacture of computer, electronic, and optical products, and those within C29: manufacture of motor vehicles, trailers, and semi-trailers.
Fig 14. International value-added circulation of the Pacific Rim community in 2000, 2008–2011, and 2014.
These graphs illustrate the top 20 links. Their node sizes and link widths are in proportion to the square root of the degree and the amount of circular flow, respectively. S1 Table shows a detailed list of the sectors.
https://doi.org/10.1371/journal.pone.0255698.g014
Fig 15. Intersectoral value-added circulation of the Pacific Rim community in 2000, 2008–2011, and 2014.
These graphs illustrate the top 20 links. Their node sizes and link widths are in proportion to the square root of the degree and the amount of circular flow, respectively. S2 Table shows a detailed list of the sectors.
https://doi.org/10.1371/journal.pone.0255698.g015
In Europe, the circulation between sectors B: mining and quarrying and F: construction is unstable because activity in sector B is concentrated in Russia, which is not an EU member country. The extent of economic integration therefore tends to be unstable. By contrast, the Pacific Rim has its own mineral resources, as well as neutral- and low-potential sectors with high circulation, namely high-tech manufacturing and motor vehicle manufacturing, which produce high-value-added products as part of GVCs.
These results suggest that the stable growth of economic integration in the Pacific Rim has been driven by the value-added circulation of these resources and by the international division of labor in the manufacture of high-value-added products. Furthermore, in the Pacific Rim, the international division of labor is advancing, and value-added circulation occurs across countries. This may be explained by the fact that free labor mobility in the EU has led to the specialization of sectors, whereas in the Pacific Rim, where labor mobility is limited, the international division of labor has instead fostered the free movement of goods. The relationship between labor mobility and industrial structure is left for future research.
Conclusion
The purpose of this study is to clarify how international economic integration is occurring from the perspective of trade in value-added. For this purpose, we used the WIOD released in 2016 to construct and analyze IVANs, which show the international relationship of sector-wise trade in value-added. First, the scope of economic integration was identified by Infomap, a community detection method that uses network flows. With threshold setting in the IVANs, regional communities in Europe and the Pacific Rim were detected throughout the 15 studied years. These two communities have not been found in studies that analyzed international trade networks constructed from the WIOD. To analyze how value flows within these two regions, we used Helmholtz–Hodge decomposition to extract the potential and circular relationships and clarified the annual changes in the roles played by the countries and sectors within these regions. In addition, we defined an economic integration index using the circular flow and applied it to both regions. We found that the level of economic integration in Europe, which had been increasing until 2008, dropped sharply after the economic crisis in 2009 to a level lower than that of the Pacific Rim in 2010, recovered in 2011, and dropped again after 2012 to a level below that of the Pacific Rim in 2013 and 2014. While the level of economic integration in Europe has been unstable, that in the Pacific Rim has been on a stable upward trend.
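As an illustration of how an integration index based on circular flow might be computed (a simplified stand-in for exposition, not necessarily the paper's exact definition), one can take the share of flow magnitude carried by the circular component relative to the circular and potential components combined:

```python
def integration_index(flows, phi):
    """Share of flow magnitude in the circular (divergence-free) part.

    `flows` maps (i, j) edges to flow values; `phi` maps nodes to
    Helmholtz-Hodge potentials.  Values near 1 mean value-added mostly
    circulates; values near 0 mean it mostly runs down the potential.
    """
    circ = sum(abs(f - (phi[i] - phi[j])) for (i, j), f in flows.items())
    grad = sum(abs(phi[i] - phi[j]) for (i, j) in flows)
    return circ / (circ + grad)


# Toy three-node community with precomputed (illustrative) potentials.
flows = {("A", "B"): 4.0, ("B", "C"): 1.0, ("C", "A"): 1.0}
phi = {"A": 1.0, "B": -1.0, "C": 0.0}
idx = integration_index(flows, phi)  # 6 / (6 + 4) = 0.6
```

Tracking such an index year by year for each detected community gives a time series of the kind compared between Europe and the Pacific Rim above.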
Moreover, the sectoral economic integration index provides background for the changes in the extent of economic integration in these two regions. In Europe, the extent of economic integration declined in 2009, 2010, and 2013 owing to the decrease in intra-European value-added circulation through mining, construction, petroleum, metal, machinery manufacture, wholesale, financial services, and management services, which accounted for a large share of the circular flow in the IVAN. On the other hand, the economic integration index of the Pacific Rim was steady because the Pacific Rim community stably included mining, manufacturing (especially of motor vehicles and high-tech products), and construction, which had high value-added circulation.
The obtained results suggest several topics for future research. The thresholds for detecting two regional communities by Infomap were significantly different before and after the economic crisis. What this means in terms of international trade requires further study. In addition, how to set appropriate thresholds for time-series community analysis of directed weighted networks such as IVANs also remains a research topic. Finally, it is also necessary to study whether the economic integration indices reflected the labor mobility in each regional community.
Supporting information
S1 Table. List of countries and regional classification.
https://doi.org/10.1371/journal.pone.0255698.s001
(PDF)
S3 Table. Number of communities of international trade networks.
https://doi.org/10.1371/journal.pone.0255698.s003
(PDF)
S1 Fig. Community maps of international trade network.
https://doi.org/10.1371/journal.pone.0255698.s004
(PDF)
References
1. Krugman PR, Obstfeld M, Melitz MJ. International Economics: Theory and Policy, 11th Edition. Pearson Education. 2018. Chapter 3: Labor productivity and comparative advantage: the Ricardian model.
2. Hummels D, Ishii J, Yi K-M. The nature and growth of vertical specialization in world trade. Journal of International Economics 2001; Vol. 54.
3. Johnson RC, Noguera G. Accounting for intermediates: Production sharing and trade in value added. Journal of International Economics. 2012.
4. Amador J, Cappariello R. Global value chains: a view from the EURO Area. Asian Economic Journal 2015; 29(2), 99–120.
5. Koopman R, Wang Z, Wei S-J. Tracing Value-Added and Double Counting in Gross Exports. NBER Working Paper Series 2012.
6. Dietzenbacher E, Los B, Stehrer R, Timmer M, de Vries G. The Construction of World Input-Output Tables in the WIOD Project. Economic Systems Research. 2013.
7. Timmer MP, Dietzenbacher E, Los B, Stehrer R, de Vries GJ. An Illustrated User Guide to the World Input-Output Database: The Case of Global Automotive Production. Review of International Economics. 2015.
8. Timmer MP, Dietzenbacher E, Los B, Stehrer R, de Vries GJ. User Guide to World Input–Output Database. Review of International Economics. 2015; 23: 575–605.
9. Cerina F, Zhu Z, Chessa A, Riccaboni M. World input-output network. PLoS ONE. 2015. pmid:26222389
10. Zhu Z, Puliga M, Cerina F, Chessa A, Riccaboni M. Global value trees. PLoS ONE. 2015; 10(5), 10–12. pmid:25978067
11. Los B, Timmer MP, de Vries GJ. How Global Are Global Value Chains? A New Approach to Measure International Fragmentation. J. Reg. Sci. 2015; Vol. 55, No. 1, pp. 66–92.
12. Johnson RC, Noguera G. A portrait of trade in value-added over four decades. Review of Economics and Statistics. 2017; 99(5), 896–911.
13. World Bank. Global Value Chain Development Report 2017: Measuring and analyzing the impact of GVCs on economic development.
14. World Bank. Global Value Chain Development Report 2019: Technological innovation, supply chain trade, and workers in a globalized world.
15. Dollar D. Executive summary. Global Value Chain Development Report 2017, pp. 1–14, The World Bank Group, Washington, D.C.
16. Amador J, Cabral S. Global Value Chains: A Survey of Drivers and Measures. J. Econ. Surv. 2016; Vol. 30, No. 2, pp. 278–301.
17. Inomata S. Analytical frameworks for global value chains: An overview. Global Value Chain Development Report 2017, Chapter 1, The World Bank Group, Washington, D.C.
18. Li X, Jin YY, Chen G. Complexity and synchronization of the World trade Web. Phys. A 2003; 328, 287–296.
19. Serrano MA, Boguna M. Topology of the world trade web. Phys. Rev. E 2003; 68, 015101. pmid:12935184
20. Fagiolo G, Reyes J, Schiavo S. On the topological properties of the world trade web: A weighted network analysis. Phys. A 2008; 387, 3868–3878.
21. Fagiolo G, Reyes J, Schiavo S. World-trade web: Topological properties, dynamics, and evolution. Phys. Rev. E 2009; 79, 036115. pmid:19392026
22. Foti NJ, Pauls S, Rockmore DN. Stability of the World Trade Web over time—An extinction analysis. Journal of Economic Dynamics & Control 2011; 37, 1889–1910.
23. Xiang L, Dong X, Guan J. Global industrial impact coefficient based on random walk process and inter-country input-output table. Physica A: Statistical Mechanics and Its Applications. 2017; 471, 576–591.
24. Ikeda Y, Iyetomi H, Mizuno T, Ohnishi T, Watanabe T. Community structure and dynamics of the industry sector-specific international-trade-network. Proceedings—10th International Conference on Signal-Image Technology and Internet-Based Systems, SITIS 2014, (4), 456–461.
25. Ikeda Y, Aoyama H, Iyetomi H, Mizuno T, Ohnishi T, Sakamoto Y, et al. Econophysics Point of View of Trade Liberalization: Community dynamics, synchronization. RIETI Discussion Paper Series 2016; 16-E-026.
26. Ikeda Y, Aoyama H, Sakamoto Y. Community dynamics and controllability of G7 global production network. In 2015 11th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), pp. 391–397. IEEE.
27. Aoyama H, Fujiwara Y, Ikeda Y, Iyetomi H, Souma W. Macro-Econophysics: New Studies on Economic Networks and Synchronization. Cambridge University Press; 2017. Chapters 6 and 8.
28. Barigozzi M, Fagiolo G, Manigioni G. Identifying the community structure of the international-trade multi-network. Phys. A 2011; 390, 2051–2066.
29. De Benedictis L, Tajoli L. The World Trade Network. World Economy. 2011.
30. Borchert I, Yotov YV. Distance, globalization, and international trade. Economics Letters. 2017.
31. Leontief WW. Quantitative input and output relations in the economic systems of the United States. The Review of Economics and Statistics. 1936; Vol. 18, No. 3, pp. 105–125.
32. Stehrer R. Trade in Value Added and the Value Added in Trade. WIOD Working Paper 2012; 8, 1–19.
33. Fortunato S, Hric D. Community detection in networks: A user guide. Physics Reports. 2016; 659, 1–44.
34. Barabási A-L. Network Science. Cambridge University Press; 2016.
35. Huffman DA. A Method for the Construction of Minimum-Redundancy Codes. Proceedings of the I.R.E. 1952.
36. Rosvall M, Bergstrom CT. Maps of random walks on complex networks reveal community structure. PNAS. 2008; Vol. 105, No. 4, 1118–1123. pmid:18216267
37. Lambiotte R, Rosvall M. Ranking and clustering of nodes in networks with smart teleportation. Physical Review E—Statistical, Nonlinear, and Soft Matter Physics. 2012; 85(5), 1–9. pmid:23004821
38. Rosvall M, Axelsson D, Bergstrom CT. The map equation. European Physical Journal: Special Topics 2009; 178(1), 13–23.
39. Shannon CE. A mathematical theory of communication. The Bell System Technical Journal. 1948; 27(3), 379–423.
40. Bhatia H, Norgard G, Pascucci V, Bremer P-T. The Helmholtz-Hodge Decomposition—A Survey. IEEE Transactions on Visualization and Computer Graphics. 2013 Aug; Vol. 19, No. 8. pmid:23744268
41. Kichikawa Y, Iyetomi H, Iino T, Inoue H. Community structure based on circular flow in a large-scale transaction network. Applied Network Science 2019; 4(1), 92.
42. Mudambi R. Location, control and innovation in knowledge-intensive industries. Journal of Economic Geography, Volume 8, Issue 5, September 2008, Pages 699–725.
|
Bédenes/1st edition/24
Trade at the Fair
10-13 Agrazhar: Employment Bazaar
12-13 Agrazhar: The Beast Auction
14 Agrazhar: Judgment Day
15 Agrazhar: Blessing and Procession
|
import re
from decimal import Decimal
import responses
from tests import BaseXchangeTestCase
from xchange.clients.base import BaseExchangeClient
from xchange.exceptions import BaseXchangeException
class BaseClientTestCase(BaseXchangeTestCase):
def setUp(self):
super(BaseClientTestCase, self).setUp()
self.client = BaseExchangeClient('API_KEY', 'API_SECRET')
self.client.BASE_API_URL = 'https://xchage-testing.url'
self.client.ERROR_CLASS = BaseXchangeException
def test_init_arguments(self):
self.assertEqual(self.client.api_key, 'API_KEY')
self.assertEqual(self.client.api_secret, 'API_SECRET')
def test_client_interface(self):
methods = [
# internal
'_get',
'_post',
'_request',
# non-authenticated
'get_ticker',
'get_order_book',
# authenticated
'get_account_balance',
'get_open_orders',
'get_open_positions',
'get_order_status',
'open_order',
'cancel_order',
'cancel_all_orders',
'close_position',
'close_all_positions',
]
for method in methods:
self.assertTrue(getattr(self.client, method) is not None)
def test_client_interface_not_implement(self):
methods = [
('get_ticker', ('symbol_pair', )),
('get_order_book', ('symbol_pair', )),
('get_account_balance', ()),
('get_open_orders', ('symbol_pair', )),
('get_open_positions', ('symbol_pair', )),
('get_order_status', ('order_id', )),
('open_order', ('action', 'amount', 'symbol_pair', 'price', 'order_type')),
('cancel_order', ('order_id', )),
('cancel_all_orders', ('symbol_pair', )),
('close_position', ('position_id', 'symbol_pair', )),
('close_all_positions', ('symbol_pair', )),
]
for method, args in methods:
with self.assertRaises(NotImplementedError):
getattr(self.client, method)(**{arg_name: None for arg_name in args})
@responses.activate
def test_get_request_raw_data(self):
"""Should return response data as plain JSON when no model_class is provided"""
responses.add(
method='GET',
url='{}/test-get-method'.format(self.client.BASE_API_URL),
json={'success': True, 'msg': 'All good.'},
status=200,
content_type='application/json')
data = self.client._get(path='/test-get-method', model_class=None)
self.assertEqual(data, {'msg': 'All good.', 'success': True})
@responses.activate
def test_get_server_error(self):
"""Should raise client ERROR_CLASS when status code is 500"""
responses.add(
method='GET',
url='{}/test-get-method'.format(self.client.BASE_API_URL),
json={'success': False, 'msg': 'Something went wrong.'},
status=500,
content_type='application/json')
        with self.assertRaisesRegex(BaseXchangeException, 'Got 500 response with content:'):
self.client._get(path='/test-get-method', model_class=None)
@responses.activate
def test_get_bad_request(self):
"""Should raise client ERROR_CLASS when status code is 400"""
responses.add(
method='GET',
url='{}/test-get-method'.format(self.client.BASE_API_URL),
json={'success': False, 'msg': 'Invalid request params.'},
status=400,
content_type='application/json')
        with self.assertRaisesRegex(BaseXchangeException, 'Got 400 response with content:'):
self.client._get(path='/test-get-method', model_class=None)
@responses.activate
def test_get_empty_response_content(self):
"""Should return None when the response content is empty"""
responses.add(
method='GET',
url='{}/test-get-method'.format(self.client.BASE_API_URL),
body='',
status=200,
content_type='text/html')
data = self.client._get(path='/test-get-method', model_class=None)
self.assertEqual(data, None)
@responses.activate
def test_get_invalid_json_response_content(self):
"""Should raise client ERROR_CLASS when response JSON is badly formatted"""
responses.add(
method='GET',
url='{}/test-get-method'.format(self.client.BASE_API_URL),
body='{"not-valid-json"}',
status=200,
content_type='application/json')
        with self.assertRaisesRegex(BaseXchangeException, 'Could not decode JSON response'):
self.client._get(path='/test-get-method', model_class=None)
@responses.activate
def test_get_valid_response_code_with_error_message(self):
"""Should raise client ERROR_CLASS when status_code is 200 but response content includes error message"""
responses.add(
method='GET',
url='{}/test-get-method'.format(self.client.BASE_API_URL),
json={'error': 'Something went wrong.'},
status=200,
content_type='application/json')
        with self.assertRaisesRegex(BaseXchangeException, 'Something went wrong'):
self.client._get(path='/test-get-method', model_class=None)
@responses.activate
def test_get_apply_transformation_function(self):
"""Should apply transformation function to the response content if it's provided"""
responses.add(
method='GET',
url='{}/test-get-method'.format(self.client.BASE_API_URL),
json={'number': 100},
status=200,
content_type='application/json')
        transformation = lambda x: {'number': x['number'] * 2}
data = self.client._get(path='/test-get-method',
model_class=None, transformation=transformation)
self.assertEqual(data, {'number': 200})
@responses.activate
def test_get_headers(self):
"""Should pass headers to the request when they are provided"""
responses.add(
method='GET',
url='{}/test-get-method'.format(self.client.BASE_API_URL),
json={'success': True, 'msg': 'All good.'},
status=200,
content_type='application/json')
data = self.client._get(
path='/test-get-method',
model_class=None,
headers={'X-CUSTOM-HEADER': 'xchange'}
)
self.assertEqual(data, {'msg': 'All good.', 'success': True})
@responses.activate
def test_get_custom_params(self):
"""Should add params to the URL when they are provided"""
responses.add(
method='GET',
            url=re.compile(r'{}/test-get-method\?custom_param=1'
                           ''.format(self.client.BASE_API_URL.replace('.', r'\.'))),
json={'success': True, 'msg': 'All good.'},
status=200,
content_type='application/json',
match_querystring=True)
data = self.client._get(
path='/test-get-method',
model_class=None,
params={'custom_param': 1}
)
self.assertEqual(data, {'msg': 'All good.', 'success': True})
@responses.activate
def test_post_request(self):
"""Should return response as plain JSON when POSTing with no model_class"""
responses.add(
method='POST',
url='{}/test-post-method'.format(self.client.BASE_API_URL),
json={'success': True, 'msg': 'All good.'},
status=200,
content_type='application/json')
data = self.client._post(
path='/test-post-method',
model_class=None)
self.assertEqual(data, {'msg': 'All good.', 'success': True})
@responses.activate
def test_post_headers(self):
"""Should pass headers to the POST request when provided"""
responses.add(
method='POST',
url='{}/test-post-method'.format(self.client.BASE_API_URL),
json={'success': True, 'msg': 'All good.'},
status=200,
content_type='application/json')
data = self.client._post(
path='/test-post-method',
model_class=None,
headers={'X-CUSTOM-HEADER': 'xchange'}
)
self.assertEqual(data, {'msg': 'All good.', 'success': True})
@responses.activate
def test_post_custom_params(self):
"""Should pass params to the POST request when provided"""
responses.add(
method='POST',
            url=re.compile(r'{}/test-post-method\?custom_param=1'
                           ''.format(self.client.BASE_API_URL.replace('.', r'\.'))),
json={'success': True, 'msg': 'All good.'},
status=200,
content_type='application/json')
data = self.client._post(
path='/test-post-method',
model_class=None,
params={'custom_param': 1}
)
self.assertEqual(data, {'msg': 'All good.', 'success': True})
@responses.activate
def test_post_body(self):
"""Should pass body to the POST request if provided"""
responses.add(
method='POST',
url='{}/test-post-method'.format(self.client.BASE_API_URL),
json={'success': True, 'msg': 'All good.'},
status=200,
content_type='application/json')
data = self.client._post(
path='/test-post-method',
model_class=None,
body={'send-this': 'as-body'}
)
self.assertEqual(data, {'msg': 'All good.', 'success': True})
@responses.activate
def test_post_all_params_together(self):
"""Should pass all valid params to the POST request when they are provided"""
responses.add(
method='POST',
url='{}/test-post-method'.format(self.client.BASE_API_URL),
json={'success': True, 'msg': 'All good.'},
status=200,
content_type='application/json')
data = self.client._post(
path='/test-post-method',
model_class=None,
params={'custom_param': 1},
body={'send-this': 'as-body'},
headers={'X-CUSTOM-HEADER': 'xchange'}
)
self.assertEqual(data, {'msg': 'All good.', 'success': True})
|
Storing and using casino content
ABSTRACT
A wagering game system and its operations are described herein. In embodiments, the operations can include presenting wagering game content via one or more output devices of a wagering game machine during a wagering game session. The wagering game session is associated with a player account. The operations can further include detecting a user input via one or more input devices of the wagering game machine. The user input indicates a selection of the wagering game content. The operations can further include storing an aspect of the wagering game content in a data store associated with the player account. The aspect of the wagering game content is accessible from the data store via the player account after the wagering game session.
RELATED APPLICATIONS
This application is a divisional application of, and claims priority benefit to, U.S. application Ser. No. 13/129,022 which is a National Stage of International Application No. PCT/US09/64481 filed 13 Nov. 2009, which claims priority benefit of U.S. Application No. 61/114,755 filed 14 Nov. 2008. The U.S. application Ser. No. 13/129,022, the International Application No. PCT/US09/64481, and the U.S. Application No. 61/114,755 are incorporated by reference.
LIMITED COPYRIGHT WAIVER
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever. Copyright 2013, WMS Gaming, Inc.
TECHNICAL FIELD
Embodiments of the inventive subject matter relate generally to wagering game systems, and more particularly to devices and processes that utilize content in wagering game systems and networks.
BACKGROUND
Wagering game machines, such as slot machines, video poker machines and the like, have been a cornerstone of the gaming industry for several years. Generally, the popularity of such machines depends on the likelihood (or perceived likelihood) of winning money at the machine and the intrinsic entertainment value of the machine relative to other available gaming options. Where the available gaming options include a number of competing wagering game machines and the expectation of winning at each machine is roughly the same (or believed to be the same), players are likely to be attracted to the most entertaining and exciting machines. Shrewd operators consequently strive to employ the most entertaining and exciting machines, features, and enhancements available because such machines attract frequent play and hence increase profitability to the operator. Therefore, there is a continuing need for wagering game machine manufacturers to continuously develop new games and gaming enhancements that will attract frequent play.
BRIEF DESCRIPTION OF THE DRAWING(S)
Embodiments are illustrated in the Figures of the accompanying drawings in which:
FIG. 1 is an illustration of selecting casino content and saving it in a user accessible storage, according to some embodiments;
FIG. 2 is an illustration of a wagering game system architecture 200, according to some embodiments;
FIG. 3 is a flow diagram 300 illustrating determining selections of casino content and sending the content to a user accessible storage, according to some embodiments;
FIG. 4 is an illustration of collecting casino content, according to some embodiments;
FIG. 5 is a flow diagram 500 illustrating analyzing casino user selections history, and other casino user information, to present targeted casino content, according to some embodiments;
FIG. 6 is an illustration of presenting casino content using a user account, according to some embodiments;
FIG. 7 is a flow diagram 700 illustrating using information from a user accessible storage to determine casino content to present to a user, according to some embodiments;
FIG. 8 is an illustration of a wagering game machine architecture 800, according to some embodiments; and
FIG. 9 is an illustration of a mobile wagering game machine 900, according to some embodiments.
DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
This description of the embodiments is divided into six sections. The first section provides an introduction to embodiments. The second section describes example operating environments while the third section describes example operations performed by some embodiments. The fourth section describes additional example embodiments while the fifth section describes additional example operating environments. The sixth section presents some general comments.
INTRODUCTION
This section provides an introduction to some embodiments.
Casinos provide various types of casino content to casino users. Some of that casino content relates to wagering games (“wagering game casino content”), such as wagering games, account information related to games, advertisements for games, congratulatory displays for winning games, etc. Some of the casino content relates to information other than wagering games (“non-wagering game casino content”), such as messages related to products or services offered in the casino (e.g., promotions for merchandise, food advertisements, messages about upcoming events, shows, concerts, etc.), as well as messages related to other things, such as communications from other patrons (e.g., chat sessions, shared files, etc.), third party advertisements (e.g., non-casino ads), television broadcasts, and so forth. Casinos are interested in making casino users, such as wagering game players (“players”) and other types of casino patrons, aware of the wagering game casino content and the non-wagering game casino content (collectively, the “casino content”). The casinos present the casino content on various electronic devices (“devices”) positioned throughout the casino, such as on wall-mounted screens, on electronic billboards, on television monitors, on projection screens, on computers, on wagering game machines, etc. Casinos often present casino content on devices that are within reach or in close proximity to casino users. For instance, casinos will often present non-wagering game casino content on displays, speakers, etc., that are a part of, and/or in proximity to, wagering game machines as the wagering game machines are presenting wagering game casino content. The casinos know that casino users will very likely see the non-wagering game casino content when it is presented on, or close to, a wagering game machine, because the casino user may be looking at, or around, the wagering game machine for long periods of time during game play sessions.
Casinos, however, face certain challenges by presenting non-wagering game casino content in close proximity to wagering game casino content. For example, when a patron is playing a wagering game, the casino makes money on the patron's losses. The more games that the patron plays, the more money the casino may make. Therefore, the casinos face a conflict with presenting non-wagering game casino content in close proximity to wagering game casino content: although casino users such as patrons are very likely to see or hear the non-wagering game casino content if it is in close proximity to the wagering game casino content, the casinos also want patrons to continue playing wagering games without being unduly distracted by non-wagering game casino content or anything else that may hinder or slow down the speed of play. Embodiments of the inventive subject matter, however, provide ways for casinos to present non-wagering game casino content in close proximity to wagering game casino content while still allowing the casino user to focus primarily on the wagering game casino content. For example, FIG. 1 shows a wagering game system 100 that provides a way for an individual to select interesting casino content and save the casino content so that the individual can review it later. Thus, patrons, players, and the like can focus primarily on wagering games, but also save interesting casino content for later review. In some embodiments, like in FIGS. 4 and 6, a wagering game system also provides ways for an individual to capture or collect casino content (including wagering game casino content) and store it for later review. The embodiments also describe ways to analyze the selections and/or use stored preferences in user accounts to generate interesting casino content that the wagering game system can present within the casino or via accounts associated with the account user.
FIG. 1 is a conceptual diagram that illustrates an example of selecting casino content and saving it in a user accessible storage, according to some embodiments. In FIG. 1, a wagering game system (“system”) 100 includes a wagering game machine 160 connected to a communications network 122. Also connected to the communications network 122 are a wagering game server 150, a web server 140, an account server 170, and an advertisement (“ad”) server 180. The wagering game machine 160 includes a display 110 that presents various images, controls, meters, etc. associated with a wagering game session. For example, the display 110 presents wagering game content, such as slot reels 104 that present a wagering game outcome. In some embodiments the wagering game server 150 can provide the wagering game content and outcome to the wagering game machine 160. The display 110 also presents a bet meter 105 to track and control betting on the wagering game and a spin control 109 to spin the slot reels 104. The display 110 also presents a credit meter 107 to track an amount of credits won during a wagering game session, and a panel 108 to present all types of information and functions (e.g., information about a casino user that is logged in to the wagering game machine 160 via a wagering game account, information about financial transactions, promotions, chat messages, console buttons, etc.). The panel 108 can resize to fit content and can move around the display 110. The wagering game account can be stored in the account server 170 that stores information related to the casino user, the casino user's wagering activities, financial transactions, etc. The account server 170 can also store information about other accounts (e.g., a web account, a social network account, etc.) in addition to, instead of, or in conjunction with, a wagering game account. Slot games are one example of wagering games that can be played on the wagering game machine 160. 
The wagering game machine 160, however, can be used to present a variety of different wagering games (e.g., video poker, blackjack, bingo, group games, bonus games, progressive games, etc.). The wagering game machine 160 can also be used for other casino services and/or non-wagering game activities, such as for ordering drinks, receiving messages about casino events, chatting with patrons, communicating with technicians, surfing the internet, playing non-wagering games, receiving news feeds related to casino content, patron information, and/or promotions, etc. The wagering game machine 160 can also be used to receive advertisements (“ads”), such as the promotion 112 promoting a new wagering game that is available on the casino network. In some embodiments, the display 110 may be presented on a peripheral device (e.g., a display monitor) connected to a docking station at which the wagering game machine 160 is docked. Consequently, in some embodiments, the promotion 112 can be presented on the peripheral device. During a wagering game session, the casino user may want to focus on playing wagering games instead of looking at ads. Nevertheless, the casino user may be interested in a promotion 112 and want to review it later. Consequently, the system 100 presents the promotion 112 as a selectable and savable advertisement, or rather, an advertisement that can be selected in some way by the casino user, and saved, in some form, to one or more storage locations that are accessible by the casino user (e.g., a personal storage device, a website account, a wagering game account, a cell phone, a laptop, a local storage device provided by the casino, a web account, a shared account, etc.). The one or more storage locations may be referred to as “user accessible storages” or “content-storage” locations because the storage locations can be accessible to the casino user for saving and/or retrieving the casino content. Other examples, such as in FIGS. 
4 and 6, illustrate how casino users can store casino content that they find interesting or important. The one or more user accessible storages do not have to be owned by the casino user, but are “accessible” by the casino user for storing and retrieving information (e.g., the casino user has user rights to save the casino content, read the casino content, etc.). User accessible storages are not confined solely to devices or accounts associated with a casino's private network, but can extend beyond the casino's private network to other networks and locations that are accessible from within the casino, such as the Internet, a cell phone network, a wide area network, etc. The promotion 112 can have information associated with it. The ad server 180 can include a record 111 containing advertising information for the promotion 112. The advertising information can include an ad name or identifier, an ad image, a link, a patron offer, and other information (e.g., terms of a deal) that can be used to present the promotion 112 on the display 110 and also to store the promotion 112 in the user accessible storage. The promotion 112 can be selectable. For example, the casino user can touch a screen displaying the promotion 112. When touched, the promotion 112 can present a pop-up message prompting the casino user to select where to store the promotion 112 (e.g., prompt for an email address, prompt for a device selector, etc.). To save time, however, the casino user can pre-configure what happens when the promotion 112 is selected by storing pre-configuration information in the account server 170 (see FIG. 4 for more details). After the promotion 112 is selected, the system 100 can package the ad information into a transportable package, file, etc. (e.g., an email, a text message, a data packet, a multi-media presentation, a web animation, an electronic document, a web page, a configuration file, a command, etc.).
The system 100 can send the advertisement information to the casino user's storage, along with any commands needed to store the ad information in memory (e.g., data writing commands). The system 100 can package the promotion 112 exactly as it appears on the display 110 and send the exact replica of the promotion 112 to the user accessible storage. On the other hand, the system 100 can modify how the promotion 112 looks (e.g., altering the size based on the size of a display) or behaves and can also add other information that is included only in the transportable package and was not presented on the promotion 112 (e.g., a website link, an offer, patron information, wagering game information, financial account information, etc.). For example, the system 100 can detect that a casino user touches the promotion 112 and package some of the images presented on the promotion 112 into a message 115 that the system 100 sends to a web account 114 belonging to the casino user. The web account 114 can be stored on the web server 140. The web account 114 receives the message 115 and stores it. The casino user can then access the web account 114 at a later time and review the message 115.
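The packaging step described above can be sketched as follows. This is a minimal illustration only; the record fields and the `package_promotion` helper are assumptions for the sketch, not elements of any actual embodiment:

```python
import json

def package_promotion(ad_record, destination, include_extras=None):
    """Package an ad record into a transportable message (hypothetical sketch).

    The package may replicate what was displayed, or may be augmented with
    extra fields (e.g., a website link or patron offer) that were not shown
    on the display itself.
    """
    payload = {
        "ad_id": ad_record["ad_id"],
        "image": ad_record["image"],
        "link": ad_record["link"],
        "destination": destination,  # e.g., a web account or email address
    }
    if include_extras:
        payload.update(include_extras)  # information added only to the package
    return json.dumps(payload)

# Example: package the promotion and address it to a web account.
record = {"ad_id": "promo-112", "image": "new_game.png", "link": "example.test/promo"}
message = package_promotion(record, "web-account-114", {"offer": "2-for-1 show tickets"})
```

The resulting string could then be carried in an email, text message, or data packet, as the passage above contemplates.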
The system 100 can save and store any information presented on the display 110, not just information presented on the promotion 112. For example, the system 100 can save and store congratulatory animations, game results (e.g., wins, impressive hands, etc.), wagering game images, re-enactments of what occurs during a wagering game, demonstrations of new games, chat conversations, replays, prior news feeds related to casino content, etc. The system 100 can also work with other devices within a casino network, not just the wagering game machine 160. For example, the system 100 can present selectable and savable casino content on electronic signs displayed on monitors within a casino, on television channels on a casino television set, etc.
Although FIG. 1 describes some embodiments, the following sections describe many other features and embodiments.
Example Operating Environments
This section describes example operating environments and networks and presents structural aspects of some embodiments. More specifically, this section includes discussion about wagering game system architectures.
Wagering Game System Architecture
FIG. 2 is a conceptual diagram that illustrates an example of a wagering game system architecture 200, according to some embodiments. The wagering game system architecture 200 can include an account server 270 configured to control user related accounts accessible via wagering game networks and social networks. The account server 270 can store and track user information, such as identifying information (e.g., avatars, screen name, account identification numbers, virtual assets, identifier information, virtual awards, other awards, etc.) or other information like financial account information, social contact information (e.g., archived chat communications with social contacts, names and contact information for social contacts, etc.), etc. The account server 270 can contain accounts for social contacts referenced by the user account. The account server 270 can also provide auditing capabilities, according to regulatory rules, and track the performance of users, machines, and servers. The account server 270 can include an account controller 272 configured to control information for a user's account. The account server 270 can also include an account store 274 configured to store information for a user's account.
The wagering game system architecture 200 can also include a wagering game server 250 configured to control wagering game content and communicate wagering game information, account information, and other information to and from a wagering game machine 260. The wagering game server 250 can include a content controller 251 configured to manage and control content for the presentation of content on the wagering game machine 260 or other casino devices. For example, the content controller 251 can generate game results (e.g., win/loss values), including win amounts, for games played on the wagering game machine 260. The content controller 251 can communicate the game results to the wagering game machine 260 via a communications network 222. The content controller 251 can also generate random numbers and provide them to the wagering game machine 260 so that the wagering game machine 260 can generate game results. The content controller 251 can also present casino content, determine selections of content, gather content and metadata, and package content and metadata into one or more transportable electronic packages, files, instructions, etc. The wagering game server 250 can also include a content store 252 configured to contain content to present on the wagering game machine 260. The content store 252 can include casino content that is selectable and savable to a user accessible storage. The wagering game server 250 can also include an account manager 253 configured to control information related to user accounts. For example, the account manager 253 can communicate wager amounts, game results amounts (e.g., win amounts), bonus game amounts, etc., to the account server 270. The wagering game server 250 can also include a communication unit 254 configured to communicate information to the wagering game machine 260 and to communicate with other systems, devices and networks on the communications network 222. 
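The division of labor between the content controller 251 and the wagering game machine 260 might be sketched as below. The class names, the win probability, and the payout rule are all invented for illustration; the text does not specify any of them:

```python
import random

class ContentController:
    """Hypothetical server-side content controller that generates game
    results and hands them to a wagering game machine."""
    def __init__(self, seed=None):
        # A real server would use a certified RNG and pay tables; this
        # simplified model is an assumption for the sketch.
        self._rng = random.Random(seed)

    def generate_result(self, wager):
        win = self._rng.random() < 0.45
        return {"win": win, "amount": wager * 2 if win else 0}

class WageringGameMachine:
    """Hypothetical machine that receives results over the network and
    updates its credit meter accordingly."""
    def __init__(self, controller):
        self.controller = controller
        self.credits = 0

    def play(self, wager):
        result = self.controller.generate_result(wager)
        self.credits += result["amount"]
        return result
```

Equally, as the text notes, the server could instead supply only random numbers and let the machine compute results locally.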
The wagering game server 250 can also include a content selection analyzer 256 configured to analyze content selection history, user account information, patron history, external account information, etc. and generate analytic information (“analytics”). The content selection analyzer 256 can also determine predictive analytics based on an individual's past behavior and/or by addressing a group behavior that shares characteristics with an individual. The casino content controller 255 can use the analytics to determine (e.g., select, generate, predict, etc.) casino content to present to a casino user and/or to present on accounts associated with the casino user.
The wagering game system architecture 200 can also include a wagering game machine 260 configured to present wagering games and receive and transmit information to store and use casino content. The wagering game machine 260 can include a content controller 261 configured to manage and control content and presentation of content on the wagering game machine 260. The wagering game machine 260 can also include a content store 262 configured to contain content to present on the wagering game machine 260. The wagering game machine 260 can also include a content selection controller 264 configured to determine that a casino user (e.g., a player, a casino patron, a casino staff, a friend or relative of a casino patron, a social contact, etc.) has selected a selectable casino content item. The content selection controller 264 can determine various ways that a casino user may select items. For example, as shown in FIG. 6, the content selection controller 264 can detect a hand motion (e.g., a finger swipe, a tap, etc.) on a touch screen. Alternatively, the content selection controller 264 can detect when a casino user or casino user's personal device is within a pre-determined distance (e.g., within a wireless signal range) of a casino display and save information presented on the display to the casino user's account and/or to the personal device (see FIG. 6). The content selection controller 264 can also automatically detect pre-configurations set by a casino user regarding content that the casino user would like to select and store (see FIG. 6). The content selection controller 264 can also gather information about the selected content (e.g., movie files, picture files, links, descriptions, pre-set messages, associated discounts, Internet websites, etc.). The content selection controller 264 can present that information to the content controller 251 or the content controller 261, for packaging. 
The content selection controller 264 can present the packaged content to a personal storage communicator 265. The personal storage communicator 265 can be configured to receive content from the content selection controller 264, the wagering game server 250, or other sources, and send the information to a user accessible storage associated with the casino user.
Each component shown in the wagering game system architecture 200 is shown as a separate and distinct element. However, some functions performed by one component could be performed by other components. For example, the content controller 251 and the content controller 261 can both package information associated with selected casino content items. Furthermore, the components shown may all be contained in one device, but some, or all, may be included in, or performed by multiple devices, as in the configurations shown in FIG. 2 or other configurations not shown. Furthermore, the wagering game system architecture 200 can be implemented as software, hardware, any combination thereof, or other forms of embodiments not listed. For example, any of the network components (e.g., the wagering game machines, servers, etc.) can include hardware and machine-readable content including instructions for performing the operations described herein. Machine-readable content includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine (e.g., a wagering game machine, computer, etc.). For example, tangible machine-readable content includes read only memory (ROM), random access memory (RAM), magnetic disk storage content, optical storage content, flash memory machines, etc. Machine-readable content also includes any content suitable for transmitting software over a network.
Example Operations
This section describes operations associated with some embodiments. In the discussion below, some flow diagrams are described with reference to block diagrams presented herein. However, in some embodiments, the operations can be performed by logic not described in the block diagrams.
In certain embodiments, the operations can be performed by executing instructions residing on machine-readable content (e.g., software), while in other embodiments, the operations can be performed by hardware and/or other logic (e.g., firmware). In some embodiments, the operations can be performed in series, while in other embodiments, one or more of the operations can be performed in parallel. Moreover, some embodiments can perform more or less than all the operations shown in any flow diagram.
FIG. 3 is a flow diagram illustrating determining selections of casino content and sending the content to a user accessible storage, according to some embodiments. FIGS. 1, 4, and 6 are conceptual diagrams that help illustrate elements of a flow 300 in FIG. 3, according to some embodiments. This description will present FIG. 3 in concert with FIGS. 1, 4, and 6. In FIG. 3, the flow 300 begins at processing block 302, where a wagering game system (“system”) determines casino content that is selectable and storable. The system can determine all types of selectable and storable casino content (“casino content items”), such as visible graphics, text, sounds, music files, movie files, animations, etc. The system can generate casino content items with properties, controls, or other elements that can detect when one or more casino content items are selected. The casino content items can include perceptible data (e.g., images displayed on a device, sounds presented via speakers, etc.) and metadata (e.g., data stored in a database). The perceptible data and metadata can be associated with the casino content item so that when the casino content item is selected, the system can react with one or more different responses that gather some or all of the perceptible data and metadata and send it to a user accessible storage.
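The pairing of perceptible data with metadata in a selectable item, described in processing block 302, could be modeled as follows; the `CasinoContentItem` class and its handler mechanism are illustrative assumptions:

```python
class CasinoContentItem:
    """Hypothetical selectable-and-storable content item pairing perceptible
    data (images, sounds) with metadata used when the item is saved."""
    def __init__(self, item_id, perceptible, metadata):
        self.item_id = item_id
        self.perceptible = perceptible   # e.g., {"image": "promo.png"}
        self.metadata = metadata         # e.g., {"category": "musical"}
        self._handlers = []

    def on_select(self, handler):
        # Register a response to run when the item is selected, e.g., a
        # routine that packages the item for a user accessible storage.
        self._handlers.append(handler)

    def select(self):
        # Gather perceptible data and metadata into one bundle and hand it
        # to each registered handler.
        bundle = {"item_id": self.item_id, **self.perceptible, **self.metadata}
        for handler in self._handlers:
            handler(bundle)
        return bundle
```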
The flow 300 continues at processing block 304, where the system presents the casino content. The system can present the casino content on wagering game machines, monitors, wall displays, speakers, etc. In some embodiments, the wagering game machine may be a standing model wagering game machine. The standing model wagering game machine can have multiple displays built into it, such as peripheral devices, box-top monitors, etc., that can also display casino content. In some embodiments, the wagering game machine can be a mobile wagering game machine. The mobile wagering game machine can be docked at a docking station. The docking station can expand the viewing area of a wagering game machine by having one or more peripheral displays attached to the docking station. The peripheral displays can have the same capabilities to present the casino content as the wagering game machine. A casino user can be logged in to a wagering game session on the wagering game machine. The docking station can recognize the casino user's identity via the docked wagering game machine and detect pre-configurations associated with the casino user's selection of objects.
The flow 300 continues at processing block 306, where the system determines a selection of a casino content item. The system can detect various ways that a casino user and/or device might select a casino content item. For example, in FIG. 1, the system 100 detects that a casino user touches a casino content item on a screen of a wagering game machine 160. FIG. 4 illustrates two other examples of ways that the system can detect when a casino user selects casino content items, for instance by, (1) detecting a circular finger motion on a touch-screen and (2) detecting that a casino user device is within a wireless range of a casino display. In FIG. 4, a wagering game system (“system”) 400 includes multiple casino display devices, such as a wagering game machine 460 and a casino display 403. The casino display 403 can cycle messages and ads for casino events, games, services, products, and activities. The casino display 403 can also present ads for non-casino services and products by advertisers that want to market to players, casino patrons, and other casino users. The ads presented on the casino display 403 can include selectable and savable casino content items. The wagering game machine 460 can also present casino content items. For instance, a display 410 on the wagering game machine 460 presents a congratulatory display of a wagering game win. The wagering game machine 460 and the casino display 403 can receive the casino content from a wagering game server 450 and/or an ad server 480. The wagering game server 450, the ad server 480, the casino display 403 and the wagering game machine 460 are connected via a communications network 422. A casino user can select a casino content item (“item”) on the display 410 by touching a touch-screen on the display 410 and making a circular motion around one or more items. The wagering game machine 460 detects the motion which creates a boundary 411 around the encircled items. 
The wagering game machine 460 can then determine that any items within the boundary 411 are selected items. The system 400 can read from a pre-configured, content selection configuration setting (“pre-configuration setting”) 409. The pre-configuration setting 409 can be stored in a user account on an account server 470. The system 400 can use the pre-configuration setting 409 to determine that a circular finger motion on a touch-screen is an action that selects items. The wagering game machine 460 can read properties of the selected items and, based on the items' properties, present a prompt 412 that prompts the casino user for additional options. For example, if a casino user selects an item that changes periodically, the system 400 may want additional assistance from the casino user to indicate whether the system 400 has correctly selected the proper item, as the item may have changed during the selection. Further, the system 400 may give options to select a history of items that have changed, such as a certain number of games (e.g., the last game, the last two games, etc.). Further, the nature of the items may be different, and have different properties. For example, a casino user may select or highlight several items and the system 400 may prompt the casino user to indicate whether the casino user wants all of the objects selected as a single item or as individual items. The prompt 412 can also provide an option to crop or resize the selection. In addition to selecting items with a touch-screen on the display 410, a casino user can also select objects by activating a user accessible storage device and/or moving the user accessible storage device within a wireless range of the casino display 403. The casino display 403 can have a wireless transceiver 405 that can detect and send wireless signals. A personal device 446 may also have wireless capabilities.
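One way the circling gesture and the boundary 411 hit-test might work is sketched below. The axis-aligned bounding-box geometry is an assumption; the text does not specify how the boundary is computed:

```python
def gesture_to_boundary(points):
    """Collapse the touch points of a circular finger motion into the
    axis-aligned bounding box they enclose (illustrative only)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

def items_in_boundary(items, boundary):
    """Return the names of items whose on-screen position falls inside the
    boundary derived from the user's gesture."""
    (x0, y0), (x1, y1) = boundary
    selected = []
    for item in items:
        x, y = item["pos"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            selected.append(item["name"])
    return selected
```

Items whose positions land inside the computed region would then be treated as the selected items, subject to the prompts for cropping or resizing described above.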
When a casino user sees casino content, such as an ad 402, the casino user can move within wireless range to initiate a selection process. The personal device 446 may be equipped with software that can interface with the wireless transceiver 405 and present a selection panel 407 indicating options for selecting the ad 402. The selection panel 407 can include options for selecting (e.g., capturing) one or more items that were displayed on the casino display 403. The pre-configuration settings 409 can also indicate, in advance, selection configurations (e.g., how close the personal device must be to the casino display 403 to activate the selection process). The pre-configuration settings 409 can also have an on/off setting so that selection functionality can be turned on or off. Casinos may also provide devices that can be configured to interface with casino displays in different ways to indicate a selection of a casino content item. For example, a casino may provide casino patrons with devices that are equipped with laser pointers to point at casino displays and highlight selectable items. Some displays can also be equipped with touch screens, like the touch-screen on the wagering game machine 460, so that casino users can touch the casino displays and select items using finger or hand motions. Devices can use radio-frequency identification (RFID) devices, motion detectors, optical transmitters, video transmitters, tactile devices, text recognition devices, speech recognition devices, and other devices to select and communicate casino content. Some devices have a direct, or wired, connection to each other (e.g., the personal device 446 can connect to the casino display 403 via an input/output port). The system 400 can also provide pop-ups or other prompts that take notes about a casino user's desires concerning the content.
For example, the prompt 412 and the selection panel 407 can include a section for notes (e.g., to indicate a web-address, to select one or more user accessible storage devices, to indicate how the content should be packaged, to indicate a cell phone to send the information to, to indicate friends that should receive the content, to provide instructions to an intermediary recipient of the content, etc.). In some embodiments, the system 400 may also select non-viewable casino content items. For instance, the wagering game machine 460 and the casino display 403 may include speakers 440, 441 that present sounds, music, etc. The system 400 can present a sounds selection interface that can display sound files describing recently played sounds (e.g., game theme music, a congratulatory sound, etc.). A casino user can select the desired sound files from the sounds selection interface.
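The proximity-triggered selection check, gated by the on/off flag and activation distance in the pre-configuration settings 409, might look like this; the field names are assumptions for the sketch:

```python
def should_offer_selection(settings, distance_cm):
    """Decide whether to present the selection panel when a personal device
    nears a casino display (hypothetical sketch).

    `settings` mirrors the pre-configuration record described above: an
    on/off flag plus a maximum activation distance.
    """
    if not settings.get("selection_enabled", False):
        return False  # selection functionality has been turned off
    return distance_cm <= settings.get("max_distance_cm", 100)
```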
The flow 300 continues at processing block 308, where the system determines casino content information from the casino content item. Some casino content information can be perceptible (e.g., graphics, pictures, text, video, audio, etc.). Other information can be metadata associated with the item. The metadata can be pre-stored to place into messages, content packages, etc. that are sent to user accessible storage locations. The system can determine the information from the selected items by reading properties and settings of the items or by reading data stored in a database associated with the items. The system can then prepare the data to be transferred to a user accessible storage, such as by packaging data from portions of a database record, as well as any associated graphics, videos, sound files, etc., into a transportable package. In some embodiments, the system can select or generate a reproduction of the item (e.g., a casino user selects one or more graphics and the system packages a copy of the graphics exactly as they appear to the casino user). However, some items, though they may appear as a cohesive unit to the casino user, may actually be a group of separable items that the system can separate and repackage to appear different than what the casino user sees. The system can provide prompts and/or settings that allow a casino user to indicate whether the casino user wants to receive an exact copy or whether the system can repackage the information in another way that may be more appealing, that may store more easily, that can be displayed on specific technology different than the casino display, etc.
The flow 300 continues at processing block 310, where the system determines a user accessible storage. A user accessible storage can be a personal device (e.g., a cell phone, a personal digital assistant, a personal database, a flash card, a personal computer, an external hard drive, etc.) that the casino user carries or possesses. The system can detect one or more devices connected to, or in proximity to, the casino display device and prompt the casino user to indicate a storage location (e.g., select a device and a drive on the device). In some embodiments, a user accessible storage can be on a device that the casino user does not carry or possess, such as a storage space or account on a remote device (e.g., an account server, a web server, etc.).
The flow 300 continues at processing block 312, where the system sends the casino content information to the user accessible storage. The system can send the casino content information to a designated device or storage location. In some embodiments, the system can connect with a host device and initiate a command to save the information on a computer hard drive, a database, or some other file system or long-term (e.g., non-volatile) memory location. In some embodiments, the system can store the information in temporary memory (e.g., volatile memory, random access memory, etc.) on the device (e.g., the wagering game machine) that displayed the information. The casino user can review the casino content item information before the wagering game session ends and/or the machine power-cycles and flushes the casino content from the temporary memory. In some embodiments, the system can send a message containing the information, such as to an email account, which the host email server can store in the form of an email, a text message, a chat message, an archive file, etc. The system can provide storage commands and user login information, along with the casino content information, to a remote server, such as a web server. The web server can use the user login information to determine a web account associated with a player account, or other user account used to access the system. The web server can process the storage commands to determine a memory location associated with the web account and store the casino content information in the memory location.
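The web-server path in processing block 312, where login information resolves to a web account and a memory location, could be sketched as a simple keyed store; the `WebAccountStore` class is an assumption for illustration:

```python
class WebAccountStore:
    """Hypothetical web-server storage: login information resolves to a web
    account, and content packages are appended to that account's storage."""
    def __init__(self):
        self._accounts = {}  # login -> list of stored content packages

    def register(self, login):
        self._accounts.setdefault(login, [])

    def store(self, login, package):
        # Process the storage command: find the memory location associated
        # with the web account and save the casino content information.
        if login not in self._accounts:
            raise KeyError("no web account associated with this login")
        self._accounts[login].append(package)

    def retrieve(self, login):
        # The casino user can later review what was saved.
        return list(self._accounts.get(login, []))
```

A temporary-memory variant, as the passage also describes, would simply keep the same structure in volatile storage on the displaying machine until the session ends.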
The flow 300 continues at processing block 314, where the system uses account information to present the casino content information. The system can present the casino content information (e.g., copies of the selected casino content items and/or other data) on a player profile, on a user account, on an email message, on a chat screen, or any other device or display that can access the user accessible storage to which the casino content information was sent and stored. FIG. 1 illustrates an example of presenting saved and stored casino content. In FIG. 1, the web account 114 displays a message showing information from the saved promotion 112. FIG. 6 illustrates an example of a web account with additional features that can further present the saved casino content information. In FIG. 6, a wagering game system (“system”) 600 includes a personal computer 636 that a user can use (e.g., external to a casino network) to access a web account hosted by a web server 640. The computer 636 includes a display 601 that shows account information for the web account that belongs to the user. The user may have selected casino content from a casino device and stored the casino content to the web account. The user can log on to the web account using the computer 636, which connects to the web server 640 via the communications network 622. The computer 636 can access the casino content that was stored on the web account and display it within the computer display 601. In some embodiments, the system 600 can use information from the web account to present the information. For instance, the web account includes information about social contacts 602 (e.g., friends, acquaintances, etc.) of the user. The system 600 can send the casino content information to one or more of the social contacts associated with the user.
The system 600 can send the packaged casino content information to any designated account, cell phone, web page, or other device and/or location that belongs to the social contact, such as to a social contact's mobile device 638. In some embodiments, the system 600 can determine groupings of social contacts based on information provided on the user account. For instance, the social contacts 602 may include tags, properties, or other descriptors that indicate that some of the social contacts may like various types of wagering game content and would like to receive a copy of the saved casino content. In some embodiments, the system 600 can present controls and functionality that allows a user to modify or edit the information. For example, the system can show the user what was stored from the casino, but then modify it (e.g., resize it, reshape it, record over portions of it, personalize it, etc.). The system 600 can pre-configure a casino content item with modification options to assist the user in easily modifying and editing the stored casino content. The system 600 can also convert the casino content to different file formats so that the casino content can be opened and modified with third-party applications. In some embodiments, the system can read preferences from the web account and use the preferences to determine targeted casino content that the system 600 can present to the user when the user is logged in to the web account. FIG. 5 illustrates an example flow 500 that can determine and provide targeted content.
FIG. 5 is a flow diagram illustrating analyzing casino user selections history, and other casino user information, to present targeted casino content, according to some embodiments. FIG. 6 is a conceptual diagram that helps illustrate elements of a flow 500 in FIG. 5, according to some embodiments. This description will present FIG. 5 in concert with FIG. 6. In FIG. 5, the flow 500 begins at processing block 502, where a wagering game system (“system”) determines one or more selections of casino content. The system can determine selection of casino content as described in FIG. 3.
The flow 500 continues at processing block 504, where the system analyzes the one or more selections and other casino user information. As a casino user selects casino content items (“items”) to save and store, the system can analyze those items and generate analytical information (“analytics”) based on the casino user's history of selecting items. The items can have descriptive metadata (e.g., properties, tags, etc.) that indicate the nature of the items (e.g., ad types, related game themes, etc.). The system can also provide information related to the presentation of the items (e.g., demographics, time and date presented, content provider, etc.). For example, a casino user may consistently select and save items related to casino musical shows and events. Those items may have metadata tags that identify the items as belonging to a “musical” category. The system can use that information to determine musical ads with some musical properties and target the casino user with the musical ads (e.g., show ads related to musical events, show ads with rich musical sound tracks, etc.). The system can generate and/or access analytics from an advertising server. For instance, in FIG. 6, an ad server 680 can generate and store the analytics. The ad server 680 can access a casino account server 670, via the communication network 622, to obtain a casino user's selection history and analyze it to generate the analytics.
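The analytics step just described, deriving preferred categories (e.g., “musical”) from the metadata tags of a user's selection history, could be sketched as a simple tag-frequency count; the helper name and tag field are assumptions:

```python
from collections import Counter

def preferred_categories(selection_history, top_n=1):
    """Derive a casino user's preferred content categories from the metadata
    tags of previously selected items (illustrative analytics only)."""
    tags = Counter()
    for item in selection_history:
        tags.update(item.get("tags", []))
    # Return the most frequently selected categories first.
    return [tag for tag, _ in tags.most_common(top_n)]
```

A real system would presumably combine this with demographics, presentation dates, and group-behavior models, as the passage notes, but the frequency count conveys the core idea.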
The flow 500 continues at processing block 506, where the system determines preferences stored on a user account. For example, in FIG. 6, the system 600 can access the web account hosted by the account server 640. The web account can include various preferences set by the user, such as a music play list 626 indicating music that the user enjoys, a preferred games and themes panel 627 indicating favorite games, sports, movie genres, and a wagering game notification widget 628 indicating types of wagering games that the user would like to be notified about. The ad server 680 can access the web server 640, via the communication network 622, to obtain user preferences. The ad server 680 can also access user preferences on other accounts that store user preferences, such as the casino account server 670.
The flow 500 continues at processing block 508, where the system determines targeted content to present to a user account based on analytics and/or account preferences. For example, in FIG. 6, the system 600 can use any of the preferences (e.g., music play list 626, preferred games and themes panel 627, wagering game notification widget 628, etc.) and the user analytics (e.g., a user's preference for musical casino content based on user selection history) to determine targeted casino content. The system can also provide the analytics to third parties to target market their services and products (e.g., musical CDs, concert tickets, etc.) to the user.
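One plausible way to combine the analytics and the account preferences at this step is a simple scoring pass over candidate ads. The weights, field names, and functions below are illustrative assumptions, not the actual system.

```python
def score_ad(ad, tag_profile, preference_tags):
    """Score a candidate ad by overlap with the selection-history profile
    (analytics) plus a fixed bonus for tags matching explicit account
    preferences (e.g., the music play list)."""
    history_score = sum(tag_profile.get(t, 0) for t in ad["tags"])
    preference_bonus = 2 * sum(1 for t in ad["tags"] if t in preference_tags)
    return history_score + preference_bonus

def pick_targeted_ad(ads, tag_profile, preference_tags):
    """Return the highest-scoring candidate ad."""
    return max(ads, key=lambda ad: score_ad(ad, tag_profile, preference_tags))

ads = [
    {"id": "concert", "tags": ["musical", "concert"]},
    {"id": "steakhouse", "tags": ["dining"]},
]
tag_profile = {"musical": 3, "show": 1}   # from selection-history analytics
preference_tags = {"concert"}             # e.g., derived from account preferences
best = pick_targeted_ad(ads, tag_profile, preference_tags)
```

The same scores could be handed to third-party advertisers as a compact summary of what the user is likely to respond to.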
The flow 500 continues at processing block 510, where the system presents the targeted content using the user account. For example, in FIG. 6, the system 600 presents a targeted ad in the ad banner box 630, when the web account is active (e.g., when the user is logged on and/or using the web account). For instance, the system 600 determines, based on the user's selection history, that the user likes musical events. Further, the system 600 determines, from the music play list 626, that the user likes a certain performing artist. The ad server 680 then searches through listings for musical concerts that may be playing in a location close to the user's residence, in a local casino, etc. The system 600 then presents the ad in the ad banner box 630. The system 600 can also search through other servers of advertising partners to find content to present in the ad banner box 630. In some embodiments, the system 600 can present saved content that relates most closely to the user's likes based on the user selection history and/or user preferences. For example, if the user had selected several items from casino devices, including an ad for the performing artist when it was presented on a casino display device, the system 600 may present the ad for the performing artist first, or more frequently than other selected and saved items. In other embodiments, the system 600 can send reminders to a user, based on the selections, to remind the user about new games that the user has tagged. For example, the system can read a preference about a new game from the wagering game notification widget 628, or about other games that are related to the new game. The system 600 can also detect a selection of a game from a casino display device and send commands along with the packaged information from the casino content item. The commands can update settings within the web account 601, such as settings within the wagering game notification widget 628.
In some embodiments, a wagering game system can also use analytics and preferences to present casino content on a casino device. For example, in FIG. 7 a flow 700 illustrates using information from a user accessible storage to determine casino content to present to a casino user, according to some embodiments. In FIG. 7, the flow 700 begins at processing block 702, where a wagering game system (“system”) determines a casino user in proximity to, or currently using, a casino device. For instance, in FIG. 4, a casino user may be carrying the personal device 446 and may approach the casino display 403. In some embodiments, the wireless transceiver 405 detects, via an RFID transmitter, a player account card, chip, or other identification device that a specific user account uses for the personal device 446. In other embodiments, the wireless transceiver 405 can read identifying information stored on the personal device 446, such as a user's name, and then cross-reference a user account list to find a user account with the same name. In some embodiments, the personal device 446 may be a mobile wagering game machine, or other mobile device, that has been registered to the user account.
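The cross-referencing step just described, where identifying information read from a personal device is matched against a user account list, could look something like the following sketch. The account structure and the caseless name match are assumptions made for illustration only.

```python
def find_account_by_name(device_name, accounts):
    """Cross-reference a name read from a nearby personal device against
    the user account list; return the matching account, or None if no
    account carries that name."""
    for account in accounts:
        if account["name"].casefold() == device_name.casefold():
            return account
    return None

# Hypothetical account list and a name read via the wireless transceiver.
accounts = [
    {"name": "Pat Lee", "account_id": 101},
    {"name": "Sam Roe", "account_id": 102},
]
match = find_account_by_name("pat lee", accounts)
```

In practice an RFID-read account identifier would make the lookup exact rather than name-based, but the control flow is the same.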
The flow 700 continues at processing block 704, where the system determines one or more preferences from a personal storage. For example, in FIG. 4, the wireless transceiver 405 can detect preferences stored on the personal device 446 indicating ads or other casino content that the user has previously selected. The wireless transceiver 405 can also communicate with the account server 470, the ad server 480, or any other device (e.g., a remote web server) to determine preferences from a user account.
The flow 700 continues at processing block 706, where the system determines analytics associated with the casino user's past selection of casino content items. The system can determine analytics associated with the user account as described in FIG. 5. For example, in FIG. 6, the system 600 can obtain analytics from the ad server 680, which generates and/or stores player game analytics, along with other kinds of information related to the user's game history, selection history, social group rankings, etc.
The flow 700 continues at processing block 708, where the system uses the preferences and/or analytics to determine casino content of interest to the casino user. The casino content can be ads (e.g., ad items stored in an advertising server) that match some of the same properties, tags, descriptions, or other information that is similar to the preferences and/or analytics. In some embodiments, the system can also determine non-casino content items of interest, such as ads from other advertisers that want to market to casino users. The system can determine the casino content by predicting what a casino user may like based on the information from the preferences and analytics.
The flow 700 continues at processing block 710, where the system presents the casino content on the casino device. For example, in FIG. 4, the system 400 detects that the casino user device 446 is close to the casino display 403 (e.g., within the pre-configured distance stored in the pre-configuration setting 409) and presents a targeted ad (e.g., ad 402) on the casino display 403. The system 400 can also present targeted casino content on the wagering game machine 460, when a user account is active (e.g., the player is logged in and/or using the wagering game machine 460). In some embodiments, the targeted casino content can be sounds or images of things that the casino user prefers. For example, the system can detect a song that the casino user likes by looking at user's account (e.g., “Viva Las Vegas” by Elvis Presley, as listed on the music play list 626 in FIG. 6). The system can play the song to entice the casino user to play wagering games for a longer duration. The system can also incorporate the song into casino content ads so that the ads become more appealing to the user.
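The proximity check and the "incorporate something the user likes" behavior described above can be sketched together. The distance threshold, play-list handling, and returned structure are all illustrative assumptions.

```python
def content_for_nearby_user(distance_m, play_list, max_distance_m=3.0):
    """If the user's personal device is within the pre-configured distance,
    return a targeted content item that incorporates something the user
    likes (here, the first song on the account's music play list);
    otherwise return None so nothing is presented."""
    if distance_m > max_distance_m:
        return None
    song = play_list[0] if play_list else None
    return {"type": "targeted_ad", "soundtrack": song}

# Device detected 2 m from the casino display; play list from the account.
item = content_for_nearby_user(2.0, ["Viva Las Vegas"])
```

A real system would compare against the stored pre-configuration setting (409 in FIG. 4) rather than a hard-coded default.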
Furthermore, although flow 700 describes determining a user account, the system can also determine information for an individual without actually determining a user account. For example, in FIG. 4, when the personal device 446 comes within a pre-configured proximity to the casino display 403, the system 400 can communicate with the personal device 446 and look for personal information (e.g., look at a specific file folder that is designated for casino content storage, look at a file folder that contains music files, etc.). The system can use the personal information to determine if it contains past casino content selections and/or preferences by the individual. The system can then use that information to determine casino content to present to the individual on a casino device. Further, the system can utilize guest, anonymous accounts, one-time accounts, shared accounts, etc. for individuals in a casino who have not registered for a casino account but that still want to save casino content.
Additional Example Embodiments
According to some embodiments, a wagering game system (“system”) can provide various example devices, operations, etc., to store and use casino content. The following non-exhaustive list enumerates some possible embodiments.
- The system can provide an icon on a television screen that a casino user can use to rate television channels, or an icon on a wagering game machine that a casino user can use to indicate preferences or ratings for wagering games. The system can save the ratings and preferences to a user account. The system can also augment settings based on the information. For example, the system can package commands that the account can use to update settings and configurations or store information in specific locations within the account's file structure.
- The system can send saved casino content items to individuals or groups that a casino user does not know, but that may have common characteristics (e.g., the system uses analytics to send items to users that have similar preferences or analytics).
- The system can rank and/or organize groups based on the selection history of the individuals in the group.
- The system can prioritize casino content displayed on a casino display and/or on an account display, based on previous behavior, selections of items, etc.
- The system can automatically select items to analyze during a wagering game. For example, the system can select wagering game play objects and use the data from the objects to grade a player and place the player into a level of competency.
- The system can determine user preferences and integrate them into the casino content. The system can present personal content, such as an item that a user likes (e.g., a music file of a favorite song, a video clip of a favorite television show or movie, a picture of a friend or favorite celebrity, etc.) and require a user to meet a certain level of wagering game activity (e.g., play a certain number of wagering games) to continue presenting the item. The system can integrate the personal content into the wagering game elements (e.g., the system determines a favorite avatar or icon from a user preference and places it on a slot reel). The system can also detect selected and saved items and integrate those into game play elements (e.g., a user touches a color, texture, or picture displayed on a screen or other casino content item and the system integrates it into the wagering game elements).
- The system can read information from a user accessible storage device, like an MP3 player or a digital camera, and use that information in casino content (e.g., play a sound or musical file, show a picture from the camera, etc.).
- The system can provide a route to a requested game (e.g., the system detects that a new game is available that a user has indicated in a preference). En route to that game, the system can target ads to the user as the user walks to the game.
- The system can take information from user preferences or selection history and send the information to an intermediary party to review. For example, the system can send the user selection history to a tour operator. The tour operator can determine a trip that the user and others might like to take based on the selection history.
- The system can send a saved item to various groups or businesses so that they can compete to provide better offers or similar content.
- The system can present selectable and savable objects on a web browser. The web browser can be used to access an online casino website, or any other wagering game website. The system can determine when a user selects (e.g., clicks on) one or more online casino content items or items displayed in the web browser and save the information to the user's hard drive, web account, or other user accessible storage location.
- The system can transport casino content between a “brick-and-mortar” casino and a wagering game website. The user can access the casino content by being in the casino and/or by accessing the wagering game website.
Additional Example Operating Environments
This section describes example operating environments, systems and networks, and presents structural aspects of some embodiments.
Wagering Game Machine Architecture
FIG. 8 is a conceptual diagram that illustrates an example of a wagering game machine architecture 800, according to some embodiments. In FIG. 8, the wagering game machine architecture 800 includes a wagering game machine 806, which includes a central processing unit (CPU) 826 connected to main memory 828. The CPU 826 can include any suitable processor, such as an Intel® Pentium processor, Intel® Core 2 Duo processor, AMD Opteron™ processor, or UltraSPARC processor. The main memory 828 includes a wagering game unit 832. In some embodiments, the wagering game unit 832 can present wagering games, such as video poker, video black jack, video slots, video lottery, reel slots, etc., in whole or part.
The CPU 826 is also connected to an input/output (“I/O”) bus 822, which can include any suitable bus technologies, such as an AGTL+ frontside bus and a PCI backside bus. The I/O bus 822 is connected to a payout mechanism 808, primary display 810, secondary display 812, value input device 814, player input device 816, information reader 818, and storage unit 830. The player input device 816 can include the value input device 814 to the extent the player input device 816 is used to place wagers. The I/O bus 822 is also connected to an external system interface 824, which is connected to external systems 804 (e.g., wagering game networks). The external system interface 824 can include logic for exchanging information over wired and wireless networks (e.g., 802.11g transceiver, Bluetooth transceiver, Ethernet transceiver, etc.).
The I/O bus 822 is also connected to a location unit 838. The location unit 838 can create player information that indicates the wagering game machine's location/movements in a casino. In some embodiments, the location unit 838 includes a global positioning system (GPS) receiver that can determine the wagering game machine's location using GPS satellites. In other embodiments, the location unit 838 can include a radio frequency identification (RFID) tag that can determine the wagering game machine's location using RFID readers positioned throughout a casino. Some embodiments can use a GPS receiver and RFID tags in combination, while other embodiments can use other suitable methods for determining the wagering game machine's location. Although not shown in FIG. 8, in some embodiments, the location unit 838 is not connected to the I/O bus 822.
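The location unit's choice between GPS and RFID sources could be modeled as a simple fallback, as sketched below. The function, priorities, and return shape are assumptions for illustration; the patent leaves the combination method open.

```python
def machine_location(gps_fix=None, rfid_zone=None):
    """Prefer a GPS fix when one is available; otherwise fall back to the
    zone of the RFID reader that last detected the machine's tag; report
    unknown if neither source has data."""
    if gps_fix is not None:
        return {"source": "gps", "location": gps_fix}
    if rfid_zone is not None:
        return {"source": "rfid", "location": rfid_zone}
    return {"source": "unknown", "location": None}

loc = machine_location(rfid_zone="slot-floor-A")  # indoors, no GPS fix
```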
In some embodiments, the wagering game machine 806 can include additional peripheral devices and/or more than one of each component shown in FIG. 8. For example, in some embodiments, the wagering game machine 806 can include multiple external system interfaces 824 and/or multiple CPUs 826. In some embodiments, any of the components can be integrated or subdivided.
In some embodiments, the wagering game machine 806 includes a wagering game module 837. The wagering game module 837 can process communications, commands, or other information, where the processing can store and use casino content.
Furthermore, any component of the wagering game machine 806 can include hardware, firmware, and/or machine-readable content including instructions for performing the operations described herein.
Mobile Wagering Game Machine
FIG. 9 is a conceptual diagram that illustrates an example of a mobile wagering game machine 900, according to some embodiments. In FIG. 9, the mobile wagering game machine 900 includes a housing 902 for containing internal hardware and/or software such as that described above vis-à-vis FIG. 8. In some embodiments, the housing has a form factor similar to a tablet PC, while other embodiments have different form factors. For example, the mobile wagering game machine 900 can exhibit smaller form factors, similar to those associated with personal digital assistants. In some embodiments, a handle 904 is attached to the housing 902. Additionally, the housing can store a foldout stand 910, which can hold the mobile wagering game machine 900 upright or semi-upright on a table or other flat surface.
The mobile wagering game machine 900 includes several input/output devices. In particular, the mobile wagering game machine 900 includes buttons 920, audio jack 908, speaker 914, display 916, biometric device 906, wireless transmission devices 912 and 924, microphone 918, and card reader 922. Additionally, the mobile wagering game machine can include tilt, orientation, ambient light, or other environmental sensors.
In some embodiments, the mobile wagering game machine 900 uses the biometric device 906 for authenticating players, whereas it uses the display 916 and speaker 914 for presenting wagering game results and other information (e.g., credits, progressive jackpots, etc.). The mobile wagering game machine 900 can also present audio through the audio jack 908 or through a wireless link such as Bluetooth.
In some embodiments, the wireless communication unit 912 can include infrared wireless communications technology for receiving wagering game content while docked in a wager gaming station. The wireless communication unit 924 can include an 802.11g transceiver for connecting to and exchanging information with wireless access points. The wireless communication unit 924 can include a Bluetooth transceiver for exchanging information with other Bluetooth enabled devices.
In some embodiments, the mobile wagering game machine 900 is constructed from damage resistant materials, such as polymer plastics. Portions of the mobile wagering game machine 900 can be constructed from non-porous plastics which exhibit antimicrobial qualities. Also, the mobile wagering game machine 900 can be liquid resistant for easy cleaning and sanitization.
In some embodiments, the mobile wagering game machine 900 can also include an input/output (“I/O”) port 930 for connecting directly to another device, such as to a peripheral device, a secondary mobile machine, etc. Furthermore, any component of the mobile wagering game machine 900 can include hardware, firmware, and/or machine-readable content including instructions for performing the operations described herein.
The described embodiments may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic device(s)) to perform a process according to embodiment(s), whether presently described or not, because every conceivable variation is not enumerated herein. A machine-readable medium includes any mechanism for storing or transmitting information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The machine-readable medium may include, but is not limited to, magnetic storage medium (e.g., floppy diskette); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read only memory (ROM); random access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or other types of medium suitable for storing electronic instructions. In addition, embodiments may be embodied in an electrical, optical, acoustical or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.), or wireline, wireless, or other communications medium.
GENERAL
This detailed description refers to specific examples in the drawings and illustrations. These examples are described in sufficient detail to enable those skilled in the art to practice the inventive subject matter. These examples also serve to illustrate how the inventive subject matter can be applied to various purposes or embodiments. Other embodiments are included within the inventive subject matter, as logical, mechanical, electrical, and other changes can be made to the example embodiments described herein. Features of various embodiments described herein, however essential to the example embodiments in which they are incorporated, do not limit the inventive subject matter as a whole, and any reference to the invention, its elements, operation, and application is not limiting as a whole, but serves only to define these example embodiments. This detailed description does not, therefore, limit embodiments, which are defined only by the appended claims. Each of the embodiments described herein is contemplated as falling within the inventive subject matter, which is set forth in the following claims.
1. A method comprising: presenting wagering game content via one or more output devices of a wagering game machine during a wagering game session, wherein the wagering game session is associated with a player account; detecting a user input via one or more input devices of the wagering game machine, said user input indicating a selection of the wagering game content; and storing an aspect of the wagering game content in a data store associated with the player account, wherein the aspect of the wagering game content is accessible from the data store via the player account after the wagering game session.
2. The method of claim 1, wherein the wagering game content is presented via a touch-screen display of the wagering game machine, and wherein the detecting the user input via the one or more input devices comprises detecting a finger motion on the touch-screen display, said finger motion indicating a selection of the wagering game content.
3. The method of claim 1, wherein the aspect of the wagering game content is one or more of a copy of the wagering game content, a modified version of the wagering game content, metadata of the wagering game content, a property of the wagering game content, a setting from the wagering game content, a graphic from the wagering game content, a video from the wagering game content, audio from the wagering game content, pictures from the wagering game content, and text from the wagering game content.
4. The method of claim 1 further comprising: presenting a history of the wagering game content in response to the user input; detecting a selection from the history of the wagering game content, via additional player input via the one or more input devices, said selection indicating a version of the wagering game content requested; and storing the version of the wagering game content in the data store in response to the selection from the history.
5. The method of claim 1 further comprising: detecting that the user input selects multiple items associated with the wagering game content; detecting additional player input that specifies one of the multiple items; and saving an aspect of the one of the multiple items in the data store.
6. The method of claim 1 further comprising: determining one or more of preferences for the player account and a history of selections of additional wagering game content by the player account; performing a comparison of properties for advertisement content to properties of the one or more of the preferences for the player account and the history of the selections of additional wagering game content by the player account; and based on the comparison, selecting the advertisement content for presentation to the player account.
7. The method of claim 6 further comprising selecting the advertisement content from one or more servers associated with one or more third-party advertisers.
8. The method of claim 6, further comprising storing a copy of the advertisement content in the data store.
9. One or more machine-readable storage devices having instructions stored thereon, which when executed by a set of one or more processors causes the set of one or more processors to perform operations comprising: presenting wagering game content on a display associated with a wagering game machine during a wagering game session, wherein the wagering game content is configured to be selectable via the display, wherein the wagering game session is associated with a player account, detecting a user input via the display, said user input indicating a selection of the wagering game content presented via the display, and storing an aspect of the wagering game content in a data store associated with the player account, wherein the aspect of the wagering game content is accessible from the data store via the player account after the wagering game session.
10. The one or more machine-readable storage devices of claim 9, wherein the display is a touch-screen display and wherein the detecting the user input via the display comprises detecting a finger motion on the touch-screen display, said finger motion indicating a selection of the wagering game content presented on the touch-screen display.
11. The one or more machine-readable storage devices of claim 9, wherein the aspect of the wagering game content is one or more of a copy of the wagering game content, a modified version of the wagering game content, metadata of the wagering game content, a property of the wagering game content, a setting from the wagering game content, a graphic from the wagering game content, a video from the wagering game content, audio from the wagering game content, pictures from the wagering game content, and text from the wagering game content.
12. The one or more machine-readable storage devices of claim 9, said operations further comprising: presenting a history of the wagering game content in response to the user input, detecting a selection from the history of the wagering game content, via additional player input, said selection indicating a version of the wagering game content requested, and storing the version of the wagering game content in the data store in response to the selection from the history.
13. The one or more machine-readable storage devices of claim 9, said operations further comprising: determining one or more of preferences for the player account and a history of selections of additional wagering game content by the player account, performing a comparison of properties for advertisement content to properties of the one or more of the preferences for the player account and the history of the selections of additional wagering game content by the player account, and based on the comparison, selecting the advertisement content for presentation to the player account.
14. A system comprising: at least one processor; and a casino content module configured to, via the processor, present wagering game content on a touch-screen display associated with a wagering game machine during a wagering game session, wherein the wagering game content is configured to be selectable via user touch on the touch-screen display, wherein the wagering game session is associated with a player account, detect a finger motion on the touch-screen display, said finger motion indicating a selection of the wagering game content presented on the touch-screen display, and store information associated with the wagering game content in a data store associated with the player account, wherein the information associated with the wagering game content is accessible from the data store via the player account after the wagering game session.
15. The system of claim 14, wherein the information associated with the wagering game content is one or more of a copy of the wagering game content, a modified version of the wagering game content, metadata of the wagering game content, a property of the wagering game content, a setting from the wagering game content, a graphic from the wagering game content, a video from the wagering game content, audio from the wagering game content, pictures from the wagering game content, and text from the wagering game content.
16. The system of claim 14, wherein the casino content module is further configured to present a history of the wagering game content in response to the finger motion, detect a selection from the history of the wagering game content, via additional player input, said selection indicating a version of the wagering game content requested, and store the version of the wagering game content in the data store in response to the selection from the history.
17. The system of claim 14, wherein the casino content module is further configured to detect that the finger motion selects multiple items associated with the wagering game content, detect additional player input that specifies one of the multiple items, and save a copy of the one of the multiple items in the data store.
18. The system of claim 17, wherein the casino content module is further configured to, determine one or more of preferences for the player account and a history of selections of wagering game content by the player account, perform a comparison of properties for advertisement content to properties of the one or more of the preferences for the player account and the history of the selections of wagering game content by the player account, and select, based on the comparison, the advertisement content for presentation to the player account.
19. The system of claim 18, wherein the casino content module is further configured to store a copy of the advertisement content in the data store.
20. An apparatus comprising: means for presenting wagering game content via one or more output devices associated with a wagering game machine during a wagering game session, wherein the wagering game session is associated with a player account; means for detecting a user input via one or more input devices of the wagering game machine, said user input indicating a selection of the wagering game content presented via the one or more output devices; and means for storing a copy of the wagering game content in a data store associated with the player account, wherein the copy of the wagering game content is accessible from the data store via the player account after the wagering game session.
21. The apparatus of claim 20 further comprising: means for presenting an option to modify one or more aspects of the copy of the wagering game content.
22. The apparatus of claim 20 further comprising: means for presenting an option to one or more of crop, resize, reshape, personalize, and record over a portion of the copy of the wagering game content.
23. The apparatus of claim 20, wherein the copy of the wagering game content includes one or more of a graphic, a video, a sound, a picture and text from the wagering game content.
24. The apparatus of claim 20 further comprising: means for presenting an option to specify one or more recipients to receive the copy of the wagering game content; and means for providing the copy of the wagering game content to the one or more recipients.
25. The apparatus of claim 20 further comprising: means for presenting a history of the wagering game content; means for detecting a selection of a version of the wagering game content presented in the history of the wagering game content; and means for storing, as the copy of the wagering game content, the version of the wagering game content that was selected from the history.
Device, method, and graphical user interface for controlling multiple devices in an accessibility mode
ABSTRACT
In accordance with some embodiments, a method is performed at a first device with one or more processors, non-transitory memory, and a display. The method includes displaying, on the display, a device control transfer affordance while operating the first device based on user input from an input device that is in communication with the first device. The method includes receiving a device control transfer user input from the input device selecting the device control transfer affordance that is displayed on the display of the first device. In response to receiving the device control transfer user input, the method includes configuring a second device to be operated based on user input from the input device and ceasing to operate the first device based on user input from the input device.
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of U.S. Provisional Patent App. No. 62/348,884, filed on Jun. 11, 2016, which is incorporated by reference in its entirety.
TECHNICAL FIELD
This relates generally to electronic devices with touch-sensitive surfaces, including but not limited to electronic devices with touch-sensitive surfaces that include a user interface for controlling multiple devices in an accessibility mode.
BACKGROUND
The use of touch-sensitive surfaces as input devices for computers and other electronic computing devices has increased significantly in recent years. Example touch-sensitive surfaces include touchpads and touch-screen displays. Such surfaces are widely used to manipulate user interface objects on a display.
Example manipulations include adjusting the position and/or size of one or more user interface objects or activating buttons or opening files/applications represented by user interface objects, as well as associating metadata with one or more user interface objects or otherwise manipulating user interfaces. Example user interface objects include digital images, video, text, icons, control elements such as buttons and other graphics. A user will, in some circumstances, need to perform such manipulations on user interface objects in a file management program (e.g., Finder from Apple Inc. of Cupertino, Calif.), an image management application (e.g., Aperture, iPhoto, Photos from Apple Inc. of Cupertino, Calif.), a digital content (e.g., videos and music) management application (e.g., iTunes from Apple Inc. of Cupertino, Calif.), a drawing application, a presentation application (e.g., Keynote from Apple Inc. of Cupertino, Calif.), a word processing application (e.g., Pages from Apple Inc. of Cupertino, Calif.), a website creation application (e.g., iWeb from Apple Inc. of Cupertino, Calif.), a disk authoring application (e.g., iDVD from Apple Inc. of Cupertino, Calif.), or a spreadsheet application (e.g., Numbers from Apple Inc. of Cupertino, Calif.).
But people with limited motor skills, such as those with certain finger or hand impairments, may find performing certain gestures difficult and may employ alternative input devices to control an electronic device. However, people with limited motor skills may have multiple electronic devices and only a single easily accessible alternative input device and may have difficulty reconfiguring the input device to control different electronic devices, e.g., by disconnecting the input device from a first device and connecting the input device to a second device.
SUMMARY
Accordingly, there is a need for electronic devices with faster, more efficient methods and interfaces for controlling multiple devices in an accessibility mode. Such methods and interfaces optionally complement or replace conventional methods for controlling multiple devices in an accessibility mode. Such methods and interfaces reduce the cognitive burden on a user and produce a more efficient human-machine interface. For battery-operated devices, such methods and interfaces conserve power and increase the time between battery charges.
The above deficiencies and other problems associated with user interfaces for electronic devices with touch-sensitive surfaces are reduced or eliminated by the disclosed devices. In some embodiments, the device is a desktop computer. In some embodiments, the device is portable (e.g., a notebook computer, tablet computer, or handheld device). In some embodiments, the device has a touchpad. In some embodiments, the device has a touch-sensitive display (also known as a “touch screen” or “touch-screen display”). In some embodiments, the device has a graphical user interface (GUI), one or more processors, memory and one or more modules, programs or sets of instructions stored in the memory for performing multiple functions. In some embodiments, the user interacts with the GUI primarily through stylus and/or finger contacts and gestures on the touch-sensitive surface. In some embodiments, the functions optionally include image editing, drawing, presenting, word processing, website creating, disk authoring, spreadsheet making, game playing, telephoning, video conferencing, e-mailing, instant messaging, workout support, digital photographing, digital videoing, web browsing, digital music playing, and/or digital video playing. Executable instructions for performing these functions are, optionally, included in a non-transitory computer readable storage medium or other computer program product configured for execution by one or more processors.
In accordance with some embodiments, a method is performed at a first device with one or more processors, non-transitory memory, and a display. The method includes: displaying, on the display, a device control transfer affordance while operating the first device based on user input from an input device that is in communication with the first device, receiving a device control transfer user input from the input device selecting the device control transfer affordance that is displayed on the display of the first device, and, in response to receiving the device control transfer user input, configuring a second device to be operated based on user input from the input device and ceasing to operate the first device based on user input from the input device.
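The control-transfer step described above can be sketched as a simple rerouting of an input device's target. This is a minimal illustrative sketch, not the specification's implementation; the names (`InputDevice`, `Device`, `transfer_control`) are hypothetical.

```python
# Hypothetical sketch of the device-control-transfer step: in response to
# selection of the transfer affordance, the second device is configured to
# be operated from the input device, and the first device ceases to be.

class InputDevice:
    """An assistive input device (e.g., a switch) routed to one device."""
    def __init__(self):
        self.target = None  # the device currently receiving this input

class Device:
    def __init__(self, name):
        self.name = name

def transfer_control(input_device, first, second):
    """Reroute the input device from the first device to the second."""
    assert input_device.target is first  # affordance shown while operating first
    input_device.target = second         # second device now receives input;
    return input_device.target           # first device ceases to receive it

switch = InputDevice()
phone, tv = Device("phone"), Device("tv")
switch.target = phone
transfer_control(switch, phone, tv)
print(switch.target.name)  # tv
```

The same call with the roles reversed models transferring control back to the first device, as the description contemplates.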
In accordance with some embodiments, an electronic device includes a display unit configured to display a user interface, one or more input units configured to receive user inputs, and a processing unit coupled with the display unit and the one or more input units. The processing unit is configured to display, on the display unit, a device control transfer affordance while operating the first device based on user input from an input device that is in communication with the first device, receive a device control transfer user input from the input device selecting the device control transfer affordance that is displayed on the display of the first device, and, in response to receiving the device control transfer user input, configure a second device to be operated based on user input from the input device and cease to operate the first device based on user input from the input device.
In accordance with some embodiments, an electronic device includes a display, one or more processors, non-transitory memory, and one or more programs; the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors and the one or more programs include instructions for performing or causing performance of the operations of any of the methods described herein. In accordance with some embodiments, a non-transitory computer readable storage medium has stored therein instructions which when executed by one or more processors of an electronic device with a display and an input device, cause the device to perform or cause performance of the operations of any of the methods described herein. In accordance with some embodiments, a graphical user interface on an electronic device with a display, a memory, and one or more processors to execute one or more programs stored in the non-transitory memory includes one or more of the elements displayed in any of the methods described above, which are updated in response to inputs, as described in any of the methods described herein. In accordance with some embodiments, an electronic device includes: a display, and means for performing or causing performance of the operations of any of the methods described herein. In accordance with some embodiments, an information processing apparatus, for use in an electronic device with a display, includes means for performing or causing performance of the operations of any of the methods described herein.
Thus, electronic devices with displays, touch-sensitive surfaces and optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface are provided with faster, more efficient methods and interfaces for controlling multiple devices in an accessibility mode, thereby increasing the effectiveness, efficiency, and user satisfaction with such devices. Such methods and interfaces may complement or replace conventional methods for controlling multiple devices in an accessibility mode.
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of the various described embodiments, reference should be made to the Description of Embodiments below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
FIG. 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments.
FIG. 1B is a block diagram illustrating example components for event handling in accordance with some embodiments.
FIG. 2 illustrates a portable multifunction device having a touch screen in accordance with some embodiments.
FIG. 3 is a block diagram of an example multifunction device with a display and a touch-sensitive surface in accordance with some embodiments.
FIG. 4A illustrates an example user interface for a menu of applications on a portable multifunction device in accordance with some embodiments.
FIG. 4B illustrates an example user interface for a multifunction device with a touch-sensitive surface that is separate from the display in accordance with some embodiments.
FIG. 5 illustrates an example environment including multiple devices in accordance with some embodiments.
FIGS. 6A-6BD illustrate example user interfaces for controlling multiple devices in an accessibility mode in accordance with some embodiments.
FIGS. 7A-7G are flow diagrams illustrating a method of controlling multiple devices in an accessibility mode in accordance with some embodiments.
FIG. 8 is a functional block diagram of an electronic device in accordance with some embodiments.
DESCRIPTION OF EMBODIMENTS
The use of electronic devices with touch-based user interfaces (e.g., devices such as the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, Calif.) has increased significantly in recent years. These devices use touch-sensitive surfaces, such as a touch screen display or a touch pad, as the main input for manipulating user interface objects on a display and/or controlling the device. People with limited motor skills, such as those with certain finger or hand impairments, may find applying force or pressure to the touch-sensitive surface difficult, if not impossible, and may employ alternative input devices to control the device. However, people with limited motor skills may have multiple electronic devices and only a single easily accessible alternative input device and may have difficulty reconfiguring the input device to control different electronic devices, e.g., by disconnecting the input device from a first device and connecting the input device to a second device.
Described below are methods and devices that enable users who cannot easily reconfigure an input device to work with a different device to nevertheless operate multiple devices with a single input device. In some embodiments, as described below, an electronic device displays, as part of a user interface, a selectable affordance for transferring control from a first device to a second device (and/or back to the first device or to third device).
Below, FIGS. 1A-1B, 2, 3, and 4A-4B provide a description of example devices. FIG. 5 illustrates an environment with multiple devices. FIGS. 6A-6BD illustrate example user interfaces for controlling multiple devices in an accessibility mode. FIGS. 7A-7G illustrate a flow diagram of a method of controlling multiple devices in an accessibility mode. The user interfaces in FIGS. 6A-6BD are used to illustrate the processes in FIGS. 7A-7G.
Example Devices
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the various described embodiments. However, it will be apparent to one of ordinary skill in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
It will also be understood that, although the terms first, second, etc. are, in some instances, used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the various described embodiments. The first contact and the second contact are both contacts, but they are not the same contact, unless the context clearly indicates otherwise.
The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context.
Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions. Example embodiments of portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, Calif. Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch-screen displays and/or touchpads), are, optionally, used. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch-screen display and/or a touchpad).
In the discussion that follows, an electronic device that includes a display and a touch-sensitive surface is described. It should be understood, however, that the electronic device optionally includes one or more other physical user-interface devices, such as a physical keyboard, a mouse and/or a joystick.
The device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
The various applications that are executed on the device optionally use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface as well as corresponding information displayed on the device are, optionally, adjusted and/or varied from one application to the next and/or within a respective application. In this way, a common physical architecture (such as the touch-sensitive surface) of the device optionally supports the variety of applications with user interfaces that are intuitive and transparent to the user.
Attention is now directed toward embodiments of portable devices with touch-sensitive displays. FIG. 1A is a block diagram illustrating portable multifunction device 100 with touch-sensitive display system 112 in accordance with some embodiments. Touch-sensitive display system 112 is sometimes called a “touch screen” for convenience, and is sometimes simply called a touch-sensitive display. Device 100 includes memory 102 (which optionally includes one or more computer readable storage mediums), memory controller 122, one or more processing units (CPUs) 120, peripherals interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input or control devices 116, and external port 124. Device 100 optionally includes one or more optical sensors 164. Device 100 optionally includes one or more intensity sensors 165 for detecting intensity of contacts on device 100 (e.g., a touch-sensitive surface such as touch-sensitive display system 112 of device 100). Device 100 optionally includes one or more tactile output generators 163 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or touchpad 355 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.
As used in the specification and claims, the term “tactile output” refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component (e.g., a touch-sensitive surface) of a device relative to another component (e.g., housing) of the device, or displacement of the component relative to a center of mass of the device that will be detected by a user with the user's sense of touch. For example, in situations where the device or the component of the device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other part of a user's hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in physical characteristics of the device or the component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is, optionally, interpreted by the user as a “down click” or “up click” of a physical actuator button. In some cases, a user will feel a tactile sensation such as a “down click” or “up click” even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movements. As another example, movement of the touch-sensitive surface is, optionally, interpreted or sensed by the user as “roughness” of the touch-sensitive surface, even when there is no change in smoothness of the touch-sensitive surface. While such interpretations of touch by a user will be subject to the individualized sensory perceptions of the user, there are many sensory perceptions of touch that are common to a large majority of users. Thus, when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., an “up click,” a “down click,” “roughness”), unless otherwise stated, the generated tactile output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user.
It should be appreciated that device 100 is only one example of a portable multifunction device, and that device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in FIG. 1A are implemented in hardware, software, firmware, or a combination thereof, including one or more signal processing and/or application specific integrated circuits.
Memory 102 optionally includes high-speed random access memory and optionally also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to memory 102 by other components of device 100, such as CPU(s) 120 and the peripherals interface 118, is, optionally, controlled by memory controller 122.
Peripherals interface 118 can be used to couple input and output peripherals of the device to CPU(s) 120 and memory 102. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in memory 102 to perform various functions for device 100 and to process data.
In some embodiments, peripherals interface 118, CPU(s) 120, and memory controller 122 are, optionally, implemented on a single chip, such as chip 104. In some other embodiments, they are, optionally, implemented on separate chips. RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals. RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates with networks, such as the Internet, also referred to as the World Wide Web (WWW), an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other devices by wireless communication. The wireless communication optionally uses any of a plurality of communications standards, protocols and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11ac, IEEE 802.11ax, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. Audio circuitry 110 receives audio data from peripherals interface 118, converts the audio data to an electrical signal, and transmits the electrical signal to speaker 111. Speaker 111 converts the electrical signal to human-audible sound waves. Audio circuitry 110 also receives electrical signals converted by microphone 113 from sound waves. Audio circuitry 110 converts the electrical signal to audio data and transmits the audio data to peripherals interface 118 for processing. Audio data is, optionally, retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripherals interface 118. In some embodiments, audio circuitry 110 also includes a headset jack (e.g., 212, FIG. 2). The headset jack provides an interface between audio circuitry 110 and removable audio input/output peripherals, such as output-only headphones or a headset with both output (e.g., a headphone for one or both ears) and input (e.g., a microphone).
I/O subsystem 106 couples input/output peripherals on device 100, such as touch-sensitive display system 112 and other input or control devices 116, with peripherals interface 118. I/O subsystem 106 optionally includes display controller 156, optical sensor controller 158, intensity sensor controller 159, haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive/send electrical signals from/to other input or control devices 116. The other input or control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. In some alternate embodiments, input controller(s) 160 are, optionally, coupled with any (or none) of the following: a keyboard, infrared port, USB port, stylus, and/or a pointer device such as a mouse. The one or more buttons (e.g., 208, FIG. 2) optionally include an up/down button for volume control of speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206, FIG. 2).
Touch-sensitive display system 112 provides an input interface and an output interface between the device and a user. Display controller 156 receives and/or sends electrical signals from/to touch-sensitive display system 112. Touch-sensitive display system 112 displays visual output to the user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively termed “graphics”). In some embodiments, some or all of the visual output corresponds to user-interface objects.
Touch-sensitive display system 112 has a touch-sensitive surface, sensor or set of sensors that accepts input from the user based on haptic/tactile contact. Touch-sensitive display system 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch-sensitive display system 112 and converts the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages or images) that are displayed on touch-sensitive display system 112. In an example embodiment, a point of contact between touch-sensitive display system 112 and the user corresponds to a finger of the user or a stylus.
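The conversion of a detected contact into interaction with a displayed user-interface object can be illustrated with a basic hit test: find the frontmost object whose bounds contain the point of contact. This sketch is purely illustrative; the element names and coordinate layout are assumptions, not taken from the specification.

```python
# Minimal hit-test sketch: map a point of contact on the touch-sensitive
# display to the user-interface object (e.g., a soft key or icon) shown
# at that location.

def hit_test(elements, x, y):
    """Return the frontmost element whose bounds contain (x, y), else None.

    elements: list of (name, (left, top, width, height)) in back-to-front
    drawing order, so the reversed scan finds the topmost match first.
    """
    for name, (left, top, width, height) in reversed(elements):
        if left <= x < left + width and top <= y < top + height:
            return name
    return None

# hypothetical screen: a full-screen background with one soft key on top
screen = [("background", (0, 0, 320, 480)),
          ("soft_key",   (10, 400, 60, 40))]

print(hit_test(screen, 30, 420))   # soft_key
print(hit_test(screen, 200, 100))  # background
```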
Touch-sensitive display system 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch-sensitive display system 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch-sensitive display system 112. In an example embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone®, iPod Touch®, and iPad® from Apple Inc. of Cupertino, Calif.
Touch-sensitive display system 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch screen video resolution is in excess of 400 dpi (e.g., 500 dpi, 800 dpi, or greater). The user optionally makes contact with touch-sensitive display system 112 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user.
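One plausible way to translate a rough, area-covering finger contact into a single precise pointer position, as described above, is to take the intensity-weighted centroid of the sensor cells the contact activates. This is a sketch of the general idea under that assumption, not the device's actual algorithm.

```python
# Illustrative sketch: reduce a finger's contact patch (several activated
# sensor cells) to one precise cursor coordinate via a weighted centroid.

def contact_centroid(cells):
    """cells: list of (x, y, intensity); returns the weighted (x, y) centroid."""
    total = sum(w for _, _, w in cells)
    cx = sum(x * w for x, _, w in cells) / total
    cy = sum(y * w for _, y, w in cells) / total
    return cx, cy

# a fingertip covering a few cells, pressing hardest in the middle
touch = [(10, 10, 1.0), (11, 10, 3.0), (12, 10, 1.0)]
print(contact_centroid(touch))  # (11.0, 10.0)
```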
In some embodiments, in addition to the touch screen, device 100 optionally includes a touchpad (not shown) for activating or deactivating particular functions. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. The touchpad is, optionally, a touch-sensitive surface that is separate from touch-sensitive display system 112 or an extension of the touch-sensitive surface formed by the touch screen.
Device 100 also includes power system 162 for powering the various components. Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)) and any other components associated with the generation, management and distribution of power in portable devices.
Device 100 optionally also includes one or more optical sensors 164. FIG. 1A shows an optical sensor coupled with optical sensor controller 158 in I/O subsystem 106. Optical sensor(s) 164 optionally include charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) photo transistors. Optical sensor(s) 164 receive light from the environment, projected through one or more lens, and converts the light to data representing an image. In conjunction with imaging module 143 (also called a camera module), optical sensor(s) 164 optionally capture still images and/or video. In some embodiments, an optical sensor is located on the back of device 100, opposite touch-sensitive display system 112 on the front of the device, so that the touch screen is enabled for use as a viewfinder for still and/or video image acquisition. In some embodiments, another optical sensor is located on the front of the device so that the user's image is obtained (e.g., for selfies, for videoconferencing while the user views the other video conference participants on the touch screen, etc.).
Device 100 optionally also includes one or more contact intensity sensors 165. FIG. 1A shows a contact intensity sensor coupled with intensity sensor controller 159 in I/O subsystem 106. Contact intensity sensor(s) 165 optionally include one or more piezoresistive strain gauges, capacitive force sensors, electric force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors used to measure the force (or pressure) of a contact on a touch-sensitive surface). Contact intensity sensor(s) 165 receive contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some embodiments, at least one contact intensity sensor is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the back of device 100, opposite touch-sensitive display system 112, which is located on the front of device 100.
Device 100 optionally also includes one or more proximity sensors 166. FIG. 1A shows proximity sensor 166 coupled with peripherals interface 118. Alternatively, proximity sensor 166 is coupled with input controller 160 in I/O subsystem 106. In some embodiments, the proximity sensor turns off and disables touch-sensitive display system 112 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).
Device 100 optionally also includes one or more tactile output generators 163. FIG. 1A shows a tactile output generator coupled with haptic feedback controller 161 in I/O subsystem 106. Tactile output generator(s) 163 optionally include one or more electroacoustic devices such as speakers or other audio components and/or electromechanical devices that convert energy into linear motion such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts electrical signals into tactile outputs on the device). Tactile output generator(s) 163 receive tactile feedback generation instructions from haptic feedback module 133 and generate tactile outputs on device 100 that are capable of being sensed by a user of device 100. In some embodiments, at least one tactile output generator is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112) and, optionally, generates a tactile output by moving the touch-sensitive surface vertically (e.g., in/out of a surface of device 100) or laterally (e.g., back and forth in the same plane as a surface of device 100). In some embodiments, at least one tactile output generator is located on the back of device 100, opposite touch-sensitive display system 112, which is located on the front of device 100.
Device 100 optionally also includes one or more accelerometers 167, gyroscopes 168, and/or magnetometers 169 (e.g., as part of an inertial measurement unit (IMU)) for obtaining information concerning the position (e.g., attitude) of the device. FIG. 1A shows sensors 167, 168, and 169 coupled with peripherals interface 118. Alternatively, sensors 167, 168, and 169 are, optionally, coupled with an input controller 160 in I/O subsystem 106. In some embodiments, information is displayed on the touch-screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers. Device 100 optionally includes a GPS (or GLONASS or other global navigation system) receiver (not shown) for obtaining information concerning the location of device 100.
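The portrait/landscape determination described above can be sketched as a comparison of gravity components along the device's axes. The following is a hypothetical illustration only; the axis conventions, function name, and tie-breaking rule are assumptions for clarity, not taken from the described device:

```python
def orientation_from_accelerometer(ax: float, ay: float) -> str:
    """Classify device orientation from the gravity components measured
    along the device's x axis (short edge) and y axis (long edge), in any
    consistent unit (e.g., m/s^2).

    If gravity pulls mostly along the long edge, the device is held
    upright (portrait); if mostly along the short edge, it is on its side
    (landscape). Ties default to portrait in this sketch.
    """
    return "landscape" if abs(ax) > abs(ay) else "portrait"
```

A real device would additionally filter sensor noise and hysteresis so the displayed view does not flicker near the diagonal.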
In some embodiments, the software components stored in memory 102 include operating system 126, communication module (or set of instructions) 128, contact/motion module (or set of instructions) 130, graphics module (or set of instructions) 132, haptic feedback module (or set of instructions) 133, text input module (or set of instructions) 134, Global Positioning System (GPS) module (or set of instructions) 135, and applications (or sets of instructions) 136. Furthermore, in some embodiments, memory 102 stores device/global internal state 157, as shown in FIGS. 1A and 3. Device/global internal state 157 includes one or more of: active application state, indicating which applications, if any, are currently active; display state, indicating what applications, views or other information occupy various regions of touch-sensitive display system 112; sensor state, including information obtained from the device's various sensors and other input or control devices 116; and location and/or positional information concerning the device's location and/or attitude.
Operating system 126 (e.g., iOS, Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
Communication module 128 facilitates communication with other devices over one or more external ports 124 and also includes various software components for handling data received by RF circuitry 108 and/or external port 124. External port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with the 30-pin connector used in some iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, Calif. In some embodiments, the external port is a Lightning connector that is the same as, or similar to and/or compatible with the Lightning connector used in some iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, Calif.
Contact/motion module 130 optionally detects contact with touch-sensitive display system 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). Contact/motion module 130 includes software components for performing various operations related to detection of contact (e.g., by a finger or by a stylus), such as determining if contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact or a substitute for the force or pressure of the contact), determining if there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining if the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact. These operations are, optionally, applied to single contacts (e.g., one finger contacts or stylus contacts) or to multiple simultaneous contacts (e.g., “multi touch”/multiple finger contacts and/or stylus contacts). In some embodiments, contact/motion module 130 and display controller 156 detect contact on a touchpad.
Contact/motion module 130 optionally detects a gesture input by a user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, a gesture is, optionally, detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (lift off) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (lift off) event. Similarly, tap, swipe, drag, and other gestures are optionally detected for a stylus by detecting a particular contact pattern for the stylus.
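The contact-pattern matching described above can be illustrated with a short sketch. This is a hypothetical simplification (the sub-event names and the rule that a tap has no intervening drags are assumptions for illustration), not the contact/motion module's actual implementation:

```python
def classify_gesture(sub_events):
    """Classify a finger gesture from an ordered list of sub-events.

    Sub-events are strings: "down" (finger-down), "drag"
    (finger-dragging), and "up" (finger-up / lift off). A tap is a
    finger-down followed immediately by a finger-up; a swipe is a
    finger-down, one or more finger-dragging events, then a finger-up.
    Any other pattern is unrecognized (None).
    """
    if len(sub_events) >= 2 and sub_events[0] == "down" and sub_events[-1] == "up":
        middle = sub_events[1:-1]
        if not middle:
            return "tap"
        if all(e == "drag" for e in middle):
            return "swipe"
    return None
```

A real detector would also consider positions, timing, and intensity (e.g., a tap requires the up event at substantially the same position as the down event), which this sketch omits.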
Graphics module 132 includes various known software components for rendering and displaying graphics on touch-sensitive display system 112 or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast or other visual property) of graphics that are displayed. As used herein, the term “graphics” includes any object that can be displayed to a user, including without limitation text, web pages, icons (such as user-interface objects including soft keys), digital images, videos, animations and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is, optionally, assigned a corresponding code. Graphics module 132 receives, from applications etc., one or more codes specifying graphics to be displayed along with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller 156.
Haptic feedback module 133 includes various software components for generating instructions used by tactile output generator(s) 163 to produce tactile outputs at one or more locations on device 100 in response to user interactions with device 100.
Text input module 134, which is, optionally, a component of graphics module 132, provides soft keyboards for entering text in various applications (e.g., contacts 137, e-mail 140, IM 141, browser 147, and any other application that needs text input).
GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to telephone 138 for use in location-based dialing, to camera 143 as picture/video metadata, and to applications that provide location-based services such as weather widgets, local yellow page widgets, and map/navigation widgets).
Applications 136 optionally include the following modules (or sets of instructions), or a subset or superset thereof:
- contacts module 137 (sometimes called an address book or contact list);
- telephone module 138;
- video conferencing module 139;
- e-mail client module 140;
- instant messaging (IM) module 141;
- workout support module 142;
- camera module 143 for still and/or video images;
- image management module 144;
- browser module 147;
- calendar module 148;
- widget modules 149, which optionally include one or more of: weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, dictionary widget 149-5, and other widgets obtained by the user, as well as user-created widgets 149-6;
- widget creator module 150 for making user-created widgets 149-6;
- search module 151;
- video and music player module 152, which is, optionally, made up of a video player module and a music player module;
- notes module 153;
- map module 154; and/or
- online video module 155.
Examples of other applications 136 that are, optionally, stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, contacts module 137 includes executable instructions to manage an address book or contact list (e.g., stored in application internal state 192 of contacts module 137 in memory 102 or memory 370), including: adding name(s) to the address book; deleting name(s) from the address book; associating telephone number(s), e-mail address(es), physical address(es) or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers and/or e-mail addresses to initiate and/or facilitate communications by telephone 138, video conference 139, e-mail 140, or IM 141; and so forth.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, telephone module 138 includes executable instructions to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in address book 137, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation and disconnect or hang up when the conversation is completed. As noted above, the wireless communication optionally uses any of a plurality of communications standards, protocols and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, optical sensor(s) 164, optical sensor controller 158, contact module 130, graphics module 132, text input module 134, contact list 137, and telephone module 138, videoconferencing module 139 includes executable instructions to initiate, conduct, and terminate a video conference between a user and one or more other participants in accordance with user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, e-mail client module 140 includes executable instructions to create, send, receive, and manage e-mail in response to user instructions. In conjunction with image management module 144, e-mail client module 140 makes it very easy to create and send e-mails with still or video images taken with camera module 143.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, the instant messaging module 141 includes executable instructions to enter a sequence of characters corresponding to an instant message, to modify previously entered characters, to transmit a respective instant message (for example, using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages or using XMPP, SIMPLE, Apple Push Notification Service (APNs) or IMPS for Internet-based instant messages), to receive instant messages and to view received instant messages. In some embodiments, transmitted and/or received instant messages optionally include graphics, photos, audio files, video files and/or other attachments as are supported in an MMS and/or an Enhanced Messaging Service (EMS). As used herein, "instant messaging" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and Internet-based messages (e.g., messages sent using XMPP, SIMPLE, APNs, or IMPS).
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and video and music player module 152, workout support module 142 includes executable instructions to create workouts (e.g., with time, distance, and/or calorie burning goals); communicate with workout sensors (in sports devices and smart watches); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store and transmit workout data.
In conjunction with touch-sensitive display system 112, display controller 156, optical sensor(s) 164, optical sensor controller 158, contact module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions to capture still images or video (including a video stream) and store them into memory 102, modify characteristics of a still image or video, and/or delete a still image or video from memory 102.
In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, label, delete, present (e.g., in a digital slide show or album), and store still and/or video images.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions to browse the Internet in accordance with user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, e-mail client module 140, and browser module 147, calendar module 148 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to do lists, etc.) in accordance with user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, widget modules 149 are mini-applications that are, optionally, downloaded and used by a user (e.g., weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, and dictionary widget 149-5) or created by the user (e.g., user-created widget 149-6). In some embodiments, a widget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a widget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! Widgets).
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, the widget creator module 150 includes executable instructions to create widgets (e.g., turning a user-specified portion of a web page into a widget).
In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions to search for text, music, sound, image, video, and/or other files in memory 102 that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions.
In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow the user to download and play back recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, and executable instructions to display, present or otherwise play back videos (e.g., on touch-sensitive display system 112, or on an external display connected wirelessly or via external port 124). In some embodiments, device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.).
In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, notes module 153 includes executable instructions to create and manage notes, to do lists, and the like in accordance with user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 includes executable instructions to receive, display, modify, and store maps and data associated with maps (e.g., driving directions; data on stores and other points of interest at or near a particular location; and other location-based data) in accordance with user instructions.
In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, text input module 134, e-mail client module 140, and browser module 147, online video module 155 includes executable instructions that allow the user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on the touch screen 112, or on an external display connected wirelessly or via external port 124), send an e-mail with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, instant messaging module 141, rather than e-mail client module 140, is used to send a link to a particular online video.
Each of the above identified modules and applications corresponds to a set of executable instructions for performing one or more functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are, optionally, combined or otherwise re-arranged in various embodiments. In some embodiments, memory 102 optionally stores a subset of the modules and data structures identified above. Furthermore, memory 102 optionally stores additional modules and data structures not described above.
In some embodiments, device 100 is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or a touchpad as the primary input control device for operation of device 100, the number of physical input control devices (such as push buttons, dials, and the like) on device 100 is, optionally, reduced.
The predefined set of functions that are performed exclusively through a touch screen and/or a touchpad optionally include navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates device 100 to a main, home, or root menu from any user interface that is displayed on device 100. In such embodiments, a “menu button” is implemented using a touchpad. In some other embodiments, the menu button is a physical push button or other physical input control device instead of a touchpad.
FIG. 1B is a block diagram illustrating example components for event handling in accordance with some embodiments. In some embodiments, memory 102 (in FIG. 1A) or 370 (FIG. 3) includes event sorter 170 (e.g., in operating system 126) and a respective application 136-1 (e.g., any of the aforementioned applications 136, 137-155, 380-390).
Event sorter 170 receives event information and determines the application 136-1 and application view 191 of application 136-1 to which to deliver the event information. Event sorter 170 includes event monitor 171 and event dispatcher module 174. In some embodiments, application 136-1 includes application internal state 192, which indicates the current application view(s) displayed on touch-sensitive display system 112 when the application is active or executing. In some embodiments, device/global internal state 157 is used by event sorter 170 to determine which application(s) is (are) currently active, and application internal state 192 is used by event sorter 170 to determine application views 191 to which to deliver event information.
In some embodiments, application internal state 192 includes additional information, such as one or more of: resume information to be used when application 136-1 resumes execution, user interface state information that indicates information being displayed or that is ready for display by application 136-1, a state queue for enabling the user to go back to a prior state or view of application 136-1, and a redo/undo queue of previous actions taken by the user.
Event monitor 171 receives event information from peripherals interface 118. Event information includes information about a sub-event (e.g., a user touch on touch-sensitive display system 112, as part of a multi-touch gesture). Peripherals interface 118 transmits information it receives from I/O subsystem 106 or a sensor, such as proximity sensor 166, accelerometer(s) 167, gyroscope(s) 168, magnetometer(s) 169, and/or microphone 113 (through audio circuitry 110). Information that peripherals interface 118 receives from I/O subsystem 106 includes information from touch-sensitive display system 112 or a touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to the peripherals interface 118 at predetermined intervals. In response, peripherals interface 118 transmits event information. In other embodiments, peripherals interface 118 transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration).
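The significant-event criterion above (input above a noise threshold for more than a predetermined duration) can be sketched as a filter over a stream of intensity samples. This is a hypothetical illustration; the sample representation and function name are assumptions, not the described device's interface:

```python
def significant_events(samples, noise_threshold, min_duration):
    """Yield (start, end) index ranges where the sampled input stays
    above the noise threshold for at least min_duration consecutive
    samples -- mirroring the idea of transmitting event information
    only for significant inputs rather than on every poll."""
    start = None
    for i, value in enumerate(samples):
        if value > noise_threshold:
            if start is None:
                start = i          # run of above-threshold samples begins
        else:
            if start is not None and i - start >= min_duration:
                yield (start, i)   # run was long enough to be significant
            start = None
    if start is not None and len(samples) - start >= min_duration:
        yield (start, len(samples))
```

Brief runs below the duration threshold (e.g., electrical noise spikes) are discarded rather than forwarded.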
In some embodiments, event sorter 170 also includes a hit view determination module 172 and/or an active event recognizer determination module 173.
Hit view determination module 172 provides software procedures for determining where a sub-event has taken place within one or more views, when touch-sensitive display system 112 displays more than one view. Views are made up of controls and other elements that a user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected optionally correspond to programmatic levels within a programmatic or view hierarchy of the application. For example, the lowest level view in which a touch is detected is, optionally, called the hit view, and the set of events that are recognized as proper inputs are, optionally, determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.
Hit view determination module 172 receives information related to sub-events of a touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 172 identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (i.e., the first sub-event in the sequence of sub-events that form an event or potential event). Once the hit view is identified by the hit view determination module, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
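The hit view selection described above can be illustrated with a short recursive hit test. This is a hypothetical sketch (the View class, frame representation, and front-to-back traversal order are assumptions for illustration), not the hit view determination module's actual implementation:

```python
class View:
    """Minimal view node: a rectangular frame plus child views."""
    def __init__(self, frame, subviews=()):
        self.frame = frame              # (x, y, width, height)
        self.subviews = list(subviews)

    def contains(self, point):
        x, y = point
        fx, fy, fw, fh = self.frame
        return fx <= x < fx + fw and fy <= y < fy + fh

def hit_view(view, point):
    """Return the lowest (deepest) view in the hierarchy that contains
    the point -- the 'hit view' that should receive the sub-events of
    the touch. Returns None if the point lies outside the view."""
    if not view.contains(point):
        return None
    # Later subviews are assumed to be drawn on top, so test them first.
    for sub in reversed(view.subviews):
        found = hit_view(sub, point)
        if found is not None:
            return found
    return view
```

Once a hit view is identified this way, it would typically receive all subsequent sub-events of the same touch, as the paragraph above notes.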
Active event recognizer determination module 173 determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some embodiments, active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views.
Event dispatcher module 174 dispatches the event information to an event recognizer (e.g., event recognizer 180). In embodiments including active event recognizer determination module 173, event dispatcher module 174 delivers the event information to an event recognizer determined by active event recognizer determination module 173. In some embodiments, event dispatcher module 174 stores in an event queue the event information, which is retrieved by a respective event receiver module 182.
In some embodiments, operating system 126 includes event sorter 170. Alternatively, application 136-1 includes event sorter 170. In yet other embodiments, event sorter 170 is a stand-alone module, or a part of another module stored in memory 102, such as contact/motion module 130.
In some embodiments, application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for handling touch events that occur within a respective view of the application's user interface. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, a respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of event recognizers 180 are part of a separate module, such as a user interface kit (not shown) or a higher level object from which application 136-1 inherits methods and other properties. In some embodiments, a respective event handler 190 includes one or more of: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or calls data updater 176, object updater 177 or GUI updater 178 to update the application internal state 192. Alternatively, one or more of the application views 191 includes one or more respective event handlers 190. Also, in some embodiments, one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
A respective event recognizer 180 receives event information (e.g., event data 179) from event sorter 170, and identifies an event from the event information. Event recognizer 180 includes event receiver 182 and event comparator 184. In some embodiments, event recognizer 180 also includes at least a subset of: metadata 183, and event delivery instructions 188 (which optionally include sub-event delivery instructions).
Event receiver 182 receives event information from event sorter 170. The event information includes information about a sub-event, for example, a touch or a touch movement. Depending on the sub-event, the event information also includes additional information, such as location of the sub-event. When the sub-event concerns motion of a touch, the event information optionally also includes speed and direction of the sub-event. In some embodiments, events include rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device.
Event comparator 184 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (187-1), event 2 (187-2), and others. In some embodiments, sub-events in an event 187 include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. In one example, the definition for event 1 (187-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first lift-off (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second lift-off (touch end) for a predetermined phase. In another example, the definition for event 2 (187-2) is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display system 112, and lift-off of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
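The double-tap definition above (two touch-begin/touch-end pairs, each phase within a time limit) can be sketched as a small state machine. This is a hypothetical illustration; the sub-event encoding, 0.3-second interval, and the choice to discard a mismatched sub-event on reset are assumptions, not the event comparator's actual behavior:

```python
class DoubleTapRecognizer:
    """Tiny state machine for a double tap: touch-begin, touch-end,
    touch-begin, touch-end, with each phase arriving within
    max_interval seconds of the previous one."""

    def __init__(self, max_interval=0.3):
        self.max_interval = max_interval
        self.reset()

    def reset(self):
        self.state = 0            # number of phases completed (0..3)
        self.last_time = None

    def feed(self, sub_event, t):
        """Consume one sub-event ("begin" or "end") with timestamp t.
        Returns True exactly when the double tap completes."""
        expected = "begin" if self.state % 2 == 0 else "end"
        timed_out = (
            self.last_time is not None and t - self.last_time > self.max_interval
        )
        if sub_event != expected or timed_out:
            # Simplification: the offending sub-event is discarded rather
            # than re-fed as the start of a new candidate sequence.
            self.reset()
            return False
        self.state += 1
        self.last_time = t
        if self.state == 4:       # two complete begin/end pairs
            self.reset()
            return True
        return False
```

The dragging definition (187-2) would be expressed the same way, with touch-movement sub-events between the begin and end phases.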
In some embodiments, event definition 187 includes a definition of an event for a respective user-interface object. In some embodiments, event comparator 184 performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on touch-sensitive display system 112, when a touch is detected on touch-sensitive display system 112, event comparator 184 performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 190, the event comparator uses the result of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects an event handler associated with the sub-event and the object triggering the hit test.
In some embodiments, the definition for a respective event 187 also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer's event type.
When a respective event recognizer 180 determines that the series of sub-events does not match any of the events in event definitions 186, the respective event recognizer 180 enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of an ongoing touch-based gesture.
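The failure behavior can be sketched as a small state machine: once a recognizer leaves the "possible" state it disregards further sub-events. The state names and the `Recognizer` class are illustrative stand-ins, not the actual implementation.

```python
class Recognizer:
    """Minimal sketch of an event recognizer's failure behavior."""

    def __init__(self, definition):
        self.definition = definition
        self.index = 0
        self.state = "possible"

    def feed(self, sub_event):
        if self.state != "possible":
            return self.state  # failed/recognized: ignore further sub-events
        if sub_event == self.definition[self.index]:
            self.index += 1
            if self.index == len(self.definition):
                self.state = "recognized"
        else:
            self.state = "failed"
        return self.state

r = Recognizer(["touch_begin", "touch_move", "touch_end"])
print(r.feed("touch_begin"))  # possible
print(r.feed("touch_end"))    # failed (touch_move was expected)
print(r.feed("touch_move"))   # failed (sub-event disregarded)
```

Other recognizers attached to the same hit view would each hold their own state and continue processing the gesture independently.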
In some embodiments, a respective event recognizer 180 includes metadata 183 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy.
In some embodiments, a respective event recognizer 180 activates event handler 190 associated with an event when one or more particular sub-events of an event are recognized. In some embodiments, a respective event recognizer 180 delivers event information associated with the event to event handler 190. Activating an event handler 190 is distinct from sending (and deferred sending) sub-events to a respective hit view. In some embodiments, event recognizer 180 throws a flag associated with the recognized event, and event handler 190 associated with the flag catches the flag and performs a predefined process.
In some embodiments, event delivery instructions 188 include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process.
In some embodiments, data updater 176 creates and updates data used in application 136-1. For example, data updater 176 updates the telephone number used in contacts module 137, or stores a video file used in video player module 145. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, object updater 177 creates a new user-interface object or updates the position of a user-interface object. GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends it to graphics module 132 for display on a touch-sensitive display.
In some embodiments, event handler(s) 190 includes or has access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It shall be understood that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction devices 100 with input devices, not all of which are initiated on touch screens. For example, mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc., on touch-pads; pen stylus inputs; movement of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events which define an event to be recognized.
FIG. 2 illustrates a portable multifunction device 100 having a touch screen (e.g., touch-sensitive display system 112, FIG. 1A) in accordance with some embodiments. The touch screen optionally displays one or more graphics within user interface (UI) 200. In this embodiment, as well as others described below, a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, right to left, upward and/or downward) and/or a rolling of a finger (from right to left, left to right, upward and/or downward) that has made contact with device 100. In some implementations or circumstances, inadvertent contact with a graphic does not select the graphic. For example, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application when the gesture corresponding to selection is a tap.
Device 100 optionally also includes one or more physical buttons, such as “home” or menu button 204. As described previously, menu button 204 is, optionally, used to navigate to any application 136 in a set of applications that are, optionally, executed on device 100. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on the touch-screen display.
In some embodiments, device 100 includes the touch-screen display, menu button 204, push button 206 for powering the device on/off and locking the device, volume adjustment button(s) 208, Subscriber Identity Module (SIM) card slot 210, headset jack 212, and docking/charging external port 124. Push button 206 is, optionally, used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In some embodiments, device 100 also accepts verbal input for activation or deactivation of some functions through microphone 113. Device 100 also, optionally, includes one or more contact intensity sensors 165 for detecting intensity of contacts on touch-sensitive display system 112 and/or one or more tactile output generators 163 for generating tactile outputs for a user of device 100.
FIG. 3 is a block diagram of an example multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. Device 300 need not be portable. In some embodiments, device 300 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child's learning toy), a gaming system, or a control device (e.g., a home or industrial controller). Device 300 typically includes one or more processing units (CPU's) 310, one or more network or other communications interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. Communication buses 320 optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. Device 300 includes input/output (I/O) interface 330 comprising display 340, which is typically a touch-screen display. I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and touchpad 355, tactile output generator 357 for generating tactile outputs on device 300 (e.g., similar to tactile output generator(s) 163 described above with reference to FIG. 1A), sensors 359 (e.g., touch-sensitive, optical, contact intensity, proximity, acceleration, attitude, and/or magnetic sensors similar to sensors 112, 164, 165, 166, 167, 168, and 169 described above with reference to FIG. 1A). Memory 370 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 370 optionally includes one or more storage devices remotely located from CPU(s) 310. 
In some embodiments, memory 370 stores programs, modules, and data structures analogous to the programs, modules, and data structures stored in memory 102 of portable multifunction device 100 (FIG. 1A), or a subset thereof. Furthermore, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk authoring module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (FIG. 1A) optionally does not store these modules.
Each of the above identified elements in FIG. 3 is, optionally, stored in one or more of the previously mentioned memory devices. Each of the above identified modules corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are, optionally, combined or otherwise re-arranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures identified above. Furthermore, memory 370 optionally stores additional modules and data structures not described above.
Attention is now directed towards embodiments of user interfaces (“UI”) that are, optionally, implemented on portable multifunction device 100.
FIG. 4A illustrates an example user interface for a menu of applications on portable multifunction device 100 in accordance with some embodiments. Similar user interfaces are, optionally, implemented on device 300. In some embodiments, user interface 400 includes the following elements, or a subset or superset thereof:
- Signal strength indicator(s) 402 for wireless communication(s), such as cellular and Wi-Fi signals;
- Time 404;
- Bluetooth indicator 405;
- Battery status indicator 406;
- Tray 408 with icons for frequently used applications, such as:
  - Icon 416 for telephone module 138, labeled “Phone,” which optionally includes an indicator 414 of the number of missed calls or voicemail messages;
  - Icon 418 for e-mail client module 140, labeled “Mail,” which optionally includes an indicator 410 of the number of unread e-mails;
  - Icon 420 for browser module 147, labeled “Browser”; and
  - Icon 422 for video and music player module 152, also referred to as iPod (trademark of Apple Inc.) module 152, labeled “iPod”; and
- Icons for other applications, such as:
  - Icon 424 for IM module 141, labeled “Text”;
  - Icon 426 for calendar module 148, labeled “Calendar”;
  - Icon 428 for image management module 144, labeled “Photos”;
  - Icon 430 for camera module 143, labeled “Camera”;
  - Icon 432 for online video module 155, labeled “Online Video”;
  - Icon 434 for stocks widget 149-2, labeled “Stocks”;
  - Icon 436 for map module 154, labeled “Map”;
  - Icon 438 for weather widget 149-1, labeled “Weather”;
  - Icon 440 for alarm clock widget 169-6, labeled “Clock”;
  - Icon 442 for workout support module 142, labeled “Workout Support”;
  - Icon 444 for notes module 153, labeled “Notes”; and
  - Icon 446 for a settings application or module, which provides access to settings for device 100 and its various applications 136.
It should be noted that the icon labels illustrated in FIG. 4A are merely examples. For example, in some embodiments, icon 422 for video and music player module 152 is labeled “Music” or “Music Player.” Other labels are, optionally, used for various application icons. In some embodiments, a label for a respective application icon includes a name of an application corresponding to the respective application icon. In some embodiments, a label for a particular application icon is distinct from a name of an application corresponding to the particular application icon.
FIG. 4B illustrates an example user interface on a device (e.g., device 300, FIG. 3) with a touch-sensitive surface 451 (e.g., a tablet or touchpad 355, FIG. 3) that is separate from the display 450. Device 300 also, optionally, includes one or more contact intensity sensors (e.g., one or more of sensors 359) for detecting intensity of contacts on touch-sensitive surface 451 and/or one or more tactile output generators 357 for generating tactile outputs for a user of device 300.
Although many of the examples that follow will be given with reference to inputs on touch screen display 112 (where the touch-sensitive surface and the display are combined), in some embodiments, the device detects inputs on a touch-sensitive surface that is separate from the display, as shown in FIG. 4B. In some embodiments, the touch-sensitive surface (e.g., 451 in FIG. 4B) has a primary axis (e.g., 452 in FIG. 4B) that corresponds to a primary axis (e.g., 453 in FIG. 4B) on the display (e.g., 450). In accordance with these embodiments, the device detects contacts (e.g., 460 and 462 in FIG. 4B) with the touch-sensitive surface 451 at locations that correspond to respective locations on the display (e.g., in FIG. 4B, 460 corresponds to 468 and 462 corresponds to 470). In this way, user inputs (e.g., contacts 460 and 462, and movements thereof) detected by the device on the touch-sensitive surface (e.g., 451 in FIG. 4B) are used by the device to manipulate the user interface on the display (e.g., 450 in FIG. 4B) of the multifunction device when the touch-sensitive surface is separate from the display. It should be understood that similar methods are, optionally, used for other user interfaces described herein.
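The correspondence between locations on the separate touch-sensitive surface and locations on the display can be sketched as a scaling along the matching primary axes. The surface and display dimensions below are assumptions for the example.

```python
def map_to_display(contact, surface_size, display_size):
    """Scale an (x, y) contact on the touch-sensitive surface to the
    corresponding display coordinates along matching primary axes."""
    sx, sy = surface_size
    dx, dy = display_size
    x, y = contact
    return (x * dx / sx, y * dy / sy)

# A contact at the center of a 600x400 touchpad maps to the center of
# a 1200x800 display:
print(map_to_display((300, 200), (600, 400), (1200, 800)))  # (600.0, 400.0)
```

A real mapping might additionally apply acceleration curves or relative (delta-based) motion, as trackpads typically do.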
As used herein, the term “focus selector” refers to an input element that indicates a current part of a user interface with which a user is interacting. In some implementations that include a cursor or other location marker, the cursor acts as a “focus selector,” so that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355 in FIG. 3 or touch-sensitive surface 451 in FIG. 4B) while the cursor is over a particular user interface element (e.g., a button, window, slider or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations that include a touch-screen display (e.g., touch-sensitive display system 112 in FIG. 1A or the touch screen in FIG. 4A) that enables direct interaction with user interface elements on the touch-screen display, a detected contact on the touch-screen acts as a “focus selector,” so that when an input (e.g., a press input by the contact) is detected on the touch-screen display at a location of a particular user interface element (e.g., a button, window, slider or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations, focus is moved from one region of a user interface to another region of the user interface without corresponding movement of a cursor or movement of a contact on a touch-screen display (e.g., by using a tab key or arrow keys to move focus from one button to another button); in these implementations, the focus selector moves in accordance with movement of focus between different regions of the user interface. 
Without regard to the specific form taken by the focus selector, the focus selector is generally the user interface element (or contact on a touch-screen display) that is controlled by the user so as to communicate the user's intended interaction with the user interface (e.g., by indicating, to the device, the element of the user interface with which the user is intending to interact). For example, the location of a focus selector (e.g., a cursor, a contact, or a selection box) over a respective button while a press input is detected on the touch-sensitive surface (e.g., a touchpad or touch screen) will indicate that the user is intending to activate the respective button (as opposed to other user interface elements shown on a display of the device).
Multiple Device Environments
FIG. 5 illustrates an environment 500 including multiple electronic devices such as those described above. The environment 500 includes a smartphone 501, a tablet 502, a media player 503 (connected to a television display), and a second smartphone 504. Although the example environment 500 of FIG. 5 includes four devices, it is to be appreciated that other environments can include other numbers of devices and other devices, such as a laptop, desktop computer, or a smart watch. A user 599 with limited motor skills, such as a user with certain finger or hand impairments, may find interacting with the smartphone 501 (and other devices) via the touch-sensitive surface difficult, if not impossible. Thus, the smartphone 501 can be configured to operate in an accessibility mode in which input from a switch device 510 is used to navigate the user interface.
The switch device 510 generates a binary input stream including binary inputs that are communicated to the smartphone 501. The switch device 510 can include, for example, a switch that produces an “on” input when the switch is pressed and an “off” input when the switch is not pressed. The switch device 510 can include, as another example, a camera that produces an “on” input when the user turns his/her head to the left and an “off” input when the camera does not detect this motion. The binary input stream can be, for example, a voltage waveform that has a first value (e.g., 5 V) to indicate an “on” input and a second value (e.g., 0 V) to indicate an “off” input.
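Turning such a voltage waveform into a binary input stream can be sketched by thresholding samples and detecting transitions. The 2.5 V threshold and the sample values are assumptions for the example, not specified in the text.

```python
def to_binary_stream(samples, threshold=2.5):
    """Map voltage samples to 'on'/'off' inputs and report transitions.

    Returns (stream, edges), where edges lists the sample index at which
    each on/off transition occurs.
    """
    stream = ["on" if v >= threshold else "off" for v in samples]
    edges = [(i, b) for i, (a, b) in enumerate(zip(stream, stream[1:]), 1)
             if a != b]
    return stream, edges

# Voltage samples: idle, a switch press (~5 V), then release:
samples = [0.0, 0.1, 5.0, 5.0, 4.9, 0.2, 0.0]
stream, edges = to_binary_stream(samples)
print(stream)  # ['off', 'off', 'on', 'on', 'on', 'off', 'off']
print(edges)   # [(2, 'on'), (5, 'off')]
```

A real implementation would also debounce the signal so that electrical noise near the threshold does not register as repeated switch inputs.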
The switch device 510 can generate multiple binary input streams that are communicated to the smartphone 501. The switch device 510 can include, for example, a first switch and a second switch. The first switch produces a first “on” input when the first switch is pressed and a first “off” input when the first switch is not pressed. Similarly, the second switch produces a second “on” input when the second switch is pressed and a second “off” input when the second switch is not pressed. The first “on” input and the second “on” input can have different effects in operating the smartphone 501. As another example, the switch device 510 can include a camera that produces a first “on” input when the user turns his/her head to the left and a second “on” input when the user turns his/her head to the right.
A variety of devices for people of limited mobility can be used to generate switch inputs, including a device that detects when air is blown into a straw or when the person blinks.
In the accessibility mode, the smartphone 501 interprets the input from the switch device 510 to navigate the user interface. In some implementations, the user interface includes a selection indicator that sequentially highlights interface objects. In some embodiments, when the selection indicator is highlighting a first interface object and a select switch input (e.g., a first “on” input) is received, a menu for interacting with the interface object is displayed. In some embodiments, when the selection indicator is highlighting a first interface object, the selection indicator moves to a second interface object automatically after a period of time (e.g., a scanning period). In some embodiments, when the selection indicator is highlighting a first interface object, the selection indicator moves to a second interface object upon receiving a next switch input (e.g., a second “on” input).
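The scanning behavior described above can be sketched as follows: a selection indicator advances through interface objects on a "next" input (or after a scanning period elapses), and a "select" input opens an interaction menu for the highlighted object. The `Scanner` class and object names are illustrative assumptions.

```python
class Scanner:
    """Minimal sketch of accessibility-mode scanning navigation."""

    def __init__(self, objects):
        self.objects = objects
        self.index = 0

    @property
    def highlighted(self):
        return self.objects[self.index]

    def next(self):
        """Advance the selection indicator, wrapping around at the end."""
        self.index = (self.index + 1) % len(self.objects)
        return self.highlighted

    def select(self):
        """Return the action taken on a select input."""
        return f"open interaction menu for {self.highlighted}"

scanner = Scanner(["611A", "611B", "611C"])
print(scanner.highlighted)  # 611A
print(scanner.next())       # 611B
print(scanner.select())     # open interaction menu for 611B
```

Automatic scanning would call `next()` on a timer at the configured scanning period instead of waiting for a switch input.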
The switch device 510 can be connected to the smartphone 501 via a wired or wireless connection. However, a user 599 with limited motor skills may find it difficult, if not impossible, to disconnect the switch device 510 from the smartphone 501 and connect the switch device 510 to another device (e.g., the tablet 502) to control the other device using the switch device 510 in the accessibility mode. Accordingly, described below are various methods, devices, and user interfaces for allowing a user to control multiple devices in an accessibility mode using a single switch device 510.
User Interfaces and Associated Processes
Attention is now directed towards embodiments of user interfaces (“UI”) and associated processes that may be implemented on an electronic device, such as the smartphone 501, tablet 502, or media player 503 of FIG. 5.
FIGS. 6A-6BD illustrate example user interfaces for controlling multiple devices in an accessibility mode in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 7A-7G.
FIG. 6A illustrates an environment 600 including the smartphone 501, coupled to a switch device 690 with a first switch 691 and a second switch 692, and the tablet 502. The switch device 690 generates multiple binary input streams that are communicated to the smartphone 501. The switch device 690 includes, for example, a first switch 691 and a second switch 692. The first switch 691 produces a first “on” input when the first switch is pressed and a first “off” input when the first switch is not pressed. Similarly, the second switch produces a second “on” input when the second switch is pressed and a second “off” input when the second switch is not pressed. As described below, the first “on” input and the second “on” input can have different effects in operating the smartphone 501. Accessibility switch devices are frequently used by people who have difficulty operating a standard input mechanism for a computer (e.g., mouse, keyboard, trackpad, touch-sensitive display). Accessibility switch devices come in a variety of different configurations that are tailored to users with different accessibility needs (e.g., switches that can be activated via foot inputs, via hand inputs, via breathing, via eye tracking, via head movement, etc.). While accessibility switch devices are designed to be easier to activate for users with different accessibility needs, they rely on software to intelligently provide binary decisions to navigate through the user interface instead of the more complex inputs that are possible with standard input mechanisms. The smartphone 501 displays a first user interface 610 including a plurality of interface objects 611A-611G on a display of the smartphone 501. The tablet 502 displays a second user interface 620 including a plurality of interface objects 621A-621J on a display of the tablet 502. Although substantially rectangular interface objects are illustrated in FIGS. 6A-6BD, the interface objects can be circular, irregular, or any other shape.
The first user interface 610 includes a selection indicator 695 highlighting a first interface object 611A. Although FIGS. 6A-6BD illustrate the selection indicator 695 as surrounding the first interface object 611A, in various implementations, the selection indicator 695 highlights the interface object 611A in other ways. For example, in various implementations, the selection indicator 695 surrounds the interface object, is displayed over the interface object, changes a visual characteristic of the interface object (e.g., a brightness, a contrast, or a color), points to a location of the interface object, or otherwise highlights the interface object.
FIG. 6A illustrates a switch device 690 coupled to the smartphone 501. In various implementations, the switch device 690 can be coupled to the smartphone via a wired or wireless (e.g., Bluetooth™) connection. The switch device 690 includes a first switch 691 configured to provide a select input and a second switch 692 configured to provide a next input. In some embodiments, the switch device 690 includes only the first switch 691 and the selection indicator 695 moves to a next interface object automatically after a scanning period. As described further below, the first switch 691 and second switch 692 are configurable to provide switch inputs besides the select input and the next input. Further, the scanning speed (e.g., the length of the scanning period) is also configurable.
FIG. 6A illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501.
FIG. 6B illustrates the environment 600 of FIG. 6A in response to detecting the next input. In FIG. 6B, the selection indicator 695 highlights a second interface object 611B.
FIG. 6B illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501.
FIG. 6C illustrates the environment 600 of FIG. 6B in response to detecting the select input. The user interface 610 includes a first interaction menu 631 including a plurality of interaction affordances. The interaction affordances include a tap affordance 631A for performing a tap operation associated with a tap of the interface object 611B. The interaction affordances include a scroll affordance 631B for performing a scroll operation (e.g., replacing the interface objects 611A-611G with other interface objects). The interaction affordances include a more affordance 631C for displaying a second interaction menu. In FIG. 6C, the tap affordance 631A is highlighted and the other interaction affordances are not highlighted.
FIG. 6C illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501.
FIG. 6D illustrates the environment 600 of FIG. 6C in response to detecting the next input. In FIG. 6D, the scroll affordance 631B is highlighted and the other interaction affordances are not highlighted.
FIG. 6D illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501.
FIG. 6E illustrates the environment 600 of FIG. 6D in response to detecting the next input. In FIG. 6E, the more affordance 631C is highlighted and the other interaction affordances are not highlighted.
FIG. 6E illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501.
FIG. 6F illustrates the environment 600 of FIG. 6E in response to detecting the select input. In FIG. 6F, the first interaction menu 631 is replaced with a second interaction menu 632. The second interaction menu 632 includes a plurality of interaction affordances. The interaction affordances include a gestures affordance 632A for displaying a gestures menu. The gestures menu can include a plurality of gesture affordances for performing operations associated with performing touch gestures at a location of the interface object 611B. The interaction affordances include a device affordance 632B for displaying a device menu. The device menu can include a plurality of device affordances for performing operations on the device, such as returning to a home screen, entering a settings screen, changing a volume of the device, or powering down the device. The interaction affordances include a transfer control affordance 632C for controlling other devices using the switch device 690 (as described in detail below). The interaction affordances include a back affordance 632D for exiting the second interface menu (and returning to the state of FIG. 6B). In FIG. 6F, the gestures affordance 632A is highlighted and the other interaction affordances are not highlighted.
FIG. 6F illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501.
FIG. 6G illustrates the environment 600 of FIG. 6F in response to detecting the next input. In FIG. 6G, the device affordance 632B is highlighted and the other interaction affordances are not highlighted.
FIG. 6G illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501.
FIG. 6H illustrates the environment 600 of FIG. 6G in response to detecting the next input. In FIG. 6H, the transfer control affordance 632C is highlighted and the other interaction affordances are not highlighted.
FIG. 6H illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501.
FIG. 6I illustrates the environment 600 of FIG. 6H in response to detecting the select input. In FIG. 6I, the second interaction menu 632 is replaced with a device select menu 633 including a plurality of device select affordances. In response to detecting the select input while the transfer control affordance 632C is highlighted, the smartphone 501 scans the environment to detect the presence of other nearby devices. For example, the smartphone 501 can be associated with a user account and can access a list of other devices associated with the user account. For each other device on the list, the smartphone 501 can determine whether the other device is nearby based on a proximity sensor, GPS location information, network connectivity (e.g., connected to the same WLAN), or other information. In some embodiments, the device select affordances include a device select affordance for each detected nearby device.
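One of the nearby-device checks mentioned above, shared network membership, can be sketched as a filter over the account's device list. The device records and the "same WLAN" criterion are assumptions for the example; a real scan could combine several signals (proximity, GPS, network).

```python
# Hypothetical list of devices associated with the user account:
ACCOUNT_DEVICES = [
    {"name": "media player 503", "network": "home-wlan"},
    {"name": "tablet 502", "network": "home-wlan"},
    {"name": "smartphone 504", "network": "home-wlan"},
    {"name": "laptop", "network": "office-wlan"},
]

def nearby_devices(devices, own_network):
    """Keep devices reachable on the same WLAN as this device."""
    return [d["name"] for d in devices if d["network"] == own_network]

print(nearby_devices(ACCOUNT_DEVICES, "home-wlan"))
# ['media player 503', 'tablet 502', 'smartphone 504']
```

The resulting list would then populate the device select menu, one device select affordance per detected device.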
In FIG. 6I, the device select affordances include a media player select affordance 633A for selecting a media player 503 of the environment 600. The device select affordances include a tablet select affordance 633B for selecting the tablet 502 of the environment 600. The device select affordances include a phone select affordance for selecting a smartphone 504 of the environment 600. The device select menu 633 also includes a back affordance 633D for exiting the device select menu (and returning to the state of FIG. 6B or, alternatively, to the state of FIG. 6H). In FIG. 6I, the media player select affordance 633A is highlighted and the other affordances of the device select menu 633 are not highlighted.
FIG. 6I illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501.
FIG. 6J illustrates the environment 600 of FIG. 6I in response to detecting the next input. In FIG. 6J, the tablet select affordance 633B is highlighted and the other affordances of the device select menu 633 are not highlighted.
FIG. 6J illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501.
FIG. 6K illustrates the environment 600 of FIG. 6J in response to detecting the select input. In FIG. 6K, the device select menu 633 is replaced with a transfer control confirmation menu 634. The transfer control confirmation menu 634 includes a confirmation affordance 634A for transferring control to the selected device and a back affordance 634B for exiting the transfer control confirmation menu 634 (and returning to the state of FIG. 6B or, alternatively, to the state of FIG. 6H or 6J).
The first user interface 610 includes a first transfer confirmation notification 641A indicating that selection of the confirmation affordance 634A will transfer control to the selected device (e.g., the tablet 502). The second user interface 620 includes a second transfer confirmation notification 641B indicating that selection of the confirmation affordance 634A will transfer control to the device upon which the second transfer confirmation notification 641B is displayed (e.g., the tablet 502).
Thus, if a user mistakenly selects an unintended device (e.g., a device that is not visible or nearby), the first transfer confirmation notification 641A and second transfer confirmation notification 641B provide an indication that a mistake has been made.
FIG. 6K illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501.
FIG. 6L illustrates the environment 600 of FIG. 6K in response to detecting the select input. In FIG. 6L, the selection indicator 695 is part of the second user interface 620 of the tablet 502, highlighting a first interface object 621A. The first user interface 610 includes a first transferred control notification 642A indicating that the switch device 690 coupled to the smartphone 501 is interacting with the second user interface 620 of the tablet 502 (and not the first user interface 610 of the smartphone 501). The second user interface 620 includes a second transferred control notification 642B indicating that the second user interface 620 is being controlled by the switch device 690 coupled to the smartphone 501 (and not any other input device of the tablet 502 or connected to the tablet 502). In some embodiments, the first transferred control notification 642A and/or the second transferred control notification 642B persist as long as the second device is being operated based on input from the switch device 690.
In some implementations, switch inputs received from the switch device are received by the smartphone 501 and forwarded by the smartphone 501 to the tablet 502. In some implementations, switch inputs received from the switch device are sent directly to the second device. Thus, in some implementations, upon detecting the select input while the confirmation affordance 634A is highlighted, the smartphone 501 configures the tablet 502 to establish a connection with the switch device 690.
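The first forwarding arrangement described above can be sketched as follows. This is an illustrative sketch only; the `SwitchInputRouter` class and its method names are hypothetical and do not appear in the figures:

```python
class SwitchInputRouter:
    """Routes switch inputs either to the local user interface or,
    when control has been transferred, to a second device (sketch)."""

    def __init__(self, local_handler):
        self.local_handler = local_handler  # handles inputs for the first device
        self.remote_device = None           # second device, once control is transferred

    def transfer_control(self, device):
        """Begin forwarding subsequent switch inputs to the second device."""
        self.remote_device = device

    def return_control(self):
        """Stop forwarding; inputs again affect the first device."""
        self.remote_device = None

    def handle(self, switch_input):
        if self.remote_device is not None:
            self.remote_device.receive(switch_input)  # forwarded to second device
        else:
            self.local_handler(switch_input)          # handled locally
```

In the direct-connection variant, `transfer_control` would instead instruct the second device to pair with the switch device itself, and `handle` would not be involved thereafter.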
FIG. 6L illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6M illustrates the environment 600 of FIG. 6L in response to detecting the next input. In FIG. 6M, the selection indicator 695 has moved to highlight a second interface object 621B.
FIG. 6M illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6N illustrates the environment 600 of FIG. 6M in response to detecting the next input. In FIG. 6N, the selection indicator 695 has moved to highlight a third interface object 621C.
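The movement of the selection indicator 695 through the interface objects in response to successive next inputs can be sketched as a simple index advance. This is an illustrative sketch; wrapping from the last interface object back to the first is an assumption, not something shown in the figures:

```python
def next_index(current, count):
    """Advance the selection indicator to the next interface object,
    wrapping to the first after the last (wrap-around assumed)."""
    return (current + 1) % count
```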
FIG. 6N illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6O illustrates the environment 600 of FIG. 6N in response to detecting the select input. The second user interface 620 includes the first interaction menu 631 displayed in association with the third interface object 621C. The tap affordance 631A of the first interaction menu 631 is highlighted and the other affordances of the first interaction menu 631 are not highlighted.
FIG. 6O illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6P illustrates the environment 600 of FIG. 6O in response to detecting the next input. In FIG. 6P, the scroll affordance 631B of the first interaction menu 631 is highlighted and the other affordances of the first interaction menu 631 are not highlighted.
FIG. 6P illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6Q illustrates the environment 600 of FIG. 6P in response to detecting the next input. In FIG. 6Q, the more affordance 631C of the first interaction menu 631 is highlighted and the other affordances of the first interaction menu 631 are not highlighted.
FIG. 6Q illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6R illustrates the environment 600 of FIG. 6Q in response to detecting the select input. In FIG. 6R, the first interaction menu 631 is replaced with a transferred control second interaction menu 635. The transferred control second interaction menu 635 differs from the second interaction menu 632 (e.g., as shown in FIG. 6F) in that it includes a return affordance 632E for returning control to the smartphone 501. In FIG. 6R, the gestures affordance 632A is highlighted and the other affordances of the transferred control second interaction menu 635 are not highlighted.
FIG. 6R illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6S illustrates the environment 600 of FIG. 6R in response to detecting the next input. In FIG. 6S, the device affordance 632B is highlighted and the other affordances of the transferred control second interaction menu 635 are not highlighted.
FIG. 6S illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6T illustrates the environment 600 of FIG. 6S in response to detecting the next input. In FIG. 6T, the return affordance 632E is highlighted and the other affordances of the transferred control second interaction menu 635 are not highlighted.
FIG. 6T illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6U illustrates the environment of FIG. 6T in response to detecting the select input. In FIG. 6U, control is returned to the smartphone 501 and the environment is in the same state as in FIG. 6A. Thus, switch inputs received from the switch device 690 will affect the first user interface 610 of the smartphone 501 (as shown, for example, in FIGS. 6B-6K).
FIG. 6V, like FIG. 6T, illustrates the environment of FIG. 6S in response to detecting the next input. Unlike FIG. 6T, FIG. 6V illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6W illustrates the environment 600 of FIG. 6V in response to detecting the next input. In FIG. 6W, the transfer control affordance 632C is highlighted and the other affordances of the transferred control second interaction menu 635 are not highlighted.
FIG. 6W illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6X illustrates the environment 600 of FIG. 6W in response to detecting the select input. In FIG. 6X, the transferred control second interaction menu 635 is replaced with a transferred control device select menu 636 including a plurality of device select affordances. The transferred control device select menu 636 differs from the device select menu 633 (e.g., as shown in FIG. 6I) in that it includes a smartphone select affordance 633E for selecting the smartphone 501 and does not include a tablet affordance 633B for selecting the tablet 502. In FIG. 6X, the media player select affordance 633A is highlighted and the other affordances of the transferred control device select menu 636 are not highlighted.
FIG. 6X illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6Y illustrates the environment 600 of FIG. 6X in response to detecting the select input. In FIG. 6Y, the transferred control device select menu 636 is replaced with the transfer control confirmation menu 634. As described above with respect to FIG. 6K, the transfer control confirmation menu 634 includes a confirmation affordance 634A (highlighted in FIG. 6Y) for transferring control to the selected device and a back affordance 634B (not highlighted in FIG. 6Y) for exiting the transfer control confirmation menu 634.
The second user interface 620 includes the first transfer confirmation notification 641A indicating that selection of the confirmation affordance 634A will transfer control to the selected device (e.g., the media player 503). The user interface of the media player can also include the second transfer confirmation notification 641B indicating that selection of the confirmation affordance 634A will transfer control to the device upon which the second transfer confirmation notification 641B is displayed (e.g., the media player 503).
FIG. 6Y illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6Z illustrates the environment 600 of FIG. 6Y in response to detecting the select input. In FIG. 6Z, the selection indicator 695 is moved to a third user interface 660 of the media player 503, displayed on a display 503A coupled to the media player 503. The third user interface 660 includes a plurality of interface objects 661A-661E with the selection indicator 695 highlighting a first interface object 661A.
The first user interface 610 of the smartphone 501 includes the first transferred control notification 642A indicating that the switch device 690 coupled to the smartphone 501 is interacting with the third user interface 660 of the media player 503 (and not the first user interface 610 of the smartphone 501). The third user interface 660 of the media player 503 includes the second transferred control notification 642B indicating that the third user interface 660 is being controlled by the switch device 690 coupled to the smartphone 501 (and not any other input device of media player 503 or connected to the media player 503).
FIG. 6Z illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501 (and forwarded to the media player 503).
FIG. 6AA illustrates the environment 600 of FIG. 6Z in response to detecting the next input. The third user interface 660 includes the selection indicator 695 moved from the first interface object 661A to the second interface object 661B.
FIG. 6AA illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501 (and forwarded to the media player 503).
FIG. 6AB illustrates the environment 600 of FIG. 6AA in response to detecting the select input. The third user interface 660 includes a media interaction menu 637 including a plurality of media interaction affordances. The media interaction affordances include a select affordance 637A for selecting the highlighted interface object. The media interaction affordances include a scroll affordance 637B for scrolling the third user interface 660. The media interaction affordances include a remote affordance 637C for displaying a virtual remote including a plurality of remote affordances for interacting with the third user interface 660. The media interaction affordances include a more affordance 637D for showing a second media interaction menu with other media interaction affordances.
FIG. 6AC, like FIG. 6N, illustrates the environment of FIG. 6M in response to detecting the next input. Unlike FIG. 6N, FIG. 6AC illustrates that the first switch 691 and second switch 692 are activated, resulting in a select input and a next input detected by the smartphone 501 (and forwarded to the tablet 502). Such a combined input can be configured, as described further below, as an escape input to return control to the smartphone 501.
FIG. 6AD illustrates the environment of FIG. 6AC in response to detecting the escape input. In FIG. 6AD, control is returned to the smartphone 501 and the environment is in the same state as in FIG. 6A. Thus, switch inputs received from the switch device 690 will affect the first user interface 610 of the smartphone 501 (as shown, for example, in FIGS. 6B-6K).
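Detection of the escape input described above (simultaneous activation of the first switch 691 and the second switch 692) can be sketched as follows. The sketch, the tolerance window, and the event representation are illustrative assumptions, not details from the figures:

```python
ESCAPE_WINDOW = 0.05  # seconds; assumed tolerance for "simultaneous" activations

def is_escape_input(events):
    """Return True if a first-switch activation and a second-switch
    activation occur within ESCAPE_WINDOW of each other (sketch).

    `events` is a list of (switch_id, timestamp) pairs, where switch
    1 produces select inputs and switch 2 produces next inputs."""
    select_times = [t for (switch, t) in events if switch == 1]
    next_times = [t for (switch, t) in events if switch == 2]
    return any(abs(ts - tn) <= ESCAPE_WINDOW
               for ts in select_times for tn in next_times)
```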
Thus, control can be returned to the smartphone 501 in a number of ways. Control can be returned to the smartphone 501 by selecting the return affordance 632E of the transferred control second interaction menu 635, as shown in FIG. 6T. Control can be returned to the smartphone 501 by selecting the smartphone select affordance 633E of the transferred control device select menu 636, as shown in FIG. 6X. Control can be returned to the smartphone 501 by inputting the escape input, as shown in FIG. 6AC. The escape input can be particularly useful for users with limited mobility who cannot easily reach a device that is not behaving correctly (and who are therefore unable to effect the other two forms of returning control) to reset it or otherwise correct a problem with the device. Further, if the user mistakenly transfers control to a second device that is not visible, control can be returned to the first device without interacting with the second device.
FIG. 6AE illustrates the environment 600 of FIG. 6AC in response to an alert being generated by the smartphone 501 (without the escape input being provided or detected), e.g., while switch inputs from the switch device 690 control the second user interface 620 of the tablet 502.
The first user interface 610 displays an alert user interface 615 based on the alert. In FIG. 6AE, the alert is generated in response to a received phone call. In various implementations, the alert can be, for example, a text message, a timer expiration, a clock alarm, or an event reminder. The alert user interface 615 includes alert information 616 regarding the alert and one or more affordances for responding to the alert. In FIG. 6AE, the affordances for responding to the alert include a slide affordance 617 for answering the phone call.
The second user interface 620 includes an alert response menu 638 with a plurality of alert response affordances. The alert response affordances include a dismiss affordance 638A for dismissing the alert response menu 638 without any action taken on the smartphone 501. The alert response affordances include a return affordance 638B for returning control to the smartphone 501. The alert response affordances include an answer affordance 638C for answering the phone call. The alert response affordances include an ignore affordance 638D for ignoring the phone call. In FIG. 6AE, the dismiss affordance 638A is highlighted and the other affordances of the alert response menu 638 are not highlighted. In various implementations, the alert response affordances can include affordances for responding to a text message, snoozing a clock alarm, or displaying information regarding an event.
FIG. 6AE illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6AF illustrates the environment 600 of FIG. 6AE in response to detecting the select input. In FIG. 6AF, the alert response menu 638 is no longer displayed on the second user interface 620 and switch inputs continue to control the second user interface 620. On the first user interface 610, the alert user interface 615 is displayed unchanged and no action is taken with respect to the phone call.
FIG. 6AG, like FIG. 6AE, illustrates the environment of FIG. 6AC in response to the alert being generated by the smartphone. Unlike FIG. 6AE, FIG. 6AG illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6AH illustrates the environment of FIG. 6AG in response to detecting the next input. In FIG. 6AH, the return affordance 638B is highlighted and the other affordances of the alert response menu 638 are not highlighted.
FIG. 6AH illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6AI illustrates the environment of FIG. 6AH in response to detecting the select input. In FIG. 6AI, the first transferred control notification 642A and second transferred control notification 642B are not displayed indicating that control has returned to the smartphone 501. The selection indicator 695 highlights the slide affordance 617 and switch inputs from the switch device can be detected to operate the smartphone 501, e.g., to answer the phone call.
FIG. 6AJ, like FIG. 6AH, illustrates the environment of FIG. 6AG in response to detecting the next input. Unlike FIG. 6AH, FIG. 6AJ illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6AK illustrates the environment 600 of FIG. 6AJ in response to detecting the next input. In FIG. 6AK, the answer affordance 638C is highlighted and the other affordances of the alert response menu 638 are not highlighted.
FIG. 6AK illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6AL illustrates the environment 600 of FIG. 6AK in response to detecting the select input. In response to detecting the select input, the smartphone 501 answers the phone call, but the switch inputs are still forwarded to the tablet 502. On the first user interface 610, an answered call user interface 615A is displayed including information regarding the phone call and a number of call affordances. On the second user interface 620, an answered call menu 639 is displayed including a plurality of call affordances. The call affordances include a mute affordance 639A for muting the answered phone call. The call affordances include a keypad affordance 639B for displaying a keypad. The call affordances include a hang-up affordance 639C for ending the answered phone call. In FIG. 6AL, the mute affordance 639A is highlighted and the other affordances of the answered call menu 639 are not highlighted.
FIG. 6AM, like FIG. 6AK, illustrates the environment 600 of FIG. 6AJ in response to detecting the next input. Unlike FIG. 6AK, FIG. 6AM illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6AN illustrates the environment 600 of FIG. 6AM in response to detecting the next input. In FIG. 6AN, the ignore affordance 638D is highlighted and the other affordances of the alert response menu 638 are not highlighted.
FIG. 6AN illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501 (and forwarded to the tablet 502).
FIG. 6AO illustrates the environment 600 of FIG. 6AN in response to detecting the select input. In response to the select input, the smartphone 501 ignores the phone call and the environment 600 is returned to the state of FIG. 6AC (without the escape input being provided or detected), e.g., while switch inputs from the switch device 690 control the second user interface 620 of the tablet 502.
FIG. 6AP, like FIG. 6G, illustrates the environment 600 of FIG. 6F in response to detecting the next input. In FIG. 6AP, the device affordance 632B is highlighted and the other interaction affordances are not highlighted. Unlike FIG. 6G, FIG. 6AP illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501.
FIG. 6AQ illustrates the environment 600 of FIG. 6AP in response to detecting the select input. In FIG. 6AQ, the second interaction menu 632 is replaced with a device menu 639 including a plurality of device menu affordances. The device menu affordances include a home affordance 639A for performing a home operation (e.g., returning to a home user interface). The device menu affordances include a settings affordance 639B for accessing a settings user interface. The device menu affordances include a lock affordance 639C for locking the smartphone 501. The device menu affordances include a back affordance 639D for exiting the device menu 639 (and returning to the state of FIG. 6B). In FIG. 6AQ, the home affordance 639A is highlighted and the other affordances of the device menu 639 are not highlighted.
FIG. 6AQ illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501.
FIG. 6AR illustrates the environment 600 of FIG. 6AQ in response to detecting the next input. In FIG. 6AR, the settings affordance 639B is highlighted and the other affordances of the device menu 639 are not highlighted.
FIG. 6AR illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501.
FIG. 6AS illustrates the environment 600 of FIG. 6AR in response to detecting the select input. In FIG. 6AS, the first user interface 610 displays a settings user interface 671. The settings user interface 671 includes a plurality of settings affordances for changing various settings of the smartphone 501. The settings affordances include a connectivity settings affordance 671A for changing connectivity settings of the smartphone 501, e.g., WLAN network connections, turning on and off an airplane mode, configuring the smartphone 501 as a hotspot, etc. The settings affordances include an accessibility settings affordance 671B for changing accessibility settings of the smartphone 501, as described further below. The settings affordances include a sound settings affordance 671C for changing sound settings of the smartphone 501, such as a maximum volume, ringtones, etc. The settings affordances include a display settings affordance 671D for changing display settings of the smartphone 501, such as a brightness, a text size, etc. The settings affordances include an applications settings affordance 671E for changing settings of various applications of the smartphone 501. In FIG. 6AS, the selection indicator 695 highlights the connectivity settings affordance 671A.
FIG. 6AS illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501.
FIG. 6AT illustrates the environment 600 of FIG. 6AS in response to detecting the next input. In FIG. 6AT, the selection indicator 695 highlights the accessibility settings affordance 671B.
FIG. 6AT illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501.
FIG. 6AU illustrates the environment 600 of FIG. 6AT in response to detecting the select input. In FIG. 6AU, the first user interface 610 displays an accessibility settings user interface 672 including a plurality of accessibility settings affordances. The accessibility settings affordances include an accessibility mode toggle affordance 672A for switching in and out of the accessibility mode. The accessibility settings affordances include a switches settings affordance 672B for adding switches and configuring user interface responses to the switches, as described further below. The accessibility settings affordances include a scanning speed affordance 672C for changing a scanning speed of the selection indicator 695. The accessibility settings affordances include a highlight color affordance 672D for changing a color of the selection indicator 695. The accessibility settings affordances include an escape action affordance 672E for changing the action required to form the escape input. In FIG. 6AU, the selection indicator 695 highlights the accessibility mode toggle affordance 672A. The accessibility settings user interface 672 includes a settings affordance 672X for returning to the settings user interface 671.
FIG. 6AU illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501.
FIG. 6AV illustrates the environment 600 of FIG. 6AU in response to detecting the next input. In FIG. 6AV, the selection indicator 695 highlights the switches settings affordance 672B.
FIG. 6AV illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501.
FIG. 6AW illustrates the environment 600 of FIG. 6AV in response to detecting the select input. In FIG. 6AW, the first user interface 610 displays a switches settings user interface 673. The switches settings user interface 673 includes a plurality of switches settings affordances for changing switch settings of the accessibility mode of the smartphone 501.
The switches settings affordances include a switch action affordance 673A for changing an action taken by the first user interface 610 in response to a switch input from the first switch 691. As described with respect to the figures above, the switch input from the first switch 691 is interpreted by the smartphone 501 as a select input. In various implementations, the switch input can be interpreted as a stop/start scanning input that stops or starts scanning of the selection indicator 695, a next input that moves the selection indicator 695 to a next interface object, a previous input that moves the selection indicator 695 to a previous interface object, a tap input that is interpreted as a tap on the touchscreen at the location of the selection indicator 695, a volume input that increases or decreases the volume of the smartphone 501, or as another input.
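The mapping from a switch to its configured action can be sketched as a simple lookup. This is an illustrative sketch; the enumeration and names are hypothetical, and the defaults mirror the figures, where the first switch produces a select input and the second a next input:

```python
from enum import Enum, auto

class SwitchAction(Enum):
    """Actions a switch input can be configured to perform (sketch)."""
    SELECT = auto()
    NEXT = auto()
    PREVIOUS = auto()
    STOP_START_SCANNING = auto()
    TAP = auto()
    VOLUME = auto()

# Assumed per-switch configuration, editable via the switch action affordance.
switch_actions = {1: SwitchAction.SELECT, 2: SwitchAction.NEXT}

def action_for(switch_id):
    """Return the action configured for the given switch."""
    return switch_actions[switch_id]
```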
The switches settings affordances include an other device switch action affordance 673B for changing an action taken by the second user interface 620 (and/or third user interface 660) in response to a switch input from the first switch 691. In some implementations, the action taken by the second user interface 620 is the same as the action taken by the first user interface 610 (as shown in FIG. 6AW). However, in some implementations, the action taken by the second user interface 620 is different from the action taken by the first user interface 610. For example, a user can configure the action taken by the first user interface 610 in response to a switch input from the second switch 692 to be a next action (e.g., moving the selection indicator 695 to a next interface object), but configure the action taken by the second user interface 620 in response to a switch input from the second switch 692 to be a scroll action (e.g., scrolling a set of interface objects). A user may wish to do so when the first device is a smartphone and the second device is a media player on which scrolling is a frequently performed task.
The switches settings affordances include a backup other device switch action affordance 673C for changing a backup action taken by the second user interface 620 in response to a switch input from the first switch 691 in circumstances in which the second user interface 620 does not support the primary action.
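The primary/backup resolution described above can be sketched as follows. The sketch is illustrative; the class, the string action names, and the notion of a device advertising its supported actions are all assumptions:

```python
class TargetDevice:
    """Minimal stand-in for a second device advertising which
    switch actions its user interface supports (hypothetical)."""
    def __init__(self, supported_actions):
        self.supported_actions = set(supported_actions)

def resolve_action(device, primary, backup):
    """Use the configured primary action if the second device's user
    interface supports it; otherwise fall back to the configured
    backup action (sketch)."""
    return primary if primary in device.supported_actions else backup
```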
The switches settings affordances include a full screen action affordance 673D for changing an action taken by the first user interface 610 in response to a touch anywhere on the touchscreen of the smartphone 501. Thus, the touchscreen of the smartphone 501 can be configured as a switch device as an alternative to or in addition to the switch device 690.
The switches settings affordances include an add switch device affordance 673E for configuring the smartphone to accept switch inputs from other switch devices (not shown).
The switches settings user interface includes an accessibility settings affordance 673X for returning to the accessibility settings user interface 672. In FIG. 6AW, the selection indicator 695 highlights the accessibility settings affordance 673X.
FIG. 6AW illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501.
FIG. 6AX illustrates the environment 600 of FIG. 6AW in response to detecting the select input. In FIG. 6AX, the first user interface 610 displays the accessibility settings user interface 672 with the selection indicator 695 highlighting the switches settings affordance 672B.
FIG. 6AX illustrates that the second switch 692 is activated, resulting in a next input detected by the smartphone 501.
FIG. 6AY illustrates the environment 600 of FIG. 6AX in response to detecting the next input. In FIG. 6AY, the selection indicator 695 has moved to highlight the scanning speed affordance 672C.
FIG. 6AY illustrates that the first switch 691 is activated, resulting in a select input detected by the smartphone 501.
FIG. 6AZ illustrates the environment 600 of FIG. 6AY in response to detecting the select input. In FIG. 6AZ, the first user interface 610 displays a scanning speed user interface 674 including a plurality of scanning speed setting affordances. The scanning speed setting affordances include a primary scanning speed setting affordance 674A for changing the scanning speed of the selection indicator 695 on the smartphone 501. In FIG. 6AZ, the scanning speed on the smartphone 501 is set to manual. In various implementations, the scanning speed can be set to one of a set of speeds (e.g., slow, medium, and fast) or can be set to a specific scanning period (e.g., half a second, 1 second, or 2 seconds).
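The scanning speed setting described above can be sketched as a translation from a named setting to a scan period. The sketch is illustrative; the specific preset values are assumptions consistent with the periods mentioned above, and `None` is used here to denote manual scanning (the indicator advances only on explicit input):

```python
def scan_period(setting):
    """Translate a scanning-speed setting into a scan period in seconds.

    Accepts a named preset ("slow", "medium", "fast"), "manual"
    (returns None), or a specific numeric period (sketch; values assumed)."""
    presets = {"slow": 2.0, "medium": 1.0, "fast": 0.5}
    if setting == "manual":
        return None
    if isinstance(setting, (int, float)):
        return float(setting)  # a user-specified scanning period
    return presets[setting]
```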
|
Page:EB1911 - Volume 13.djvu/878
Rh N.B., or other Atlantic port, for shipment to London by Canadian Pacific Railway Company’s mail ships, or other line of steamers, to be sold at auction.
In the year 1670 Charles II. granted a charter to Prince Rupert and seventeen other noblemen and gentlemen, incorporating them as the “Governor and Company of Adventurers of England trading into Hudson’s Bay,” and securing to them “the sole trade and commerce of all those seas, straits, bays, rivers, lakes, creeks and sounds, in whatsoever latitude they shall be, that lie within the entrance of the straits commonly called Hudson’s Straits, together with all the lands and territories upon the countries, coasts and confines of the seas, bays, &c., aforesaid, that are not already actually possessed by or granted to any of our subjects, or possessed by the subjects of any other Christian prince or state.” Besides the complete lordship and entire legislative, judicial and executive power within these vague limits (which the Company finally agreed to accept as meaning all lands watered by streams flowing into Hudson Bay), the corporation received also the right to “the whole and entire trade and traffic to and from all havens, bays, creeks, rivers, lakes and seas into which they shall find entrance or passage by water or land out of the territories, limits or places aforesaid.” The first settlements in the country thus granted, which was to be known as Rupert’s Land, were made on James Bay and at Churchill and Hayes rivers; but it was long before there was any advance into the interior, for in 1749, when an unsuccessful attempt was made in parliament to deprive the Company of its charter on the plea of “non-user,” it had only some four or five forts on the coast, with about 120 regular employés. Although the commercial success of the enterprise was from the first immense, great losses, amounting before 1700 to £217,514, were inflicted on the Company by the French, who sent several military expeditions against the forts. 
After the cession of Canada to Great Britain in 1763, numbers of fur-traders spread over that country, and into the north-western parts of the continent, and began even to encroach on the Hudson’s Bay Company’s territories. These individual speculators finally combined into the North-West Fur Company of Montreal.
The fierce competition which at once sprang up between the companies was marked by features which sufficiently demonstrate the advantages of a monopoly in commercial dealings with savages, even although it is the manifest interest of the monopolists to retard the advance of civilization towards their hunting grounds. The Indians were demoralized, body and soul, by the abundance of ardent spirits with which the rival traders sought to attract them to themselves; the supply of furs threatened soon to be exhausted by the indiscriminate slaughter, even during the breeding season, of both male and female animals; the worst passions of both whites and Indians were inflamed to their fiercest. At last, in 1821, the companies, mutually exhausted, amalgamated, obtaining a licence to hold for 21 years the monopoly of trade in the vast regions lying to the west and north-west of the older company’s grant. In 1838 the Hudson’s Bay Company acquired the sole rights for itself, and obtained a new licence, also for 21 years. On the expiry of this it was not renewed, and since 1859 the district has been open to all.
The licences to trade did not of course affect the original possessions of the Company. Under the terms of the Deed of Surrender, dated November 19th, 1869, the Hudson’s Bay Company surrendered “to the Queen’s Most Gracious Majesty, all the rights of Government, and other rights, privileges, liberties, franchises, powers and authorities, granted or purported to be granted to the said Government and Company by the said recited Letters Patent of His Late Majesty King Charles II.; and also all similar rights which may have been exercised or assumed by the said Governor and Company in any parts of British North America, not forming part of Rupert’s Land or of Canada, or of British Columbia, and all the lands and territories within Rupert’s Land (except and subject as in the said terms and conditions mentioned) granted or purported to be granted to the said Governor and Company by the said Letters Patent,” subject to the terms and conditions set out in the Deed of Surrender, including the payment to the Company by the Canadian Government of a sum of £300,000 sterling on the transfer of Rupert’s Land to the Dominion of Canada, the retention by the Company of its posts and stations, with a right of selection of a block of land adjoining each post in conformity with a schedule annexed to the Deed of Surrender; and the right to claim in any township or district within the Fertile Belt in which land is set out for settlement, grants of land not exceeding one-twentieth part of the land so set out. 
The boundaries of the Fertile Belt were in terms of the Deed of Surrender to be as follows:—“On the south by the United States’ boundary; on the west by the Rocky Mountains; on the north by the northern branch of the Saskatchewan; on the east by Lake Winnipeg, the Lake of the Woods, and the waters connecting them,” and “the Company was to be at liberty to carry on its trade without hindrance, in its corporate capacity; and no exceptional tax was to be placed on the Company’s land, trade or servants, nor any import duty on goods introduced by them previous to the surrender.”
An Order in Council was passed confirming the terms of the Deed of Surrender at the Court of Windsor, the 23rd of June 1870.
In 1872, in terms of the Dominion Lands Act of that year, it was mutually agreed in regard to the one-twentieth of the lands in the Fertile Belt reserved to the Company under the terms of the Deed of Surrender that they should be taken as follows:—
“Whereas by article five of the terms and conditions in the Deed of Surrender from the Hudson’s Bay Company to the Crown, the said Company is entitled to one-twentieth of the lands surveyed into Townships in a certain portion of the territory surrendered, described and designated as the Fertile Belt.
“And whereas by the terms of the said deed, the right to claim the said one-twentieth is extended over the period of fifty years, and it is provided that the lands comprising the same shall be determined by lot, and whereas the said Company and the Government of the Dominion have mutually agreed that with a view to an equitable distribution throughout the territory described, of the said one-twentieth of the lands, and in order further to simplify the setting apart thereof, certain sections or parts of sections, alike in numbers and position in each township throughout the said Territory, shall, as the townships are surveyed, be set apart and designated to meet and cover such one-twentieth:
“And whereas it is found by computation that the said one-twentieth will be exactly met, by allotting in every fifth township two whole sections of 640 acres each, and in all other townships one section and three quarters of a section each, therefore—
“In every fifth Township in the said Territory; that is to say: in those townships numbered 5, 10, 15, 20, 25, 30, 35, 40, 45, 50 and so on in regular succession northerly from the International boundary, the whole of sections Nos. 8 and 26, and in each and every of the other townships the whole of section No. 8, and the south half and north-west quarter of section 26 (except in the cases hereinafter provided for) shall be known and designated as the lands of the said Company.”
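The computation referred to can be checked directly, assuming the standard Dominion survey township of thirty-six sections of 640 acres each (the figure of thirty-six sections is supplied here, not stated in the deed):

```python
# A Dominion survey township is a 6 x 6 grid of 36 sections of 640 acres.
SECTIONS_PER_TOWNSHIP = 36

# Over any five consecutive townships the Company receives two whole
# sections in the "fifth" township and one section plus three quarters
# (1.75 sections) in each of the other four.
company_sections = 2 + 4 * 1.75              # 9.0 sections
total_sections = 5 * SECTIONS_PER_TOWNSHIP   # 180 sections

# The allotment comes to exactly one-twentieth, as the agreement states.
assert company_sections / total_sections == 1 / 20
```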
See G. Bryce, Remarkable History of the Hudson’s Bay Company (London, 1900); and A. C. Laut, Conquest of the great Northwest; being the story of the adventurers of England known as Hudson’s Bay Co. (New York, 1909).
HUÉ, a town of French Indo-China, capital of Annam, on the Hué river (Song-Huong-Giang) about 8 m. from its mouth in the China Sea. Pop. about 42,000, of whom 240 are Europeans. The country immediately surrounding it is flat, alluvial land, traversed by streams and canals and largely occupied by rice fields. Beyond the plain rises a circle of hills formed by spurs of the mountains of Annam. The official portion of the town, fortified under French superintendence, lies on the left bank of the river within an enclosure over 7300 yds. square. It contains the royal palace, the houses of the native ministers and officials, the arsenals, &c. The palace stands inside a separate enclosure. Once forbidden ground, it is to-day open to foreigners, and the citadel is occupied by French troops. The palace of the French resident-general and the European quarter, opposite the citadel on the right bank of the Hué, are connected with the citadel by an iron bridge. Important suburbs adjoin the official town, the villages of Dōng–Bo, Bo-vinh, Gia-Ho, Kim-Long and Nam-Pho forming a sort of commercial belt around it. Glass- and ivory-working are carried on, but otherwise industry is of only local importance. Rice is imported by way of the river. A frequent service of steam launches connects the town with the ports of Thuan-an, at the mouth of the river, and Tourane, on the bay of that name. Tourane is also united to Hué by a railway opened in 1906. In the vicinity the chief objects of interest are the tombs of the dead kings of Annam.
HUE AND CRY, a phrase employed in English law to signify the old common law process of pursuing a criminal with horn and voice. It was the duty of any person aggrieved, or discovering a felony, to raise the hue and cry, and his neighbours were bound to turn out with him and assist in the discovery of the offender. In the case of a hue and cry, all those joining in the pursuit were justified in arresting the person pursued, even though it turned out that he was innocent. A swift fate awaited any one overtaken
|
track max $clusterTime "locally" as part of session state
This is currently tracked globally as part of the cluster state, which means cluster timestamps received by a session cannot be used by that session until the update has been processed by the cluster actor.
We cannot keep doing this: it effectively freezes cluster time across GetMore commands, because they must all share a ReadMedium.
We need to track a parallel cluster timestamp as part of session state and send the maximum of the two.
This was fixed as part of a larger re-architecting.
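The per-session tracking described above can be sketched roughly as follows (hypothetical names, not the actual driver code; a $clusterTime is modelled as a (seconds, increment) pair, which orders correctly under plain tuple comparison):

```python
class Session:
    """Sketch of per-session max-$clusterTime tracking (hypothetical)."""

    def __init__(self):
        # Max cluster time seen by this session, or None if none yet.
        self.cluster_time = None

    def advance_cluster_time(self, new_time):
        # Track the session-local maximum so timestamps received on this
        # session are usable before the cluster actor has processed them.
        if self.cluster_time is None or new_time > self.cluster_time:
            self.cluster_time = new_time

    def time_to_send(self, global_time):
        # Send the maximum of the session-local and globally tracked
        # cluster times.
        candidates = [t for t in (self.cluster_time, global_time)
                      if t is not None]
        return max(candidates) if candidates else None
```

With something like this in place, a GetMore issued on the session can gossip a cluster time newer than the one the cluster actor has caught up to.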
|
Non-playable character
List of Halo NPCs
Halo: Combat Evolved
* Captain Jacob Keyes
* 343 Guilty Spark
Halo 2
* Commander Miranda Keyes
* Sergeant Major Avery J. Johnson
* SpecOps Commander Rtas 'Vadumee
* 343 Guilty Spark
Halo 3
* Commander Miranda Keyes
* Sergeant Major Avery J. Johnson
* SpecOps Commander Rtas 'Vadumee
* 343 Guilty Spark
* Arbiter?
|
import pandas as pd
from sklearn import preprocessing
from sklearn import impute
import util.data
from util.string import print_primary, print_secondary, print_warning
class Estimator:
# Interface, partly compatible with sklearn.base.TransformerMixin
def __init__(self, k: str = ''):
self.k = k
self.est = None
def extend(self, data: pd.DataFrame):
""" Extend the dataframe by adding additional attributes
This function should be applied to a dataframe before fitting or before
transforming it
"""
pass
def fit(self, data: pd.DataFrame):
# fit a model and save the parameters
pass
def transform(self, data):
# use the fitted model to predict attribute values
raise NotImplementedError
###############################################################################
# Estimator Subclasses
###############################################################################
class Wrapper(Estimator):
""" Wrapper for sklearn.preprocessing estimators
"""
def __init__(self, k: str, est):
super().__init__(k)
self.est = est
def fit(self, data):
self.est.fit(data[self.k].values.reshape(-1, 1))
def transform(self, data):
if util.data.is_int(data[self.k]):
data[self.k] = self.est.transform(
data[self.k].values.reshape(-1, 1)).astype(int)
else:
data[self.k] = self.est.transform(
data[self.k].values.reshape(-1, 1))
def Imputer(k):
return Wrapper(k, impute.SimpleImputer(strategy='median', copy=False))
def MinMaxScaler(k):
return Wrapper(k, preprocessing.MinMaxScaler(feature_range=(-1, 1), copy=False))
def RobustScaler(k):
return Wrapper(k, preprocessing.RobustScaler(copy=False))
def PowerTransformer(k):
    return Wrapper(k, preprocessing.PowerTransformer(copy=False))
class Discretizer(Estimator):
# Bin numerical data
""" Encode data[k] to a numerical format (in range [0,n_bins])
    Use strategy=`uniform` when encoding integers (e.g. id's)
:E Encoder object with attributes `encoders`, `decoders`
"""
def __init__(self, k):
super().__init__(k)
self.n_bins = 3
def fit(self, data):
# row = data[self.k].values
# X = np.array([x for x in X]).reshape(-1, 1)
# bins = np.repeat(n_bins, X.shape[1]) # e.g. [5,3] for 2 features
# encode to integers
# quantile: each bin contains approx. the same number of features
strategy = 'uniform' if util.data.is_int(
data[self.k]) else 'quantile'
self.est = preprocessing.KBinsDiscretizer(
n_bins=self.n_bins, encode='onehot', strategy=strategy)
self.est.fit(data[self.k].values.reshape(-1, 1))
self.n_bins = self.est.bin_edges_[0].size
def transform(self, data):
        print_primary('\tdiscretize `%s`' % self.k)
print('\tDiscretize (bin) strategy: %s & n bins: %i \\\\' %
(self.est.strategy, self.est.n_bins))
if util.data.is_int(data[self.k]):
print('\tAttribute & Number of bins (categories)')
else:
print('\tAttribute & Bin start & Bin 1 & Bin 2 \\\\')
s = ''
for st in [round(a, 3) for a in self.est.bin_edges_[0]]:
s += '$%s$ & ' % str(st)
if s:
print('\t%s & %s' % (self.k, s[:-2]))
else:
print('\t\t no bins')
# note the sparse output
rows = self.est.transform(data[self.k].values.reshape(-1, 1))
util.data.join_inplace(data, rows, self.k, k_suffix='bin')
class RemoveKey(Estimator):
# Remove k, e.g. after discretization
def transform(self, data):
# print(data[self.k + '_bin0'])
# data.drop(self.k, axis=1, inplace=True)
data.drop(columns=self.k, inplace=True)
print('\tk removed', self.k)
class LabelBinarizer(Estimator):
""" Wrapper for sklearn.preprocessing.LabelBinarizer, allowing
pandas.DataFrame mutations
Can discretize (onehot-encode) categorical attributes (or ints). The least
    occurring categories are grouped
"""
def __init__(self, k, n_max=4, use_keys=False):
# :use_keys = use keys as labels
super().__init__(k)
self.n_max = n_max
self.use_keys = use_keys
def fit(self, data):
""" Find most common catorgies, group uncommon categories under a
single `uncommon_value` and fit estimator
"""
print_primary('\nOneHotEncode labels `%s`' % self.k)
# max_prop_null_values = 0.05
# if util.data.proportion_null_values(data, self.k) > max_prop_null_values :
# flag_null_values(data, self.k)
# isinstance is not supported by pd.Series.dtype
if util.data.is_int(data[self.k]):
# TODO or any other int
self.na_value = data[self.k].max() + 1
else:
if 'float' in str(data[self.k].dtype):
print_warning('\t Using LabelBinarizer for floats')
# assume dtype == string
self.na_value = '_pd_na'
# assert data[self.k].isnull().sum(
        # ) == 0, 'Missing values should be removed'
# row = data[self.k].copy()
row = self._transform_na(data)
        # add a new category to group uncommon categories
if row.dtype == 'int64':
self.uncommon_value = max(row.unique()) + 1
self.uncommon_value = max([1e16, self.uncommon_value + 1])
else:
self.uncommon_value = 'Other'
most_common = util.data.select_most_common(
row, n=self.n_max - 1, key=self.uncommon_value, v=0)
self.common_keys = most_common.keys()
self.est = preprocessing.LabelBinarizer()
transformed_row = self._transform_uncommon(row)
self.est.fit(transformed_row)
print('\tLabels:', list(self.common_keys))
def transform(self, data):
# self.est = preprocessing.KBinsDiscretizer(
# n_bins = bins, encode = 'onehot', strategy = strategy)
row = self._transform_na(data)
row = self._transform_uncommon(row)
rows = self.est.transform(row)
if self.use_keys:
keys = list(self.est.classes_)
util.data.join_inplace(data, rows, self.k, keys=keys)
else:
util.data.join_inplace(data, rows, self.k)
def _transform_na(self, data):
return data[self.k].fillna(self.na_value, inplace=False)
def _transform_uncommon(self, row):
# Replace all values that are not in `common_keys`
return util.data.replace_uncommon(row, list(self.common_keys),
self.uncommon_value)
class GrossBooking(Estimator):
def fit(self, data):
print_secondary('\tGrossBooking fit')
regData = data.loc[~data['gross_bookings_usd'].isnull(), :]
cols = regData.columns
keys1 = [k for k in cols if 'bool' in str(k)]
keys2 = [k for k in cols if 'null' in str(k)]
        keys3 = [k for k in cols if 'able_comp' in str(k)]
keys4 = [k for k in cols if 'location_score' in str(k)]
keys5 = [k for k in cols if 'prop_log' in str(k)]
self.fullK = keys1 + keys2 + keys3 + keys4 + keys5 + ['avg_price_comp']
self.fullK.remove('booking_bool')
self.fullK.remove('click_bool')
self.fullK = [k for k in self.fullK if 'log' not in str(k)]
self.est = util.data.regress_booking(regData, self.fullK)
def transform(self, data):
print_secondary('\tGrossBooking transform')
if 'gross_bookings_usd' in data.columns:
data.loc[data['gross_bookings_usd'].isnull(), 'gross_bookings_usd'] = \
self.est.predict(
data.loc[data['gross_bookings_usd'].isnull(), self.fullK])
else:
# unlabelled data
data['gross_bookings_usd'] = self.est.predict(data[self.fullK])
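The classes above all follow sklearn's fit/transform convention. As a minimal, self-contained illustration of that pattern (pure Python, independent of the project's `util` helpers; `MedianImputer` here is illustrative, not part of the codebase):

```python
from statistics import median


class MedianImputer:
    """Minimal stand-alone sketch of the fit/transform pattern used by
    the Estimator subclasses above (illustrative, not project code)."""

    def fit(self, values):
        # Learn the median of the non-missing values.
        self.median_ = median(v for v in values if v is not None)
        return self

    def transform(self, values):
        # Replace each missing value with the learned median.
        return [self.median_ if v is None else v for v in values]


imp = MedianImputer()
imp.fit([1, 2, None, 4])
print(imp.transform([1, 2, None, 4]))  # [1, 2, 2, 4]
```

Fitting learns the statistic once, so the same learned value is reused when transforming held-out data — the same reason the `Wrapper` class above fits its wrapped sklearn estimator separately from transforming.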
|
WATER WORKS BOARD of the CITY OF BIRMINGHAM v. Allan ISOM.
2090413.
Court of Civil Appeals of Alabama.
Aug. 27, 2010.
Susan Walker of Waldrep Stewart & Kendrick, LLC, Birmingham, for appellant.
Jonna M. Denson, Birmingham, for appellee.
THOMPSON, Presiding Judge.
The Water Works Board of the City of Birmingham (“the Board”) appeals from the judgment of the Jefferson Circuit Court awarding Allan Isom permanent-partial-disability benefits pursuant to the Alabama Workers’ Compensation Act, § 25-5-1 et seq., Ala.Code 1975 (“the Act”). For the reasons set forth herein, we affirm.
Isom began working for the Board in 1990. On June 16, 2003, he was injured in an on-the-job accident (“the 2003 injury”). As a result of that accident, Isom filed an action against the Board pursuant to the Act. That action concluded in a settlement agreement between the parties that provided for a lump-sum payment to Isom of $18,480, with the Board’s obligation to pay for Isom’s future medical benefits remaining open. As part of its judgment based on the parties’ settlement, the trial court assigned Isom a 10% permanent whole-body disability rating.
On July 3, 2008, Isom filed an action against the Board in which he asserted that he had been injured again on July 4, 2006, as a result of another on-the-job accident (“the 2006 injury”). He sought relief against the Board pursuant to the Act. The Board filed an answer to the complaint in which it denied that Isom had been injured in an accident on July 4, 2006. The Board asserted, among other things, that the injury of which Isom was complaining was a continuation of the 2003 injury and that Isom’s recovery was limited to compensation for the degree of injury that would have resulted from the 2006 injury if the 2003 injury had not occurred.
On August 26, 2009, the trial court held a bench trial of the action. Isom testified that he began working for the Board in 1990 and that he was promoted to his present position of supervisor in 2002. His duties as a supervisor included operating water valves, locating water leaks, and directing crews that worked in the field.
Isom testified that the 2003 injury stemmed from an automobile accident in which he injured his left shoulder and neck. Isom testified that he had two surgeries on his left shoulder and underwent physical therapy as a result of the 2003 injury. He testified that, after the second surgery, he was released to return to work without restrictions and that he had worked several months before the 2006 injury without seeing a doctor regarding his shoulder. He testified that, during that time, he had no problems with his shoulder and that he could do anything he wanted to do with it with regard to his job. He testified that he felt as though he had recovered and healed from the 2003 injury after the second surgery related to that injury.
At trial, Isom introduced medical records relating to the 2003 injury. The report from a surgery that Dr. David Adkison performed on Isom’s left shoulder on August 28, 2003, indicated that Isom had been diagnosed with a left-shoulder superior labral tear and subacromial impingement. The surgery revealed, among other things, a “very large type II labral tear extending into the origin of the MGHL and involving the entire biceps anchor.”
The medical records indicated that Dr. Daniel Michael began treating Isom in April 2004. A medical record from Isom’s April 26, 2004, appointment with Dr. Michael indicated that Isom was experiencing continued pain in his left shoulder and neck, occasionally radiating to his elbow. Dr. Michael ordered an MRI of Isom’s neck. The MRI revealed some degeneration of Isom’s cervical spine.
A medical note from Isom’s May 5, 2004, visit with Dr. Michael indicated that Dr. Michael decided to treat Isom’s neck pain with an epidural block. A medical note dated June 2, 2004, indicated that Isom obtained significant relief for a few days from his neck pain from the epidural block. A medical note from July 14, 2004, indicated that Isom’s neck pain had virtually resolved but that he was continuing to experience pain in his left shoulder with an increasing popping in the shoulder. Dr. Michael ordered an MRI for Isom’s shoulder to determine whether his shoulder was having difficulty healing from the August 2003 surgery or whether there was some other problem with his rotator cuff that had been previously overlooked. That MRI revealed no evidence of a tear in the rotator cuff or any injury to the labrum. Dr. Michael ordered an epidural block for Isom’s left shoulder, which reduced Isom’s pain.
A medical record from December 20, 2004, indicated that Isom was experiencing stiffness and irritation in his shoulder. A physical examination revealed that he might have been experiencing some recurring tendinitis or bursitis. He was given another epidural block. A medical record from January 3, 2005, indicated that he was continuing to experience pain in his left shoulder and that Dr. Michael determined that he should undergo a second surgery on his shoulder to remove scar tissue from the previous surgery that could have been causing Isom’s shoulder difficulty.
On January 21, 2005, Dr. Michael performed a second surgery on Isom’s left shoulder. It was determined during that surgery that the labrum had not reattached to the glenoid, and corrective procedures were performed to cause that reattachment. A medical note from April 6, 2005, indicated that Isom’s shoulder was better except for some occasional soreness when he crossed his arm over his chest. A medical note from May 4, 2005, indicated that “[t]he only time [Isom] has trouble with the shoulder is when he over does it.” Dr. Michael indicated that the second surgery had been successful. He assigned Isom a medical-impairment rating of 10% to the body as a whole and indicated that Isom’s activities were unrestricted. A medical record from August 10, 2005, indicated that Isom was released from all work restrictions as of that day. A medical record from October 12, 2005, indicated that Isom’s left shoulder was not bothering him very much and that Isom would follow up with Dr. Michael on an as-needed basis.
Isom testified that, on July 4, 2006, the day of his injury made the basis of the present action, the Board was closed. Isom testified that he was on call that day, and he was called in to work because of multiple broken water mains. He stated that, at the time he was injured that day, he was operating a 12-inch water valve with another Board employee. He testified that the valve was difficult to operate because it was old, having been installed before 1921. He testified that, as he and the other employee were pulling the valve, he felt a tearing pain in his left shoulder that went to his left elbow. He stated that he also experienced pain in his neck.
Isom testified that he called Bruce Adams, his supervisor, the next day and told him that he had hurt his shoulder and how he had done so. Isom testified that this was the appropriate procedure for reporting a workplace injury, although he admitted that he did not inform Adams that the injury to his shoulder was a new one. Isom stated that Adams told him to call Herb Allen, the Board’s safety officer, and inform Allen about the injury. Isom testified that Allen directed him to contact Dr. Michael, the second doctor who had treated Isom for the 2003 injury.
A medical record dated July 12, 2006, indicated that Isom returned to Dr. Michael complaining of soreness in his left shoulder. Dr. Michael gave Isom a steroid shot in the shoulder and restricted him from heavy lifting and strenuous activity with his left arm and left shoulder. A medical record from August 7, 2006, indicated that Isom was continuing to experience pain and that Dr. Michael ordered another steroid shot. A medical record dated September 20, 2006, indicated that Isom was continuing to experience pain in his left shoulder. The record states:
“Up until he turned the large valve earlier this year he stated that his shoulder had gotten completely well and he was basically doing everything that he wanted to. Since it is not responding and he is acting the same way he did before we did his last surgery I would be worried that the labral repair might have been disrupted.”
The note also related that Isom was developing cubital tunnel symptoms in his left elbow. Dr. Michael ordered an MRI of Isom’s shoulder. The MRI revealed that the “rotator cuff tendon repair appeared] to be intact with no evidence of recurrence.”
On November 10, 2006, Dr. Michael performed another surgery on Isom’s left shoulder. Based on the surgery, Dr. Michael noted that Isom had a “SLAP” lesion at the insertion of the biceps tendon and that he had an anterior labral tear. He wrote: “[Isom] previously had a labral repair several years ago and with a recent injury this apparently retore that repair.” A medical note from January 24, 2007, indicated that Isom’s left shoulder was progressively improving but that he was experiencing pain in his neck. Dr. Michael ordered a cervical epidural block for Isom’s neck pain. A medical note from March 21, 2007, indicated that Isom was continuing to experience neck pain, and Dr. Michael ordered another epidural block for Isom’s neck, as well as treatment for his left elbow.
The medical records indicate that Dr. Thomas Powell began treating Isom in June 2007 and that Isom was continuing to have pain in his left shoulder and neck at that time. A medical note dated September 10, 2007, indicated that Dr. Powell believed Isom to be at maximum medical improvement, and he ordered an impairment rating. Upon evaluation, Isom was assigned a 10% impairment of the left upper extremity and a 6% impairment of the whole person. A medical note from Isom’s October 23, 2007, appointment with Dr. Powell indicated that Dr. Powell agreed with those impairment ratings. The note indicated that Isom was able to return to work and that Dr. Powell would see Isom on an as-needed basis. He prescribed pain medication for Isom.
Isom testified that, after his last visit with Dr. Powell in October 2007, Dr. Powell released him to return to full-duty work without restrictions. He testified that he had not received treatment for his left shoulder, left elbow, and neck from any other doctors since his last visit with Dr. Powell. Isom testified that he had not had any problems with his left elbow before the 2006 injury but that, since that injury, he has continued to have problems with it. He testified that he had ongoing problems with his left arm and his neck as well. He testified that, before the 2006 injury, he did not have those problems. However, on cross-examination, Isom admitted that he had experienced ongoing pain in his neck since the 2003 injury, for which he took a narcotic pain medication until the last time he saw Dr. Michael in October 2005. Isom testified that, because of the problems with his neck and his left shoulder, he had difficulty sleeping at night and doing particular activities, such as cutting the grass or changing a flat tire.
Herb Allen, the Board’s safety officer, testified that, under the circumstances, Isom’s reporting of the 2006 injury to his supervisor was the correct procedure for Isom to follow to report his injury. Allen testified that he did not recall being informed before October 24, 2006, that Isom had sustained a new shoulder injury, and he stated that he did not recall Isom contacting him on July 5, 2006, regarding the 2006 injury. Allen testified that he found Isom, with whom he had worked for the preceding 18 years, to be honest, to be a hard worker, and to be a dependable employee.
The Board offered into evidence the deposition of Dr. Powell. In his deposition, Dr. Powell testified that he could not state to any degree of medical certainty whether the labral tear was related to the 2003 injury or to the 2006 injury because he did not begin treating Isom until after Isom had had the November 2006 surgery. He stated that if Isom had “had an acute symptomatic tear, whether it was a retear or not,” he would think that it “would be related to an event that had just occurred.” Dr. Powell testified that the 2006 injury “appear[ed] to be a new injury” and that, based on Dr. Michael’s records and assuming those records were correct, it appeared that what he was treating Isom for was caused or contributed to by the July 4, 2006, accident.
A letter from Dr. Michael to Isom’s counsel dated April 28, 2009, was made an exhibit to Dr. Powell’s deposition. In that letter, Dr. Michael wrote, in pertinent part:
“It appears that between [Isom’s] last two surgeries, there was a period of time where he was symptom-free in regard to the left shoulder. It appears that he was back [to] full duty with no restrictions and no limitations in regards to having had the two procedures done, one by Dr. Adkison and the other by me. On or around July 4, 2006, there appeared to be a new injury while turning a large valve. This injury subsequently led to further surgery on the same shoulder where there was injury to the area where he had the previous two surgeries. In reviewing the records, it is my opinion that the incident on or around July 4, 2006, was a new injury even though [it] was in the same area where he had some surgical repair of a torn labrum with that period of time where he was symptom-free. I feel [that] his injury had healed and the injury in July 2006 created a new injury to the old area of the shoulder.”
On December 17, 2009, the trial court entered a final judgment. The trial court found that Isom “was performing his duties at the time of this traumatic injury,” that the injury arose out of his employment with the Board, and that the Board had been given timely notice of the injury. The trial court found that, as a result of the injury to his left shoulder, left elbow, and neck, Isom had suffered a loss of at least 12.5% of the use of his left arm and that he was due to be awarded benefits pursuant to the Act. The trial court awarded Isom $9,459.48 as accrued permanent-partial-disability payments. The trial court awarded Isom weekly permanent-partial-disability payments of $92.74 for the next 190 weeks. The trial court taxed the costs of the action to the Board, and it ordered the Board to be responsible for future medical treatment for Isom’s left shoulder, left elbow, and neck. On December 21, 2009, the trial court amended the amount of costs taxed to the Board. The Board appeals.
Section 25-5-81(e), Ala.Code 1975, provides the standard by which this court reviews appeals in cases arising under the Act. That section provides:
“(e) Review. From an order or judgment, any aggrieved party may, within 42 days thereafter, appeal to the Court of Civil Appeals and review shall be as in cases reviewed as follows:
“(1) In reviewing the standard of proof set forth herein and other legal issues, review by the Court of Civil Appeals shall be without a presumption of correctness.
“(2) In reviewing pure findings of fact, the finding of the circuit court shall not be reversed if that finding is supported by substantial evidence.”
Discussing this standard, this court wrote in Reeves Rubber, Inc. v. Wallace, 912 So.2d 274, 279 (Ala.Civ.App.2005):
“When this court reviews a trial court’s factual findings in a workers’ compensation case, those findings will not be reversed if they are supported by substantial evidence. § 25-5-81(e)(2), Ala.Code 1975. Substantial evidence is ‘evidence of such weight and quality that fair-minded persons in the exercise of impartial judgment can reasonably infer the existence of the fact sought to be proved.’ West v. Founders Life Assurance Co. of Florida, 547 So.2d 870, 871 (Ala.1989). Further, this court reviews the facts ‘in the light most favorable to the findings of the trial court.’ Whitsett v. BAMSI, Inc., 652 So.2d 287, 290 (Ala.Civ.App.1994), overruled on other grounds, Ex parte Trinity Indus., Inc., 680 So.2d 262 (Ala.1996). This court has also concluded: ‘The [1992 Workers’ Compensation] Act did not alter the rule that this court does not weigh the evidence before the trial court.’ Edwards v. Jesse Stutts, Inc., 655 So.2d 1012, 1014 (Ala.Civ.App.1995). However, our review as to purely legal issues is without a presumption of correctness. See Holy Family Catholic School v. Boley, 847 So.2d 371, 374 (Ala.Civ.App.2002) (citing § 25-5-81(e)(1), Ala.Code 1975).”
The Board contends that Isom did not give it adequate notice of the 2006 injury. It argues that he merely indicated to his supervisors that he had hurt his left shoulder without actually relating the 2006 injury to a work-related incident. It argues that Isom did not provide written notice to it of the 2006 injury until October 24, 2006, the day after Dr. Michael determined that he had experienced another labral tear.
Section 25-5-78, Ala.Code 1975, provides:
“For purposes of this article only, an injured employee or the employee’s representative, within five days after the occurrence of an accident, shall give or cause to be given to the employer written notice of the accident. If the notice is not given, the employee or the employee’s dependent shall not be entitled to physician’s or medical fees nor any compensation which may have accrued under the terms of this article, unless it can be shown that the party required to give the notice had been prevented from doing so by reason of physical or mental incapacity, other than minority, fraud or deceit, or equal good reason. Notwithstanding any other provision of this section, no compensation shall be payable unless written notice is given within 90 days after the occurrence of the accident or, if death results, within 90 days after the death.”
Although § 25-5-78 speaks in terms of written notice, this court has written:
“The purpose of written notice is to advise the employer that the employee received a specified injury, in the course of his employment, at a specified time, and at a specified place, so that the employer may verify the injury by its own investigation. James v. Hornady Truck Line, Inc., 601 So.2d 1059 (Ala.Civ.App.1992). Written notice is not required where it is shown that the employer had actual notice of the injury. James. Oral notice is sufficient to give the employer actual notice. James. Like written notice, oral notice imparts to the employer the opportunity to investigate and protect itself against simulated and exaggerated claims. International Paper Co. v. Murray, 490 So.2d 1228 (Ala.Civ.App.), remanded on other grounds, 490 So.2d 1230 (Ala.1984). Even with oral notification, the employer must be notified that the employee was injured while in the scope of his employment. James. The fact that an employer is aware that the employee suffers from a malady or has medical problems is not, by itself, sufficient to charge the employer with actual notice. Russell Coal Co. v. Williams, 550 So.2d 1007 (Ala.Civ.App.1989). Knowledge on the part of a supervisory or representative agent of the employer that a work-related injury has occurred will generally be imputed to the employer. Beatrice Foods Co. v. Clemons, 54 Ala.App. 150, 306 So.2d 18 (Ala.Civ.App.1975).”
Wal-Mart Stores, Inc. v. Elliott, 650 So.2d 906, 908 (Ala.Civ.App.1994).
The Board argues that, in the present case, Isom told Bruce Adams, his supervisor, only that he had hurt his shoulder. We agree with the Board that, if that was all Isom had said to Adams, that information would have been insufficient to provide the Board with the knowledge that the injury causing the pain in Isom’s shoulder was related to his work for the Board. See Premdor Corp. v. Jones, 880 So.2d 1148, 1155 (Ala.Civ.App.2003) (worker’s oral statement that she had hurt her back without indication of how she had hurt her back was not sufficient notice to employer that injury was work related). However, the testimony at trial indicated that Isom told Adams more than that he had hurt his shoulder; Isom testified that he had told Adams how he had hurt his shoulder. Both Isom and Herb Allen, the Board’s safety officer, indicated that Isom’s informing Adams of the 2006 injury was the proper procedure he was to follow to report his injury. In our view, Isom’s and Allen’s testimony was sufficient to support the trial court’s conclusion that Isom had provided adequate notice to the Board of the 2006 injury and sufficient information to apprise it that his injury was related to the work he was performing for the Board. As a result, we cannot agree that the Board demonstrated that Isom failed to provide adequate notice under the Act.
The Board also contends that the trial court erred when it awarded Isom permanent-partial-disability benefits under the Act based on the 2006 injury when the 6% whole-body impairment rating Dr. Powell assigned to Isom for the 2006 injury was less than the 10% whole-body impairment rating found by the trial court in the 2005 judgment it entered based on the 2003 injury. Relying on § 25-5-57(a)(4)e. and § 25-5-58, Ala.Code 1975, the Board argues that “the trial court erred in failing to apportion Isom’s disability between [the 2003 injury] and impairment and his alleged [2006 injury] and impairment” and that Isom was not entitled to any additional payments beyond those which he had received as a result of the 2003 injury because Isom had not experienced an increase in his impairment rating for the 2006 injury over the 2003 injury.
Section 25-5-57(a)(4)e. provides:
“If an employee has a permanent disability or has previously sustained another injury than that in which the employee received a subsequent permanent injury by accident, as is specified in this section defining permanent injury, the employee shall be entitled to compensation only for the degree of injury that would have resulted from the latter accident if the earlier disability or injury had not existed.”
Section 25-5-58 provides: “If the degree or duration of disability resulting from an accident is increased or prolonged because of a preexisting injury or infirmity, the employer shall be liable only for the disability that would have resulted from the accident had the earlier injury or infirmity not existed.”
In Francis Powell Enterprises, Inc. v. Andrews, 21 So.3d 726, 736 (Ala.Civ.App.2009), this court discussed the proper application of § 25-5-58, writing:
“Powell argues that because Andrews had a preexisting back injury with spondylolisthesis, his right to recover workers’ compensation benefits for the November 3, 2003, on-the-job injury is limited by § 25-5-58, Ala.Code 1975....
“In a long line of cases beginning with Ingalls Shipbuilding Corp. v. Cahela, 251 Ala. 163, 36 So.2d 513 (1948) (superseded on other grounds by statute, see Tit. 26, § 262(a), Ala.Code 1940), Alabama appellate courts have held that ‘ “the term ... infirmity in [§ 25-5-58] refer[s] to a condition which affects [the plaintiffs] ability to work as a normal man at the time of the accident or which would probably so affect him within the compensable period.” ’ Ex parte Lewis, 469 So.2d 599, 601 (Ala.1985) (quoting Cahela, 251 Ala. at 173, 36 So.2d at 521). Pursuant to Cahela,
‘the law presumes that there is no preexisting injury or infirmity when the employee is able to fully perform his or her job duties in a normal manner prior to the subject injury. [Section 25-5-58] only applies when the previous injury or infirmity has demonstrated itself as disabling and prevented the employee from earning wages in a normal manner.’
“1 Terry A. Moore, Alabama Workers’ Compensation § 16.25 at 708-09 (1998) (emphasis added; footnote omitted).”
We note that “[§] 25-5-57(a)(4)e. has been construed identically to § 25-5-58 so that if the employee is working normally at the time of the second accident, the law presumes the employee had no preexisting disability or injury.” Alamo v. PCH Hotels & Resorts, Inc., 987 So.2d 598, 604 n. 1 (Ala.Civ.App.2007) (citing Ex parte Bratton, 678 So.2d 1079, 1083 (Ala.1996)).
In the present case, Isom testified that, following the 2003 injury and his recovery therefrom, he was able to return to work and perform his job fully without any restrictions for a period of several months before the 2006 injury. He testified that, during that period, he experienced no problems with his left shoulder and that he could do anything he wanted with regard to his job. The medical records submitted at trial support Isom’s testimony in this regard. Because the 2003 injury neither demonstrated itself as disabling nor prevented Isom from earning wages in a normal manner, the trial court did not err in refusing to consider the effects of the 2003 injury on Isom’s impairment following the 2006 injury. The trial court’s judgment is due to be affirmed as to this issue.
Finally, relying on the “last-injurious-exposure rule,” the Board contends that Isom’s 2006 injury was merely a recurrence of his 2003 injury and that, because the Board had already compensated him for the 2003 injury, he was not entitled to any further compensation based on the 2006 injury. The Board points to medical notes indicating that the 2006 injury was a “retear” of the surgical repair on Isom’s glenoid labrum, which had been torn as a result of the 2003 injury. The Board also points out that Isom’s medical-impairment rating from the 2003 injury exceeded his medical-impairment rating from the 2006 injury. The Board concludes that “[t]he medical evidence undisputedly establishes that Isom sustained a recurrence of his [2003 injury], and, therefore, that Isom did not sustain a new injury entitling him to additional compensation for the same impairment of the same torn labrum that he sustained on June 13, 2003.”
Generally, the last-injurious-exposure rule is utilized by courts to determine which of multiple employers is liable for a subsequent injury sustained by a worker.
“Under the ‘last injurious exposure’ rule, ‘liability falls upon the carrier covering [the] risk at the time of the most recent injury bearing a causal relation to the disability.’ North River Insurance Co. v. Purser, 608 So.2d 1379, 1382 (Ala. Civ.App.1992). The trial court must determine whether the second injury is ‘a new injury, an aggravation of a prior injury, or a recurrence of an old injury; this determination resolves the issue of which insurer is liable.’ Id.
“A court finds a recurrence when ‘the second [injury] does not contribute even slightly to the causation of the [disability].’ 4 A. Larson, The Law of Workmen’s Compensation, § 95.23 at 17-142 (1989). ‘[T]his group also includes the kind of case in which a worker has suffered a back strain, followed by a period of work with continuing symptoms indicating that the original condition persists, and culminating in a second period of disability precipitated by some lift or exertion.’ 4 A. Larson, § 95.23 at 17-152. A court finds an ‘aggravation of an injury when the ‘second [injury] contributed independently to the final disability.’ 4 A. Larson, § 95.22 at 17-141. If the second injury is characterized as a recurrence of the first injury, then the first insurer is responsible for the medical bills; however, if the injury is considered an aggravation of the first injury, then it is considered a new injury and the employer at the time of the aggravating injury is liable for the medical bills and disability payments. North River, supra.”
United States Fid. & Guar. Co. v. Stepp, 642 So.2d 712, 715 (Ala.Civ.App.1994).
As noted above, in the April 28, 2009, letter from Dr. Michael to Isom’s counsel, Dr. Michael opined that the 2003 injury had completely healed before Isom sustained the 2006 injury and that the 2006 injury was a new injury. Although Dr. Powell testified that he could not state to any degree of medical certainty whether Isom’s labral tear following the accident causing the 2006 injury was related to the 2003 injury or to the 2006 injury because he did not treat Isom until after the November 2006 surgery, Dr. Powell also testified that if Isom had “had an acute symptomatic tear, whether it was a retear or not,” he would think that it “would be related to an event that had just occurred” and that the 2006 injury “appear[ed] to be a new injury.” Dr. Powell also testified that, based on Dr. Michael’s records and assuming those records were correct, it appeared that what Dr. Michael was treating Isom for was caused or contributed to by the accident that he described as having occurred on July 4, 2006.
Isom testified that, after the January 2005 surgery, he was released to return to work without restrictions and that he worked several months before the 2006 injury without seeing a doctor regarding his shoulder. He testified that, during that time, he had no problems with his shoulder and that he could do anything he wanted to do with it with regard to his job, and he testified that he felt as though he had recovered and healed from the 2003 injury.
Dr. Michael’s medical records related to the treatment of Isom’s 2003 injury largely support Isom’s testimony. Those records indicated that, within three months of having performed the January 2005 surgery on Isom’s shoulder, Isom was experiencing only occasional soreness in his shoulder, that his shoulder pain had resolved or had virtually resolved within eight months of the January 2005 surgery, and that Dr. Michael believed that the January 2005 surgery had been successful. Additional medical records indicated that Dr. Michael released Isom from all work restrictions almost 11 months before the 2006 injury.
The evidence, considered in its totality, supports the trial court’s conclusion that the 2006 injury was compensable separately from the 2003 injury. The above-discussed evidence indicates that the labral tear in Isom’s left shoulder from the June 2003 accident had healed and that the July 4, 2006, accident caused a new labral tear in his left shoulder or, at the very least, resulted in a retearing of the previous labral tear, i.e., an aggravation of the previous tear. See Health-Tex, Inc. v. Humphrey, 747 So.2d 901, 905 (Ala.Civ.App. 1999) (finding that evidence “that [the worker] was not experiencing problems when she first started working for the second company, but that after a few months of performing her repetitive-motion sewing duties she began to experience pain and numbness characteristic of carpal tunnel syndrome,” and the worker’s treating physician’s testimony “that her work at the second company contributed to her subsequent carpal tunnel syndrome,” supported trial court’s conclusion that the worker’s subsequent injury was an aggravation of her prior injury rather than a recurrence of it). See also 1 Terry A. Moore, Alabama Workers’ Compensation § 6:19 (1998 & 2009 Supp.) (“A new injury is a harmful change in the condition of the employee that is different and independent of the original compensable injury, even if located near or on the site of the original compensable injury.”); id. at § 6:20 (“An aggravation occurs when, after the employee has resumed working normally without continuing symptoms, a subsequent work-related factor independently worsens the original compensable injury.”). As a result, we find no merit in the Board’s contention that the evidence supported only a finding that the 2006 injury was a mere recurrence of the 2003 injury, and, in this regard, the trial court’s judgment is due to be affirmed.
Based on the foregoing, we conclude that the trial court’s judgment is supported by substantial evidence and that the Board has failed to demonstrate a basis on which to hold the trial court in error. Accordingly, we affirm the trial court’s judgment.
AFFIRMED.
PITTMAN, BRYAN, THOMAS, and MOORE, JJ., concur.
. Neither party contends that the Act does not apply in this case on the basis that the Board is an agency of a municipality that is exempted from the Act by virtue of § 25-5-13(b), Ala.Code 1975, and the record does not reflect that that exemption is applicable to the Board.
. During Dr. Powell's deposition, the Board objected to, and moved to strike, Dr. Michael's letter. When the letter appeared on Isom's pretrial list of exhibits, the Board filed an objection to the letter. Isom did not offer the letter as an exhibit at trial. However, at the end of the trial, the Board offered Dr. Powell's deposition into evidence and did not move, pursuant to Rule 32(b), Ala. R. Civ. P., to strike Dr. Michael's letter as an exhibit to the deposition. Having failed to make such a motion and obtain a ruling thereon from the trial court, the letter became a part of the record that the trial court was free to consider. See Glenn v. Vulcan Materials Co., 534 So.2d 598, 601-02 (Ala.1988), overruled, on other grounds, Lowman v. Piedmont Exec. Shirt Mfg. Co., 547 So.2d 90, 95 (Ala.1989). Cf. Bryant v. State Farm Fire & Cas. Ins. Co., 447 So.2d 181, 184 (Ala.1984) ("In conclusion, we add that in the absence of a motion to strike or exclude the witness's answer and a ruling by the trial court on the motion, there is nothing for this Court to review.”).
In apparent recognition of its oversight in failing to move the trial court to strike Dr. Michael’s letter as an exhibit to the deposition that it offered to the court, the Board moves this court to strike the letter as an exhibit to the deposition of Dr. Powell. For reasons requiring no citation to law, such a request is wholly improper at the appellate-court level. Even if we were not to consider the letter on appeal, however, we conclude that the other evidence of record, discussed herein, supports the trial court’s determination that the 2006 injury is not a recurrence of the 2003 injury but, instead, is a new injury or an aggravation of the 2003 injury.
. The Board also moves this court to strike the portion of Dr. Powell’s deposition testimony that was based on Dr. Michael’s medical records and opinions. However, the Board, which offered Dr. Powell’s deposition into evidence, did not move to strike that portion of Dr. Powell's testimony in the trial court, and, as a result, the Board will not be heard to argue on appeal that the trial court's judgment cannot be supported by that testimony. See supra note 2.
|
Two fragments of limestone approximately 7cm across are remnants of roof tiles. No find spot is specified. A whetstone fragment and the lower stone from a rotary quern, now missing, were found 'near' inhumation 2.
ANIMAL BONE
Although bones of sheep/goat, cattle, horse, deer and bird (unspecified) were kept, much of the material mentioned in Grant King's notes is now missing. It is therefore difficult to determine any distribution of bone over the site or to associate it with dateable pottery. The majority of the labelled bone comes from Cutting IVC.
DISCUSSION
The pottery evidence from Erlestoke Detention Centre suggests use of the area over a long period beginning sometime in the Bronze Age. A hiatus following the middle Iron Age was followed by activity on the site in the late Iron Age and throughout almost the whole of the Roman period. The quantity of Savernake wares perhaps reflects most intensive use in the 1st and 2nd centuries AD.
It is probable that this area was on the fringe of a farmstead or small rural settlement where marginal land unsuitable for agricultural or light industrial purposes would have been used for domestic rubbish, perhaps in pits (hence the large size of many of the sherds). Such 'waste ground' would also have been used for adult burials, not normally allowed within a settlement during the Roman period. The cluster of inhumations in Area B, at least three of which are associated with Romano-British material, may represent some of the members of such a community.
The incomplete recording of many such small (<10 burials) rural Romano-British cemeteries in Wiltshire has made it difficult to detect patterns in burial practices of this period. Nevertheless, Erlestoke is in keeping with other more recently excavated cemeteries of this type where, for example, a mixture of flexed, crouched, and extended burials is recorded (e.g. Figheldean and Maddington Farm). Similarly, although grave goods occur in roughly a third overall of burials in such cemeteries, this proportion can vary greatly from cemetery to cemetery: at Eyewell Farm two of a possible eight burials contain hobnails, while at Figheldean hobnails are included in eight of nine burials. Hobnails, as in these two examples and at Erlestoke, are the most frequent inclusion. However, the sparseness of securely dateable artefacts within many of the burials in these and larger cemeteries makes it difficult to trace use of these cemeteries over time and some of the burials at Erlestoke may well be Iron Age.
As the area within the Detention Centre is now almost entirely built over, there will be in future very little scope for further examination of the site. Any conclusions about its use in previous periods must therefore remain tantalising speculation.
ACKNOWLEDGEMENTS
The authors are most grateful for the specialist contributions of Philip Harding (flints), Dr. Paul Robinson (Iron Age pottery), Robert Hopkins (Savernake and samian wares), Dr. Brenda Dickinson (samian stamps), Dr. Kay Hartley (mortaria), Dr. Colin Pardoe (human remains) and Dr. I.W.Young FRCS (animal bone). We should also like to thank Mr Michael Cook, and Mr Adrian Mills, former governors of Erlestoke Prison, for their assistance. Mr Nicholas Griffiths kindly produced all the drawings. Finally we would like to thank Mrs Marian Geeves for her help in compiling the archive.
CLARKE, 1979, The Roman cemetery at Lankhills. Oxford
CRUMMY, N., 1983, The Roman Small Finds from Excavations in Colchester 1971-9. Colchester: Colchester Archaeological Trust
FITZPATRICK, A.P., and CROCKETT, A.D., 1998, A Romano-British Settlement and Inhumation Cemetery at Eyewell Farm, Chilmark. WANHM 91, 11-35
GRAHAM, A., and NEWMAN, C., 1993, Recent Excavations of Iron Age and Romano-British Enclosures in the Avon Valley, Wiltshire: Site A: Figheldean. WANHM 86, 10-52
|
Handle GPS coordinates in search form
When entering GPS coordinates (e.g. 50.6416,3.0597), the map should understand them and display the result as a point on the map.
Agreed.
Actually, this is what currently works:
Calling the following URL directly: https://www.qwant.com/maps/place/latlon:63.16595:10.11174 (which is near the address "1966 Hølondvegen, 7212 Korsvegen, Norway").
Left-click anywhere on the map, for example near "63.16594, 10.11170". Qwant Maps then displays "Near ..." and the coordinate "63.16594 : 10.11170" also appears in the Search input field.
This is what does not work yet (but should): entering (or confirming) a coordinate in the search field, like "63.16594, 10.11170", then clicking "Search"! The behaviour of case 2 above is odd in that Qwant puts "63.16594 : 10.11170" into the Search field and then fails when the user confirms and clicks "Search" again. In both cases Qwant currently responds with "Sorry, we could not find this place".
The solution and behaviour I'd expect is that when a user searches for (or confirms) a coordinate like "63.16594, 10.11170" (or "63.16594 : 10.11170"), Qwant Maps centers and zooms there, and performs a reverse geocoding to display "Near ...".
Any update on this? Extracting coordinates from a string to a lat/lon is a no-brainer.
I would accept many delimiters: space, colon, semicolon, etc.
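As a sketch of the extraction step described above (a hypothetical helper, not code from the Qwant Maps codebase), a search query could be tested against a coordinate pattern before falling back to normal geocoding, accepting comma, colon, semicolon, or whitespace as the delimiter:

```javascript
// Hypothetical helper: extract a lat/lon pair from a free-text search query.
// Accepts ",", ":", ";" or whitespace as delimiter and validates ranges.
function parseLatLon(query) {
  const match = query.trim().match(
    /^(-?\d{1,2}(?:\.\d+)?)\s*[,:;\s]\s*(-?\d{1,3}(?:\.\d+)?)$/
  );
  if (!match) return null;
  const lat = parseFloat(match[1]);
  const lon = parseFloat(match[2]);
  // Reject values outside valid geographic ranges.
  if (Math.abs(lat) > 90 || Math.abs(lon) > 180) return null;
  return { lat, lon };
}
```

If the helper returns a pair, the map could center and zoom there and trigger a reverse geocoding for the "Near ..." label; otherwise the query would go through the regular search path.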
|
# Who-Loses
Who-Loses is an interactive website that allows users to create and join rooms and pick a random loser from the people currently in the room.
A fun little project that is the result of experimenting with Web Sockets.
Stack consists of Node.js using the Express.js framework, hosted on [Zeit's Now](https://zeit.co/now) service.
Website is live at [who-loses.com](https://who-loses.com/)
On connecting to the website, the browser tries to connect to the WebSocket server running on the same Node.js server. Users can create or join rooms using the sidebar and pick a loser when in a room.
The 'magic' behind the scenes is that when a user creates or joins a room, an event fires that broadcasts the action to the rest of the connected clients using the WebSocket API. The UI is automatically updated without the other users having to refresh the page.
When somebody clicks on the "Pick a Loser" button in a room, a random user that is currently in that room is chosen and announced to everyone else in that room.
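The selection itself can be sketched in a few lines of Node.js; the `room.users` shape here is an assumption for illustration, not the actual who-loses data model:

```javascript
// Sketch: choose one user uniformly at random from a room.
// Returns null when the room is empty or has no user list.
function pickLoser(room) {
  if (!room.users || room.users.length === 0) return null;
  const index = Math.floor(Math.random() * room.users.length);
  return room.users[index];
}
```

In the real app, the chosen name would then be broadcast over the WebSocket to every client in that room.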
If no one else is using the site, open up another tab in your web browser and enjoy who-loses.com by yourself!
Elements of the UI that get updated include:
* Sidebar room list names
* Sidebar number of users in a room
* 'In Room Now' names
* Modal announcing the room's loser
Popular room names include:
* Who is walking the dog?
* Who is taking out the bins tonight?
* Who has to wash the dishes?
|
Unable to save a state into a file through MS office TaskPane using React JS
As a user writing an MS Word template, after I have added a number of manual fields on the ‘AddField’ tab, I want to be able to save this state to disk in a config file. When the Save button is pressed, a dialog should open allowing the user to choose a filename and save a file to disk. The file should contain the data in this.state.addedFields in JSON format, but use extension .dg so that we can easily differentiate docgen files.
Simply, Clicking save button opens dialog to choose location to save a .dg file in MS word TaskPane.
This code works fine in the browser, but in the MS Word task pane it doesn't trigger anything.
// saveAs comes from the file-saver package: import { saveAs } from "file-saver";
const handleSave = () => {
const { addedFields } = this.state;
const data = JSON.stringify(addedFields);
const filename = "document.dg";
const blob = new Blob([data], { type: "text/plain;charset=utf-8" });
saveAs(blob, filename); // saveAs is synchronous, so no await is needed
}
This code, which mixes Electron's dialog API with fs, returns a "can't resolve fs" error. It seems not to work in React, or I'm using the wrong imports and implementation.
// Requires Electron: const { dialog } = require('electron'); const fs = require('fs');
// Neither module is available inside an Office web add-in.
function onSaveButtonClick() {
// Show the save dialog
dialog.showSaveDialog({
defaultPath: 'untitled.dg',
filters: [{ name: 'DG Files', extensions: ['dg'] }]
}).then(result => {
if (result.filePath) {
// Write the file
const fileContent = 'Your file contents here'; // Replace with your actual file content
fs.writeFile(result.filePath, fileContent, 'utf-8', err => {
if (err) {
console.error('An error occurred while saving the file:', err);
} else {
console.log('The file was saved successfully!');
}
});
} else {
console.log('The user did not select a file location');
}
}).catch(err => {
console.error('An error occurred while showing the save dialog:', err);
});
}
Do you develop a Word add-in?
yes @EugeneAstafiev but I'm using react to interact with MS taskpane
I want to be able to save this state to disk in a config file. When the Save button is pressed, a dialog should open allowing the user to choose a filename and save a file to disk.
For security reasons you can't access the local file system on the end user machine. See Persist add-in state and settings for possible options in web add-ins.
Use members of the Office JavaScript API that store data as either:
Name/value pairs in a property bag stored in a location that depends on add-in type.
Custom XML stored in the document.
Also you may consider using techniques provided by the underlying browser control: browser cookies, or HTML5 web storage (localStorage or sessionStorage). However, some browsers or the user's browser settings may block browser-based storage techniques. You should test for availability as documented in Using the Web Storage API.
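As a minimal sketch of the web-storage option (the `docgen.addedFields` key and the injected `storage` parameter are assumptions for this example, not part of Office.js), the add-in could persist the field list like this, testing for availability first as the Web Storage API documentation recommends:

```javascript
// Sketch: persist the add-in's field list with HTML5 web storage.
// `storage` is injected (window.localStorage in the task pane) so the
// availability probe can be applied before any read or write.
function storageAvailable(storage) {
  try {
    const probe = "__storage_test__";
    storage.setItem(probe, probe);
    storage.removeItem(probe);
    return true;
  } catch (e) {
    // Storage may be blocked by browser settings.
    return false;
  }
}

const FIELDS_KEY = "docgen.addedFields"; // arbitrary key chosen for this example

function saveFields(storage, addedFields) {
  if (!storageAvailable(storage)) return false;
  storage.setItem(FIELDS_KEY, JSON.stringify(addedFields));
  return true;
}

function loadFields(storage) {
  if (!storageAvailable(storage)) return [];
  const raw = storage.getItem(FIELDS_KEY);
  return raw ? JSON.parse(raw) : [];
}
```

In the component, `saveFields(window.localStorage, this.state.addedFields)` would run on the Save click, and `loadFields(window.localStorage)` would restore the state on mount; this keeps the data per-browser rather than writing a .dg file to disk, which the add-in sandbox does not allow.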
I tried to read through the docs you shared, but none of the approaches worked for me.
You can post or vote for an existing feature request on Tech Community where they are considered when the Office dev team goes through the planning process.
|
package cn.uncode.cache.store.redis.cluster;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import cn.uncode.cache.CacheUtils;
import cn.uncode.cache.framework.ICache;
import cn.uncode.cache.framework.util.ByteUtil;
import cn.uncode.cache.framework.util.SerializeUtil;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
public class RedisStore implements ICache<Object, Object> {
private static final Logger LOG = LoggerFactory.getLogger(CacheUtils.class);
private JedisClusterCustom jedisCluster;
public void clear() {
}
@Override
public Serializable get(Object key) {
byte[] result = null;
byte[] tkey = SerializeUtil.serialize(key);
try {
if (jedisCluster.exists(tkey)) {
result = jedisCluster.get(tkey);
Object object = SerializeUtil.unserialize(result);
if (LOG.isDebugEnabled())
LOG.debug("-->[get] read from redis success!key:"
+ key.toString() + ",result:" + object.toString());
return (Serializable) object;
} else {
if (LOG.isDebugEnabled())
LOG.debug("-->" + " [read] not exists in redis!");
return null;
}
} catch (Exception e) {
LOG.error("[read] redis cache error", e);
return null;
}
}
@Override
public void put(Object key, Object value) {
byte[] tkey = SerializeUtil.serialize(key.toString());
try {
jedisCluster.set(tkey, SerializeUtil.serialize(value));
String valStr = "";
if (value != null) {
valStr = value.toString();
}
LOG.debug("-->[put] write redis success!key:" + key.toString()
+ ",value:" + valStr);
} catch (Exception e) {
LOG.error("[put] redis cache error", e);
}
}
@Override
public void put(Object key, Object value, int expireTime) {
byte[] tkey = SerializeUtil.serialize(key.toString());
try {
jedisCluster.set(tkey, SerializeUtil.serialize(value));
jedisCluster.expire(tkey, expireTime);
String valStr = "";
if (value != null) {
valStr = value.toString();
}
LOG.debug("-->[put] write redis success!key:" + key.toString()
+ ",value:" + valStr + ",expire:" + expireTime);
} catch (Exception e) {
LOG.error("[put] redis cache error", e);
}
}
@Override
public void remove(Object key) {
byte[] tkey = SerializeUtil.serialize(key.toString());
try {
if (jedisCluster.exists(tkey)) {
jedisCluster.expire(tkey, 0);
}
LOG.debug("-->[remove] write redis success!key:" + key.toString());
} catch (Exception e) {
LOG.error("[remove] redis cache error", e);
} finally {
// jedisCluster.close();
}
}
@Override
public List<Object> keys(String pattern) {
List<Object> list = new ArrayList<Object>();
TreeSet<String> keys = innerKeys(pattern);
for (String str : keys) {
Object obj = SerializeUtil.unserialize(ByteUtil.stringToByte(str));
if (obj != null) {
list.add(obj);
}
}
return list;
}
@Override
public int size() {
int count = 0;
try {
count = innerKeys("*").size();
} catch (Exception e) {
LOG.error("redis cache error", e);
} finally {
// jedisPool.returnResource(jedis);
}
return count;
}
@Override
public boolean isExists(Object key) {
byte[] tkey = SerializeUtil.serialize(key.toString());
try {
if (jedisCluster.exists(tkey)) {
return true;
}
return false;
} catch (Exception e) {
LOG.error("[isExists] redis cache error", e);
return false;
}
}
public JedisClusterCustom getJedisCluster() {
return jedisCluster;
}
public void setJedisCluster(JedisClusterCustom jedisCluster) {
this.jedisCluster = jedisCluster;
}
private TreeSet<String> innerKeys(String pattern) {
TreeSet<String> keys = new TreeSet<String>();
Map<String, JedisPool> clusterNodes = jedisCluster.getClusterNodes();
for (String key : clusterNodes.keySet()) {
JedisPool jp = clusterNodes.get(key);
Jedis connection = jp.getResource();
try {
keys.addAll(connection.keys(pattern));
} catch (Exception e) {
e.printStackTrace();
} finally {
connection.close();
}
}
return keys;
}
@Override
public Object get(Object key, cn.uncode.cache.framework.ICache.Level level) {
return null;
}
@Override
public void put(Object key, Object value, cn.uncode.cache.framework.ICache.Level level) {
}
@Override
public void put(Object key, Object value, int expireTime, cn.uncode.cache.framework.ICache.Level level) {
}
@Override
public void remove(Object key, cn.uncode.cache.framework.ICache.Level level) {
}
@Override
public void removeAll() {
}
@Override
public void removeAll(cn.uncode.cache.framework.ICache.Level level) {
}
@Override
public int ttl(Object key) {
return 0;
}
@Override
public int ttl(Object key, cn.uncode.cache.framework.ICache.Level level) {
return 0;
}
@Override
public boolean isExists(Object key, cn.uncode.cache.framework.ICache.Level level) {
return false;
}
@Override
public Set<String> storeRegions() {
return null;
}
@Override
public Map<String, Set<String>> storeRegionKeys() {
return null;
}
@Override
public Object putIfAbsent(Object key, Object value) {
// TODO Auto-generated method stub
return null;
}
@Override
public Object putIfAbsent(Object key, Object value, int expireTime) {
// TODO Auto-generated method stub
return null;
}
}
|
Lithia
Background
Lithia is the current Princess of the World of Gods.
In the events of Princess x Princess, Lithia and Raito Aioi are dating.
Character Relationships
* Kunzite - Her father.
* Nelia - Her childhood friend.
* Limes - Her friend.
* Lisianthus - Her great-aunt.
* Kikyou - Her great-aunt.
* Eustoma - Her great-grandfather.
Trivia
* Her name is derived from "spodumene," a gemstone that contains lithium.
|
Lebanon | Land Portal
Library
Organizations
Following World War I, France acquired a mandate over the northern portion of the former Ottoman Empire province of Syria. The French demarcated the region of Lebanon in 1920 and granted this area independence in 1943. Since independence the country has been marked by periods of political turmoil interspersed with prosperity built on its position as a regional center for finance and trade. The country's 1975-90 civil war that resulted in an estimated 120,000 fatalities, was followed by years of social and political instability. Sectarianism is a key element of Lebanese political life.
The Arab Union of Surveyors aims to promote cooperation, coordination and communication among surveyors in the Arab countries.
|
package net.alexheavens.cs4099.concurrent;
/**
* The WaitRegistrar specifies an interface for Threads to wait and resume,
* allowing custom operations on the part of the WaitRegistrar to be performed
* on either.
*
* @author Alexander Heavens <[email protected]>
* @version 1.0
*/
public interface WaitRegistrar {
/**
* Performs operations before waiting the Thread that made the call to
* await. Any implementation must wait the current Thread during this call.
*
* @throws InterruptedException
* On interruption of the waiting Thread.
*/
public void waitThread() throws InterruptedException;
/**
* Notifies a specific thread waiting at the registrar. Any implementation
* must notify the target Thread in this call.
*
* @param target
* the thread to be resumed.
*/
public void notifyThread(Thread target);
/**
* @param member
* any Thread.
* @return Whether member is waiting in the registrar.
*/
public boolean containsThread(Thread member);
}
|
/*******************************************************
 *
 * Author: 胡庆访
 * Created: 20140814
 * Runtime: .NET 4.0
 * Version: 1.0.0
 *
 * History:
 * File created by 胡庆访, 20140814 15:49
 *
 *******************************************************/
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Rafy.Domain.ORM
{
/// <summary>
/// Currently, this class only supports relational databases (Rdb).
/// </summary>
public static class TreeIndexHelper
{
/// <summary>
/// Resets all TreeIndex values in the entire table.
/// Note: this method only guarantees that the generated TreeIndex values have correct parent-child relationships; the order of nodes may be shuffled.
///
/// This method is intended for whole-table repair when the TreeIndex data is corrupted.
/// TreeIndex is known to become corrupted in the following situations:
/// * Right after a non-tree entity type is changed into a tree entity type, all TreeIndex values in the historical data are empty.
/// * The developer did not use the tree entity in the prescribed way.
/// * The developer modified the database directly, corrupting TreeIndex.
/// </summary>
/// <param name="repository"></param>
public static void ResetTreeIndex(EntityRepository repository)
{
using (var tran = RF.TransactionScope(repository))
{
//First, clear all TreeIndex values.
ClearAllTreeIndex(repository);
var all = repository.GetTreeRoots();
if (all.Count > 0)
{
(all as ITreeComponent).LoadAllNodes(LoadAllNodesMethod.ByTreePId);
//If the first node loaded happens to be a root node, the result after
//loading is a complete tree whose indices are already generated, so no
//further processing is needed.
if (all.IsTreeRootList)
{
all.ResetTreeIndex();
repository.Save(all);
}
else
{
var cloneOptions = CloneOptions.ReadSingleEntity(CloneValueMethod.LoadProperty);
var oldList = new List<Entity>();
all.EachNode(e =>
{
var cloned = repository.New();
cloned.Clone(e, cloneOptions);
cloned.PersistenceStatus = PersistenceStatus.Saved;
oldList.Add(cloned);
return false;
});
var newList = repository.NewList();
while (oldList.Count > 0)
{
var countBefore = oldList.Count;
foreach (var item in oldList)
{
var treePId = item.TreePId;
if (treePId == null)
{
newList.Add(item);
//Removing during enumeration is safe here only because we break immediately.
oldList.Remove(item);
break;
}
else
{
var parent = newList.EachNode(e => e.Id.Equals(treePId));
if (parent != null)
{
parent.TreeChildren.LoadAdd(item);
oldList.Remove(item);
break;
}
}
}
//Guard against an endless loop when an item's TreePId refers to a parent that is not present in the data.
if (oldList.Count == countBefore)
{
throw new InvalidOperationException("TreePId refers to a parent node that does not exist; the tree data is inconsistent.");
}
}
TreeComponentHelper.MarkTreeFullLoaded(newList);
newList.ResetTreeIndex();
repository.Save(newList);
}
}
tran.Complete();
}
}
private static void ClearAllTreeIndex(EntityRepository repository)
{
var dp = repository.DataProvider as RdbDataProvider;
if (dp == null)
{
throw new InvalidProgramException("The TreeIndexHelper.ResetTreeIndex method can only be used with a relational database.");
}
var table = dp.DbTable;
var column = table.Columns.First(c => c.Info.Property == Entity.TreeIndexProperty);
using (var dba = dp.CreateDbAccesser())
{
dba.ExecuteText(string.Format("UPDATE {0} SET {1} = NULL", table.Name, column.Name));
}
}
}
}
|
Electronic apparatus, content recording method, and program therefor
ABSTRACT
An electronic apparatus communicable with another electronic apparatus via a network is disclosed. The electronic apparatus may include receiving means for receiving a content recorded on a first recording medium of the other electronic apparatus from the other electronic apparatus, determining means for determining whether the received content is reproducible or irreproducible by decoding the content, and recording means for recording the content on a second recording medium if the content is determined to be reproducible.
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority from Japanese Patent Application No. JP 2006-107298 filed in the Japanese Patent Office on Apr. 10, 2006, the entire content of which is incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an electronic apparatus capable of recording a content such as video, a content recording method, and a program therefor.
2. Description of the Related Art
In recent years, a network system has been proposed in which contents can be reproduced in a plurality of places in the home by interconnecting a plurality of electronic apparatuses, such as a video recorder and a television, via a network such as a home network (home LAN (Local Area Network)) and exchanging contents such as video among the respective electronic apparatuses.
When a content is copied or transferred (so-called moved) from one apparatus to a storage device such as a HDD of another electronic apparatus, or to a medium such as an optical disc such as a DVD (Digital Versatile Disk) or a BD (Blu-ray Disc (registered trademark)), via a network in such a system, the content sometimes turns out to be irreproducible after being copied or moved. In this case, useless copy processing is performed, and when writing is performed on a write-once medium such as a DVD−R, the medium may be wasted. Further, concerning a copyrighted content, since the content at the source of the move is erased, the content itself may be lost.
In relation to processing when a content is copied or reproduced via a network, Japanese Patent Application Laid-Open No. 2005-338959 (Paragraph [0008], FIG. 8, etc.) describes an information processing device which, when a content is copied or reproduced, acquires from the outside a non-permission list describing the types or versions of applications inappropriate for this copying or reproduction, together with a startup file describing the types and versions of applications appropriate for the copying or reproduction of the content, specifies an application installed in itself based on the startup file, and verifies the specified application against the non-permission list so as to be able to prevent execution of any application described in the non-permission list.
SUMMARY OF THE INVENTION
However, in a technology described in the above document, it is necessary to acquire the above non-permission list and startup file from an external Web server, for example, via the Internet in order to determine whether execution of an application to copy or reproduce a content is permitted or not. Accordingly, it is difficult to acquire the above non-permission list and startup file, for example, under an environment where only a content providing side apparatus and a content using side apparatus are connected to a network (connected by a dedicated line), so that it is difficult to make the above determination of permission or non-permission.
Further, in the technology of the above patent document, the permission/non-permission of copying or reproduction of the content is determined with reference to information such as the type, version, and so on of the application, and hence even if the execution of the application is permitted when the content is copied, the content sometimes turns out to be irreproducible when being actually reproduced. For example, instead of a content, like the content in the above patent document, which is provided in a content delivery service and whose reproduction is guaranteed if a predetermined application is used, a content recorded, edited, or created by a user of another apparatus on his or her own, or the like is sometimes a content created in a different format or sometimes a content obtained by combining contents with plural formats even if its extension is the same as that of a content delivered from the above delivery service or the like. If copying of such a content is permitted, there are still dangers that a medium is wasted and the content is lost.
In view of the above circumstances, it may be desirable to provide an electronic apparatus capable of certainly preventing an irreproducible content from being copied or moved from another apparatus, a content recording method, and a program therefor.
According to a principal aspect of the present invention, there is provided an electronic apparatus communicable with another electronic apparatus via a network. The electronic apparatus may include receiving means for receiving a content recorded on a first recording medium of the other electronic apparatus from the other electronic apparatus, determining means for determining whether the received content is reproducible or irreproducible by decoding the content, and recording means for recording the content on a second recording medium of the electronic apparatus if the content is determined to be reproducible.
Here, the electronic apparatus may be a recording/reproducing device such as a HDD (Hard Disk Drive) recorder, a DVD (Digital Versatile Disk) recorder, a BD (Blu-ray Disc (registered trademark)) recorder, or their combined recorder, a PC (Personal Computer) (which may be either a desktop or a notebook type), a portable audio/video player, a television (including a portable television), a portable telephone, a PDA (Personal Digital Assistant), a game machine, or any other electric appliance. The content may be video data such as a television program, audio data such as music, text data such as a so-called electronic book, or the like. The first recording medium may be, for example, a hard disk, a flash memory, or the like, and the second recording medium may be, for example, a hard disk, an optical disc such as a DVD (DVD-Video, DVD-RAM, DVD−R, DVD−RW, DVD+R, DVD+RW, or the like) or a BD (Blu-ray Disc (registered trademark)), any other magneto-optical disc, a semiconductor memory, or the like. The recording means may be a drive such as a HDD, a DVD drive, or a BD drive, a circuit including a flash memory, or the like.
Owing to the above constitution, the content may be copied or transferred (so-called moved) from the first recording medium of the other electronic apparatus to the second recording medium after whether the content is reproducible or irreproducible is confirmed, which can prevent the content recorded on the second recording medium from becoming irreproducible, and thereby prevent the second recording medium from being wasted and the content itself from being lost.
The above electronic apparatus may further include means for transmitting a retransmission request signal requesting retransmission of the content determined to be reproducible by the determining means to the other electronic apparatus, and the receiving means may receive the content retransmitted from the other electronic apparatus based on the retransmission request signal, and the recording means may record the retransmitted and received content.
Thus, the content may not be recorded on the second recording medium until the content is determined to be reproducible, so that, for example, the amount of storage area used can be greatly reduced compared to when the content is temporarily stored, at the time of determination, in a storage area such as a memory different from the above second recording medium.
The above electronic apparatus may further include a buffer memory for temporarily storing the received content, and the recording means may include means for transferring the content from the buffer memory to the second recording medium of the electronic apparatus if the content is determined to be reproducible by the determining means.
Hence, the processing burden can be reduced compared to when the content is received twice for determining whether the content is reproducible or irreproducible and for recording the content on the second recording medium.
The above electronic apparatus may further include reproducing means for reproducing the content recorded on the second recording medium by decoding the content at a first speed, and the determining means may decode the content at a second speed faster than the decoding speed by the reproducing means.
Consequently, the determination processing can be performed speedily compared to when the content is decoded at the same speed as the decoding speed in the reproducing means to determine whether the content is reproducible or irreproducible.
The above electronic apparatus may further include storing means for generating a reproducible/irreproducible list describing whether the content is reproducible or irreproducible based on a result of the determination and storing the reproducible/irreproducible list.
Here, the storing means may indicate a storage device such as the above flash memory or the HDD built in the electronic apparatus. Hence, by generating the above reproducible/irreproducible list and referring to this reproducible/irreproducible list, a certainly reproducible content can be immediately recorded, and an irreproducible content can be prevented from being uselessly recorded, which can improve user friendliness.
In the above electronic apparatus, the receiving means may receive attribute information on contents recorded on the first recording medium of the other electronic apparatus, the storing means may store preference degree information on the contents of a user, and the determining means may preferentially determine whether a content with a high preference degree out of the contents recorded on the first recording medium is reproducible or irreproducible, based on the received attribute information and the stored preference degree information.
Here, the attribute information may be, for example, metadata including a title, performer, category, and so on of the content, and, for example, it may be extracted from the above received content in some cases, extracted from received broadcast data in some cases, and received via a network such as the Internet in other cases. The content with the high preference degree may be, for example, a content with the same attribute information as a content recorded and reproduced to the end before in this electronic apparatus, such as a content with the same title, a content of the same category, or the like as a content recorded before in this electronic apparatus.
Thus, whether a content that the user is likely to prefer is reproducible or irreproducible may be determined preferentially, so the user can be immediately informed whether the content he or she desires to record is reproducible, which enables smooth recording processing.
The above electronic apparatus may further include means for performing control to display the reproducible/irreproducible list, and means for inputting a user operation requesting recording of the contents described in the reproducible/irreproducible list based on the displayed reproducible/irreproducible list.
Here, “performing control to display” is a concept which may include not only a case where the electronic apparatus includes a display means such as a display, but also a case where the electronic apparatus outputs data to be displayed on a television or a display device connected to the electronic apparatus. Hence, since the content may be recorded on the recording medium in response to a content movement request made by the user based on the reproducible/irreproducible list, the user can easily record only reproducible contents out of the contents on the network without being aware of the reproducibility determination.
The above electronic apparatus may further include means for transmitting conversion request information requesting that the content be converted into a reproducible recording format if the content is determined to be irreproducible by the determining means, and the receiving means may receive the converted content from the other electronic apparatus.
Consequently, even if the content is recorded in an irreproducible recording format, the content can be reproduced by being converted into a reproducible recording format by the other apparatus and received. Here, the recording format may be a format such as a video format (compression format) such as MPEG-1 (Moving Picture Experts Group phase 1), MPEG-2, MPEG-4, or MPEG-4 AVC, an audio format (compression format) such as linear PCM (Linear Pulse Code Modulation), Dolby Digital (AC-3), DTS, ATRAC (Adaptive TRansform Acoustic Coding), MPEG-1 Audio, or MPEG-2 Audio (MPEG-2 Audio AAC (Advanced Audio Coding)), or a multiplexing format such as MPEG-2 PS (Program Stream) or MPEG-2 TS (Transport Stream).
The above electronic apparatus may further include means for storing first model information on a model of the electronic apparatus, the receiving means may receive second model information on a model of the other electronic apparatus from the other electronic apparatus, and the determining means may determine that the content is reproducible without decoding the content when the stored first model information and the received second model information match.
Here, the model information may be, for example, a model name, a model ID, a serial number, or the like. Consequently, when the electronic apparatus and the other electronic apparatus are the same model of apparatus, the reproducible content may be speedily recorded on the second recording medium by omitting decoding processing.
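The decision just described can be sketched schematically. The patent defines no concrete code, so the class name, method, and model strings below are hypothetical: when the first and second model information match, the content is treated as reproducible without the trial decode; otherwise the result of the decode attempt decides.

```java
// Hypothetical sketch of the determining means' shortcut: same model
// information means the decode check is skipped entirely.
class ReproducibilityDecision {
    static boolean isReproducible(String firstModelInfo, String secondModelInfo,
                                  boolean trialDecodeSucceeded) {
        if (firstModelInfo.equals(secondModelInfo)) {
            return true; // same model: decoding processing is omitted
        }
        return trialDecodeSucceeded; // different models: fall back to the trial decode
    }
}
```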
A content recording method according to another aspect of the present invention is a content recording method by which an electronic apparatus communicable with another electronic apparatus via a network records a content. The content recording method may include receiving a content recorded on a first recording medium of the other electronic apparatus from the other electronic apparatus, determining whether the received content is reproducible or irreproducible by decoding the content, and recording the content on a second recording medium of the electronic apparatus if the content is determined to be reproducible.
A program that causes an electronic apparatus to function as an apparatus communicable with another electronic apparatus via a network according to yet another aspect of the present invention may include receiving a content recorded on a first recording medium of the other electronic apparatus from the other electronic apparatus, determining whether the received content is reproducible or irreproducible by decoding the content, and recording the content on a second recording medium of the electronic apparatus if the content is determined to be reproducible.
As described above, according to the aspects of the present invention, the irreproducible content can be certainly prevented from being copied or moved from the other apparatus.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 is a diagram showing a schematic configuration of a system in which digital video recorders are connected according to a first embodiment of the present invention;
FIG. 2 is a block diagram showing a configuration of a DVR 1 according to the first embodiment of the present invention;
FIG. 3 is a block diagram showing a configuration of a DVR 2 according to the first embodiment of the present invention;
FIG. 4 is a sequence diagram showing a flow of processing when a content is moved from a HDD 41 of the DVR 2 to an optical disc 20 of the DVR 1 in the first embodiment of the present invention;
FIG. 5 is a flow chart showing a flow of an operation of the DVR 1 when the content is moved from the HDD 41 of the DVR 2 to the optical disc 20 of the DVR 1 in the first embodiment of the present invention;
FIG. 6 is a flow chart showing a flow of an operation of the DVR 1 in a second embodiment of the present invention;
FIG. 7 is a diagram showing an example of a list of contents recorded on respective DVRs in the second embodiment of the present invention; and
FIG. 8 is a diagram showing a reproducible/irreproducible list in the second embodiment of the present invention.
DETAILED DESCRIPTION
Next, with reference to the accompanying drawings, embodiments of the present invention will be described.
First Embodiment
First, a first embodiment of the present invention will be described. FIG. 1 is a diagram showing a schematic configuration of a system in which digital video recorders are connected according to an embodiment of the present invention.
As shown in this figure, this system includes a digital video recorder 1 (hereinafter referred to as a DVR 1) and a digital video recorder 2 (hereinafter referred to as a DVR 2). The DVR 1 and the DVR 2 are connected to a network 4 (a so-called home LAN) such as Ethernet or a wireless LAN (Local Area Network) and can communicate with each other. In particular, in this embodiment, contents can be mutually exchanged (copied or moved) between the respective apparatuses via the network 4, for example, based on the DLNA (Digital Living Network Alliance guideline) specification. Further, the DVR 1 and the DVR 2 are connected to a digital television 3 (hereinafter referred to as a digital TV 3), respectively, by dedicated lines. The digital TV 3 has a display and a speaker (not shown) and can output a video signal and an audio signal transmitted from each DVR.
FIG. 2 is a block diagram showing a configuration of the above DVR 1. As shown in this figure, the DVR 1 includes a CPU (Central Processing Unit) 11, a RAM (Random Access Memory) 12, a ROM (Read Only Memory) 13, an operation input unit 14, a digital tuner 15, an IEEE1394 interface (I/F) 16, an Ethernet/wireless LAN interface 17, a USB (Universal Serial Bus) interface 18, a flash memory interface 19, an optical disc drive 21, a buffer controller 22, a Mux/Demux (multiplexer/demultiplexer) 23, a CODEC (COmpressor DECompressor) 24, an OSD (On-Screen Display) 25, D/A (Digital/Analog) converters 26 and 27, a selector 28, and a flash memory 29.
The CPU 11 accesses the RAM 12 or the like properly as necessary and wholly controls all of respective blocks of the DVR 1. The RAM 12 is a memory which is used as a working area of the CPU 11 or the like and temporarily holds an OS, programs, processing data, and so on. Further, the RAM 12 is also used as a buffering area of data for streaming reproduction received via the network 4. The ROM 13 is a nonvolatile memory in which the OS, programs, and firmware including various parameters to be executed by the CPU 11 are fixedly stored.
The flash memory 29 is a nonvolatile memory which stores, for example, the above OS, data on contents to be recorded on an optical disc 20, and so on.
The operation input unit 14 includes a button, a switch, a key, an indicator for operation confirmation, a light receiving part of an infrared signal transmitted from a remote controller (not shown), and so on, and receives inputs of various set values and commands given by the operation of a user and outputs them to the CPU 11.
In accordance with the control of the CPU 11, the digital tuner 15 selects a specific channel of digital broadcasting to receive broadcast data via an antenna not shown and demodulates the broadcast data, and via the selector 28, the resulting broadcast data is outputted to the Mux/Demux 23 and reproduced or recorded on the optical disc 20 via the buffer controller 22. The broadcast data is, for example, an MPEG stream compressed by the MPEG-2 TS format, but is not limited to this format.
The IEEE1394 interface 16 is connectable to an external apparatus such as a digital video camera. For example, content data such as moving image data taken and recorded by the digital video camera can be reproduced or recorded on the optical disc 20 in the same manner as moving image data received by the above digital tuner 15.
The Ethernet/wireless LAN interface 17 receives inputs of content data such as moving image data and other data recorded on the above DVR 2 via the above network 4 such as Ethernet or the wireless LAN. This content data also can be reproduced and recorded on the optical disc 20.
The USB interface 18 receives inputs of content data and other data, for example, from an external storage device such as a USB memory and an apparatus such as a digital camera via a USB. These data also can be reproduced and recorded on the optical disc 20.
The flash memory interface 19 connects with, for example, a memory card with a built-in flash memory and receives inputs of content data and other data recorded on this flash memory. These data also can be reproduced and recorded on the optical disc 20.
The selector 28 selects data inputted from any of the above respective interfaces and the optical disc 20 based on a control signal from the CPU 11.
The optical disc drive 21, on which the optical disc 20 can be mounted, can record and reproduce a signal to the optical disc 20. The optical disc drive 21, for example, reads data such as the above moving image data and inputs it to the buffer controller 22. The optical disc 20 is, for example, a DVD (DVD-Video, DVD-RAM, DVD−R, DVD−RW, DVD+R, DVD+RW, or the like), a BD, a CD or the like.
The buffer controller 22 controls the reading timing and data amount of data continuously inputted from the optical disc drive 21 and continuously outputs data such as an MPEG stream intermittently read from the optical disc drive 21 to the Mux/Demux 23.
Further, the buffer controller 22 buffers not only data read from and written to the optical disc 20 but also, for example, stream data inputted for streaming reproduction from the above Ethernet/wireless LAN interface 17, and controls timing of supplying the stream data to the Mux/Demux 23.
The Mux/Demux 23 demultiplexes a multiplexed MPEG stream inputted from the above buffer controller 22 into an MPEG audio stream and an MPEG video stream and outputs them to the CODEC 24, and multiplexes an MPEG audio stream and an MPEG video stream inputted from the CODEC 24 and outputs them to the buffer controller 22 via the selector 28.
The CODEC 24 performs decoding (decompression) processing on the MPEG audio stream and MPEG video stream demultiplexed by the Mux/Demux 23 to convert them into a digital audio signal and a digital video signal, and outputs the digital audio signal to the D/A converter 27 and the digital video signal to the OSD 25. Further, the CODEC 24 performs encoding (compression) processing on the digital audio signal and digital video signal inputted from the buffer controller 22 by a predetermined format and outputs them to the Mux/Demux 23. Incidentally, the CODEC 24 can also output the digital audio signal and digital video signal inputted from the buffer controller 22 as they are to the D/A converter 27 and the OSD 25.
The D/A converter 27 converts the digital audio signal inputted from the above CODEC 24 into an analog audio signal and outputs it for reproduction to the speaker of the digital TV 3, for example, via the dedicated line.
The OSD 25 generates graphics or the like to be displayed on the display of the digital TV 3, performs processing of combining/switching with the above digital video signal, and outputs video data after the processing to the D/A converter 26.
The D/A converter 26 converts the digital video signal subjected to the graphics processing in the OSD 25 into an analog video signal (NTSC (National Television Standards Committee) signal) and outputs it for display to the display of the digital TV 3, for example, via the dedicated line.
Further, in the broadcast data of the above digital broadcasting, in addition to the audio stream and the video stream, a data broadcast signal, a PSI/SI (Program Specific Information/Service Information) signal to transmit EPG (Electronic Program Guide) data or the like, and the like are included. In the above RAM 12 or flash memory 29, the EPG data extracted from the PSI/SI signal or the like is also stored.
FIG. 3 is a block diagram showing a configuration of the above DVR 2. The DVR 2 includes a HDD 41 in place of the optical disc drive 21 and the flash memory 29 in the above DVR 1. The HDD 41 records content data such as moving image data, various programs, and other data inputted from a digital tuner 35 and other various interfaces on a built-in hard disk, and reads them from this hard disk during reproduction or the like. The above EPG data is also stored in the HDD 41. The configuration other than the above is the same as that of the above DVR 1 shown in FIG. 2, and therefore its description will be omitted.
Next, operations of the DVR 1 and DVR 2 thus configured will be described. As described above, in this embodiment, the DVR 1 and the DVR 2 can move contents between each other via the network 4. An operation when a content is moved from the HDD 41 of the DVR 2 to the optical disc 20 of the DVR 1 will be described below. FIG. 4 is a sequence diagram showing a flow of processing in this case, and FIG. 5 is a flow chart showing a flow of the operation of the DVR 1 in this case.
As shown in both the figures, first, the CPU 11 of the DVR 1 receives an input of an operation requesting processing of moving a content recorded on the HDD 41 of the DVR 2 to the optical disc of the DVR 1 from the user of the DVR 1 via the operation input unit 14 (step 51 in FIG. 4, step 61 in FIG. 5). Incidentally, this operation input is performed, for example, by the DVR 1 acquiring a list of contents recorded on the HDD 41 of the DVR 2 via the network 4 and allowing the display of the digital TV 3 to display it and then by the user selecting a desired content based on the list.
Subsequently, the CPU 11 of the DVR 1 transmits a transmission request signal requesting transmission of a content as an object of the operation input for determining whether it is reproducible or irreproducible to the DVR 2 via the Ethernet/wireless LAN I/F 17 (step 52 in FIG. 4, step 62 in FIG. 5).
Then, in response to the above transmission request signal, a CPU 31 of the DVR 2 transmits the determination content to the DVR 1 via the network 4 by the Ethernet/wireless LAN I/F 17 (step 53 in FIG. 4), and the DVR 1 receives this determination content by the Ethernet/wireless LAN I/F 17 (step 63 in FIG. 5). Incidentally, the transmission/reception of this determination content is performed, for example, by streaming. Streaming makes it unnecessary, for example, to temporarily buffer the determination content.
The CPU 11 of the DVR 1 tries decoding processing on the received determination content by the CODEC 24 to determine whether this determination content is reproducible or irreproducible in the DVR 1 (step 54 in FIG. 4, step 64 in FIG. 5).
More specifically, the CPU 11 determines whether this determination content is a content encoded by the predetermined format supported by the CODEC 24. Here, the predetermined format is, for example, a video compression format such as MPEG-1, MPEG-2, MPEG-4, or MPEG-4 AVC, an audio compression format such as linear PCM, Dolby digital (AC-3), DTS, ATRAC, MPEG-1 Audio, or MPEG-2 Audio, a multiplexing format such as MPEG-2 PS or MPEG-2 TS, or the like, but not limited to the above. It is difficult to perform decoding processing on a content with a format not supported by the CODEC 24, so that this content is determined to be irreproducible.
Depending on the content, one content sometimes contains a mix of plural formats because the user edited and generated the content on his or her own, for example, when the user of the DVR 2 combines a content generated by recording a broadcast program with a content shot by a digital video camera or the like or with a content downloaded via the Internet. For such a content with mixed formats, the CODEC 24 is sometimes unable to decode the content past some midpoint even though it can decode it up to that midpoint, and conversely can sometimes decode it only from the midpoint onward, so such a content is also determined to be an irreproducible content.
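The determination above can be summarized schematically: a content is classified as reproducible only if every part of it uses a format the codec supports, so a content mixing supported and unsupported formats is rejected. The class, method, and the per-segment format list below are hypothetical illustrations, and the supported set is an illustrative subset of the formats named above:

```java
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the format check performed by the trial decode.
class FormatCheck {
    // Illustrative subset of the video compression formats listed above.
    private static final Set<String> SUPPORTED = Set.of(
            "MPEG-1", "MPEG-2", "MPEG-4", "MPEG-4 AVC");

    static boolean isReproducible(List<String> segmentFormats) {
        if (segmentFormats.isEmpty()) {
            return false; // nothing decodable
        }
        for (String format : segmentFormats) {
            if (!SUPPORTED.contains(format)) {
                return false; // decoding would fail at some midpoint
            }
        }
        return true;
    }
}
```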
When the determination content is determined to be reproducible by this determination (Yes in step 65), that is, when the CODEC 24 is determined to support the decoding processing of the determination content, the CPU 11 transmits a movement request signal requesting movement of the determination content from the HDD 41 of the DVR 2 to the optical disc 20 of the DVR 1 to the DVR 2 (step 55 in FIG. 4, step 66 in FIG. 5).
In response to this movement request signal, the CPU 31 of the DVR 2 transmits the content to move it from the HDD 41 to the optical disc 20 of the DVR 1 (step 56 in FIG. 4), and the CPU 11 of the DVR 1 receives this content by the Ethernet/wireless LAN I/F 17 (step 67 in FIG. 5) and writes this content to the optical disc 20 by the optical disc drive 21 via the selector 28 and the buffer controller 22 (step 68 in FIG. 5). At this time, the moved content is deleted from the HDD 41 of the DVR 2.
When the determination content is determined to be irreproducible by the above determination (No in step 65), that is, the CODEC 24 is determined not to support the decoding processing of the above determination content, the CPU 11 cancels the content movement processing (step 69 in FIG. 5) and allows the display of the digital TV 3 to display a message to the effect that movement of this content is not allowed (step 70 in FIG. 5).
By the above operation, a content is written to the optical disc only after it has been determined whether the content is reproducible or irreproducible. This prevents an irreproducible and therefore useless optical disc 20 from being created, and prevents a content from being lost through its deletion from the HDD 41 of the DVR 2 as a result of the movement, so that inconvenience to the user can be avoided.
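The decision flow of the first embodiment (steps 64-70) can be summarized with a short sketch. This is an illustrative model only; the function names and the dict-based storage are assumptions, not part of the embodiment:

```python
def try_move_content(can_decode, source_store, optical_disc, content_id):
    """Sketch of the DVR 1 side of the movement operation (steps 64-70).

    can_decode   -- predicate standing in for the trial decoding by the CODEC 24
    source_store -- dict modelling the HDD 41 of the DVR 2
    optical_disc -- dict modelling the optical disc 20 of the DVR 1
    """
    content = source_store[content_id]    # streamed determination content
    if not can_decode(content):           # No branch of the determination
        # Steps 69/70: cancel the movement and show a message to the user.
        return "movement not allowed"
    # Steps 66-68: request movement, receive the content, write it to the disc;
    # the moved content is then deleted from the source HDD.
    optical_disc[content_id] = content
    del source_store[content_id]
    return "moved"
```

Modelling the HDD and the optical disc as dictionaries makes the move semantics of steps 67/68 explicit: after a successful move the content exists on exactly one medium.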
Second Embodiment
Next, a second embodiment of the present invention will be described. Note that in this embodiment, in addition to the above DVR 1 and DVR 2, plural DVRs, for example, having the same configuration as the DVR 2 are communicable with one another via the network 4. Further, in this embodiment, the basic configurations of the DVR 1 and the DVR 2 are the same as those in the above first embodiment, so that their description will be omitted.
In this embodiment, in order that the content recording operation in the above first embodiment is performed more smoothly, it is previously determined whether contents recorded on the DVRs other than the DVR 1 existing on the network 4 are reproducible or irreproducible in the DVR 1 while a content movement request is not made by the user.
FIG. 6 is a flow chart showing a flow of an operation of the DVR 1 in this embodiment. As shown in this figure, first, the CPU 11 of the DVR 1 searches the DVRs other than the DVR 1 existing on the network 4 (step 71), retrieves contents recorded on the HDDs of the respective DVRs, and creates a list of these contents (step 72).
Then, the CPU 11 of the DVR 1 sets priorities on the contents shown in the above content list based on metadata (step 73). Here, the metadata are information such as the titles, performers, categories, and broadcast start and end times of the contents. These metadata may be acquired from the other DVRs together with the titles, for example, when the contents on the above network are retrieved; alternatively, when the titles of the respective DVRs are acquired, the metadata may be retrieved and extracted based on these titles from within the EPG data stored in the flash memory 29, or from within EPG data newly acquired via the Internet or the like.
The priorities are set in order of decreasing degree of preference of the user for the contents. The CPU 11 saves history information on contents recorded and reproduced in the past in the DVR 1, for example, in the flash memory 29, learns the user's preference based on information such as titles, performers, and categories in the history information, and saves it as preference degree information. In this preference degree information, types of contents are prioritized according to their titles, performers, categories, and so on. For example, a content having the same title as a content which has been recorded in the DVR 1 is defined as a highest-ranking content in the preference degree, a content including the same title or keyword information such as a performer as a content recorded and reproduced to the end is defined as a second-ranking content, a content which belongs to the same category as a content recorded and reproduced is defined as a third-ranking content, and any other content is defined as a fourth-ranking content. However, prioritization is not limited to the above prioritizing method; for example, prioritization may be performed according to the degree of similarity of performers and categories, or according to the degree of approximation of the broadcast start/end time with respect to a specific time zone. Alternatively, the user may directly input the preference degree information via the operation input unit 14.
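The four-level ranking described above can be sketched as follows. This is a simplified illustration; the metadata field names, and the reduction of the second rank to a shared performer, are assumptions made for clarity:

```python
def preference_rank(content, history):
    """Rank a content item by the four-level preference rule.

    history -- metadata of contents previously recorded/reproduced in the DVR 1.
    Lower numbers mean a higher user preference.
    """
    titles     = {h["title"] for h in history}
    performers = {p for h in history for p in h["performers"]}
    categories = {h["category"] for h in history}
    if content["title"] in titles:
        return 1  # same title as an already recorded content
    if performers & set(content["performers"]):
        return 2  # shares keyword information (here: a performer)
    if content["category"] in categories:
        return 3  # same category as a recorded/reproduced content
    return 4      # any other content

def prioritize(contents, history):
    # Sort the content list so higher-preference items are determined first.
    return sorted(contents, key=lambda c: preference_rank(c, history))
```

Because Python's sort is stable, contents with the same rank keep their original order, which matches the idea of refining the ranking further with other criteria if desired.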
FIG. 7 is a diagram showing an example of the above content list created in step 72 and prioritized in step 73. As shown in this figure, in this content list, the titles of contents recorded on the respective DVRs existing on the network, the DVR names of the respective DVRs, their URLs (IP addresses) on the network 4, and the model names of these DVRs are described. The DVR name is a name given so as to be easy for the user to remember, for example, a living room DVR if the DVR 2 is for use in a living room, a 2F DVR when another DVR which is different from the DVR 1 and the DVR 2 is for use on the second floor at home, or a father DVR if yet another DVR is a DVR for a father, but the naming method is not limited to these. The URL is information referred to when the DVR 1 transmits/receives a content movement request signal and a content by the Ethernet/wireless LAN I/F 17. The model name is information such as a model number, for example “DVR-xyz” or “DVR-abc-1”. This content list is stored, for example, in the flash memory 29 of the DVR 1. Further, the DVR 1 stores not only the model names of the other DVRs but also its own model name, for example, in the flash memory 29 or in the ROM 13.
Then, the CPU 11 compares the model names of the other DVRs in the above created content list with its own model name and confirms whether any content recorded on the same model of DVR as the DVR 1 exists (step 74). When it is determined in step 74 that a content recorded on the same model of DVR exists (Yes), the CPU 11 adds this content as a reproducible content to a reproducible/irreproducible list (described later) describing whether contents are reproducible or irreproducible (step 79). This is because the possibility that this content is reproducible is extremely high, since in the case of the same model the DVR has a similar CODEC. Incidentally, the identity of models may be determined by comparing model IDs, serial numbers, or the like instead of model names.
When the content recorded on the same model of DVR does not exist in step 74 (No), the CPU 11 transmits a transmission request signal to the DVR 2 to request transmission of a content which has the highest priority in the above prioritized content list and concerning which the determination of whether it is reproducible or irreproducible has not been made yet (step 75).
Subsequently, the CPU 11 receives the above highest-priority content by the Ethernet/wireless LAN I/F 17 via the network 4 (step 76) and, as in the above first embodiment, determines whether this content is reproducible or irreproducible by trying the decoding processing on this content (step 77). If the content is determined to be reproducible in step 77 (Yes), the CPU 11 adds this content as a reproducible content to the above reproducible/irreproducible list (step 79), and if the content is determined to be irreproducible (No), it adds this content as an irreproducible content to the reproducible/irreproducible list and ends the operation (step 80).
FIG. 8 is a diagram showing the reproducible/irreproducible list. As shown in this figure, in the reproducible/irreproducible list, in addition to the titles of contents, the DVR names of the DVRs on which these contents are recorded, the URLs of these DVRs, information as to whether these contents have already been determined or not (confirmed/unconfirmed), and whether the contents are reproducible or irreproducible (OK/NG) as a result of the determination are described on a content-by-content basis, and the contents are ranked based on the above priorities. This reproducible/irreproducible list is also stored, for example, in the above flash memory 29.
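Putting steps 71-80 together, the pre-computation of the reproducible/irreproducible list can be sketched as below. This is a simplification that processes the whole prioritized list in one pass; the entry fields mirror FIG. 7 and FIG. 8, but the concrete names are assumptions:

```python
def build_playability_list(own_model, remote_contents, try_decode):
    """Sketch of steps 71-80: pre-compute the reproducible/irreproducible list.

    own_model       -- the DVR 1's own model name (stored in flash memory/ROM)
    remote_contents -- prioritized entries with the fields shown in FIG. 7
    try_decode      -- stands in for the trial decoding of the streamed content
    """
    result = []
    for entry in remote_contents:
        if entry["model"] == own_model:
            # Step 74 shortcut: the same model implies a similar CODEC,
            # so the content is taken to be reproducible without decoding.
            ok = True
        else:
            # Steps 75-77: receive the content and try decoding it.
            ok = try_decode(entry)
        # Steps 79/80: record the result, FIG. 8 style.
        result.append({"title": entry["title"], "dvr": entry["dvr_name"],
                       "url": entry["url"], "confirmed": True,
                       "playable": "OK" if ok else "NG"})
    return result
```

With the list pre-computed, a later user movement request only needs a lookup instead of a trial decoding.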
After the above operation, every time the user makes a movement request for a content recorded on any of the DVRs other than the DVR 1, the CPU 11 refers to the above reproducible/irreproducible list; if the content to be recorded is reproducible, the CPU 11 moves the content to the optical disc 20, and if the content to be recorded is irreproducible, the CPU 11 displays a message to this effect.
Incidentally, it is also possible that, when the user requests recording to the optical disc 20, the user is allowed to refer to the above reproducible/irreproducible list and select a reproducible content from it. Consequently, the user makes the movement request only for a reproducible content, so that the processing in which the CPU 11 displays a message that the content is irreproducible can be omitted, and an operation useless to the user can likewise be avoided.
By the above operation, it is determined whether contents on the network 4 are reproducible or irreproducible while a movement request of the user is not made, and the results of determination are described in the above reproducible/irreproducible list, so that compared to when, after a movement request of the user is made, it is determined whether a corresponding content is reproducible or irreproducible, the recording processing can be performed more smoothly, which can improve user friendliness. Further, the determination of whether a content is reproducible or irreproducible is performed preferentially from a content with a high preference degree based on the user's preference degree information on contents, and hence when the user makes a movement request for some content, there is a high possibility that whether this content is reproducible or irreproducible has been already determined, which can further improve user friendliness. Furthermore, useless determination processing for a content with a low user's preference degree, that is, with a low possibility that the user makes a movement request is prevented, which can reduce the processing burden on the CPU 11.
It is, of course, to be understood that the present invention is not intended to be limited only to the above embodiments and various changes may be made therein without departing from the spirit of the present invention.
In the above first and second embodiments, when a content is determined to be irreproducible, the DVR 1 displays a message to this effect and adds this content as an irreproducible content to the reproducible/irreproducible list, but the DVR 1 may ask another DVR to convert this content into a reproducible format.
For example, in the flow chart of FIG. 5 in the above first embodiment, when the determination content is determined to be irreproducible in step 65 (No), instead of canceling the movement processing (step 69) and displaying the message to the effect that movement is not allowed (step 70), the CPU 11 may transmit to the DVR 2 a conversion request signal requesting conversion of this determination content into a format reproducible in the DVR 1, receive the content after format conversion from the DVR 2, and record this content on the optical disc 20. Also in the flow chart of FIG. 6 in the above second embodiment, the same processing may be performed instead of step 80. When determining whether a content is reproducible or irreproducible, the CPU 11 determines the format of the content to be determined, for example, by determining a maximum bit rate or an average bit rate of this content, the type of a necessary CODEC, and the like, for example, by analyzing profile information or the like included in the content, and requests the DVR 2 to convert (re-encode) the determined content into a format supported by the CODEC 24 of the DVR 1.
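The conversion-request variant can be sketched as follows. This is an illustrative model; the profile fields and the choice of the first supported codec as the target format are assumptions:

```python
def analyze_format(profile):
    """Determine a content's format from its profile information (sketch).
    The field names here are assumptions for illustration."""
    return {"codec": profile["codec"], "max_bitrate": profile["max_bitrate"]}

def request_playable_copy(own_codecs, max_bitrate, profile, convert):
    """If the determined format is unsupported, ask the source DVR to
    re-encode into a format the receiving CODEC supports.

    own_codecs  -- codecs the receiving DVR can decode (list)
    max_bitrate -- the highest bit rate the receiving DVR can handle
    convert     -- stands in for the conversion request signal to the DVR 2
    """
    fmt = analyze_format(profile)
    if fmt["codec"] in own_codecs and fmt["max_bitrate"] <= max_bitrate:
        return profile  # already reproducible, no conversion needed
    # Request re-encoding into a supported target format.
    target = {"codec": own_codecs[0], "max_bitrate": max_bitrate}
    return convert(target)
```

The point of the sketch is the ordering: the format is analyzed locally first, and the (potentially expensive) conversion request is sent only when the analysis shows the content would otherwise be irreproducible.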
Thus, the content determined to be irreproducible in the DVR 1 is subjected to format conversion by another DVR, and consequently, any content can be recorded on the DVR 1, which can improve user friendliness.
Further, in the above two embodiments, the example in which the content is moved from the HDD of the DVR 2 to the optical disc 20 of the DVR 1 is shown, but without being limited to this pattern, for example, when the DVR 1 includes a HDD, the present invention can be applied to a case where a content is moved from the HDD 41 of the DVR 2 to the HDD of the DVR 1. In this case, the DVR 1, like the DVR 2, can function as a DVR being a source of movement of a content. Accordingly, the CPU 11 of the DVR 1 also can transmit a content from its own HDD to another DVR in accordance with a request from the other DVR such as the DVR 2 and subject a content irreproducible in the other DVR to format conversion in accordance with a request from the other DVR.
In the above embodiments, the example in which when it is determined whether a content is reproducible or irreproducible, the determination content is received once in a streaming manner and if the content is determined to be reproducible, the content is received again is shown, but when the DVR 1 has a sufficient buffer area in the RAM 12, the flash memory 29, or the like, the determination content received from the DVR 2 may be temporarily saved with this RAM 12 or flash memory 29 as a buffer memory, and transferred to the optical disc when it is determined to be reproducible. This can save the trouble of receiving the content twice and reduce processing burden.
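The buffered variant can be sketched as below; keeping the streamed determination content in a buffer means the source is contacted only once (a sketch with assumed function names):

```python
def move_via_buffer(can_decode, stream, optical_disc, content_id, buffer):
    """Buffered variant: the determination content is kept in a buffer
    (modelling the RAM 12 or flash memory 29) and transferred to the disc
    if it turns out to be reproducible, so it is never received twice."""
    buffer[content_id] = stream(content_id)   # single reception from the source
    if can_decode(buffer[content_id]):
        # Transfer from the buffer to the optical disc.
        optical_disc[content_id] = buffer.pop(content_id)
        return "moved"
    buffer.pop(content_id)                    # discard the unusable content
    return "movement not allowed"
```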
In the above embodiments, in the processing of determining whether a content is reproducible or irreproducible, the CODEC 24 may perform decode processing at a higher speed compared to normal reproduction processing. This makes speedier determination processing possible. Further, the DVR 1 may further include a CODEC exclusively for the processing of determining whether a content is reproducible or irreproducible in addition to the CODEC 24.
In the above embodiments, the example in which it is determined whether a content existing in another DVR such as the DVR 2 is reproducible or irreproducible in the DVR 1 and then this content is moved to the DVR 1 is shown, but the present invention is also applicable, for example, to a case where a content in the DVR 2 is received by the DVR 1 in a streaming manner and reproduced. In this case, after it is previously determined on the DVR 1 side whether the content to be subjected to streaming is reproducible or irreproducible, only the reproducible content may be received from the DVR 2 in the streaming manner.
In the above embodiments, the network 4 is constituted by the home LAN, but it is, of course, possible that contents may be exchanged not only in the home but also, for example, by connecting the user's home, the outside of the user's home, the office, and so on by the Internet.
In the above embodiments, the description is given with the system in which the plural DVRs and the digital TV as electronic apparatuses are connected by the network as an example, but the types and numbers of electronic apparatuses connected to the network are not limited to the above. The system can be constructed by connecting various electronic apparatuses such as PCs, game machines, DVD players, portable telephones, and PDAs in place of the respective video recorders.
Further, in the above embodiments, the description is given with the video content of the broadcast program as an example of the content, but the content is not limited to the above, and various contents including an audio content such as music, a text content such as an electronic book, and so on can be used.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
1. An electronic apparatus communicable with another electronic apparatus via a network, comprising: receiving means for receiving a content recorded on a first recording medium of the other electronic apparatus from the other electronic apparatus; determining means for determining whether the received content is reproducible or irreproducible by decoding the content; and recording means for recording the content on a second recording medium of the electronic apparatus if the content is determined to be reproducible.
2. The electronic apparatus as set forth in claim 1, further comprising: means for transmitting a retransmission request signal requesting retransmission of the content determined to be reproducible by the determining means to the other electronic apparatus, wherein the receiving means receives the content retransmitted from the other electronic apparatus based on the retransmission request signal, and wherein the recording means records the retransmitted and received content.
3. The electronic apparatus as set forth in claim 1, further comprising: a buffer memory for temporarily storing the received content, wherein the recording means includes means for transferring the content from the buffer memory to the second recording medium if the content is determined to be reproducible by the determining means.
4. The electronic apparatus as set forth in claim 1, further comprising: reproducing means for reproducing the content recorded on the second recording medium by decoding the content at a first speed, wherein the determining means decodes the content at a second speed faster than the first speed.
5. The electronic apparatus as set forth in claim 1, further comprising: storing means for generating a reproducible/irreproducible list describing whether the content is reproducible or irreproducible based on a result of the determination and storing the reproducible/irreproducible list.
6. The electronic apparatus as set forth in claim 5, wherein the receiving means receives attribute information on contents recorded on the first recording medium of the other electronic apparatus, wherein the storing means stores preference degree information on the contents of a user, and wherein the determining means preferentially determines whether a content with a high preference degree out of the contents recorded on the first recording medium is reproducible or irreproducible, based on the received attribute information and the stored preference degree information.
7. The electronic apparatus as set forth in claim 6, further comprising: means for causing the reproducible/irreproducible list to be displayed; and means for inputting a user operation requesting recording of the contents described in the reproducible/irreproducible list based on the displayed reproducible/irreproducible list.
8. The electronic apparatus as set forth in claim 1, further comprising: means for transmitting conversion request information requesting to convert the content into a reproducible recording format to the other electronic apparatus if the content is determined to be irreproducible by the determining means, wherein the receiving means receives the converted content from the other electronic apparatus.
9. The electronic apparatus as set forth in claim 1, further comprising: means for storing a first model information on a model of the electronic apparatus, wherein the receiving means receives a second model information on a model of the other electronic apparatus from the other electronic apparatus, and wherein the determining means determines that the content is reproducible without decoding the content if the stored first model information and the received second model information match.
10. A content recording method by which an electronic apparatus communicable with another electronic apparatus via a network records a content, the method comprising: receiving a content recorded on a first recording medium of the other electronic apparatus from the other electronic apparatus; determining whether the received content is reproducible or irreproducible by decoding the content; and recording the content on a second recording medium of the electronic apparatus if the content is determined to be reproducible.
11. A program that causes an electronic apparatus to function as an apparatus communicable with another electronic apparatus via a network, the program comprising: receiving a content recorded on a first recording medium of the other electronic apparatus from the other electronic apparatus; determining whether the received content is reproducible or irreproducible by decoding the content; and recording the content on a second recording medium of the electronic apparatus if the content is determined to be reproducible.
12. An electronic apparatus communicable with another electronic apparatus via a network, comprising: receiving unit for receiving a content recorded on a first recording medium of the other electronic apparatus from the other electronic apparatus; determining unit for determining whether the received content is reproducible or irreproducible by decoding the content; and recording unit for recording the content on a second recording medium of the electronic apparatus if the content is determined to be reproducible.
|
Lower FPS than conventional way!
I tested the supplied demo project within the library; so far I've tested this in the iOS simulator (does this specifically need a physical device to test?) and the 'conventional' way always has a higher FPS than the library. The Conventional tab shows 50-51 FPS where the library tab drops to 37 FPS.
use real device and test again
Fantastic! Even smoother than I expected :+1: Thanks for this lovely API, gonna use it in my project.
|
PREFACE
This work is the outgrowth of a course of lectures given by the author for a number of years to the students of the Agricultural Department of the University of Minnesota. During recent years material progress has been made in dairying, and in writing this book it has been the aim briefly to incorporate the results of the more important investigations on the subject. In the preparation of the work extensive use has been made of the bulletins and reports of the Agricultural Experiment Stations of the United States and of other works on the subject. It is the aim to present in as concise a form as possible the principal changes that take place in the handling of milk and in its manufacture into butter and cheese. While our present knowledge of some phases of the subject is incomplete, there are many facts that are known and have been found very useful as an aid in the production of dairy products of the highest sanitary and market value.
|
import PropTypes from 'prop-types';
import React, {
forwardRef,
useLayoutEffect,
useImperativeHandle,
useRef,
useState,
} from 'react';
import IconButton from '../icon-button/icon-button.jsx';
import iconClear from '../../svgs/clear.svg';
import styles from './popover.css';
import useFlush from '../../hooks/use-flush.js';
import useMounted from '../../hooks/use-mounted.js';
import useForwardedRef from '../../hooks/use-forwarded-ref.js';
const Popover = forwardRef(function Popover(props, ref) {
const [open, setOpen] = useState(false);
const closeIconRef = useRef();
const sectionRef = useRef();
const selfRef = useForwardedRef(ref);
const { flush } = useFlush({ ref: sectionRef, initialDirection: props.flush, autoflush: props.autoflush, open });
const mounted = useMounted();
function onBlur(event) {
// Target is set to `null` when losing focus to a non-interactive element (e.g. text)
const focusOutsidePopover = event.relatedTarget === null || !sectionRef.current.contains(event.relatedTarget);
if (focusOutsidePopover && props.shouldClose(event)) {
setOpen(false);
}
}
useLayoutEffect(() => {
if (!mounted.current) return;
if (open) {
closeIconRef.current.focus({ preventScroll: true });
} else {
props.onClose();
}
}, [open]);
useImperativeHandle(selfRef, () => ({
get open() {
return open;
},
set open(value) {
setOpen(value);
},
}));
if (!open) {
return null;
}
return (
<section
aria-labelledby="popover-heading"
className={styles.container}
onBlur={onBlur}
ref={sectionRef}
role="dialog"
style={flush}
tabIndex={-1}
>
<div className={styles['heading-container']} id="popover-heading">
<h1 className={styles.heading}>{props.title}</h1>
<IconButton
active={true}
className={styles['close-button']}
icon={iconClear}
label="Close popover"
onPress={() => { setOpen(false); }}
ref={closeIconRef}
/>
</div>
{props.children}
</section>
);
});
Popover.propTypes = {
autoflush: PropTypes.bool,
children: PropTypes.node,
flush: PropTypes.oneOf(['left', 'right']),
onClose: PropTypes.func,
/** Handles either a FocusEvent or MouseEvent. This is an additional predicate for configuring the closing behavior. */
shouldClose: PropTypes.func,
title: PropTypes.string.isRequired,
};
Popover.defaultProps = {
autoflush: false,
children: undefined,
flush: 'left',
shouldClose: () => true,
onClose: () => {},
};
export default Popover;
|
reproduce "Richer Convolutional Features for Edge Detection"
hi, I'm going to try to reproduce "Richer Convolutional Features for Edge Detection" based on tensorpack — is that ok?
why not?
|
Device for converting a pressure into an electric signal, and electronic pressure measuring device comprising such a device
ABSTRACT
The disclosure relates to a device for converting a pressure into an electric signal. The device has a first deformation body in the form of a first membrane, via which the pressure can be introduced into the device, and a second deformation body in the form of a second membrane, by means of the deflection of which the applied pressure can be converted into an electric signal. The device has a force transmitting means for transmitting pressure and/or tensile forces from the first deformation body to the second deformation body. Either the force transmitting means is designed as a separate part and the two membranes have a bore into which the force transmitting means is at least partly introduced and in which the force transmitting means is connected to the respective membrane, or the force transmitting means is integrally formed with one of the two membranes and the corresponding other membrane has a bore into which the force transmitting means is at least partly introduced and in which the force transmitting means is connected to said membrane.
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a U.S. National Phase Application under 35 U.S.C. 371 of International Application No. PCT/EP2019/067896, filed on Jul. 3, 2019, which claims the benefit of German Patent Application No. 10 2018 116 476.9, filed on Jul. 6, 2018. The entire disclosures of the above applications are incorporated herein by reference.
FIELD
The disclosure relates to a device for converting a pressure into an electric signal and an electronic pressure measuring device comprising such a device.
BACKGROUND
Such devices are usually based on the fact that the pressure causes a deformation of a deformation body provided for this purpose in the device and this deformation is converted into an electric signal. For example, a bending beam can be provided for a pure force measurement, and a membrane for a pressure measurement.
For certain applications, in particular in process and food technology, a front-flush sensor or measuring device is advantageous in which no medium can be accumulated in the otherwise usual connection channel to the deformation body of the device. With sensors of this type, the deformation of a front-flush deformation body, for example a front-flush membrane, is usually transmitted to the actual pressure transducer structure, which for example comprises a silicon element or strain gauges, via a non-compressible transmission means. Such sensors are complex in terms of production with regard to the required oil filling and have further disadvantages, for example the undesired influence of the expansion of the transmission medium on the sensor signal in the case of a temperature increase.
In this regard, EP 2 663847 B1 discloses a device comprising a first deformation body in the form of a first membrane, which is connected to a second deformation body in the form of a second membrane which comprises at least one sensor element. The first deformation body can be deformed by the action of the pressure to be measured. The deformation of the first deformation body is transmitted to the second deformation body via a force transmitting means in the form of a plunger. Here, the deformation of the second deformation body is converted into an electric signal by use of strain gauges. The force transmitting means is configured in two parts, wherein the two deformation bodies each form an integral section of the force transmitting means. The two sections of the force transmitting means are then connected to one another by means of resistance welding. The disadvantage here is the comparatively complex manufacturing process, because in particular the precision in the manufacture of the individual parts has to meet high requirements and must be machined with high precision in the joining and welding process, since the structural design of this device allows practically no tolerance.
SUMMARY
The object of the disclosure is to provide a device which, on the one hand, ensures a permanently reliable operation and a high measurement accuracy and, on the other hand, can be produced easily and cost-efficiently.
According to the disclosure, either the force transmitting means is designed as a separate part and the two membranes each have a hole into which the force transmitting means is at least partially inserted and connected there to the respective membrane, or the force transmitting means is formed integrally with one of the two membranes and the corresponding other membrane has a hole into which the force transmitting means is at least partially inserted and connected there to this membrane. With either of these configurations, the production of the device, in particular the joining of the two deformation bodies, is considerably simplified, because tolerances can be compensated in a simple manner by the force transmitting means within the hole.
In addition, in an electronic pressure measuring device which comprises such a device according to the disclosure, an absolutely front-flush design is guaranteed. Furthermore, there is no need for any sealing elements in the area of the connection point between the device and the process connection of the measuring device.
Further areas of applicability will become apparent from the description provided herein. The description and specific examples in this summary are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
DRAWINGS
The disclosure is explained below in more detail based on exemplary embodiments with reference to the drawings.
The drawings schematically show:
FIG. 1 shows an electronic pressure measuring device;
FIG. 2 shows a cross section through a first exemplary embodiment of a device according to the disclosure comprising a process connection of a pressure measuring device;
FIG. 3 shows a cross section through a second exemplary embodiment of a device according to the disclosure comprising a process connection of a pressure measuring device;
FIG. 4 shows a cross section through a third exemplary embodiment of a device according to the disclosure comprising a process connection of a pressure measuring device; and
FIG. 5 shows a cross section through a fourth exemplary embodiment of a device according to the disclosure comprising a process connection of a pressure measuring device.
In the following description of the preferred embodiments, the same reference symbols designate the same or comparable components.
DETAILED DESCRIPTION
Example embodiments will now be described more fully with reference to the accompanying drawings.
FIG. 1 shows an electronic pressure measuring device 100 for use in the process measurement technology, which is manufactured and sold by the applicant under the designation PN xxxx. The measuring device 100 consists essentially of a housing 2, which is divided into an upper part 3 and a lower part 4. The lower part, also referred to as process connection, on the one hand, includes the sensor unit in the form of a pressure measuring cell and, on the other hand, enables the mechanical connection of the measuring device 100 to the container or pipe containing the medium. In the upper part 3 the electronic unit is disposed which is provided for the evaluation and processing of the measurement signals supplied by the sensor unit, which can then be tapped via the plug connection shown and forwarded, for example, to a PLC.
A housing head 5 which, among other things, comprises a display 6 and operating elements 7, is placed on the upper part 3. The measuring device 100 is operated via the operating elements 7, i.e. a parameterization or a setting of essential key data, such as the switching points, is carried out. The respective actions are displayed to the user via the display 6.
FIG. 2 shows a cross section through a first exemplary embodiment of the device 1 according to the disclosure within an electronic pressure measuring device 100, which for reasons of illustration—as also in the following FIGS. 3-5—is only shown reduced to the process connection 4. The device 1 comprises a first deformation body 10 which integrally forms a first membrane 12. The first deformation body 10 is essentially formed pot-shaped in that it comprises a circumferential edge 16. In the center or a center point of the preferably circular first membrane 12 a through-hole 12 a is provided.
The device 1 comprises a second deformation body 20 which integrally forms a second, preferably circular membrane 22, a force transmitting means 30 and an edge 26. The force transmitting means 30 is formed plunger-like and preferably cylindrical, wherein in principle other shapes, for example an expanding or tapering configuration, are conceivable, too.
The pressure p to be measured is introduced via the surface of the first membrane 12 which is facing away from the second deformation body 20. The force transmitting means 30 is passed through the through-hole 12 a of the first membrane 12 and is preferably flush with the side of the first membrane 12 facing the medium to be measured. By inserting the force transmitting means 30 into and passing the force transmitting means 30 through the through-hole 12 a, tolerances that have arisen during the manufacturing process can be compensated in a simple manner.
The connection of the force transmitting means 30 to the first membrane 12 is preferably made by firmly bonding, in particular by means of welding, alternatively also by soldering or gluing. As a result, a deformation of the first membrane 12 is transmitted via the force transmitting means 30 to the second membrane 22, both in the case of compressive forces and in the case of tensile forces.
On the surface of the second membrane 22 facing away from the first deformation body 10 at least one sensor element 32 is applied, by means of which a deflection of the second membrane 22 can be converted into an electric signal. The sensor element 32 is preferably a strain gauge the electrical resistance value of which changes by expansion and compression. Two sensor elements 32 can be connected to form a half bridge or four sensor elements 32 can be connected to form a full bridge. In addition to strain gauges, for example also piezoelectric elements are conceivable.
The two deformation bodies 10, 20 are positioned opposite to one another in such a way that the edge 16 of the first deformation body 10 engages around the edge 26 of the second deformation body 20. For this purpose, the first deformation body 10 has a shoulder-like taper in the region of the edge 16, on which the edge 26 of the second deformation body 20 rests. Due to the free mobility of the force transmitting means 30 within the hole 12 a, the second deformation body 20 can be placed on the first deformation body 10 and both deformation bodies 10, 20 can then be connected to one another tension-free. In addition to the welded connection of the force transmitting means 30 to the membrane 12, the two deformation bodies 10, 20 are welded to one another on the side surfaces which are in contact with each other. However, it is also conceivable that the two edges 16, 26 are welded while laid on top of one another.
The first deformation body 10 comprises a step-like widening in the region of the edge 16 on which the process connection 4 rests with a complementary counterpart, so that a cylindrical outer contour is formed in the area of the connection between the device 1 according to the disclosure and the process connection 4. By means of this configuration the first deformation body 10 becomes part of the outer surface of the entire pressure measuring device 100. In the area facing the pressure medium, thus, there is no longer any need for a seal. In addition, an absolutely front-flush design of the pressure measuring device 100 is thereby realized.
FIG. 3 shows a cross section through a second exemplary embodiment of the device 1 according to the disclosure within an electronic pressure measuring device 100, which is also shown reduced only to the process connection 4 for reasons of illustration. The basic structure corresponds to the illustration shown in FIG. 2, so that in the following in order to avoid repetitions only differences are discussed.
The main difference to the embodiment according to FIG. 2 is that now the first deformation body 10 in addition to the first membrane and the edge 16 also integrally forms the force transmitting means 30. For this purpose, a through-hole 22 a through which the force transmitting means 30 is passed is provided in the center of the second membrane 22. The schematically indicated sensor element 32 then extends accordingly around the through hole 22 a.
The connection of the force transmitting means 30 to the second membrane 22 is again preferably made by firmly bonding, in particular by means of welding, alternatively also by soldering or gluing. In this embodiment, the connection can also be realized by threading.
In this embodiment, too, an absolutely front-flush configuration of the pressure measuring device 100 is guaranteed. Furthermore, there is no need for any sealing elements in the area of the device 1, and by inserting the force transmitting means 30 into and through the through-hole 22 a any resulting tolerances can be compensated in a simple manner.
FIG. 4 shows a modification of the embodiment known from FIG. 2. Here, the hole 12 a is not designed as a through hole, but as a blind hole. As a result, the force transmitting means 30 is now not passed through, but only inserted. This embodiment is suitable, for example, in order to connect the force transmitting means 30 to the first membrane 12 by means of a threaded connection. For this purpose, the membrane 12 comprises a corresponding widening around the hole 12 a. Alternatively, for example by means of laser welding, a welded connection could come into consideration in which a welding is implemented from below through the membrane 12.
In this embodiment, too, an absolutely front-flush configuration of the pressure measuring device 100 is guaranteed. Furthermore, there is no need for any sealing elements in the area of the device 1, and by inserting the force transmitting means 30 into the hole 12 a, any resulting tolerances can be compensated in a simple manner.
FIG. 5 shows a further embodiment in which the force transmitting means 30 is not formed onto one of the membranes 12, 22 as in FIGS. 2-4, but is formed as a separate part. For this purpose, the two membranes 12, 22 each comprise a through-hole 12 a, 22 a or, as shown in the figure, a blind hole 12 a. The force transmitting means 30 is accordingly inserted in or through these holes 12 a, 22 a. This embodiment is thus to a certain extent a combination of the connection options described above in FIGS. 2-4 between the membranes 12, 22 and the force transmitting means 30 and further simplifies the manufacturing effort.
The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.
1. A device for converting a pressure into an electric signal, wherein the device comprises a first deformation body in the form of a first membrane, via which the pressure can be introduced into the device, and a second deformation body in the form of a second membrane, by means of the deflection of which the applied pressure can be converted into an electric signal, wherein the device comprises a force transmitting means for transmitting compressive and/or tensile forces from the first deformation body to the second deformation body, wherein the force transmitting means is designed as a separate part and the two membranes comprise a hole into which the force transmitting means is at least partially inserted and is connected there to the respective membrane.
2. A device for converting a pressure into an electric signal, wherein the device comprises a first deformation body in the form of a first membrane, via which the pressure can be introduced into the device, and a second deformation body in the form of a second membrane, by means of the deflection of which the applied pressure can be converted into an electric signal, wherein the device comprises a force transmitting means for transmitting compressive and/or tensile forces from the first deformation body to the second deformation body, wherein the force transmitting means is formed integral with one of the two membranes and the corresponding other membrane comprises a hole into which the force transmitting means is at least partially inserted and connected there to this membrane.
3. The device according to claim 1, wherein the force transmitting means respectively connects the two membranes to each other at their centers.
4. The device according to claim 1, wherein the hole is designed as a through hole or as a blind hole.
5. The device according to claim 1, wherein the force transmitting means is connected in the hole to the respective membrane by firmly bonding or by screwing.
6. The device according to claim 1, wherein the first deformation body is formed pot-shaped and engages around the second deformation body.
7. An electronic pressure measuring device, consisting of a process connection, a housing placed on it and a pressure measuring cell for detecting the pressure prevailing in an adjacent medium, wherein the measuring cell is configured as a device according to claim 1.
8. The electronic pressure measuring device according to claim 7, wherein the first deformation body comprises a circumferential, step-like widening on its lateral outer surface, on which the process connection rests at its end facing the device so that in the area of the connection between the device and the process connection a cylindrical outer contour is formed.
9. The device according to claim 2, wherein the force transmitting means respectively connects the two membranes to each other at their centers.
10. The device according to claim 2, wherein the hole is designed as a through hole or as a blind hole.
11. The device according to claim 2, wherein the force transmitting means is connected in the hole to the respective membrane by firmly bonding or by screwing.
12. The device according to claim 2, wherein the first deformation body is formed pot-shaped and engages around the second deformation body.
13. An electronic pressure measuring device, consisting of a process connection, a housing placed on it and a pressure measuring cell for detecting the pressure prevailing in an adjacent medium, wherein the measuring cell is configured as a device according to claim 2.
14. The electronic pressure measuring device according to claim 13, wherein the first deformation body comprises a circumferential, step-like widening on its lateral outer surface, on which the process connection rests at its end facing the device so that in the area of the connection between the device and the process connection a cylindrical outer contour is formed.
|
Board Thread:Alliance Recruitment/@comment-26101407-20150212174348/@comment-26101407-20151128200816
I'll see you then.
Thanks.
~smiles~
Colette
~Colette Collects!~
|
Local Ancestry Inference in Large Pedigrees
Local ancestry, defined as the genetic ancestry at a genomic location of an admixed individual, is widely used as a genetic marker in genetic association and evolutionary genetics studies. Many methods have been developed to infer the local ancestries in a set of unrelated individuals, and a few of them have been extended to small nuclear families, but none can be applied to large (e.g. three-generation) pedigrees. In this study, we developed a method, FamANC, that can improve the accuracy of local ancestry inference in large pedigrees by: (1) using an existing algorithm to infer local ancestries for all individuals in a family, assuming (contrary to fact) they are unrelated, and (2) improving its accuracy by correcting inference errors using the pedigree structure. Applied to African-American pedigrees from the Cleveland Family Study, FamANC was able to correct all identified Mendelian errors and most double crossovers.
There has been an increasing interest in studying the ancestral spectrum of admixed individuals, such as African Americans and Latino Americans [1][2][3] . Investigating the different ancestral proportions across an individual's genome, i.e. the local ancestries, is useful for estimating population-specific genetic effects via admixture mapping studies [4][5][6] , capturing natural selection signals 1,2,7-9 , understanding population migration history 10 , and correcting for local population structure in genome-wide association studies 11,12 . Many methods have been developed to infer the local ancestries in unrelated admixed individuals from a given study sample using dense genotype data. In particular, Hidden Markov Model (HMM)-based methods, including SABER+ 13 , HAPMIX 14 , LAMP-LD 15 , and PCAdmix 3 , are widely used because of their high accuracy and resolution. In brief, these methods model the observed genotypes of admixed individuals conditioning on the hidden states of their ancestral reference alleles or haplotypes, which are assumed to follow a Markov process. All the above methods can be applied to phased haplotypes or diploid genotypes using a joint HMM applied to the two haplotypes in an admixed individual.
Family data have multiple advantages in genetic studies compared to population-based data, both by increasing the statistical power to identify risk variants through better control of environmental confounding effects and by more precise modeling of heritability. Genomics data in large admixed pedigrees (e.g. the Cleveland Family Study 16 [CFS] and the San Antonio Family Heart Study 17 [SAFHS]) are available for family-based admixture mapping and association analyses. However, existing local ancestry inference methods for family data are limited. LAMP-HAP 15 and PCAdmix 3 were extended to small nuclear families by fitting joint HMMs on shared haplotypes among family members. For example, in a parent-offspring pair, the parent and child share one common haplotype. The family-wise local ancestries can be estimated from a joint HMM of the three independent haplotypes. In a parent-offspring trio, the child inherited one haplotype from each of the parents. The family-wise local ancestries can be estimated from the joint HMM of the four independent haplotypes. However, this design requires a complex computationally intensive process to phase the family members. The model complexity increases quadratically with the number of founders. Therefore, it can be hard to apply to large pedigrees. The common approach for inferring local ancestries in large pedigrees is currently to incorrectly assume individuals are unrelated. This approach may result in multiple Mendelian errors that violate the assumption of family-based genetic analyses.
In this study, we developed a method which estimates local ancestries in large pedigrees by: (1) using existing software (e.g. SABER+ and HAPMIX) to infer local ancestries for all individuals in a family, temporarily assuming they are unrelated, and (2) correcting the resulting inference errors using the pedigree structure.
Notation. Let τ be the number of generations since admixture occurred. Without loss of generality, we use 1 for an African (true/inferred) allele and 0 for a European (true/inferred) allele at locus t of individual i. Let λ and 1 − λ be the admixture proportions of African and European ancestry in the population. In a pedigree of n individuals and L dense markers on a chromosome, let X_t = (X_{1,t}, …, X_{n,t}) be the true diploid local ancestries and Y_t = (Y_{1,t}, …, Y_{n,t}) be the inferred (using existing software) diploid local ancestries at the t-th locus; let A_{i,t}[1] and A_{i,t}[2] be the two ancestry alleles of X_{i,t}, and B_{i,t}[1] and B_{i,t}[2] be the two alleles of Y_{i,t}, so that X_{i,t} = A_{i,t}[1] + A_{i,t}[2] and Y_{i,t} = B_{i,t}[1] + B_{i,t}[2]. Let Ped be the set of all possible X_t at any t in a pedigree satisfying Mendelian inheritance; for example, in a nuclear family with two parents and one offspring (n = 3), Ped contains all vectors X_t = (X_{1,t}, X_{2,t}, X_{3,t}) consistent with Mendelian inheritance of the ancestry alleles. In this study, the accuracy of the local ancestry inference is evaluated by the dosage error rate, defined as the average difference between the inferred and true local ancestries across all individuals and all loci in a given sample: err = (1/(nL)) Σ_{i=1}^{n} Σ_{t=1}^{L} |Y_{i,t} − X_{i,t}|. Let ε be the average allelic local ancestry inference error rate, i.e. the probability that an inferred ancestry allele B_{i,t}[·] differs from the corresponding true allele A_{i,t}[·]. Under this model, we can compute the inference probabilities of Y_{i,t} given the true local ancestry X_{i,t} as shown in Supplementary Table 1, and the dosage error rate can then be expressed in terms of ε and λ (Eq. (2)).
Local ancestry inference error detection. After applying existing software, FamANC first scans for local ancestry errors that arise from: (1) Mendelian inconsistencies identified from the pedigree structure; (2) double crossovers occurring within 2 cM, described as follows.
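The dosage error rate defined above can be illustrated with a minimal sketch; the toy ancestry matrices below are hypothetical, not data from the study:

```python
# Dosage error rate: average |inferred - true| diploid local ancestry over
# all n individuals and L loci. Values are ancestry dosages in {0, 1, 2}.

def dosage_error_rate(true_anc, inferred_anc):
    n = len(true_anc)          # number of individuals
    L = len(true_anc[0])       # number of loci
    total = sum(abs(inferred_anc[i][t] - true_anc[i][t])
                for i in range(n) for t in range(L))
    return total / (n * L)

# Toy example: 2 individuals, 4 loci, one allele mis-inferred at one locus.
X = [[2, 1, 1, 0],
     [1, 1, 2, 2]]
Y = [[2, 1, 0, 0],   # third locus of individual 1 is off by one allele
     [1, 1, 2, 2]]
print(dosage_error_rate(X, Y))  # 1 error over 8 entries -> 0.125
```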
We assume that the number of crossover events R in an interval of d Morgans follows a Poisson distribution with mean τd, where τ is the number of generations since original admixture. Letting τ = 8, the average number of generations since admixture in African-Americans, in a region of 2 cM the probability of observing two or more recombination events is low: P(R ≥ 2) = 1 − e^{−τd}(1 + τd) ≈ 0.01 for d = 0.02. Therefore, we treat double crossovers within 2 cM as errors. We screen and handle loci presenting either of these two types of errors or with low local ancestry estimation quality (<90%) based on the output of the existing software.
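The double-crossover threshold can be checked numerically; a small sketch under the stated Poisson model with τ = 8 and d = 0.02 Morgans (2 cM):

```python
import math

def prob_two_or_more_crossovers(tau, d):
    """P(R >= 2) when R ~ Poisson(tau * d): complement of P(R=0) + P(R=1)."""
    mean = tau * d
    return 1.0 - math.exp(-mean) * (1.0 + mean)

# Double crossovers within 2 cM are unlikely 8 generations after admixture.
p = prob_two_or_more_crossovers(tau=8, d=0.02)
print(round(p, 4))  # ~0.0115
```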
Statistical model. FamANC corrects identified local ancestry errors using the known pedigree structure.
Suppose that local ancestry is inferred at loci t = 1, …, L, where the loci are ordered but are not necessarily adjacent. For any estimated local ancestry Y_t at position t = 2, …, L − 1, we correct a potential error by borrowing information from the neighboring local ancestries Y_{t−1} and Y_{t+1}, which are assumed to be inferred without errors, so that X_{t−1} = Y_{t−1} and X_{t+1} = Y_{t+1}, using the following probability model: P(X_t | Y_t, X_{t−1}, X_{t+1}) = P(Y_t | X_t) P(X_t | X_{t−1}, X_{t+1})/C, where C is a constant. Therefore, we want to identify the optimal X_t ∈ Ped having the largest joint probability P(Y_t | X_t) P(X_t | X_{t−1}) P(X_{t+1} | X_t), where X_{t−1}, X_t, and X_{t+1} satisfy the Markov property: P(X_t | X_{t−1}, X_{t+1}) ∝ P(X_t | X_{t−1}) P(X_{t+1} | X_t). If an error occurs at the last locus of a chromosome, i.e. when t = L, only one neighboring locus is available, and the optimal X_L maximizes P(Y_L | X_L) P(X_L | X_{L−1}). Notably, the accuracy of correcting the inference errors for loci at the boundaries of a chromosome will be worse compared with the interior, because we can collect less information at the boundaries.
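The search over Mendelian-consistent configurations can be illustrated on the smallest case, a parent-parent-child trio. The following is a simplified sketch, not the FamANC implementation: the inference probabilities use a symmetric allelic error model rather than Supplementary Table 1, and the parameter values (eps, lam, tau, d) are illustrative:

```python
import math
from itertools import product

# Diploid local ancestry coded as the number of African alleles (0, 1, 2).
ALLELES = {0: (0, 0), 1: (1, 0), 2: (1, 1)}

def infer_prob(y, x, eps):
    """P(inferred dosage y | true dosage x), each allele flipped w.p. eps."""
    p = 0.0
    a0, a1 = ALLELES[x]
    for f0, f1 in product((0, 1), repeat=2):
        pf = (eps if f0 else 1 - eps) * (eps if f1 else 1 - eps)
        if (a0 ^ f0) + (a1 ^ f1) == y:
            p += pf
    return p

def allele_trans(a, b, tau, d, lam):
    """P(ancestry allele a -> b) across a gap of d Morgans."""
    r = 1 - math.exp(-tau * d)          # probability of a crossover
    anc = lam if b == 1 else 1 - lam    # ancestry resampled after crossover
    return (1 - r) * (a == b) + r * anc

def dosage_trans(x, y, tau, d, lam):
    """P(dosage x -> y), the two alleles transitioning independently."""
    a0, a1 = ALLELES[x]
    return sum(allele_trans(a0, b0, tau, d, lam) * allele_trans(a1, b1, tau, d, lam)
               for b0, b1 in product((0, 1), repeat=2) if b0 + b1 == y)

def mendelian_ok(f, m, c):
    """Child inherits one ancestry allele from each parent."""
    return any(a + b == c for a in set(ALLELES[f]) for b in set(ALLELES[m]))

def correct_trio(y_t, x_prev, x_next, eps=0.03, lam=0.8, tau=8, d=0.001):
    """Pick the Mendelian-consistent X_t maximizing
    P(Y_t | X_t) P(X_t | X_{t-1}) P(X_{t+1} | X_t)."""
    best, best_p = None, -1.0
    for x_t in product((0, 1, 2), repeat=3):   # (father, mother, child)
        if not mendelian_ok(*x_t):
            continue
        p = 1.0
        for i in range(3):
            p *= (infer_prob(y_t[i], x_t[i], eps)
                  * dosage_trans(x_prev[i], x_t[i], tau, d, lam)
                  * dosage_trans(x_t[i], x_next[i], tau, d, lam))
        if p > best_p:
            best, best_p = x_t, p
    return best

# A Mendelian error: both parents inferred 0 but the child inferred 1.
print(correct_trio(y_t=(0, 0, 1), x_prev=(0, 0, 0), x_next=(0, 0, 0)))
```

With flanking loci all European-European, the corrected configuration sets the child's dosage back to 0, as Mendelian inheritance requires.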
The joint probability in Eq. (4) can be decomposed into two parts: the inference probability P(Y_t | X_t) and the transition probability P(X_{t+1} | X_t). We first estimate the transition probability. We assume that the distance between loci t and t + 1 is small enough that, in a family member i, the status of X_{i,t+1} depends on X_{i,t} but not on other family members. Therefore, the joint transition probability in a pedigree can be written as the product of the individual transition probabilities: P(X_{t+1} | X_t) = ∏_{i=1}^{n} P(X_{i,t+1} | X_{i,t}). The individual transition probabilities are estimated from the recombination events, in the same way as in the HMMs 13 : with probability e^{−τd_t} no crossover occurs and the ancestry allele is unchanged, and with probability 1 − e^{−τd_t} a crossover occurs, after which the new ancestry allele is African with probability λ and European with probability 1 − λ. Here τ is the number of generations since admixture occurred, and d_t is the genetic distance (in Morgans) between the t-th and (t + 1)-th loci, which is usually small enough that for current GWAS array data at most one recombination event can occur. The diploid transition probability matrix is thus obtained by combining the transitions of the two ancestry alleles of X_{i,t}, which are assumed to be independent. We next estimate the inference probability P(Y_t | X_t), which can be written as the product of the individual inference probabilities: P(Y_t | X_t) = ∏_{i=1}^{n} P(Y_{i,t} | X_{i,t}), where P(Y_{i,t} | X_{i,t}) is given in Supplementary Table 1 with parameters ε and λ.
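Under this crossover model (a crossover with probability 1 − e^{−τd}, after which the new ancestry allele is African with probability λ), the allelic and diploid transition matrices can be sketched as follows; the parameter values are illustrative:

```python
import math

def allelic_transition(tau, d, lam):
    """2x2 matrix q[a][b] = P(next allele ancestry b | current ancestry a).
    Coding: index 0 = European, 1 = African."""
    r = 1 - math.exp(-tau * d)  # probability of a crossover over d Morgans
    return [[(1 - r) + r * (1 - lam), r * lam],
            [r * (1 - lam), (1 - r) + r * lam]]

def diploid_transition(tau, d, lam):
    """3x3 matrix over dosages {0,1,2}: the two ancestry alleles transition
    independently, so diploid entries combine two allelic transitions."""
    q = allelic_transition(tau, d, lam)
    alleles = {0: (0, 0), 1: (1, 0), 2: (1, 1)}  # alleles per dosage
    P = [[0.0] * 3 for _ in range(3)]
    for x, (a0, a1) in alleles.items():
        for b0 in (0, 1):
            for b1 in (0, 1):
                P[x][b0 + b1] += q[a0][b0] * q[a1][b1]
    return P

P = diploid_transition(tau=8, d=0.001, lam=0.8)
for row in P:
    print([round(v, 4) for v in row])  # each row sums to 1
```

Because d is small for dense GWAS markers, the matrix is strongly diagonal: local ancestry rarely switches between adjacent loci.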
Parameter estimation. λ is estimated as the average African ancestry proportion of an individual in the admixed population. Next, we estimate ε from the local ancestries inferred using existing software that incorrectly assumes all individuals are unrelated. Here we modified the method of estimating genotyping errors from Mendelian inconsistency in nuclear families proposed by Saunders et al. 19 . Some of the local ancestry inference errors will lead to Mendelian errors and others will not. From the observed number of Mendelian errors in the data, we can estimate ε. We first divide a large pedigree into smaller non-overlapping nuclear families. For simplicity, we assume only one member in a nuclear family can have an error at a particular position. We list all possible patterns of true and inferred local ancestries in the family members and count the number of Mendelian errors for each corresponding pattern (Supplementary Appendix 1, Supplementary Tables 2-7). Let N_ME be the number of Mendelian errors in a nuclear family. Its expectation is derived as a function of ε and λ, both for a nuclear family with two parents and m children and for a nuclear family with one parent and m children. The mathematical details are shown in Supplementary Appendix 1. With N_ME observed from the data and λ assumed known, we can estimate ε by solving these equations.
Simulation. We constructed a large pedigree (N = 20; Fig. 2A), a medium-size pedigree (N = 10; Fig. 2B), and a small pedigree (N = 4; Fig. 2C) as representatives to investigate the performance of our method. The genotype data of 18,210 markers on chromosome 22 were simulated from the HapMap phase 3 data. We first simulated the founders in the three pedigrees using the phased haplotypes from HapMap Yoruba in Ibadan, Nigeria (YRI), and Utah residents with Northern and Western European ancestry from the CEPH collection (CEU). For the first locus on this chromosome, we randomly sampled a YRI or CEU haplotype with probability given by the admixture rates (80%/20%).
Moving along the chromosome, the recombination events were modeled with a Poisson distribution. Assuming at most one recombination event could occur between two adjacent loci, a recombination event was sampled with probability 1 − e^{−τd}, where d is the genetic distance (in Morgans) and τ is the number of generations since admixture. We set τ = 8 for all founders. If a recombination event occurs, a new haplotype would be sampled. In the second step, we simulated the offspring given their parents' haplotypes. The offspring inherits one chromosome from each of the parents. A crossover event in the parent was generated with probability 1 − e^{−d}, i.e. τ = 1. In practice, we do not know the true ancestral populations. To add more uncertainty in inferring local ancestry, we also inferred the local ancestries using HapMap phase 3 Luhya in Webuye, Kenya (LWK) and Toscani in Italy (TSI) as the reference panel. We used SABER+ 13 and HAPMIX 14 to infer the local ancestries assuming all individuals are unrelated.
Scientific Reports | (2020) 10:189 | https://doi.org/10.1038/s41598-019-57039-w
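The founder ancestry simulation described above can be sketched as follows; the marker grid and random seed are illustrative stand-ins, not the study's actual chromosome-22 map:

```python
import math
import random

def simulate_founder_ancestry(positions_morgans, tau=8, lam=0.8, seed=1):
    """Simulate the ancestry (1 = African, 0 = European) of one founder
    haplotype along ordered marker positions: sample the first marker from
    the admixture proportions, then resample the ancestry whenever a
    crossover occurs, with probability 1 - exp(-tau * d) between markers."""
    rng = random.Random(seed)
    anc = [1 if rng.random() < lam else 0]
    for j in range(1, len(positions_morgans)):
        d = positions_morgans[j] - positions_morgans[j - 1]
        if rng.random() < 1 - math.exp(-tau * d):  # crossover: new haplotype
            anc.append(1 if rng.random() < lam else 0)
        else:                                      # no crossover: carry over
            anc.append(anc[-1])
    return anc

# Illustrative 1 cM-spaced grid over a 50 cM segment.
pos = [0.01 * j for j in range(51)]
anc = simulate_founder_ancestry(pos)
print(len(anc), sum(anc))  # African proportion approaches lam on average
```

Offspring haplotypes would then be generated the same way, but by recombining the two parental haplotypes with τ = 1.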
We estimated the allelic error rate ε from all markers on chromosome 22 using Eq. (8). Since a local ancestry block is often several cM long, to save computational time we could work on an ancestry block instead of on each single marker.
Application: local ancestry inference in the Cleveland Family Study. We applied FamANC to the African Americans from the Cleveland Family Study (CFS). The CFS is a family-based longitudinal study consisting of laboratory-diagnosed sleep apnea patients, their family members, and neighborhood control families, as described previously 16 . The de-identified genotype data were analyzed at Case Western Reserve University. The CFS study protocol was approved by the Partners Human Research Committee/IRB. The CFS includes 754 African Americans from 148 large families. Among those, 632 were genotyped with the Affymetrix 6.0 array and 122 were genotyped with the Illumina OmniExpress array. We merged the genotype data from the two different platforms and checked for Mendelian errors using PLINK 20 . Individuals with more than 5% Mendelian errors or SNPs with more than 10% Mendelian errors were removed. The remaining errors were set to be missing values. We phased the haplotypes in the CFS using BEAGLE software 21 and inferred the local ancestries using HapMap phase 3 CEU and YRI as reference panels in SABER+ software, assuming all individuals are unrelated. We then applied FamANC on the SABER+ inferred local ancestries.
Results
Simulation. We used SABER+ and HAPMIX to infer the local ancestries on chromosome 22 in the three simulated families. The true dosage error rates for SABER+ and HAPMIX were similar (err = 0.011 vs 0.014). We divided the three families into 10 small nuclear families and estimated the allelic error rate ε from the observed Mendelian errors using Eq. (8) in those nuclear families. From Eq. (2), the estimated dosage error rate is 0.0096, which is consistent with the true inference error rate.
The ancestry inference error rates of SABER+ and HAPMIX for our simulated data are low. We modeled the probabilities of observing different numbers of individuals with ancestry inference errors at one locus in a family using a binomial distribution with inference error probability 0.01 (Supplementary Fig. 1). The probabilities of observing three or more individuals with ancestry errors at the same locus were small in all three simulated pedigrees. Therefore, in a family with size no larger than 20, we do not have to search all possible X_t ∈ Ped. To save computation time we only considered a smaller subset of Ped with at most two values different from the observed Y_t. By applying FamANC on the simulated data, we were able to correct local ancestry estimation errors at 195 loci per individual for SABER+ and at 90 loci per individual for HAPMIX ( Table 1). The average dosage error rate of SABER+ was reduced from 0.011 to 0.0086, and the dosage error rate of HAPMIX was reduced from 0.014 to 0.0076.
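The binomial argument above can be reproduced directly; a small sketch with the stated per-individual error probability of 0.01 and the three simulated family sizes:

```python
from math import comb

def prob_at_least_k_errors(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p): the chance that k or more of the
    n family members carry an ancestry inference error at the same locus."""
    return 1.0 - sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k))

# Even in the largest simulated pedigree (n = 20), three or more
# simultaneous errors at one locus are rare at p = 0.01.
for n in (4, 10, 20):
    print(n, prob_at_least_k_errors(n, 0.01, 3))
```

This is what justifies restricting the candidate set to configurations differing from the observed Y_t in at most two individuals.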
Local ancestries in CFS.
We applied FamANC to the CFS African Americans. We checked Mendelian errors using PLINK 20 . No individual failed the 5% Mendelian filter threshold and no SNP failed the 10% Mendelian filter threshold. The genotypes with Mendelian errors across the genome were set as missing. We used SABER+ to estimate the local ancestries on phased chromosomes in the CFS assuming all individuals are unrelated. The local ancestry error rate, estimated from 50 nuclear families, was 0.0278, higher than that in the simulated data.
For some pedigrees with missing first-generation genotypes, we removed the first generation and divided them into smaller pedigrees (as seen in each dashed rectangle in Supplementary Fig. 2). A function for pedigree division is provided within the FamANC software. This resulted in 142 pedigrees with sizes ranging from 2 to 13. 124 individuals without any relatives collected in this dataset were further removed. The distribution of analyzed family size is presented in Supplementary Fig. 3A. We found the probability of observing three or more individuals with ancestry inference errors at the same locus for any family, given error rate = 0.03, to be small (Supplementary Fig. 3B). Therefore, to save computation time, we only searched for X_t ∈ Ped with at most two values differing from the observed Y_t in a pedigree. Finally, we applied FamANC on the 142 families. Figure 3 shows the local ancestry estimates on chromosome 22 in an 11-individual family before and after applying FamANC. Our method was able to correct all identified Mendelian errors and most double crossovers. Having the local ancestries in the CFS with improved accuracy will be useful when using family-based admixture mapping to identify novel ancestry-specific genetic risk factors for complex diseases such as sleep apnea. This approach may help to understand population differences of diseases and design personalized treatments.
Discussion
We have developed an efficient method, FamANC, that uses known pedigree structure to improve local ancestry inference in recently admixed populations, where local ancestry inference is first obtained by existing methods that (potentially falsely) assume study individuals are unrelated. Specifically, pedigree structure is used to identify and correct Mendelian errors and double crossovers. When applied on family data, this method reduces the systematic errors in local ancestry inference, thus having the potential to improve the performance of disease mapping studies and population genetics inference in recently admixed populations. We have also provided a method to estimate the local ancestry inference error rate for existing software using the observed number of Mendelian errors in nuclear families while falsely assuming the individuals are unrelated. In this method, we assumed that in any small nuclear family only one member can have a local ancestry inference error at a locus. This assumption may be violated for an extremely large nuclear family with two parents and multiple children and lead to underestimation of the error rate.
We estimated a higher local ancestry inference error rate in real data than in simulated data. This could be due to genotyping errors in the real data. However, it is also possible that our simulation strategy may only reflect an ideal mixture of ancestral haplotype segments, which may not represent the complex admixture process of ancestral populations in evolutionary history. This simulation method has been commonly used in many genetic studies, so this possibility raises a concern about the performance of many local ancestry inference methods. Developing mathematical models that mimic a complex and historically accurate admixture history is a topic for future research.
Our method has some limitations. FamANC performs well when the local ancestries inferred in the sample at the first step, assuming independence, are estimated with sufficient accuracy (err < 0.1). It may not be suitable for poorly inferred data. At a given locus, our method works by detecting either a Mendelian or a crossover error, and correcting it using the local ancestry inference from neighboring loci, namely assuming X_{t−1} = Y_{t−1} and X_{t+1} = Y_{t+1}. Clearly, this is not always true. For example, in our simulation we observed a type of local ancestry inference error introduced by shifting recombination points (Supplementary Fig. 4), which results in incorrect inference along an interval, and this may overlap multiple loci. However, none of the existing methods appropriately handles such an error. Detecting and correcting for such a shifting recombination point error is a topic for future research. Other potential improvements to FamANC can also be gained by incorporating more markers around each locus for error correction, and by using genotypes. In this study, we only evaluated the performance of FamANC on local ancestries inferred by SABER+ and HAPMIX, which are commonly used in two-way admixed populations and showed high accuracy 22 . The performance of FamANC when used following other recently released ancestry inference software such as RFMix 23 should be evaluated in the future.
[Table 1. Dosage error and number of loci with errors, before (Step 1) and after (Step 2) applying FamANC, by the software used in Step 1.]
In summary, we developed a novel method, FamANC, which can improve the accuracy of local ancestry inference in large pedigrees and will benefit future family-based genetic studies.
Due to the heterogeneity of neoantigen profiles and the evolving immune escape mechanisms of cancer cells, immunotherapy alone is not effective in the treatment of advanced cancer patients \[[@CR269], [@CR418]\]. Combining several immunotherapies can simultaneously target different stages of the cancer immune cycle, including antigen release and presentation, immune cell priming and activation, immune cell trafficking and invasion of the tumor, and the recognition and killing of cancer cells, thereby enhancing anticancer efficacy. Examples include neoantigen-based immunotherapy combined with ICBs, neoantigen vaccines combined with ACT, and neoantigen-based immunotherapy combined with traditional therapies. Another strategy is to combine therapies with different mechanisms to overcome resistance induced by tumor heterogeneity \[[@CR269], [@CR292], [@CR407], [@CR419]--[@CR424]\]. Neoantigen-directed therapies require all targeted cancer cells to share the same pattern of neoantigen expression and presentation; otherwise, resistant clones with no predicted neoantigens can survive and gain a clonal growth advantage. Thus, precision immunotherapy can be combined with conventional treatments, such as chemoradiotherapy, that kill cancer cells without relying on neoantigens, leading to more significant and lasting therapeutic effects \[[@CR425], [@CR426]\]. The \"cancer-immune cycle\" refers to the sequence of events that must be initiated, conducted, and expanded to achieve an anticancer immune response that effectively eradicates cancer cells. In short, neoantigens produced during tumor formation are released and captured by DCs. DCs present the captured neoantigens on MHC-I and MHC-II molecules to T cells, thereby priming and activating effector T-cell responses against cancer-specific neoantigens. The activated effector T cells then migrate to and infiltrate the tumor bed, where they recognize and ultimately destroy cancer cells.
The death of cancer cells produces more tumor-associated neoantigens that amplify and enhance the immune response in subsequent cycles \[[@CR427]--[@CR430]\]. Thus, cancer immunotherapy aims to restart or amplify the self-sustaining cancer immune cycle. A variety of immunotherapies have been developed to target rate-limiting steps in the tumor immune cycle, including enhancing neoantigen release through chemoradiotherapy and oncolytic viruses, increasing the number and quality of tumor-reactive T cells through cancer vaccines and ACTs, and enhancing immune cell invasion and cytotoxicity through checkpoint inhibitors \[[@CR108], [@CR431]--[@CR435]\]. Therefore, precision immunotherapy can be combined with conventional treatments, such as radiotherapy and chemotherapy, that kill cancer cells independently of neoantigens, achieving a more prominent and durable therapeutic effect (Fig. [5](#Fig5){ref-type="fig"}).

Fig. 5 Combined antitumor strategies based on neoantigens. Diagnosis and routine treatment of tumor patients (Step 1). Tumor formation initiates T-cell immunity; tumor cells die and lyse, releasing neoantigens (Step 2). Neoantigens produced by tumors are released and captured by DCs, which present them on MHC-I and MHC-II molecules to T cells (Step 3). Immunotherapies targeting neoantigens (neoantigen-based adoptive cell therapy) mainly include TCR-T cells, TILs, CAR-T cells, CAR-NK/NKT cells, CAR-γδ T cells and bispecific antibodies (Step 4). Adoptively transferred ACT cells traffic back to the patient and are chemoattracted into the tumor, where they exert antitumor effects (Step 5). Neoantigen-based DC vaccine therapy is also initiated (Step 6). Immune cells are primed and activated in the lymph node (Step 7). Effector cells develop into effector memory cells through lymphatic homing (Step 8). Effector memory ACT cells target and kill tumor cells (Step 9).
After a series of treatments, clinical evaluation and efficacy monitoring are performed (Step 10). In brief, the "Cancer-Immunity Cycle" includes enhancing neoantigen release by chemotherapy, radiation therapy and oncolytic viruses; increasing the quantity and quality of tumor-reactive T cells through cancer vaccines and ACTs; and boosting the infiltration and cytotoxic efficacy of immune cells via checkpoint inhibitors
### Neoantigen-based immunotherapies and ICBs {#Sec74}
Checkpoint inhibitor-based immunotherapy has achieved prolonged antitumor effects in several malignancies, including renal cell carcinoma, NSCLC and melanoma. Patients, however, do not respond to ICB therapy in the absence of tumor-specific effector T cells. Moreover, ICB therapy affects only one or two phases of the anticancer immunity pathway: anti-CTLA4 antibodies regulate immune cell priming and activation, whereas anti-PD-1/PD-L1 antibodies relieve the final negative regulation of effector T cells \[[@CR436], [@CR437]\]. In a relapsed multiple myeloma patient, ICBs enhanced specific T-cell responses against neoantigens, including PRKDC, EVI2B and S100A9. For patients with solid tumors who are unresponsive to, or have relapsed following, anti-PD-1 therapy, mRNA-based neoantigen vaccines, such as mRNA-4157, mRNA-5671 and BNT122, are being combined with immune checkpoint inhibitors in multiple clinical trials. The antitumor efficacy of CTLs, including those specific for mutation-associated neoantigens, can be further boosted by ICB therapy \[[@CR438]\]. In addition, persistent exposure to TSAs promotes the exhaustion of CD8^+^ T cells, which characteristically express high levels of PD-1 and CD39. ICBs can reinvigorate exhausted neoantigen-specific T cells by overcoming the suppressive microenvironment \[[@CR439]\].
### Combinations of neoantigen vaccine and ACT {#Sec75}
Combinations of neoantigen vaccination and ACT have also been used successfully to boost clinical efficacy in tumor treatment. Vaccination can increase the number of neoantigen-reactive T cells in circulation, possibly by promoting the outgrowth of transferred T lymphocytes. Alternatively, vaccines can induce de novo T-cell responses that compensate for insufficient recognition of neoepitopes caused by inadequate cross-presentation of neoantigens by tumor cells. Vaccines have also been used to enhance the efficacy of CAR-T therapy against solid tumors \[[@CR440], [@CR441]\].
### Neoantigen-based immunotherapies and conventional therapies {#Sec76}
Most chemotherapeutic agents and radiation therapies were designed for their direct cytotoxic effects, without considering their impact on the immune system. Chemotherapy and radiotherapy can be used to increase the release of tumor-specific neoantigens, circumventing problems such as an insufficient number of neoantigens to stimulate a T-cell response. During chemotherapy and targeted therapy, tumor cells often acquire new mutations, including reversion mutations, that contribute to drug resistance. Many reversions are predicted to encode tumor-specific neoantigens, offering a potential strategy for combating resistance with CAR-T-cell therapies, immune checkpoint inhibitors or anticancer vaccines \[[@CR270], [@CR442], [@CR443]\].
## Limitations of TCR-T-cell therapy {#Sec77}
Despite successes in hematological malignancies and solid tumors, objective responses to neoantigen-based immunotherapies have been documented in only a small number of patients. Therefore, considerable changes are needed to improve clinical outcomes, including increasing the accuracy of neoantigen prediction, overcoming immune evasion, and optimizing the production process (Fig. [6](#Fig6){ref-type="fig"}).

Fig. 6 Challenges in the clinical application of neoantigen TCR-T-cell therapy. **a** A low neoantigen load results in a lack of suitable neoantigen targets. **b** The accuracy of current neoantigen prediction technology is limited. **c** Downregulation of MHC expression causes tumor cells to lose neoantigen targets. **d** The loss of pMHC molecules interrupts and reduces neoantigen presentation. **e** Downregulated adhesion molecule expression, a stroma-rich environment and abnormal blood vessels in tumor tissues limit the effective penetration of T cells. **f** Immunosuppressive tumor microenvironments inhibit T-cell function. **g** Technical bottlenecks in ACT limit the production of neoantigen-specific T cells. **h** Tumor heterogeneity limits therapy to single tumor-specific targets, and universal neoantigen targets are lacking. **i** The neoantigen epitopes developed thus far are mainly HLA-A2-restricted. **j** Safety of TCR-T-cell therapy itself
### The accuracy of neoantigen prediction is limited {#Sec78}
The widespread use of personalized immunotherapy is constrained by difficulties in discovering tumor-targeting neoantigens, owing to the heterogeneity of mutation load and the substantial differences in neoantigen presentation between tumor types \[[@CR444]\]. Therefore, neoantigen identification and prediction should be carried out for each individual cancer patient. Neoantigen prediction is also limited by genetic heterogeneity, particularly the different somatic mutations found across cancer types, individuals and even tumor subclones. Only 10% of nonsynonymous tumor cell mutations produce mutant peptides with high MHC affinity, and only 1% of MHC-binding peptides are recognized by patient T cells \[[@CR445]--[@CR447]\]. In addition, intratumoral mutational heterogeneity adds further complexity to neoantigen prediction: the tumor genome undergoes extensive clonal evolution, with ongoing gain and loss of mutations. At the same time, the diversity of neoantigen-specific T cells present in a patient may not be fully captured by a single resected lesion because of limited T-lymphocyte infiltration, which limits the library of TCRs that can be built for therapeutic purposes \[[@CR448], [@CR449]\]. Theoretically, the higher the TMB, the more neoantigen-specific T cells are detected in the tumor, resulting in a higher immunotherapy response rate. However, hematologic malignancies and some epithelial cancers with low TMB can also produce neoantigen-responsive lymphocytes. The low neoantigen density in low-TMB malignancies demands more robust strategies to accurately identify novel immunogenic epitopes that CD8^+^ T cells can detect \[[@CR450]--[@CR453]\]. Consequently, some tumor clones may not respond to neoantigen-specific T cells; because of their selection advantage, these clones may outgrow others, limiting the clinical benefit.
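The funnel described above (roughly 10% of nonsynonymous mutations yield high-affinity MHC binders, and roughly 1% of binders are recognized by patient T cells) implies that only a tiny fraction of mutations become actionable targets. A back-of-the-envelope calculation makes this concrete; the two rates come from the text, while the mutation burdens and the independence assumption are illustrative:

```python
def expected_recognized_neoantigens(nonsynonymous_mutations,
                                    binder_rate=0.10,
                                    recognition_rate=0.01):
    """Expected number of mutations that both bind MHC with high
    affinity and are recognized by patient T cells, treating the
    two steps as independent filters (an illustrative model only)."""
    return nonsynonymous_mutations * binder_rate * recognition_rate

# Illustrative mutation burdens: a high-TMB tumor vs. a low-TMB one.
for label, n_mut in [("high-TMB tumor, 1000 mutations", 1000),
                     ("low-TMB tumor, 20 mutations", 20)]:
    print(label, "->", expected_recognized_neoantigens(n_mut))
```

Under these assumptions a high-TMB tumor yields on the order of one T-cell-recognized neoantigen, while a low-TMB tumor yields far less than one in expectation — which is why low-TMB malignancies demand more sensitive identification strategies.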
### Evasion of immune surveillance {#Sec79}
Tumors can evade neoantigen-based immunotherapy through many mechanisms, including neoantigen loss, altered antigenic peptide presentation, and an immunosuppressive tumor microenvironment (TME).
### Neoantigen loss {#Sec80}
Loss of tumor-specific neoantigens may be an important strategy for tumor immune escape, particularly because many neoantigens are byproducts of tumorigenesis and do not play a critical role in tumor cell survival \[[@CR454]\]. Neoantigen depletion may also be an intractable mechanism of resistance to antitumor immunity, which limits the application of personalized neoantigen-specific immunotherapy. Neoantigen depletion can have a variety of causes, such as copy number loss, transcriptional inhibition, epigenetic silencing, and posttranslational mechanisms. Neoantigens present only in specific tumor cell subpopulations may also be lost when CD8^+^ T cells eradicate entire subclonal cell populations. Many of the lost mutations are recognized by the patient\'s T cells, and neoantigen-encoding genes are rarely retained in tumors with widespread immune cell infiltration, suggesting that neoantigen-expressing tumor subclones are preferentially eliminated by the immune system \[[@CR455], [@CR456]\]. In addition, loss of neoantigens through deletion of chromosomal regions or elimination of tumor subclones may lead to acquired resistance to immunotherapies such as ICBs \[[@CR457], [@CR458]\]. Therefore, to compensate for the loss of targeted neoantigens, personalized neoantigen-specific immunotherapy should target multiple neoantigens, thereby broadening the neoantigen response \[[@CR292], [@CR459]\].
### Interruption of neoantigen presentation {#Sec81}
Tumors may undergo mutations that alter not only neoantigen expression but also HLA heterozygosity and MHC stability in response to immune pressure. These changes impede the processing and presentation of neoantigens, which inhibits T-cell recognition and tumor killing. If the key antigen-presenting gene beta-2 microglobulin (β2m) is mutated or HLA allelic heterozygosity is lost, the tumor may escape recognition by adoptively transferred T lymphocytes \[[@CR460]--[@CR462]\]. A second demonstrated mechanism of epitope loss is downregulation of MHC molecule expression in tumor cells through aberrant transcription, translation, or protein stability \[[@CR463]\]. Together, these mechanisms of reduced neoantigen presentation may partly explain why higher neoantigen loads in some cancers are not associated with better prognosis. Based on these findings, ICBs may be more effective in cancer treatment if MHC-I presentation is restored using splicing inhibitors or autophagy inhibitors \[[@CR464]--[@CR466]\].
### Immunosuppressive TME {#Sec82}
Tumor cells live in a heterogeneous microenvironment composed of infiltrating and resident host cells, secreted factors, and extracellular matrix. Infiltrating cells in the TME include T cells (TILs and Tregs), B cells, fibroblasts, macrophages (M1 and M2), MDSCs and other immune cells, together with secreted factors, including the immunosuppressive cytokines IL-10 and TGFβ. These components interact to create an environment that supports malignant cell growth, migration, and metastasis, allowing escape from the immune system and tumor-specific CTLs \[[@CR467]--[@CR469]\]. Immunosuppressive TME processes can also impair neoantigen recognition and T-cell activation, including engagement of immune checkpoints, the immunosuppressive effects of various TME cells, and the release of ions or proteins by tumor cells. Immunosuppressive checkpoint molecules (such as PD-L1 and CTLA-4) limit T-cell growth and function and are commonly upregulated in tumors during immunotherapy \[[@CR470]--[@CR473]\]. Inducing neoantigen expression and combining CARs with epitope spreading are compensatory strategies that can be used to address tumor cell immune escape. The MHC-I immunopeptidome can expand through splicing of new epitopes caused by defective interaction of the splicing complex with RNA, aberrant degradation of auxiliary splicing factors, or abnormal splicing factor PTMs \[[@CR273], [@CR474], [@CR475]\]. Most tumor stromal cells in the TME express the immunosuppressive checkpoint ligand PD-L1, which interacts with PD-1 on T cells, inhibiting antitumor function and depleting adoptively transferred TILs, CAR-T cells and TCR-T cells. This effect can be mitigated by checkpoint blockade with anti-PD-1 and anti-PD-L1 antibodies \[[@CR377], [@CR470]\].
CTLA-4 expressed on activated T cells has a similar effect because CTLA-4 binds to CD80/86 on antigen-presenting cells with a higher affinity, competing with the T-cell costimulatory molecule CD28 to suppress antitumor immunity. Anti-CTLA4 antibodies not only block the interaction between CTLA4 and CD80/86 but also consume Tregs, thereby promoting the costimulation and amplification of tumor-specific CTLs and improving clinical benefits \[[@CR376], [@CR476]\].
Among the immunosuppressive factors secreted in the TME, TGF-β plays a central role in driving tumor signaling, remodeling, and metabolism. TGF-β is produced by many cell types, including tumor cells, stromal cells, and Tregs. It stimulates autocrine and paracrine signaling to promote angiogenesis, inhibits the antitumor responses of CD8^+^ and Th1 cells, and induces epithelial-mesenchymal transition of tumor cells, thereby promoting tumor invasion. Thus, blocking TGF-β signaling in the TME enhances the antitumor response \[[@CR477], [@CR478]\]. Indeed, one possible mechanism of anti-CTLA4 antibody therapy is depletion of immunosuppressive TGF-β-producing Treg cells, thereby promoting the costimulation and expansion of tumor-specific CTLs \[[@CR479]\]. To further improve the efficacy of checkpoint inhibitor antibodies, bifunctional antibody ligands have been produced that consist of antibodies targeting CTLA-4 or PD-L1 fused to the TGFβ receptor II extracellular domain (TGFβRIIecd): the TGFβRIIecd sequesters TGF-β secreted in the TME, while the checkpoint inhibitor antibody depletes Tregs and promotes CTL costimulation \[[@CR480]\]. This dual strategy may be more effective against cancers resistant to checkpoint inhibitors alone. As an alternative to TGF-β sequestration, tumor-reactive T cells can be transduced with a dominant negative TGF-β receptor II (dnTGFβRII) to produce TGF-β-resistant antitumor T cells. TCR-T cells expressing dnTGFβRII showed complete tumor regression and prolonged survival in mouse models of advanced and invasive prostate cancer \[[@CR481], [@CR482]\]. To exploit the high concentration of TGF-β in the TME, one study fused the extracellular domain of TGF-β receptor II (TGF-βRII) to the intracellular domain of 4-1BB, converting the immunosuppressive effect of TGF-β into an immunostimulatory signal.
Coexpression of this transgenic receptor produced additive effects and improved expansion, persistence, tumor lysis, and selective antitumor activity in vivo \[[@CR483]\]. Similarly, Fas ligand-mediated T-cell death signals, which are abundant in the TME, can be converted into pro-survival signals via a chimeric switch receptor (CSR) that fuses the Fas extracellular domain with the 4-1BB intracellular domain, enhancing the proliferation and antitumor function of engineered T cells \[[@CR484]\].
### Insufficient production of neoantigen-specific T cells {#Sec83}
Immunotherapies, including vaccination, adoptive TIL and TCR-T cells, and ICBs, all rely on neoantigen-specific T cells \[[@CR163], [@CR459]\]. Direct detection of neoantigens presented by tumor cells may be the most effective way to build neoantigen-reactive T cells, ensuring that they recognize epitopes presented in vivo \[[@CR175]\]. However, implementation of TIL therapy is hampered by the scarcity of fresh tumor samples and the low TIL numbers in cold tumors. First, obtaining TILs requires invasive surgery to remove resectable lesions, making the approach suitable for only some patients. Second, the inhibitory TME of cold tumors may reduce the efficiency and number of neoantigen-specific T cells derived from TILs. Readily available peripheral blood may be a suitable source for generating large numbers of neoantigen-reactive T cells for ACT. However, extensive in vitro expansion can both drive further differentiation of T cells and increase false-positive neoantigen responses. Most cancer patients cannot be treated with TILs because of inadequate TCR repertoires \[[@CR485]--[@CR487]\]. Efforts are underway to develop effective strategies to isolate and rapidly expand neoantigen-specific T cells, which may benefit neoantigen-based therapies. TILs contain large numbers of neoantigen-reactive T cells, making them a valuable source of T lymphocytes for ACT \[[@CR237], [@CR255], [@CR391]\]. Currently, mature DCs or EBV-transformed B-cell lines pulsed with peptides or transfected with TMGs are used as APCs to present antigens to T cells \[[@CR488], [@CR489]\]. mRNA transfection of DCs expressing neoantigens has also been used to activate autologous naive CD8^+^ T lymphocytes from healthy donors who have not been exposed to the immunosuppressive environment of the tumor host. However, TCRs triggered by APCs loaded with mRNA-encoded or synthetic peptide antigens may not recognize tumor cells presenting the antigen endogenously \[[@CR490]\].
Naive and very early memory T cells with stem-like properties can be generated using induced pluripotent stem cell (iPSC) techniques. Immature T-cell lines derived from iPSCs can now be further differentiated into neoantigen-specific T cells in a three-dimensional thymic organoid culture system that recapitulates T-cell development in vitro \[[@CR491], [@CR492]\]. Another benefit is that iPSCs can be cloned from a single neoantigen-specific αβ T cell and redifferentiated into large numbers of CTLs. In the near future, iPSCs derived from single CTL clones may be used to generate sufficient numbers of neoantigen-specific TCR-T cells. These cells remain naive and retain their endogenous TCR, thereby significantly increasing the efficacy of neoantigen-based immunotherapy \[[@CR493]\].
### Reliance on single therapeutic targets {#Sec84}
In current clinical studies of TCR-T cells, only TCR-T cells attacking a single epitope have been used \[[@CR176]\]. Heterogeneity is an important characteristic of tumors, so the unique target attacked by the T cells may decrease or even disappear, causing T-cell therapy to fail \[[@CR260], [@CR323]\]. The use of single-epitope TCR-T cells may also be an important reason for the poor results of such clinical trials. To achieve better outcomes, combination therapy with TCR-T cells targeting different epitopes should be used to prevent immune escape \[[@CR10]\].
### Immunotherapy resistance {#Sec85}
The mechanism of primary and secondary resistance to TCR-based immunotherapy may manifest as the low expression or heterogeneity of target antigens in tumor cells or the intrinsic resistance of tumor cells to T-cell-mediated cytotoxicity \[[@CR5], [@CR397]\]. The main mechanism of resistance to TCR-T-cell therapy is the loss or reduction of MHC-I/II class molecules on the surface of tumor cells, which prevents TCR-T cells from recognizing target epitopes \[[@CR85], [@CR126], [@CR141], [@CR161], [@CR353], [@CR395]\].
### Targeted antigens are mainly HLA-A2-restricted {#Sec86}
Most antigen recognition in clinical trials of TCR-T cells is HLA-A2-mediated \[[@CR119], [@CR494], [@CR495]\]. Although HLA-A2 is the most common MHC molecule in the population (approximately one-third of the Chinese population expresses HLA-A2), TCR-T cells that rely solely on HLA-A2-presented antigens are not applicable to a substantial number of patients. Searching only for HLA-A2-restricted TCRs also limits the search for targetable antigens \[[@CR496], [@CR497]\]. Many antigens suitable for TCR-T-cell therapy may be presented by other MHC molecules \[[@CR263], [@CR315]\]. Monoclonal or oligoclonal neoantigen-reactive T cells generated by expansion would also be an alternative route \[[@CR10]\]. Therefore, other HLA-polypeptide tetramers must be developed to broaden the search for antigens and reactive TCRs.
### The safety of TCR-T-cell therapy {#Sec87}
Safety is an issue that must be addressed before TCR-T cells can be used in clinical therapy \[[@CR498], [@CR499]\]. Deaths have occurred in clinical trials of TCR-T cells, possibly because enhanced TCR affinity led T cells to recognize highly homologous autoantigens and attack normal cells and tissues \[[@CR500], [@CR501]\]. Thus, the balance between function and safety must be considered in TCR engineering, and the safety of TCR-T-cell therapy should be carefully assessed by sequence alignment, cell experiments and animal experiments \[[@CR502]\]. Another way to ensure the safety of TCR-T cells is to introduce a suicide gene. TCR-T cells carrying a suicide gene kill tumor cells normally, but after administration of the suicide-gene-activating drug, the T cells undergo apoptosis, preventing nonspecific killing \[[@CR503]--[@CR505]\].
## Advances in TCR-T-cell therapy {#Sec88}
Clinical studies have revealed major problems with the safety and efficacy of TCR-T-cell therapy. To solve these problems, the TCR-T-cell field has focused on finding safe and effective target antigens and improving the affinity and efficiency of TCRs. Notably, in recent years an increasing number of studies have searched for reactive TCRs using neoantigens as targets, with good results.
### Identification of neoantigen-reactive TCRs {#Sec89}
Neoantigens are abnormal peptides produced by genetic mutations. Because they are often tumor specific, they make good therapeutic targets \[[@CR163]\]. In recent years, advances in sequencing technology have made it possible to perform high-throughput sequencing of tumor and normal tissue to identify neoantigens \[[@CR426], [@CR506], [@CR507]\]. After a neoantigen mutation site is identified, neoantigen-reactive TCRs can be found by coincubating tumor-infiltrating T cells with antigen-presenting cells expressing multiple antigenic peptides spanning different positions of the mutant epitopes \[[@CR159], [@CR350], [@CR508]\]. In addition, the expression of activation and exhaustion markers such as PD-1, 4-1BB and ICOS can help identify neoantigen-reactive TCRs \[[@CR244], [@CR280], [@CR375]\]. Targeting neoantigens is an attractive therapeutic strategy, but it is individualized and costly and is not currently the mainstream TCR-T-cell strategy \[[@CR176]\]. However, TCR-T-cell therapy targeting neoantigens will be a direction for future development, possibly even the main research direction \[[@CR260]\].
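In practice, mutant peptides identified from sequencing are commonly prioritized by predicted MHC binding before any T-cell screening. A toy sketch of that filtering step follows; the peptide sequences, affinity values, and the 500 nM cutoff are all illustrative assumptions, not outputs of any real predictor:

```python
# Hypothetical predicted binding affinities (IC50, nM) for mutant
# peptides; lower IC50 means stronger predicted MHC binding.
predictions = {
    "KTLDFGHIK": 35.0,    # strong predicted binder
    "MVVHEFRAS": 420.0,   # moderate predicted binder
    "QQPLMNARV": 5100.0,  # predicted non-binder
}

# A commonly quoted (but arbitrary) cutoff for calling a binder.
STRONG_BINDER_NM = 500.0

# Keep only peptides below the cutoff as candidates worth testing
# against patient T cells in coincubation assays.
candidates = sorted(pep for pep, ic50 in predictions.items()
                    if ic50 < STRONG_BINDER_NM)
print(candidates)
```

Only the surviving candidates would proceed to the (much more expensive) experimental screening against tumor-infiltrating T cells described above.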
### The therapeutic effect of CD4^+^ TCR-T cells {#Sec90}
CD8^+^ T cells are traditionally considered the primary tumor-killing cells \[[@CR188]\], so the role of CD4^+^ T cells in tumor therapy has received little attention. In recent years, however, CD4^+^ T cells have been found to play an important role in tumor killing \[[@CR280], [@CR282], [@CR348]\]. TCR identification has largely focused on MHC-I-restricted TCR sequences; transferring such TCRs into CD4^+^ T cells can also enhance their function. The role of CD4^+^ T cells in tumor control has been demonstrated directly in clinical trials, where a reduction in tumor burden was observed after patients received neoantigen-reactive CD4^+^ T cells. However, lacking the stabilizing effect of the CD4 molecule, these CD4^+^ T cells have difficulty maintaining a long-lasting effect in vivo \[[@CR263], [@CR509]\]. TCR antigen recognition is generally thought to be MHC-restricted, recognizing antigens presented by either MHC-I or MHC-II molecules \[[@CR126]\]. However, studies have shown that some TCRs can recognize antigens presented by both MHC-I and MHC-II molecules, meaning that such TCRs may be stabilized by both CD8 and CD4. Using such TCRs should improve the therapeutic effect of CD4^+^ TCR-T cells \[[@CR510]\].
### Selection of TCR α- and β-chain linker sequences {#Sec91}
The TCR consists of α and β chains. In natural T cells, the α and β chains are encoded by two loci, each transcribed and translated separately, after which the TCR is assembled \[[@CR511]\]. In TCR-T cells, however, synthesizing a functional exogenous TCR requires introducing both the α-chain and β-chain genes into the same T cell. Two vectors, or two promoters in the same vector, can be used to express the α and β chains, but this may unbalance the expression of the two chains of the exogenous TCR and increase the probability of TCR mispairing \[[@CR232], [@CR512]\]. A better solution is therefore to place the α and β chains under the control of a single promoter, using a linking sequence to ensure balanced translation. The internal ribosome entry site (IRES) is a widely used bicistronic linking sequence that enables coexpression of two genes from one transcript. However, IRES-mediated translation is inefficient, so genes located downstream of an IRES are expressed at low levels; if IRES sequences are used to connect the α and β chains of a TCR, mispairing may still occur \[[@CR513]--[@CR515]\]. In TCR expression systems, the 2A peptide sequence derived from picornavirus or porcine enterovirus is a good choice. The 2A peptide has a self-cleaving function: when the α and β chains are connected by a 2A sequence, they are transcribed together and then separated into individual peptides during translation. Using a 2A sequence yields equimolar expression of the two TCR chains, reducing the probability of mispairing. Notably, TCR function is not affected by the 2A linking sequence despite the addition of several extra amino acids to the α and β chains \[[@CR515], [@CR516]\].
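The single-promoter 2A design described above can be illustrated at the protein level. In the sketch below, the chain sequences are hypothetical placeholders; the P2A amino acid sequence is the commonly cited porcine teschovirus-1 motif (with a GSG linker), and ribosomal "skipping" is modeled simply as a string split between the final glycine and proline:

```python
# Commonly used P2A peptide; ribosomal skipping occurs between
# the final G and P (sequence widely cited in the literature).
P2A = "GSGATNFSLLKQAGDVEENPGP"

def bicistronic_orf(alpha_chain, beta_chain, linker=P2A):
    """Join two chains into one open reading frame via a 2A peptide
    so both are produced at a 1:1 molar ratio from one promoter
    (illustrative; real constructs are built at the DNA level)."""
    return alpha_chain + linker + beta_chain

def simulate_cleavage(orf, linker=P2A):
    """Split the fusion the way ribosomal skipping would: the
    upstream product keeps the 2A tag minus the final proline,
    and the downstream product gains an N-terminal proline."""
    upstream, downstream = orf.split(linker)
    return upstream + linker[:-1], "P" + downstream

# Hypothetical (toy) chain sequences:
alpha, beta = "MTCRALPHA", "MTCRBETA"
print(simulate_cleavage(bicistronic_orf(alpha, beta)))
```

The model also shows the point made in the text: each chain carries a few extra residues (the 2A remnant on the α chain, a proline on the β chain), which in practice do not impair TCR function.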
### Improving TCR surface expression efficiency {#Sec92}
When a sufficient number of TCRs recognize and bind pMHC, T cells are activated. Costimulation can reduce the amount of TCR required for T-cell activation, but sustained T-cell activation still depends on the number and affinity of TCRs on the cell surface \[[@CR517], [@CR518]\]. TCR assembly and membrane localization are complicated processes: the translated α and β chains assemble into heterodimers, which bind multiple CD3 subunits (γ, δ, ε, and ζ). The amount of CD3, especially the ζ subunit, is constant in the cell, and any TCR that cannot form a complete complex with CD3 is degraded \[[@CR511], [@CR519]\]. This problem is particularly acute in TCR-T cells. A strong promoter and a 2A sequence can ensure efficient expression of the exogenous TCR in host cells, but the number of TCRs that can bind CD3 is limited, and exogenous TCRs must compete with endogenous TCRs for it. As a result, many TCR transgene products that cannot form functional TCR-CD3 complexes are degraded \[[@CR516], [@CR520], [@CR521]\]. To solve this problem, TCR and CD3 molecules can be coexpressed. Animal experiments have shown that when a TCR is coexpressed with CD3 subunits, TCR expression on the T-cell surface increases tens of fold, increasing the avidity of T cells for target antigens while improving tumor clearance and memory responses \[[@CR521]\]. In addition, appropriate modification of TCR coding sequences, such as removal of unstable mRNA sequences and cryptic splice sites, can upregulate TCR transgene expression and enhance the anticancer activity of TCR-T cells \[[@CR513], [@CR522]\].
### Selection of the TCR constant region (C region) {#Sec93}
The C region of the TCR plays an important role in the correct pairing of the α and β chains. If the C region of the exogenous TCR differs from that of the host, correct heterodimer formation of the exogenous TCR can be favored \[[@CR520]\]. Experiments have shown that mouse TCRs are expressed in human T cells more efficiently than human TCRs, suggesting that mouse TCRs assemble correctly and bind CD3 molecules \[[@CR523]\]; the C region of the mouse TCR plays an important role in this phenomenon. Based on this observation, hybrid TCRs containing human V regions and mouse C regions were developed. Such hybrid TCRs functioned better in human T cells than fully human TCRs, indicating that the mouse C regions ensure preferential self-pairing of the exogenous hybrid TCR and stability of the hybrid TCR-CD3 complex while improving TCR surface expression \[[@CR524]--[@CR526]\]. Although introducing the mouse C region can induce immune rejection, the immunogenicity of the hybrid TCR can be significantly reduced by modifying a few amino acids in the mouse C region. In addition to introducing mouse sequences, amino acids in the C region can be modified to improve the accuracy of TCR pairing. Cysteine plays an important role in the pairing and stabilization of the TCR α and β chains \[[@CR527]\]. Using point mutation, the threonine at position 48 of the α chain and the serine at position 57 of the β chain were replaced with cysteines, forming an additional disulfide bond between the C regions. This change enhanced the pairing efficiency and stability of the exogenous α and β chains, reduced mispairing, increased surface expression of the exogenous TCR, and enhanced the antigen-specific response.
Studies have shown that this strategy can reduce the autoimmune pathological reactions caused by TCR mismatch \[[@CR528]\].
(六)、Other strategies to reduce TCR mismatch {#Sec94}
Endogenous TCR expression can be reduced using siRNA or CRISPR/Cas9 techniques. This not only prevents TCR mismatch but also reduces competition between exogenous and endogenous TCRs for CD3 molecules \[[@CR355], [@CR516]\]. Several studies have shown the feasibility of this strategy. For example, T cells have been engineered to coexpress a MAGE-A4-specific TCR (whose codon-optimized C region differs from that of the wild type) and an siRNA targeting conserved sequences in the C region of the wild-type TCR. The results showed that membrane surface expression of the TCR transgene was upregulated in human T lymphocytes transduced with the MAGE-A4-TCR/siRNA vector \[[@CR529], [@CR530]\]. In addition, transferring the TCR gene into γδT cells can also prevent mismatch with endogenous TCRs. However, γδT cells do not express CD4 and CD8 molecules. Using γδT cells to express a functional α/β TCR requires the simultaneous transduction of CD4 or CD8 molecules to enhance the antigen-specific immune response of the γδT cells \[[@CR531]--[@CR533]\].
(七)、Enhanced T-cell activation signals {#Sec95}
CD3ζ and costimulatory or coinhibitory molecules are the main signaling molecules that directly affect the activity of TCR-T cells \[[@CR534]\]. The function of TCR-T cells can therefore be modulated by altering the expression and function of CD3ζ and/or costimulatory/coinhibitory molecules. Coexpressing CD3ζ in series with the α and β chains of the TCR can increase the membrane surface expression of the TCR and enhance the function of antigen-specific T cells \[[@CR535]\]. In addition, the costimulatory signal CD28 can be introduced into the TCR sequence to enhance TCR signal transduction. This method requires the complete removal of the C region and its replacement with the transmembrane region of CD28. Therefore, such TCRs do not pair with endogenous TCRs and have a stronger activation capacity. However, this method directly connects Vα and Vβ through flexible linker sequences, which destroys the original TCR sequences and may not be suitable for all TCR sequences, limiting the application of this modification strategy \[[@CR373], [@CR536]\]. In recent years, PD-1 has received great attention. If the PD-1-mediated inhibition of T cells can be removed, the function of TCR-T cells can be effectively improved. Currently, CRISPR/Cas9 can be used to knock out PD-1 expression, or specific antibodies can be expressed to block PD-1. These approaches not only give TCR-T cells tumor-killing specificity but also block the coinhibitory signal, preventing suppression of their tumor-killing function \[[@CR537]--[@CR541]\].
(八)、Improved TCR affinity {#Sec96}
The affinity between the TCR and the antigen determines the ability of T cells to recognize and kill tumor cells, which is the main focus of TCR-T-cell research \[[@CR350]\]. (1) Isolation of high-affinity TCRs from artificial nontolerant environments: This approach uses humanized mice to produce high-affinity T cells \[[@CR542]\]. CD8^+^ T cells with high affinity for the tumor-associated antigen p53 are often deleted in the body. Immunizing transgenic mice expressing human HLA-A2 with human p53 peptide induced the expansion of CD8^+^ T cells with high affinity for human p53 \[[@CR543], [@CR544]\]. Once isolated, these high-affinity TCR sequences can be used to redirect human CD8^+^ or CD4^+^ T cells. Using the same method, T-cell clones with high affinity for other tumor-associated antigens, including MDM2, CEA, and gp100, have been isolated \[[@CR545]--[@CR547]\]. (2) Isolation of high-affinity MHC-restricted TCRs from MHC-mismatched donors: T-cell clones with high affinity for HLA-A2-presented antigens can be isolated from the natural lymphocyte pool of HLA-A2-negative donors. Using this method, human T-cell clones with high affinity for tumor-associated antigens, such as cyclin D1, WT1, and MDM2, were isolated, and these high-affinity TCR genes were cloned into vectors for gene therapy \[[@CR548]--[@CR551]\]. In mouse models, human T cells transduced with allogeneic HLA-A2 antigen-specific TCR genes were shown to clear tumor cells \[[@CR549], [@CR552]\]. (3) Modification of TCR sequences to improve affinity: Improving affinity by changing the sequence of the TCR is an effective way to improve the effects of TCR-T-cell therapy. The CDR region responsible for binding to pMHC determines the affinity of the TCR, and modification of the CDR region is expected to improve T-cell function \[[@CR520], [@CR527], [@CR553]\].
Substituting an amino acid at site 107 of the TCR has been shown to improve the stability of the CDR3β loop structure and increase the antigenic specificity of the TCR \[[@CR554], [@CR555]\]. To screen for TCRs with high-affinity mutations, point mutation techniques can be used to construct a variety of TCR α-chain and β-chain libraries. The assembled TCRs can then be displayed using phages, yeast, or T cells, and affinity screening can be performed \[[@CR556]--[@CR558]\]. MHC-peptide tetramers are an important tool in affinity screening; high-affinity TCRs can even be screened from natural TCRs by using tetramers. The high-affinity TCRs obtained can theoretically enhance the antigenic reactivity of T cells \[[@CR170], [@CR559]\]. However, some studies have shown that T cells transduced with high-affinity TCRs do not respond to antigens or even respond adversely. Another concern is that high-affinity TCRs can misrecognize antigens, resulting in off-target effects \[[@CR350], [@CR560], [@CR561]\].
(九)、CD4^+^ T cells are dependent on MHC-I molecule-restricted TCRs {#Sec97}
Early studies on TCR-T cells focused on how to construct antigen-specific CD8^+^ T cells \[[@CR176]\]. Currently, the influence and role of CD4^+^ T cells are also being considered. CD4^+^ T cells recognize antigenic peptides presented by MHC class II molecules. Unlike CD8^+^ T cells, which mainly exert cytotoxic effects, CD4^+^ T cells mainly regulate the adaptive immune system, enhance the function of CD8^+^ T cells, and induce long-term T-cell memory \[[@CR267], [@CR340], [@CR341]\]. Although most of the isolated TCRs with high affinity for tumor antigens were MHC class I restricted, they performed best in the presence of the CD8 coreceptor \[[@CR282]\]. Studies have shown that these TCRs can function in CD4^+^ T cells even in the absence of the CD8 coreceptor. Tumor-specific CD4^+^ T cells can be generated by using MHC class I-restricted TCRs, which enhances the tumor-killing ability of specific CD8^+^ T cells. This phenomenon may be related to the secretion of various immune factors by CD4^+^ T cells \[[@CR344], [@CR349], [@CR562]\]. In addition, antigen-specific CD4^+^ T cells have also been used to treat tumor patients \[[@CR563]--[@CR565]\]. These results suggest that attention should be given to the role of CD4^+^ T cells in TCR-T-cell therapy.
(十)、Prolonging the survival time of TCR-T cells in vivo {#Sec98}
A common challenge faced by adoptive cell therapy is the survival time of the transfused cells in vivo \[[@CR256]\]. Studies of TCR-T cells and CAR T cells have demonstrated that infused T cells can form memory populations and survive long-term in vivo \[[@CR176], [@CR257], [@CR398]\]. However, the number of such cells is too small to play a role in the treatment of solid tumors. At present, the common ways to maintain the survival of T cells include the administration of exogenous IL-2 and lymphodepletion by radiotherapy or chemotherapy \[[@CR255]--[@CR257]\]. IL-2 injection is notably toxic, especially at high doses, and these adverse effects limit its use. To address this limitation, researchers have tried to express IL-2 in T cells, but the results were not good \[[@CR566]\]. Another approach is to use chemoradiotherapy to eliminate lymphocytes, reducing the number of endogenous T cells and preventing them from competing for cell growth factors \[[@CR567]\]. Animal experiments have shown that without preconditioning, transfused T cells cannot survive or clear tumors. Lymphodepleting preconditioning has become standard practice for cell infusion therapies, including TCR-T cells \[[@CR268]\]. The TCR sequence can also be transferred into less differentiated T cells or memory T cells. In addition to modifying T cells directly, TCRs can also be transferred into hematopoietic stem cells (HSCs). Transgenic HSCs can develop into mature CD8^+^ T cells through positive selection and produce a rapid antigen-specific response \[[@CR568], [@CR569]\]. In addition, it is possible to prolong the survival of TCR-T cells in vivo by influencing T-cell differentiation with metabolically active drugs, such as metformin \[[@CR570]\].
十、Summary and outlook {#Sec99}
Neoantigens play a key role in cancer immunotherapy, including cancer vaccines, ACTs, antibody-based therapies, and ICBs \[[@CR571]\]. Therapeutic strategies that target these cancer-specific neoantigens without destroying normal tissue provide a strong theoretical basis for clinically successful immunotherapies. Many initiatives are underway to develop personalized or off-the-shelf anticancer drugs based on neoantigens. However, to address the timing and economics of advanced personalized neoantigen immunotherapy, experimental and theoretical improvements are needed, including effective patient recruitment, optimized sequencing techniques and neoantigen prediction algorithms, and off-the-shelf therapies targeting common neoantigens \[[@CR272]\].
Effective patient recruitment is key to neoantigen-based immunotherapy. First, early tumor resection may give clinicians more time to carefully design, produce, and test neoantigen-based therapeutics to improve clinical outcomes. Second, early resection makes it easier to select patients who may be eligible for off-the-shelf therapies, including vaccines and TCR or TCRm antibodies that target a well-defined cancer-driving mutation. Third, cancer treatments, including chemotherapy, radiation, and ICBs, can drive the over-differentiation of T cells. Isolating autologous T cells or TILs as soon as a patient is diagnosed with cancer allows the highest-quality, least differentiated T cells to be collected. In addition, recruiting patients early allows for the effective infusion of high-quality neoantigen-based cell products and reduces the severity of comorbidities caused by advanced metastatic cancer clones.
Accurate identification of immunogenic neoantigens and their cognate TCRs is a key and rate-limiting step in the development of personalized cancer immunotherapy \[[@CR163]\]. Immunogenic neoantigens can be identified by immunogenomic methods that construct candidate peptides from NGS data and by immunopeptidomic methods that analyze MHC-loaded peptides using MS. Genome and transcriptome sequencing data can also be combined with MS maps of HLA-associated peptides to improve the sensitivity and specificity of neoantigen identification \[[@CR292]\]. Moreover, neoantigen-based therapies may become more affordable through the application of more economical high-throughput sequencing and powerful deep learning algorithms. Comprehensive, efficient one-stop computational workflows and classification benchmarks for computational neoantigen identification methods are still needed for clinical applications. Efficient one-stop computational methods can also use large data cohorts to assess the potential of neoantigens as biomarkers for patient prognosis or for predicting the ICB response. Most critically, the accuracy of these epitope prediction methods should also be confirmed by thorough immune monitoring in early clinical studies to facilitate the development of neoantigen-based cancer therapies \[[@CR163], [@CR171], [@CR253]\]. In addition to these computational methods for predicting immunogenic neoantigens and cognate TCRs from high-throughput sequencing data, several T-cell antigen discovery strategies have recently been developed to identify immunogenic neoantigens in an unbiased manner. A variety of pMHC libraries have been established, including yeast display libraries, SABRs, and BATTLES, which allow flexible and scalable screening of antigenic epitopes.
By relying on the physiological activity of T-cell killing rather than evaluating the binding affinity of TCR-pMHC, T-Scan is able to query a significantly larger antigen space independent of predictive algorithms \[[@CR171], [@CR229], [@CR230], [@CR572]\]. Therefore, the simplicity and scalability of T-cell ligand discovery techniques will facilitate study of the immunogenicity of candidate neoantigens and contribute to the development of new neoantigen-based immunotherapies.
Off-the-shelf precision immunotherapies targeting common neoantigens are another possible strategy for overcoming the time and funding issues of individualized therapy based on personalized neoantigens. Common neoantigens shared between patients arise from hotspot mutations in driver genes or TSGs that yield polypeptides presented by relatively common HLA alleles. Common neoantigens are more likely to be clonally conserved in metastatic tumors and to recur systematically across patients. Many general-purpose therapeutic techniques readily target common neoantigens, including vaccines, BsAbs, CTLs, and the adoptive transfer of TCR-T cells. Subsequently, TCR libraries that specifically target shared neoantigens in an HLA-specific manner can be developed for advanced cancer patients. As more common neoantigens and corresponding TCRs are discovered, more patients with frequent cancer-causing genetic changes will benefit from a public neoantigen-reactive TCR library. In addition, the widespread use of cancer genome sequencing and neoantigen prediction methods will help match patients to treatments that target common neoantigens in their tumors \[[@CR5], [@CR573]--[@CR575]\]. Therefore, this off-the-shelf strategy based on common neoantigens is expected to shorten the time required for neoantigen identification and large-scale T-cell culture, increasing the use of neoantigen-based therapies in a significant proportion of patients.
In addition to neoantigens produced by spontaneous mutations during carcinogenesis, some covalent small molecules can induce the production of tumor-specific public neoantigens by modifying hotspot residues of highly recurrent somatic mutations in tumors. Covalent KRAS-G12C inhibitors, such as ARS1620, irreversibly modify the mutant cysteine. Hapten peptides carrying covalently linked small molecules can be presented on the cell surface via MHC-I. These hapten peptide-MHC complexes can act as tumor-specific neoantigens that trigger cytotoxic T-cell responses \[[@CR89], [@CR576], [@CR577]\]. Based on this principle, mutant tumor suppressor proteins can be targeted by a new class of molecules that induce neoantigen production and trigger specific immune responses by covalently modifying hotspot residues, such as TP53 Y220C and TP53 R273C \[[@CR578], [@CR579]\]. Thus, the range of tumor-specific neoantigens suitable for therapeutic targeting can be significantly expanded by covalently haptenizing mutated cancer proteins rather than merely inhibiting them.
These neoantigens provide powerful targets for tumor vaccines that not only accurately eliminate residual tumor lesions but also effectively target distant metastatic cells because of their systemic reach. Personalized neoantigen vaccines are produced for an individual tumor through the following steps: collection of tumor tissue and normal samples, sequencing and analysis of unique mutations, prediction and validation of immunogenic neoantigens, and vaccine design and production. A variety of platforms, including peptides, nucleic acids, and DCs, can be used to develop vaccines based on predicted personalized or matched shared neoantigens. Neoantigen vaccines based on peptides, RNA, and DNA are feasible, safe, and economical \[[@CR192], [@CR292], [@CR382], [@CR386]\]. However, in most patients, peptide-based neoantigen vaccines cannot reliably produce a substantial neoantigen-specific CD8^+^ T-cell response. All active components of tumor vaccines, such as the neoantigens, formulations, and delivery systems, are constantly being improved. Synthetic self-amplifying mRNAs (samRNAs) containing replicase genes encoding an RNA-dependent RNA polymerase (RdRp) have attracted much attention because of their higher and longer-lasting antigen expression than traditional mRNAs. In vivo expression of vaccine neoantigens can also be enhanced by vectors that naturally carry genetic instructions, including adenoviruses (Ads), retroviruses, and adeno-associated viruses (AAVs). In addition, various nanoparticles have been developed, such as lipid nanoparticles, exosomes, virus-like particles, cage protein nanoparticles, bacterial membrane-based nanocarriers, high-density lipoprotein-mimicking nanodiscs, polymers, and polymer nanoparticles, to enhance transport and tissue penetration and improve the immunogenicity of personalized vaccines.
Compared with viral vectors, nanoparticles can also effectively deliver vaccines and immune adjuvants together to lymphatic organs to increase neoantigen presentation \[[@CR292], [@CR382], [@CR580]\].
Monocytes or hematopoietic progenitor cells isolated from blood and loaded with tumor neoantigens in vitro can effectively improve the anticancer effect of neoantigen vaccines. Autologous dendritic cells (DCs) can be loaded with neoantigens in the form of peptides, RNA, or DNA \[[@CR382], [@CR581]\]. Compared with the time-consuming and costly sequencing and computational analysis of patient-specific neoantigens, autologous DC vaccines loaded with whole-tumor lysate (WTL) induce a T-cell-mediated antitumor response in vitro and are a more convenient and economical way to induce neoantigen-specific immune responses. Whole tumor cells contain both MANAs and nonmutated TAAs, which may overcome potential immune escape and resistance mechanisms. However, the higher abundance of nonimmunogenic autoantigens may limit the ability of neoantigens to induce immune responses. Various immunosuppressive factors also exist in WTL, which inhibit DC maturation and T-cell activation \[[@CR582]--[@CR585]\]. To overcome these problems, extracellular vesicles (EVs) produced by tumor cells have recently been shown to be a vaccination platform that supports DC maturation and neoantigen presentation. Tumor cell-derived EVs can deliver tumor antigen libraries to DCs and promote the cross-presentation of neoantigens. EVs also carry high levels of immune stimulators, which can trigger DCs to release innate immune signals. In addition to tumor cell-derived EVs, DC-derived EVs can also serve as neoantigen-presenting units for immune cells. EV-based vaccines can transform \"cold\" tumors into \"hot\" tumors by modulating the tumor immune microenvironment and the systemic immune response \[[@CR586]--[@CR589]\]. Thus, EVs may provide an alternative form of neoantigen-based cancer vaccine that might be administered orally.
Although neoantigen-based immunotherapy has shown promising results in early preclinical and clinical studies, significant progress remains to be made, especially in patients with epithelial malignancies. Cancer cells have evolved built-in defenses to evade immune recognition at every stage of the cancer-immunity cycle. Given the complex mechanisms of immune escape in cancer, combination therapies that simultaneously target different stages of the cancer-immunity cycle may be more effective. Cell death following chemotherapy, radiotherapy, targeted therapy, photodynamic therapy, and oncolytic virus therapy promotes the production and release of neoantigens, thereby further reinforcing the antitumor immune cycle \[[@CR10], [@CR590]\]. The use of IFN-α, GM-CSF, anti-CD40, TLR agonists, and STING agonists can promote neoantigen presentation \[[@CR591]--[@CR594]\]. TME modification and intratumoral cytokines can be used to promote the infiltration of immune cells into the tumor. ICBs and IDO inhibitors can also alter immunosuppressive TMEs to enhance neoantigen-based immunotherapy \[[@CR322], [@CR378]\]. Nanoparticle- and EV-based drug delivery systems have recently been used as integrated platforms for the simultaneous administration of multiple synergistic therapeutics that activate different stages of the cancer-immunity cycle, reverse immunosuppression, and create immune-supportive TMEs \[[@CR595], [@CR596]\]. Such combination strategies, using therapeutic agents with different mechanisms of action, can induce strong, durable, tumor-specific immune responses in cancer patients.
The viability of TCR-T-cell therapy for tumors has been demonstrated. Although some problems remain in TCR-T-cell therapy, the powerful clinical application prospects of this kind of therapy deserve greater attention. With advances in tumor immunology and genetic engineering, TCR-T-cell therapy will become more individualized. At present, TCR-T cells are mainly directed at \"universal targets\": only the expression of the antigen and MHC molecules is considered, and when these two conditions are met, the cells are used to treat multiple patients. However, the actual situations of these patients vary greatly, and the low selectivity of TCR-T-cell therapy may be related to poor or ineffective treatment responses. The development of TCR-T cancer immunotherapies that target tumors expressing intracellular antigens has become a research hotspot in recent years. If individual TCR-T cells can be designed according to the expression of patients\' tumor-associated antigens or even neoantigens, the efficacy and safety of treatment may improve significantly. The maturation of sequencing technology and advances in cell culture technology have laid the foundation for this scenario. On the other hand, the use of less differentiated T cells or hematopoietic stem cells to produce TCR-T cells with longer in vivo survival will also improve the effectiveness of such treatments. In addition, TCR-T cells can be used in combination with immune checkpoint blockers to improve efficacy. In summary, TCR-T cells will play an increasingly important role in tumor therapy, and their development will bring more hope to tumor patients.
Jiangping Li conceived and designed the review; Jiangping Li, Zhiwen Xiao, Donghui Wang, Lei Jia, Shihong Nie, Xingda Zeng and Wei Hu wrote and revised the manuscript; Jiangping Li provided funding and supervision. All authors read and approved the final manuscript.
The study was financially supported by project grants from the National Natural Science Foundation of China (82271875).
All the data that support the findings of this study are available from the corresponding authors upon reasonable request.
The authors declare that they have no competing interests.
Collapsible bicycle
ABSTRACT
A motor vehicle includes a body having an interior which houses a spare wheel of the motor vehicle, a seat and removable headrest of the motor vehicle, and removable accessories stored in the motor vehicle. The removable accessories include at least a collapsible frame and a geared drive mechanism. The frame, the spare wheel, the removable headrest and the drive mechanism can be assembled together to form a bicycle which is separate from the motor vehicle.
BACKGROUND
Mobility in urban areas becomes increasingly difficult with population growth since increasing the infrastructure to accommodate a larger population can be difficult. For example, adding roads or increasing the size of existing roads to accommodate more passenger vehicles in urban areas can be onerous. Even if more roads were added and/or existing roads expanded, commuters to urban areas may nevertheless encounter increased pollution and parking shortages.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an exemplary bicycle assembled from several vehicle components.
FIG. 2 illustrates one possible arrangement of the disassembled bicycle in a vehicle.
FIG. 3 illustrates an assembly view of the exemplary bicycle.
FIG. 4 illustrates a collapsed view of the exemplary bicycle.
FIGS. 5A-C illustrate different views of an exemplary headrest that may be used as the seat of the bicycle.
FIG. 6 illustrates an exemplary jack incorporated into the bicycle.
FIG. 7 illustrates an exemplary spare wheel that may be incorporated into the bicycle.
DETAILED DESCRIPTION
An exemplary motor vehicle includes a body that houses a removable frame, a spare wheel, a removable headrest, and a jack. The frame, the spare wheel, the removable headrest, and the jack can be assembled into a bicycle. The bicycle, therefore, includes at least one component that has a dual use in a motor vehicle.
In one possible implementation, the motor vehicle may include a body and a removable and collapsible frame disposed in the vehicle body. A spare wheel for the vehicle is formed from a first wheel removably disposed on a second wheel. A seat is located in the vehicle body and supports a removable headrest that can attach to the frame as a bicycle seat. A jack is disposed in the vehicle body and can apply a linear force when operating in a first mode and a rotational force when operating in a second mode. The jack includes a worm gear assembly that can engage at least a portion of the spare wheel to apply the rotational force to at least one of the first wheel and the second wheel when operating in the second mode. The jack further includes a pedal that can receive a user's foot, and the movement of the pedal about an axis causes the jack to apply the rotational force to at least one of the first wheel and the second wheel. Accordingly, the frame, the spare wheel, the removable headrest, and the jack can be assembled into a bicycle.
FIG. 1 illustrates an exemplary bicycle 100 formed from various vehicle components. The bicycle 100 may take many different forms and include multiple and/or alternate components and facilities. While an exemplary bicycle 100 is shown, the exemplary components illustrated in the Figures are not intended to be limiting. Indeed, additional or alternative components and/or implementations may be used.
As shown in FIG. 1, the bicycle 100 includes a frame 105, wheels 110, handlebars 115, a seat 120, and a drive mechanism 125.
The frame 105 may be configured to structurally support one or more other components of the bicycle 100 such as the wheels 110, the handlebars 115, the seat 120, and the drive mechanism 125. The frame 105 may be formed from various materials such as steel, aluminum, titanium, carbon fiber, a thermoplastic, magnesium, scandium, beryllium, bamboo, wood, or any combination of these and possibly other materials with sufficient strength to support the other components of the bicycle 100. The frame 105 may be formed from different pieces, and each piece may have a particular cross-sectional configuration. In the implementation shown in FIG. 1, the frame 105 is formed from various tubes including a seat tube 130, a down tube 135, and a head tube 140. The seat tube 130 may support the seat 120. The down tube 135 may be attached to the rear wheel 110A, the seat tube 130, and the head tube 140. The head tube 140 may support the handlebars 115 and attach to the front wheel 110B. The seat tube 130, the down tube 135, and the head tube 140 may each have a generally cylindrical configuration with a generally circular cross-section.
The wheels 110 may include a rear wheel 110A and a front wheel 110B. The rear wheel 110A may be rotatably mounted to the down tube 135 and may be configured to receive a rotational force from the drive mechanism 125, as described in greater detail below. The rotation of the rear wheel 110A may cause the bicycle 100 to move. The front wheel 110B may be rotatably mounted to the head tube 140, which as discussed above may be connected to the handlebars 115. The head tube 140 may define an axis 145, and rotation of the handlebars 115 about the axis 145 may cause the front wheel 110B to rotate about the axis 145. Thus, the front wheel 110B may be used to steer the bicycle 100. Each wheel 110 may include a rim, a tire, a hub, and spokes. In some instances, the tire may include an inflatable tube. The rear wheel 110A and the front wheel 110B may be combined to form a spare wheel 110 that may be used in a vehicle. That is, the rear wheel 110A and the front wheel 110B may be fixed to one another for use as the spare wheel 110 in a vehicle or separated for use in the bicycle 100.
The handlebars 115 may include any steering mechanism that provides the rider with the necessary leverage to steer the bicycle 100. In some instances, the handlebars 115 may allow the rider to adjust a gear ratio of the drive mechanism 125 or apply brakes (not shown). Additionally, the handlebars 115 may support at least a portion of the rider's weight. Therefore, the handlebars 115 may be formed from a relatively lightweight, stiff material such as an aluminum alloy, steel, carbon fiber, or titanium.
The seat 120 or saddle may be configured to at least partially support the rider while riding the bicycle 100. The seat 120 may be attached to the seat tube 130 when the bicycle 100 is assembled. The seat 120 may include a shell surrounded by a padding material. The shell may be formed from a plastic, such as nylon, or carbon fiber. The padding material may be formed from, e.g., a foam or gel. In some instances, the seat 120 may also serve as one of the headrests in the vehicle. When the bicycle 100 is disassembled, the seat 120 of the bicycle 100 may be placed on one of the seats in the vehicle for use as a headrest. Thus, the seat 120 may conform to any regulations concerning vehicle headrests.
The drive mechanism 125 may be configured to apply a rotational force to the wheels 110 of the bicycle 100. The drive mechanism 125 may apply the rotational force to the rear wheel 110A, the front wheel 110B, or both wheels 110. In one possible approach, the drive mechanism 125 may include a pedal assembly with pedals 150 configured to receive each of a rider's feet. The pedals 150 may rotate about an axis 155 according to forces applied to the pedal 150 by the rider. The pedals 150 may be operably connected to a worm gear assembly 160. Thus, as the pedals 150 rotate about the axis 155, a first gear 165 operably connected to the pedals 150 may cause a worm gear 170 to rotate according to the rotation of the pedals 150 about the axis 155. The rotation of the worm gear 170 may cause a second gear 175 to rotate. The second gear 175 may be configured to engage, e.g., the rear wheel 110A, resulting in rotation of the rear wheel 110A in accordance with the rotation of the pedal 150 about the axis 155. Accordingly, the drive mechanism 125 may generate the rotational force commensurate with the rotation of the pedals 150 about the axis 155.
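The gear train just described (pedals, first gear 165, worm gear 170, second gear 175, rear wheel 110A) amounts to a chain of speed ratios multiplied together. The sketch below illustrates that arithmetic only; the tooth counts, worm starts, wheel diameter, and the assumption that the pedals and worm turn at the same speed are hypothetical values, not taken from this disclosure.

```python
import math

def wheel_rpm(pedal_rpm, stage_ratios):
    """Rear-wheel speed: pedal cadence times each stage's output/input ratio."""
    rpm = pedal_rpm
    for ratio in stage_ratios:
        rpm *= ratio
    return rpm

def speed_kmh(rpm, wheel_diameter_m):
    """Ground speed of a wheel turning at `rpm` with the given diameter."""
    meters_per_minute = rpm * math.pi * wheel_diameter_m
    return meters_per_minute * 60.0 / 1000.0

# Assumed stages: pedals drive the worm 1:1; a single-start worm on a
# 30-tooth gear gives a 1/30 reduction; the second gear steps up 10:1
# against the wheel. All values are illustrative.
ratios = [1.0, 1.0 / 30.0, 10.0]
rpm = wheel_rpm(90, ratios)  # 90 rpm cadence -> 30 rpm at the wheel
print(round(speed_kmh(rpm, 0.6), 2))  # -> 3.39
```

The worm stage dominates the result: a single-start worm meshing with a 30-tooth gear yields a 30:1 reduction, which suits the high-force, low-speed lifting of the jack's first mode far better than a road-speed drivetrain, so a practical bicycle configuration would need a much larger step-up elsewhere in the train.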
The drive mechanism 125 may be configured to operate in different modes. For instance, in a first mode, the drive mechanism 125 may be configured to apply a linear force. This way, the drive mechanism 125 may act as a jack to, e.g., at least partially lift a vehicle for maintenance such as changing a tire. In a second mode, however, the drive mechanism 125 may be configured to apply the rotational force discussed above to, e.g., the rear wheel 110A of the bicycle 100.
The bicycle 100 illustrated in FIG. 1 includes components that can also be used in a vehicle, such as a car, truck, sport utility vehicle, etc. For example, the rear wheel 110A and the front wheel 110B may be combined to form the spare wheel 110 for the vehicle. The seat 120 may be used as a headrest in the vehicle 180. The drive mechanism 125 may be used as a jack to at least partially lift the vehicle 180 during maintenance. Moreover, the frame 105 may be collapsible so that it may be easily stored in the vehicle 180 when the bicycle 100 is not in use. Additionally, a collapsible frame 105 may allow the rider to conveniently store the bicycle 100 in an ultimate destination such as an office.
FIG. 2 illustrates an exemplary view of the disassembled bicycle 100 stored in a vehicle 180. The vehicle 180 may include a body 185 defining various portions of the vehicle 180 such as a passenger compartment, a trunk, a cargo compartment, a hatch, or the like, and housing the components of the vehicle 180 and of the bicycle 100. The passenger compartment may include areas of the vehicle 180 where passengers may sit. The passenger compartment may include a driver seat, a passenger seat, and a rear bench seat 190. The components of the bicycle 100 may be stored behind the rear bench seat 190. In some vehicles 180, storing the bicycle 100 behind the rear bench seat 190 may place the bicycle 100 in the trunk. In other vehicles 180, storing the bicycle 100 behind the rear bench seat 190 may put the bicycle 100 in the cargo compartment. Not all components of the bicycle 100 may be stored behind the rear bench seat 190 in the trunk or cargo compartment, however. In some instances, the seat 120 may act as the headrest to one of the seats in the rear bench seat 190 or another seat in the vehicle 180. The rear wheel 110A and the front wheel 110B may be combined and stored in a spare tire well, which may be located in the trunk, cargo compartment, or possibly outside the vehicle 180 such as behind, in front of, or on top of the vehicle 180. The drive mechanism 125 may be stored in the passenger compartment, trunk, or cargo compartment. For instance, the drive mechanism 125 may be stored behind the rear bench seat 190 (as shown in FIG. 2), in the spare tire well, underneath the rear bench seat 190, or any other place in the vehicle 180.
In the exemplary approach illustrated in FIG. 2, the bicycle 100 is shown in two portions. A front portion includes the handlebars 115, the front tube, and the front wheel 110B. The rear portion includes the seat tube 130, the down tube 135, and the rear wheel 110A. The rear portion as shown further includes the drive mechanism 125. The seat 120 of the bicycle 100 is shown as the headrest of one of the seats in the rear bench seat 190. As discussed above, at least some of these components, such as the drive mechanism 125 and wheels 110, of the bicycle 100 may be stored elsewhere in the vehicle 180.
FIG. 3 illustrates an assembly view of the bicycle 100 shown in FIG. 1. To assemble the bicycle 100, the down tube 135 may be connected to the head tube 140. A connector 195, which may be disposed on or integrally formed with the head tube 140, may be configured to receive the down tube 135. The height of the bicycle 100 may be adjusted by adjusting a length of the down tube 135 and the head tube 140. That is, both the down tube 135 and head tube 140 may have a telescoping configuration to elongate or shorten according to the desires of the rider. Lengthening the down tube 135 and the head tube 140 may make the bicycle 100 longer in length and taller in height. Conversely, shortening the down tube 135 and the head tube 140 may make the bicycle 100 shorter in both length and height. Once the down tube 135 and head tube 140 are connected, the headrest may be removed from one of the seats in the passenger compartment of the vehicle 180 and placed on the seat tube 130 to act as the seat 120 of the bicycle 100. In some instances, the spare wheel 110 may be at least partially disassembled to separate the front wheel 110B and the rear wheel 110A. The front wheel 110B may be attached to the head tube 140 and the rear wheel 110A may be attached to the down tube 135. Furthermore, the jack may be disposed on either the down tube 135 or the seat tube 130 and operably engaged with the rear wheel 110A. As discussed above, the jack may be placed in a mode of operation consistent with generating a rotating force as opposed to a linear force.
FIG. 4 illustrates an exemplary approach where the bicycle 100 may be collapsible, e.g., to make the bicycle 100 easy to carry into a building or other area where riding a bicycle 100 is not typically permitted or desired. The bicycle 100 may be collapsed by removing the down tube 135 from the head tube 140 and removing the seat tube 130 from the down tube 135. The rear wheel 110A and front wheel 110B may be aligned to reduce a footprint of the bicycle 100 and to make the bicycle 100 easier for the rider to carry.
FIGS. 5A-C illustrate different views of an exemplary headrest that may be used as the seat 120 of the bicycle 100. FIG. 5A illustrates a front view of the exemplary headrest. The front view includes the surface of the headrest that would contact the back of a passenger's head when the headrest is placed on a seat 120 in the passenger compartment of the vehicle 180. FIGS. 5B and 5C illustrate perspective views of the headrest of FIG. 5A. The nose 200 and support 205 portions of the seat 120 are viewable in FIGS. 5B and 5C.
FIG. 6 illustrates an exemplary jack that may serve as the drive mechanism 125 for the bicycle 100. As discussed above, the drive mechanism 125 may include a pedal assembly with pedals 150 configured to receive each of a rider's feet. The pedals 150 may rotate about an axis 155 according to forces applied to the pedal 150 by the rider. The pedals 150 may be operably connected to a worm gear assembly 160. Thus, as the pedals 150 rotate about the axis 155, a first gear 165 may cause a worm gear 170 to rotate. The rotation of the worm gear 170 may cause a second gear 175 to rotate. The second gear 175 may be configured to engage, e.g., the rear wheel 110A, resulting in rotation of the rear wheel 110A in accordance with the rotation of the pedal 150 about the axis 155. Accordingly, the drive mechanism 125 may generate the rotational force commensurate with the rotation of the pedals 150 about the axis 155.
FIG. 7 illustrates an exemplary spare tire that may be used for the wheels 110 of the bicycle 100. As shown, the spare tire includes a rear wheel 110A fixed to a front wheel 110B. When used as the spare tire, the rear wheel 110A and front wheel 110B may remain attached. When used in the bicycle 100, however, the rear wheel 110A may be removed from the front wheel 110B. When the bicycle 100 is assembled, the rear wheel 110A may be connected to the down tube 135 and the front wheel 110B may be connected to the head tube 140.
With regard to the processes, systems, methods, heuristics, etc. described herein, it should be understood that, although the steps of such processes, etc. have been described as occurring according to a certain ordered sequence, such processes could be practiced with the described steps performed in an order other than the order described herein. It further should be understood that certain steps could be performed simultaneously, that other steps could be added, or that certain steps described herein could be omitted. In other words, the descriptions of processes herein are provided for the purpose of illustrating certain embodiments, and should in no way be construed so as to limit the claims.
Accordingly, it is to be understood that the above description is intended to be illustrative and not restrictive. Many embodiments and applications other than the examples provided would be apparent upon reading the above description. The scope should be determined, not with reference to the above description, but should instead be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. It is anticipated and intended that future developments will occur in the technologies discussed herein, and that the disclosed systems and methods will be incorporated into such future embodiments. In sum, it should be understood that the application is capable of modification and variation.
All terms used in the claims are intended to be given their broadest reasonable constructions and their ordinary meanings as understood by those knowledgeable in the technologies described herein unless an explicit indication to the contrary is made herein. In particular, use of the singular articles such as “a,” “the,” “said,” etc. should be read to recite one or more of the indicated elements unless a claim recites an explicit limitation to the contrary.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
The invention claimed is:
1. A motor vehicle comprising: a body having an interior which houses a spare wheel of the motor vehicle, a seat of the motor vehicle, the seat having a removable headrest, and removable accessories stored in the motor vehicle, wherein the removable accessories include at least a frame and a drive mechanism; wherein the frame, the spare wheel, the removable headrest and the drive mechanism are configured to be assembled together to form a bicycle which is separate from the motor vehicle.
2. The motor vehicle of claim 1, wherein the drive mechanism is configured to apply a rotational force to at least a portion of the spare wheel when at least the drive mechanism and the spare wheel are assembled together to form the bicycle.
3. The motor vehicle of claim 2, wherein the drive mechanism includes at least a worm gear assembly configured to engage at least a portion of the spare wheel when at least the drive mechanism and the spare wheel are assembled together to form the bicycle.
4. The motor vehicle of claim 2, wherein the drive mechanism includes at least a pedal configured to receive a user's foot, wherein movement of the pedal about an axis causes the drive mechanism to apply the rotational force to the at least a portion of the spare wheel when at least the drive mechanism and the spare wheel are assembled together to form the bicycle.
5. The motor vehicle of claim 1, wherein the spare wheel is formed from a first wheel and a second wheel disposed adjacent to the first wheel.
6. The motor vehicle of claim 5, wherein the first wheel is removable from the second wheel.
7. The motor vehicle of claim 5, wherein at least one of the first wheel and the second wheel is configured to rotate in accordance with a rotational force applied by the drive mechanism when at least the drive mechanism and at least one of the first wheel and the second wheel are assembled together to form the bicycle.
8. The motor vehicle of claim 1, wherein the removable headrest is configured to attach to the frame and function as a bicycle seat when at least the removable headrest and the frame are assembled together to form the bicycle.
9. The motor vehicle of claim 1, wherein the frame includes portions which are collapsible.
10. A motor vehicle comprising: a body having an interior; a spare wheel disposed in the interior and formed from a first wheel disposed on a second wheel, wherein the first wheel is removable from the second wheel; and a seat disposed in the interior and having a removable headrest; wherein the interior further houses removable accessories stored in the motor vehicle, wherein the removable accessories include at least a frame and a drive mechanism; wherein the frame, the spare wheel, the removable headrest and the drive mechanism are configured to be assembled together to form a bicycle which is separate from the motor vehicle.
|
#! /usr/bin/env sh
set -e
__am_prompt_remove_backup() {
BACKUP=$1
if [ -z "${BACKUP:-}" ]; then
BACKUP='prompt'
fi
REMOVE_PATH="$AM_HOME/backup/$BACKUP"
print-warn "removing path: $REMOVE_PATH..."
rm -rf "$REMOVE_PATH" 2>/dev/null
}
__am_prompt_remove_backup "$@"
|
Page:Emma Goldman - Anarchism and Other Essays (2nd Rev. ed.) - 1911.djvu/214
the most fair-minded and liberal man, Governor Waite. The latter had to make way for the tool of the mine kings, Governor Peabody, the enemy of labor, the Tsar of Colorado. "Certainly male suffrage could have done nothing worse." Granted. Wherein, then, are the advantages to woman and society from woman suffrage? The oft-repeated assertion that woman will purify politics is also but a myth. It is not borne out by the people who know the political conditions of Idaho, Colorado, Wyoming, and Utah.
Woman, essentially a purist, is naturally bigoted and relentless in her effort to make others as good as she thinks they ought to be. Thus, in Idaho, she has disfranchised her sister of the street, and declared all women of "lewd character" unfit to vote. "Lewd" not being interpreted, of course, as prostitution in marriage. It goes without saying that illegal prostitution and gambling have been prohibited. In this regard the law must needs be of feminine gender: it always prohibits. Therein all laws are wonderful. They go no further, but their very tendencies open all the floodgates of hell. Prostitution and gambling have never done a more flourishing business than since the law has been set against them.
In Colorado, the Puritanism of woman has expressed itself in a more drastic form. "Men of notoriously unclean lives, and men connected with saloons, have been dropped from politics since women have the vote." Could Brother Comstock do more?
|
module Events
module CardEvents
class LiberationTheology < Instruction
needs :countries
def action
instructions = []
instructions << Arbitrators::AddInfluence.new(
player: USSR,
influence: USSR,
country_names: central_america,
limit_per_country: 2,
total_influence: 3
)
instructions << Instructions::Discard.new(
card_ref: "LiberationTheology"
)
instructions
end
def central_america
countries.
select { |c| c.in?(CentralAmerica) }.
map { |c| c.name }
end
end
end
end
|
import { empty as observableEmpty, Subject } from "rxjs";
import { skip, first, take, count } from "rxjs/operators";
import { PerspectiveCamera } from "three";
import { Camera } from "../../src/geo/Camera";
import { ViewportSize } from "../../src/render/interfaces/ViewportSize";
import { RenderCamera } from "../../src/render/RenderCamera";
import { RenderMode } from "../../src/render/RenderMode";
import { RenderService } from "../../src/render/RenderService";
import { AnimationFrame } from "../../src/state/interfaces/AnimationFrame";
import { FrameHelper } from "../helper/FrameHelper";
const createFrame: (frameId: number, alpha?: number, camera?: Camera) => AnimationFrame =
(frameId: number, alpha?: number, camera?: Camera): AnimationFrame => {
const frame: AnimationFrame = new FrameHelper().createFrame();
frame.id = frameId;
frame.state.alpha = alpha != null ? alpha : 0;
frame.state.camera = camera != null ? camera : new Camera();
frame.state.currentCamera = camera != null ? camera : new Camera();
return frame;
};
describe("RenderService.ctor", () => {
it("should be constructed", () => {
let element: HTMLDivElement = document.createElement("div");
let renderService: RenderService =
new RenderService(element, observableEmpty(), RenderMode.Letterbox);
expect(renderService).toBeDefined();
});
});
describe("RenderService.renderMode", () => {
it("should default to fill", (done: Function) => {
let element: HTMLDivElement = document.createElement("div");
let renderService: RenderService =
new RenderService(element, observableEmpty(), null);
renderService.renderMode$.pipe(
first())
.subscribe(
(renderMode: RenderMode): void => {
expect(renderMode).toBe(RenderMode.Fill);
done();
});
});
it("should set initial render mode to constructor parameter", (done: Function) => {
let element: HTMLDivElement = document.createElement("div");
let renderService: RenderService =
new RenderService(element, observableEmpty(), RenderMode.Letterbox);
renderService.renderMode$.pipe(
first())
.subscribe(
(renderMode: RenderMode): void => {
expect(renderMode).toBe(RenderMode.Letterbox);
done();
});
});
it("should return latest render mode on subscription", (done: Function) => {
let element: HTMLDivElement = document.createElement("div");
let renderService: RenderService =
new RenderService(element, observableEmpty(), RenderMode.Letterbox);
renderService.renderMode$.next(RenderMode.Fill);
renderService.renderMode$.pipe(
first())
.subscribe(
(renderMode: RenderMode): void => {
expect(renderMode).toBe(RenderMode.Fill);
done();
});
});
});
describe("RenderService.size", () => {
it("should be defined", (done: Function) => {
let element: HTMLDivElement = document.createElement("div");
let renderService: RenderService =
new RenderService(element, observableEmpty(), RenderMode.Letterbox);
renderService.size$.pipe(
first())
.subscribe(
(size: ViewportSize): void => {
expect(size).toBeDefined();
done();
});
});
it("should have an initial value", (done: Function) => {
let element: HTMLDivElement = document.createElement("div");
let renderService: RenderService =
new RenderService(element, observableEmpty(), RenderMode.Letterbox);
renderService.size$.pipe(
first())
.subscribe(
(size: ViewportSize): void => {
expect(size.width).toBe(0);
expect(size.height).toBe(0);
done();
});
});
it("should emit new value on resize", (done: Function) => {
let element: HTMLDivElement = document.createElement("div");
let renderService: RenderService =
new RenderService(element, observableEmpty(), RenderMode.Letterbox);
renderService.size$.pipe(
take(2))
.subscribe(
(size: ViewportSize): void => { return; },
(e: Error): void => { return; },
(): void => { done(); });
renderService.resize$.next();
});
});
describe("RenderService.renderCameraFrame", () => {
const createRenderCameraMock: () => RenderCamera = (): RenderCamera => {
const renderCamera: RenderCamera = new RenderCamera(1, 1, RenderMode.Fill);
return renderCamera;
};
it("should be defined", (done: Function) => {
let element: HTMLDivElement = document.createElement("div");
let frame$: Subject<AnimationFrame> = new Subject<AnimationFrame>();
const renderCamera: RenderCamera = createRenderCameraMock();
let renderService: RenderService = new RenderService(element, frame$, RenderMode.Letterbox, renderCamera);
renderService.renderCameraFrame$.pipe(
first())
.subscribe(
(rc: RenderCamera): void => {
expect(rc).toBeDefined();
done();
});
frame$.next(createFrame(0));
frame$.complete();
});
it("should be changed for first frame", (done: Function) => {
let element: HTMLDivElement = document.createElement("div");
let frame$: Subject<AnimationFrame> = new Subject<AnimationFrame>();
const renderCamera: RenderCamera = createRenderCameraMock();
let renderService: RenderService = new RenderService(element, frame$, RenderMode.Letterbox, renderCamera);
renderService.renderCameraFrame$.pipe(
first())
.subscribe(
(rc: RenderCamera): void => {
expect(rc.changed).toBe(true);
done();
});
frame$.next(createFrame(0));
frame$.complete();
});
it("should not be changed for two identical frames", (done: Function) => {
let element: HTMLDivElement = document.createElement("div");
let frame$: Subject<AnimationFrame> = new Subject<AnimationFrame>();
const renderCamera: RenderCamera = createRenderCameraMock();
let renderService: RenderService = new RenderService(element, frame$, RenderMode.Letterbox, renderCamera);
renderService.renderCameraFrame$.pipe(
skip(1),
first())
.subscribe(
(rc: RenderCamera): void => {
expect(rc.changed).toBe(false);
done();
});
frame$.next(createFrame(0));
frame$.next(createFrame(1));
frame$.complete();
});
it("should be changed for alpha changes between two frames", (done: Function) => {
let element: HTMLDivElement = document.createElement("div");
let frame$: Subject<AnimationFrame> = new Subject<AnimationFrame>();
const renderCamera: RenderCamera = createRenderCameraMock();
let renderService: RenderService = new RenderService(element, frame$, RenderMode.Letterbox, renderCamera);
renderService.renderCameraFrame$.pipe(
skip(1),
first())
.subscribe(
(rc: RenderCamera): void => {
expect(rc.changed).toBe(true);
done();
});
frame$.next(createFrame(0, 0.00));
frame$.next(createFrame(1, 0.01));
frame$.complete();
});
it("should be changed for camera changes between two frames", (done: Function) => {
let element: HTMLDivElement = document.createElement("div");
let frame$: Subject<AnimationFrame> = new Subject<AnimationFrame>();
const renderCamera: RenderCamera = createRenderCameraMock();
let renderService: RenderService = new RenderService(element, frame$, RenderMode.Letterbox, renderCamera);
renderService.renderCameraFrame$.pipe(
skip(1),
first())
.subscribe(
(rc: RenderCamera): void => {
expect(rc.changed).toBe(true);
done();
});
let camera: Camera = new Camera();
frame$.next(createFrame(0, 0, camera));
camera.position.x = 0.01;
frame$.next(createFrame(1, 0, camera));
frame$.complete();
});
it("should be changed for resize", (done: Function) => {
let element: HTMLDivElement = document.createElement("div");
let frame$: Subject<AnimationFrame> = new Subject<AnimationFrame>();
const renderCamera: RenderCamera = createRenderCameraMock();
let renderService: RenderService = new RenderService(element, frame$, RenderMode.Letterbox, renderCamera);
renderService.renderCameraFrame$.pipe(
skip(1),
first())
.subscribe(
(rc: RenderCamera): void => {
expect(rc.changed).toBe(true);
done();
});
frame$.next(createFrame(0));
renderService.resize$.next();
frame$.next(createFrame(1));
frame$.complete();
});
it("should be changed for changed render mode", (done: Function) => {
let element: HTMLDivElement = document.createElement("div");
let frame$: Subject<AnimationFrame> = new Subject<AnimationFrame>();
const renderCamera: RenderCamera = createRenderCameraMock();
let renderService: RenderService = new RenderService(element, frame$, RenderMode.Letterbox, renderCamera);
renderService.renderCameraFrame$.pipe(
skip(1),
first())
.subscribe(
(rc: RenderCamera): void => {
expect(rc.changed).toBe(true);
done();
});
frame$.next(createFrame(0));
renderService.renderMode$.next(RenderMode.Fill);
frame$.next(createFrame(1));
frame$.complete();
});
it("should have correct render mode when changed before subscribe", (done: Function) => {
let element: HTMLDivElement = document.createElement("div");
let frame$: Subject<AnimationFrame> = new Subject<AnimationFrame>();
const renderCamera: RenderCamera = createRenderCameraMock();
let renderService: RenderService = new RenderService(element, frame$, RenderMode.Letterbox, renderCamera);
renderService.renderMode$.next(RenderMode.Fill);
renderService.renderCameraFrame$.pipe(
first())
.subscribe(
(rc: RenderCamera): void => {
expect(rc.renderMode).toBe(RenderMode.Fill);
done();
});
frame$.next(createFrame(0));
frame$.complete();
});
it("should emit once for each frame", (done: Function) => {
let element: HTMLDivElement = document.createElement("div");
let frame$: Subject<AnimationFrame> = new Subject<AnimationFrame>();
const renderCamera: RenderCamera = createRenderCameraMock();
let renderService: RenderService = new RenderService(element, frame$, RenderMode.Letterbox, renderCamera);
renderService.renderCameraFrame$.pipe(
count())
.subscribe(
(emitCount: number): void => {
expect(emitCount).toBe(4);
done();
});
frame$.next(createFrame(0));
frame$.next(createFrame(1));
renderService.renderMode$.next(RenderMode.Fill);
renderService.resize$.next();
frame$.next(createFrame(2));
frame$.next(createFrame(3));
frame$.complete();
});
});
describe("RenderService.renderCamera$", () => {
it("should be defined", (done: Function) => {
let element: HTMLDivElement = document.createElement("div");
let frame$: Subject<AnimationFrame> = new Subject<AnimationFrame>();
const renderCamera: RenderCamera = new RenderCamera(1, 1, RenderMode.Fill);
let renderService: RenderService = new RenderService(element, frame$, RenderMode.Letterbox, renderCamera);
renderService.renderCamera$.pipe(
first())
.subscribe(
(rc: RenderCamera): void => {
expect(rc).toBeDefined();
done();
});
frame$.next(createFrame(0));
frame$.complete();
});
it("should only emit when camera has changed", (done: Function) => {
let element: HTMLDivElement = document.createElement("div");
let frame$: Subject<AnimationFrame> = new Subject<AnimationFrame>();
const renderCamera: RenderCamera = new RenderCamera(1, 1, RenderMode.Fill);
let renderService: RenderService = new RenderService(element, frame$, RenderMode.Letterbox, renderCamera);
renderService.renderCamera$.pipe(
skip(1),
first())
.subscribe(
(rc: RenderCamera): void => {
expect(rc.frameId).toBe(2);
done();
});
frame$.next(createFrame(0));
frame$.next(createFrame(1));
frame$.next(createFrame(2, 0.5));
frame$.complete();
});
it("should check width and height only once on resize", () => {
let element: any = {
get offsetHeight(): number {
return this.getOffsetHeight();
},
getOffsetHeight(): number {
return 0;
},
get offsetWidth(): number {
return this.getOffsetWidth();
},
getOffsetWidth(): number {
return 0;
},
appendChild(htmlElement: HTMLElement): void { return; },
};
let frame$: Subject<AnimationFrame> = new Subject<AnimationFrame>();
let renderService: RenderService = new RenderService(element, frame$, RenderMode.Letterbox);
renderService.size$.subscribe(() => { /*noop*/ });
renderService.size$.subscribe(() => { /*noop*/ });
spyOn(element, "getOffsetHeight");
spyOn(element, "getOffsetWidth");
renderService.resize$.next();
expect((<jasmine.Spy>element.getOffsetHeight).calls.count()).toBe(1);
expect((<jasmine.Spy>element.getOffsetWidth).calls.count()).toBe(1);
});
});
describe("RenderService.bearing$", () => {
it("should be defined", (done: Function) => {
let element: HTMLDivElement = document.createElement("div");
let frame$: Subject<AnimationFrame> = new Subject<AnimationFrame>();
const renderCamera: RenderCamera = new RenderCamera(1, 1, RenderMode.Fill);
let renderService: RenderService = new RenderService(element, frame$, RenderMode.Letterbox, renderCamera);
renderService.bearing$.pipe(
first())
.subscribe(
(bearing: number): void => {
expect(bearing).toBeDefined();
done();
});
frame$.next(createFrame(0));
});
it("should be 90 degrees", (done: Function) => {
let element: HTMLDivElement = document.createElement("div");
let frame$: Subject<AnimationFrame> = new Subject<AnimationFrame>();
const renderCamera: RenderCamera = new RenderCamera(1, 1, RenderMode.Fill);
let renderService: RenderService = new RenderService(element, frame$, RenderMode.Letterbox, renderCamera);
renderService.bearing$.pipe(
first())
.subscribe(
(bearing: number): void => {
expect(bearing).toBeCloseTo(90, 5);
done();
});
let frame: AnimationFrame = createFrame(0);
frame.state.camera.lookat.set(1, 0, 0);
frame$.next(frame);
});
});
describe("RenderService.projectionMatrix$", () => {
it("should set projection on render camera", () => {
const element = document.createElement("div");
const camera = new RenderCamera(1, 1, RenderMode.Letterbox);
const renderService =
new RenderService(
element,
observableEmpty(),
RenderMode.Letterbox,
camera);
const projectionMatrix = new PerspectiveCamera(30, 3, 0.1, 1000)
.projectionMatrix
.toArray();
renderService.projectionMatrix$.next(projectionMatrix);
expect(camera.perspective.projectionMatrix.toArray())
.toEqual(projectionMatrix);
});
});
|
ES6 + Angular Controller class, getting this is undefined in callback
Consider the following class
class LoginController{
constructor(authService,$timeout,$state){
let vm = this;
this.loading = false;
this._authService = authService;
this._$timeout = $timeout;
this._$state = $state;
this.loading = false;
this.statusMessage = null;
}
login(){
this.loading = true;
this.statusMessage = null;
let loginModel = {
UserName : this.username,
Password : this.password,
RememberMe : this.rememberMe
};
//Login User
this._authService.login(loginModel).then(function(user){
//Set User Login & send to Dashboard
this._authService.setUser(user);
this._$state.go("dashboard");
}, function(error){
const errorMessage = error ? error.Message : "Undefined Login Issue occurred !";
this.loading = false;
});
}
}
Everything is working fine, except that when I hit the error callback function and it gets to this.loading = false;, this is for some reason undefined.
How do I keep a reference to the Class "this" in the error callback ?
Did you solve the problem already or need more help?
You have to use fat arrows (arrow functions), which keep the lexical this of the enclosing scope.
//Login User
this._authService.login(loginModel).then((user) => {
//Set User Login & send to Dashboard
this._authService.setUser(user);
this._$state.go("dashboard");
}, (error) => {
const errorMessage = error ? error.Message : "Undefined Login Issue occurred !";
this.loading = false;
});
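Outside Angular, the difference is easy to demonstrate. A minimal sketch (the Counter class below is hypothetical, not from the question):

```javascript
class Counter {
    constructor() {
        this.count = 0;
    }
    arrowIncrement() {
        // Arrow functions have no `this` of their own; they close over
        // the `this` of the enclosing method, i.e. the Counter instance.
        const fn = () => { this.count += 1; };
        fn();
        return this.count;
    }
    plainThis() {
        // A plain function gets its own `this`. Called without a receiver
        // in strict mode (class bodies are strict), that `this` is undefined.
        function fn() { return this; }
        return fn();
    }
}
```

This is exactly what happens in the question: the then callbacks are plain functions, so their this is not the controller.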
Are fat arrows something new in ES6? I used to define this of the parent scope as self or that.
really? sorry i don't know much of ES6 but if i use => i have the same instance of this inside the callback? it's really strange
@seahorsepip yes, I love them. They replace the self/me/vm/function(){}.bind(this) code.
@gianlucatursi It's not strange, it's an awesome feature of ES6 :) (they borrowed the idea from C#)
@NielsSteenbeek awesome! Thank you so much! this is a great idea (copied) :)
"fat-arrows" won't do anything, but arrow functions might ^^ "they borrowed the idea from C#" not really.
This is a very common problem of passing context to a callback function. The generic answer is to declare something like
var self=this; within the context from which you want to keep "this"
and then in your callback function reference it like so
function callback () {
self.myvariable=true;
};
in your particular case you have already declared
let vm = this;
so you could use
vm._authService.setUser(user);
vm._$state.go("dashboard");
as Niels said you can use the => arrows in ecmascript 6 to maintain scope, which is reasonable as you are using the "let" keyword which requires the same version of ecmascript.
function(error){
this.loading = false;
}
The this inside the function gets its own binding: a plain callback is invoked without a receiver, so its this is undefined in strict mode (or the global object otherwise), not the parent instance.
Solution,
define this as self:
class LoginController{
constructor(authService,$timeout,$state){
let vm = this;
this.loading = false;
this._authService = authService;
this._$timeout = $timeout;
this._$state = $state;
this.loading = false;
this.statusMessage = null;
}
login(){
var self = this; //Define this as self
this.loading = true;
this.statusMessage = null;
let loginModel = {
UserName : this.username,
Password : this.password,
RememberMe : this.rememberMe
};
//Login User
this._authService.login(loginModel).then(function(user){
//Set User Login & send to Dashboard
self._authService.setUser(user);
self._$state.go("dashboard");
}, function(error){
const errorMessage = error ? error.Message : "Undefined Login Issue occurred !";
self.loading = false; //Self refers to this of parent scope
});
}
}
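For completeness, the .bind(this) variant mentioned in the comments achieves the same capture without a self variable. A standalone sketch (the ctrl object here is hypothetical, standing in for the controller):

```javascript
const ctrl = { loading: true };

// Variant 1: capture the outer object in a closure variable (the "self" pattern).
function makeSelfCallback(self) {
    return function () { self.loading = false; };
}

// Variant 2: fix the callback's `this` explicitly with Function.prototype.bind.
const boundCallback = function () { this.loading = false; }.bind(ctrl);

makeSelfCallback(ctrl)();   // sets ctrl.loading = false via the closure
ctrl.loading = true;        // reset
boundCallback();            // sets ctrl.loading = false via the bound this
```

Both variants work in plain ES5; arrow functions just make the same capture implicit.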
1) ES6 fat-arrows solve the ugly self/me/vm variables. 2) Your code still fails inside the then.
My bad fixed the then. Too bad I can't use ES6 yet in my projects, need to support IE :'(
You can use babel-polyfill to support Promise in IE.
Same; lots of mobile devices don't support some of the new ECMAScript features, nor do Cordova, Browserify, etc. =s
Nice, I'll look into it! Though some jQuery plugins I'm writing are still better written in ES5, since polyfills add another few kB to the file size, which I try to keep to a minimum in projects. Maybe I'll move to ES6 in the future and compile it to ES5 if it doesn't add too much extra file size.
@MichaelWhinfrey I'm creating Cordova apps :). Using Webpack, babel to transpile the ES6 to ES5. So, no reason to not use ES6.
@NielsSteenbeek =) thanks, that's awesome info. I've only discovered these issues myself in this last week
|
What does the new download link to the 1.3.0.tgz look like?
I would like to download the tgz file.
The files are in gh-pages: https://github.com/Azure/secrets-store-csi-driver-provider-azure/tree/gh-pages/charts
|
Talk:Vasili Bogazianos
Greek?
Beside the obvious, is there an actual good source that says he's Greek? (and I don't mean IMDB) JackO'Lantern 06:49, 7 April 2006 (UTC)
|
36. A system for controlling access to enterprise resources, comprising: a biometric server having stored therein biometric data related to a plurality of users, at least one biometric group that the user is associated with and at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources; at least one computer connected to said biometric server; a plurality of biometric devices, wherein said biometric policy has associated therewith at least one of said plurality of biometric devices; and wherein said biometric server includes means for indicating whether the user can access said enterprise resources, wherein said user may gain access to the enterprise resources by passing one of said biometric policies that has been assigned to the user, wherein said biometric policy is a CONTINGENT policy having a list of biometric policies, wherein said list of biometric policies includes at least two biometric policies, and wherein the user passes said CONTINGENT policy if either the user exceeds a minimum threshold associated with a first biometric policy or if the user exceeds a contingent threshold associated with said first biometric policy and the user exceeds a minimum threshold associated with a second biometric policy.
37. A system for controlling access to enterprise resources, comprising: a biometric server having stored therein biometric data related to a plurality of users, at least one biometric group that the user is associated with and at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources; at least one computer connected to said biometric server; a plurality of biometric devices, wherein said biometric policy has associated therewith at least one of said plurality of biometric devices; and wherein said biometric server includes means for indicating whether the user can access said enterprise resources, wherein said user may gain access to the enterprise resources by passing one of said biometric policies that has been assigned to the user, wherein said biometric policy is a RANDOM policy having a list of biometric policies, wherein said list of biometric policies includes at least two biometric policies, wherein a random biometric policy is determined from said list of biometric policies, and wherein the user passes said RANDOM policy if the user passes said random biometric policy.
38. A system for controlling access to enterprise resources, comprising: a biometric server having stored therein biometric data related to a plurality of users, at least one biometric group that the user is associated with and at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources; at least one computer connected to said biometric server; a plurality of biometric devices, wherein said biometric policy has associated therewith at least one of said plurality of biometric devices; and wherein said biometric server includes means for indicating whether the user can access said enterprise resources wherein said user may gain access to the enterprise resources by passing one of said biometric policies that has been assigned to the user; wherein said biometric policy is a THRESHOLD policy having a list of biometric policies, wherein said list of biometric policies includes at least two biometric policies, and wherein the user passes said THRESHOLD policy if the user exceeds a total threshold while being tested on one or more of said biometric policies in said list of biometric policies.
39. A system for controlling access to enterprise resources, comprising: a biometric server having stored therein biometric data related to a plurality of users, at least one biometric group that the user is associated with and at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources; at least one computer connected to said biometric server; a plurality of biometric devices, wherein said biometric policy has associated therewith at least one of said plurality of biometric devices; and wherein said biometric server includes means for indicating whether the user can access said enterprise resources, wherein said user may gain access to the enterprise resources by passing one of said biometric policies that has been assigned to the user, wherein said biometric policy is an OR policy having a list of policies or devices, wherein said list of policies or devices includes at least two elements, and wherein the user passes said OR policy if the user passes one of said elements in said list of policies or devices.
40. A system for controlling access to enterprise resources, comprising: a biometric server having stored therein biometric data related to a plurality of users, at least one biometric group that the user is associated with and at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources; at least one computer connected to said biometric server; a plurality of biometric devices, wherein said biometric policy has associated therewith at least one of said plurality of biometric devices; and wherein said biometric server includes means for indicating whether the user can access said enterprise resources, wherein said user may gain access to the enterprise resources by passing one of said biometric policies that has been assigned to the user, wherein said biometric policy is an AND policy having a list of policies or devices, wherein said list of policies or devices includes at least two elements, and wherein the user passes said AND policy if the user passes all of said elements in said list of policies or devices.
41. A system for controlling access to enterprise resources, comprising: a biometric server having stored therein biometric data related to a plurality of users, at least one biometric group that the user is associated with and at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources; at least one computer connected to said biometric server; a plurality of biometric devices, wherein said biometric policy has associated therewith at least one of said plurality of biometric devices; and wherein said biometric server includes means for indicating whether the user can access said enterprise resources, wherein said user may gain access to the enterprise resources by passing one of said biometric policies that has been assigned to the user, wherein said biometric policy is a CONTINGENT policy having a list of policies or devices, wherein said list of policies or devices includes at least two elements, and wherein the user passes said CONTINGENT policy if either the user exceeds a minimum threshold associated with a first element or if the user exceeds a contingent threshold associated with said first element and the user exceeds a minimum threshold associated with a second element.
42. A system for controlling access to enterprise resources, comprising: a biometric server having stored therein biometric data related to a plurality of users, at least one biometric group that the user is associated with and at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources; at least one computer connected to said biometric server; a plurality of biometric devices, wherein said biometric policy has associated therewith at least one of said plurality of biometric devices; and wherein said biometric server includes means for indicating whether the user can access said enterprise resources, wherein said user may gain access to the enterprise resources by passing one of said biometric policies that has been assigned to the user, wherein said biometric policy is a RANDOM policy having a list of policies or devices, wherein said list of policies or devices includes at least two elements, wherein a random element is determined from said elements in said list of policies or devices, and wherein the user passes said RANDOM policy if the user passes said random element.
43. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is specified as one of the following: i. an OR policy having a list of devices; ii. an AND policy having a list of devices; iii. a CONTINGENT policy having a list of devices; iv. a RANDOM policy having a list of devices; or v. a THRESHOLD policy having a list of devices.
44. The method according to claim 43, wherein said biometric policy is further specified as one of the following: vi. an OR policy having a list of biometric policies; vii. an AND policy having a list of biometric policies; viii. a CONTINGENT policy having a list of biometric policies; ix. a RANDOM policy having a list of biometric policies; or x. a THRESHOLD policy having a list of biometric policies.
45. The method according to claim 44, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
46. The method according to claim 43, wherein said biometric policy is further specified as one of the following: vi. an OR policy having a list of policies or devices; vii. an AND policy having a list of policies or devices; viii. a CONTINGENT policy having a list of policies or devices; ix. a RANDOM policy having a list of policies or devices; or x. a THRESHOLD policy having a list of policies or devices.
47. The method according to claim 46, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
48. The method according to claim 43, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
49. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is an OR policy having a list of devices, wherein said list of devices includes at least two different biometric devices, and wherein the user passes said OR policy if the user passes one of said biometric devices in said list of devices.
50. The method according to claim 49, wherein said biometric server has stored therein at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources.
51. The method according to claim 49, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
52. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is an OR policy having a list of devices, wherein said list of devices includes only one biometric device, and wherein the user passes said OR policy if the user passes said biometric device while being tested with at least two biometric measurements.
53. The method according to claim 52, wherein said biometric server has stored therein at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources.
54. The method according to claim 52, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
55. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is an AND policy having a list of devices, wherein said list of devices includes at least two different biometric devices, and wherein the user passes said AND policy if the user passes all of said biometric devices in said list of devices.
56. The method according to claim 55, wherein said biometric server has stored therein at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources.
57. The method according to claim 55, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
58. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is an AND policy having a list of devices, wherein said list of devices includes only one biometric device, and wherein the user passes said AND policy if the user passes said biometric device while being tested with at least two biometric measurements.
59. The method according to claim 58, wherein said biometric server has stored therein at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources.
60. The method according to claim 58, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
61. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is a CONTINGENT policy having a list of devices, wherein said list of devices includes at least two different biometric devices, and wherein the user passes said CONTINGENT policy if either the user exceeds a minimum threshold associated with a first biometric device or if the user exceeds a contingent threshold associated with said first biometric device and the user exceeds a minimum threshold associated with a second biometric device.
62. The method according to claim 61, wherein said biometric server has stored therein at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources.
63. The method according to claim 61, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
64. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is a CONTINGENT policy having a list of devices, wherein said list of devices includes only one biometric device, wherein a first biometric measurement and a second biometric measurement are associated with said biometric device, and wherein the user passes said CONTINGENT policy if either the user exceeds a minimum threshold associated with said biometric device and said first biometric measurement or if the user exceeds a contingent threshold associated with said biometric device and said first biometric measurement and the user exceeds a minimum threshold associated with said biometric device and said second biometric measurement.
65. The method according to claim 64, wherein said biometric server has stored therein at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources.
66. The method according to claim 64, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
67. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is a RANDOM policy having a list of devices, wherein said list of devices includes at least two different biometric devices, wherein a random biometric device is determined from said list of devices, and wherein the user passes said RANDOM policy if the user passes said random biometric device.
68. The method according to claim 67, wherein said biometric server has stored therein at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources.
69. The method according to claim 67, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
70. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is a RANDOM policy having a list of devices, wherein said list of devices includes only one biometric device, wherein a random biometric measurement is determined from one or more biometric measurements, and wherein the user passes said RANDOM policy if the user passes said biometric device while being tested with said random biometric measurement.
71. The method according to claim 70, wherein said biometric server has stored therein at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources.
72. The method according to claim 70, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
73. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is a THRESHOLD policy having a list of devices, wherein said list of devices includes at least two different biometric devices, and wherein the user passes said THRESHOLD policy if the user exceeds a total threshold while being tested on one or more of said biometric devices in said list of devices.
74. The method according to claim 73, wherein said biometric server has stored therein at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources.
75. The method according to claim 73, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
76. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is a THRESHOLD policy having a list of devices, wherein said list of devices includes only one biometric device, and wherein the user passes said THRESHOLD policy if the user exceeds a total threshold while being tested with one or more biometric measurements on said biometric device in said list of devices.
77. The method according to claim 76, wherein said biometric server has stored therein at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources.
78. The method according to claim 76, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
79. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is an OR policy having a list of biometric policies, wherein said list of biometric policies includes at least two biometric policies, and wherein the user passes said OR policy if the user passes one of said biometric policies in said list of biometric policies.
80. The method according to claim 79, wherein said biometric server has stored therein at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources.
81. The method according to claim 79, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
82. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is an AND policy having a list of biometric policies, wherein said list of biometric policies includes at least two biometric policies, and wherein the user passes said AND policy if the user passes all of said biometric policies in said list of biometric policies.
83. The method according to claim 82, wherein said biometric server has stored therein at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources.
84. The method according to claim 82, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
85. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is a CONTINGENT policy having a list of biometric policies, wherein said list of biometric policies includes at least two biometric policies, and wherein the user passes said CONTINGENT policy if either the user exceeds a minimum threshold associated with a first biometric policy or if the user exceeds a contingent threshold associated with said first biometric policy and the user exceeds a minimum threshold associated with a second biometric policy.
86. The method according to claim 85, wherein said biometric server has stored therein at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources.
87. The method according to claim 85, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
88. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is a RANDOM policy having a list of biometric policies, wherein said list of biometric policies includes at least two biometric policies, wherein a random biometric policy is determined from said list of biometric policies, and wherein the user passes said RANDOM policy if the user passes said random biometric policy.
89. The method according to claim 88, wherein said biometric server has stored therein at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources.
90. The method according to claim 88, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
91. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is a THRESHOLD policy having a list of biometric policies, wherein said list of biometric policies includes at least two biometric policies, and wherein the user passes said THRESHOLD policy if the user exceeds a total threshold while being tested on one or more of said biometric policies in said list of biometric policies.
92. The method according to claim 91, wherein said biometric server has stored therein at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources.
93. The method according to claim 91, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
94. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is an OR policy having a list of policies or devices, wherein said list of policies or devices includes at least two elements, and wherein the user passes said OR policy if the user passes one of said elements in said list of policies or devices.
95. The method according to claim 94, wherein said biometric server has stored therein at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources.
96. The method according to claim 94, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
97. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is an AND policy having a list of policies or devices, wherein said list of policies or devices includes at least two elements, and wherein the user passes said AND policy if the user passes all of said elements in said list of policies or devices.
98. The method according to claim 97, wherein said biometric server has stored therein at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources.
99. The method according to claim 97, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
100. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is a CONTINGENT policy having a list of policies or devices, wherein said list of policies or devices includes at least two elements, and wherein the user passes said CONTINGENT policy if either the user exceeds a minimum threshold associated with a first element or if the user exceeds a contingent threshold associated with said first element and the user exceeds a minimum threshold associated with a second element.
101. The method according to claim 100, wherein said biometric server has stored therein at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources.
102. The method according to claim 100, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
103. A method for providing user authentication to enterprise resources, comprising the steps of: (a) setting up a biometric server, said biometric server having stored therein at least one biometric policy that determines whether the user can gain access to the enterprise resources, wherein said biometric policy has associated therewith at least one biometric device; (b) determining whether the user is authenticated by executing said biometric policy; and (c) allowing the user access to the enterprise resources if the user passes said biometric policy, otherwise denying access to the user to the enterprise resources, wherein said biometric policy is a RANDOM policy having a list of policies or devices, wherein said list of policies or devices includes at least two elements, wherein a random element is determined from said elements in said list of policies or devices, and wherein the user passes said RANDOM policy if the user passes said random element.
104. The method according to claim 103, wherein said biometric server has stored therein at least two biometric policies that each define different authentication levels, each said authentication level defining a probability that the user is authorized to access the enterprise resources.
105. The method according to claim 103, further comprising the step of enrolling the user for authentication by having the user create a biometric template for each said biometric device, wherein said biometric template includes biometric data unique to the user.
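The policy types recited in the claims above (AND, OR, CONTINGENT, RANDOM, and THRESHOLD) describe decision rules that compose recursively. The following sketch illustrates how such rules could be evaluated; the data shapes, function names, and score conventions are illustrative assumptions, not taken from the patent.

```javascript
// Illustrative evaluation of composable biometric policies. A leaf 'DEVICE'
// policy is a single biometric device; the score map holds each device's
// match score in [0, 1]. All names here are hypothetical.
function evaluatePolicy(policy, scores) {
  switch (policy.type) {
    case 'AND': // user must pass every element in the list
      return policy.elements.every(e => evaluatePolicy(e, scores))
    case 'OR': // user must pass at least one element in the list
      return policy.elements.some(e => evaluatePolicy(e, scores))
    case 'RANDOM': { // one element is chosen at random and must be passed
      const pick = policy.elements[Math.floor(Math.random() * policy.elements.length)]
      return evaluatePolicy(pick, scores)
    }
    case 'THRESHOLD': { // combined score across elements must exceed a total threshold
      const total = policy.elements.reduce((sum, e) => sum + scores[e.name], 0)
      return total > policy.totalThreshold
    }
    case 'CONTINGENT': {
      // Pass outright on the first element's minimum threshold, or pass a
      // lower "contingent" threshold on the first element together with the
      // minimum threshold on the second element.
      const [first, second] = policy.elements
      const s1 = scores[first.name]
      if (s1 > first.minThreshold) return true
      return s1 > first.contingentThreshold && scores[second.name] > second.minThreshold
    }
    case 'DEVICE': // leaf: a single biometric device with a minimum threshold
      return scores[policy.name] > policy.minThreshold
  }
}

module.exports = { evaluatePolicy }
```

A CONTINGENT policy, for example, lets a borderline score on one device be rescued by a strong score on another, rather than failing the user outright.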
|
// Rotate an image by a given angle, trim the empty margin around it, and
// overwrite the original file. Uses node-canvas and image-size.
const { createCanvas, loadImage } = require('canvas')
const fs = require('fs')
const sizeOf = require('image-size')

/**
 * imgPath: path to the image file
 * degree: rotation angle in degrees
 * rgba: optional background color as an "r,g,b" string; matching pixels are
 *       treated as empty when computing the crop bounds
 */
module.exports = async (imgPath, degree, rgba) => {
  const imgObj = await loadImage(imgPath)
  const dimensions = sizeOf(imgPath)
  // A square canvas large enough to hold the image at any rotation angle.
  const maxLen = Math.ceil(Math.sqrt(Math.pow(dimensions.width, 2) + Math.pow(dimensions.height, 2))) + 200
  // createCanvas takes numeric width and height, not a string.
  const canvas = createCanvas(maxLen, maxLen)
  const ctx = canvas.getContext('2d')

  // Draw the image rotated about the canvas center.
  ctx.clearRect(0, 0, canvas.width, canvas.height)
  ctx.save()
  ctx.translate(canvas.width / 2, canvas.height / 2)
  ctx.rotate(degree * Math.PI / 180)
  ctx.drawImage(imgObj, -imgObj.width / 2, -imgObj.height / 2)
  ctx.restore()

  // Scan the pixels to find the bounding box of the visible content.
  const imgData = ctx.getImageData(0, 0, canvas.width, canvas.height).data
  let lOffset = canvas.width
  let rOffset = 0
  let tOffset = canvas.height
  let bOffset = 0
  // One row/column at the original image border is skipped to avoid edge artifacts.
  const borderPoint = Math.round((maxLen - dimensions.width) / 2) - 1
  // Parse the background color once, outside the pixel loop.
  let redVal, greenVal, blueVal
  if (rgba) {
    const rgbaArr = rgba.split(',')
    redVal = Number(rgbaArr[0])
    greenVal = Number(rgbaArr[1])
    blueVal = Number(rgbaArr[2])
  }
  for (let i = 0; i < canvas.width; i++) {
    for (let j = 0; j < canvas.height; j++) {
      const pos = (i + canvas.width * j) * 4
      let visible
      if (rgba) {
        // The pixel at row j, column i counts as visible if it is neither
        // fully transparent nor equal to the background color, and does not
        // lie on the skipped border line.
        visible =
          !(imgData[pos] === 0 && imgData[pos + 1] === 0 && imgData[pos + 2] === 0 && imgData[pos + 3] === 0) &&
          !(imgData[pos] === redVal && imgData[pos + 1] === greenVal && imgData[pos + 2] === blueVal) &&
          i !== borderPoint && j !== borderPoint
      } else {
        // The pixel at row j, column i is not transparent. If the base image
        // has a background color, compare the RGBA values against that color
        // instead of against zero.
        visible = imgData[pos] > 0 || imgData[pos + 1] > 0 || imgData[pos + 2] > 0 || imgData[pos + 3] > 0
      }
      if (visible) {
        bOffset = Math.max(j, bOffset) // bottom-most visible row
        rOffset = Math.max(i, rOffset) // right-most visible column
        tOffset = Math.min(j, tOffset) // top-most visible row
        lOffset = Math.min(i, lOffset) // left-most visible column
      }
    }
  }

  // The offsets are zero-based pixel coordinates; expand by one pixel on each
  // side so the crop rectangle fully encloses the visible content.
  const canvas2 = createCanvas(Math.abs(rOffset - lOffset + 2), Math.abs(bOffset - tOffset + 2))
  const ctx2 = canvas2.getContext('2d')
  const dataImg = ctx.getImageData(lOffset - 1, tOffset - 1, canvas2.width, canvas2.height)
  ctx2.putImageData(dataImg, 0, 0)

  // Encode the cropped canvas as PNG and overwrite the source file.
  // new Buffer() is deprecated; Buffer.from is the supported API.
  const base64Data = canvas2.toDataURL('image/png').replace(/^data:image\/\w+;base64,/, '')
  fs.writeFileSync(imgPath, Buffer.from(base64Data, 'base64'))
}
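The core of the module above is the bounding-box scan over raw RGBA data. That logic can be demonstrated without node-canvas at all; the following self-contained sketch applies the same visibility test to a plain pixel buffer (the function name is ours, not from the module).

```javascript
// Dependency-free sketch of the bounding-box scan: given raw RGBA pixel data
// laid out row-major, find the smallest rectangle containing every pixel
// that has at least one non-zero channel.
function visibleBounds(data, width, height) {
  let left = width, right = -1, top = height, bottom = -1
  for (let i = 0; i < width; i++) {
    for (let j = 0; j < height; j++) {
      const pos = (i + width * j) * 4 // byte offset of pixel (column i, row j)
      // A pixel is visible if any of its R, G, B, or A channels is non-zero.
      if (data[pos] > 0 || data[pos + 1] > 0 || data[pos + 2] > 0 || data[pos + 3] > 0) {
        left = Math.min(i, left)
        right = Math.max(i, right)
        top = Math.min(j, top)
        bottom = Math.max(j, bottom)
      }
    }
  }
  return { left, right, top, bottom }
}

module.exports = { visibleBounds }
```

The module's cropping step then copies the region from `left - 1, top - 1` with a width and height two pixels larger than the box, so the visible content is fully enclosed.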
|
// Copyright 2018 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef SRC_LIB_STORAGE_VFS_CPP_REF_COUNTED_H_
#define SRC_LIB_STORAGE_VFS_CPP_REF_COUNTED_H_
#include <zircon/assert.h>
#include <zircon/compiler.h>
#include <zircon/types.h>
#include <atomic>
#include <fbl/ref_counted_upgradeable.h>
namespace fs {
// VnodeRefCounted implements a customized RefCounted object.
//
// It adds an additional method, "ResurrectRef", which allows Vnodes to be re-used after a reference
// count of zero has been reached.
template <typename T, bool EnableAdoptionValidator = ZX_DEBUG_ASSERT_IMPLEMENTED>
class VnodeRefCounted
: private ::fbl::internal::RefCountedUpgradeableBase<EnableAdoptionValidator> {
public:
using ::fbl::internal::RefCountedBase<EnableAdoptionValidator>::AddRef;
using ::fbl::internal::RefCountedBase<EnableAdoptionValidator>::Release;
using ::fbl::internal::RefCountedBase<EnableAdoptionValidator>::Adopt;
using ::fbl::internal::RefCountedBase<EnableAdoptionValidator>::ref_count_debug;
// Don't use this method. See the relevant RefPtr implementation for details.
using ::fbl::internal::RefCountedUpgradeableBase<
EnableAdoptionValidator>::AddRefMaybeInDestructor;
// VnodeRefCounted<> instances may not be copied, assigned or moved.
DISALLOW_COPY_ASSIGN_AND_MOVE(VnodeRefCounted);
// This method should only be called if the refcount was "zero", implying the object is currently
// executing fbl_recycle. In this case, the refcount is increased by one.
//
// This method may be called to prevent fbl_recycle from following the typical path of object
// deletion: instead of destroying the object, this function can be called to "reset" the
// lifecycle of the RefCounted object to the initialized state of "ref_count_ = 1", so it can
// continue to be utilized after there are no strong references.
//
// This function should be used EXCLUSIVELY from within fbl_recycle. If other clients (outside
// fbl_recycle) attempt to resurrect the Vnode concurrently with a call to Vnode::fbl_recycle,
// they risk going through the entire Vnode lifecycle and destroying it (with another call to
// Vnode::fbl_recycle) before the initial recycle execution terminates.
void ResurrectRef() const {
if constexpr (EnableAdoptionValidator) {
int32_t old = this->ref_count_.load(std::memory_order_relaxed);
ZX_DEBUG_ASSERT_MSG(old == 0, "count %d(0x%08x) != 0\n", old, old);
}
this->ref_count_.store(1, std::memory_order_relaxed);
}
protected:
constexpr VnodeRefCounted() = default;
~VnodeRefCounted() = default;
};
} // namespace fs
#endif // SRC_LIB_STORAGE_VFS_CPP_REF_COUNTED_H_
|
Cryogenic fluid vaporizer
ABSTRACT
A liquid cryogenic vaporizer and method of use are disclosed. The vaporizer includes a main tube, a cryogenic fluid inlet positioned proximate a first end of the main tube for receiving cryogenic fluid, and a second tube having a diameter smaller than the main tube, the second tube being in fluid communication with the main tube at a second end of the main tube opposite the cryogenic fluid inlet. The vaporizer further includes an outlet extending from the inner tube for expelling vaporized fluid. The second tube can be positioned within the main tube, and one or more velocity limiters are optionally included within the main tube along a fluid path.
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority from Australian Patent Application No.<PHONE_NUMBER> filed on Nov. 15, 2017. The entire content of the priority application is incorporated herein by reference.
FIELD OF THE INVENTION
This invention relates to a liquid cryogenic vaporizer and a method of vaporizing a cryogenic liquid, in particular a cryogenic liquefied gas.
BACKGROUND OF THE INVENTION
The design of current ambient cryogenic vaporizers has not changed substantially over the last 50 years. An enormous amount of energy is required to separate gases from their normal atmosphere and then liquefy those gases. The liquefied gas is then stored in super insulated tanks in order to prevent this energy from escaping. Current conventional vaporizers typically consist of finned tubes and in some cases tubes with no fins. Nearly always, these are approximately 25 mm in diameter and are connected in series to make a longer length of tube in which the internal diameter of the entire passage from one end to the other does not change. Multiple parallel passes of the same design allow for the increase in the capacity. Thus the design of a single pass is also the design of all parallel passes.
The vaporizers mentioned above have certain disadvantages. As liquid turns to vapor along the length of the vaporizer, the quality of the vapor changes from 0% where it is all liquid to 100% where the fluid is all vapor. As the densities of the two phases differ greatly, the velocity of the two-phase mixture increases dramatically along the direction of flow. At the entry end, velocity is low and heat transfer occurs to mostly pure liquid. Vapor forms from boiling liquid at the wall of the tubes. At temperature differences exceeding a critical value, the vapor “blankets” the warmer wall from the cooler fluid, which is a phenomenon known as the Leidenfrost effect or gun-barrel effect. This lowers the heat transfer coefficient (HTC), in some cases by orders of magnitude. The effect is akin to placing droplets of water into a hot frying pan, where each droplet floats around the pan on a thin film of vapor and does not boil or evaporate. This is the effect that appears within the tubes of conventional vaporizers. Slugging also occurs, where not all the liquid is converted into a gas.
The above effect is augmented by the increasing velocity of the two-phase flow. Considering two-phase flow without heat transfer for the moment, at high enough velocities the vapor phase makes its own passage along the core through the middle of the tube and, outside of this, there is an annular liquid phase. When heat transfer is added, the Leidenfrost effect will exist as well as annular flow, which effectively produces three zones. The first zone starts at the wall of the tube, where there is a ring of vapor forming a blanket as described above. There is then a liquid phase in annulus form and, within that, a vapor core. Overall the heat transfer efficiency of the vaporizer is well below ideal.
Excessive velocities increase frictional losses and therefore increase pressure drop which is another disadvantage of conventional vaporizers. Rapid boiling of the liquid at the entry to a vaporizer, when temperature differentials are at around 200° C., causes surging in many instances and this can cause many problems with downstream instrumentation.
The most common material used for ambient vaporizers is aluminum, as it is relatively cheap compared with other materials and has excellent heat transfer properties. The reason most aluminum extrusions are kept to small bores is because of the limitations of the extruding equipment of the aluminum suppliers. Furthermore, the larger the internal diameter becomes, the thicker the wall thickness needs to be in order to retain pressure. This situation has not changed for up to 50 years.
Conventional vaporizers are also very labor intensive to manufacture as they are big, cumbersome, and difficult to build. Some heat exchangers or vaporizers include very tall stacks of tubes or pipes that have reduced or ineffective resistance to wind and the outside elements. Furthermore, the movement of the giant stacks of tubes leads to cracking of the tubes.
The present invention seeks to overcome any one or more of the above disadvantages by providing a system and process that allows energy transfer in a simple and cost effective way which saves on raw material cost, by up to 45%, in order to build heat exchangers or vaporizers. The present invention takes advantage of the stored energy within the cryogenic liquid.
SUMMARY OF THE INVENTION
According to a first aspect of the invention, there is provided a liquid cryogenic vaporizer including a main tube, a cryogenic fluid inlet positioned proximate a first end of the main tube for receiving cryogenic fluid, and a second tube having a diameter smaller than the main tube, the second tube being in fluid communication with the main tube at a second end of the main tube opposite the cryogenic fluid inlet. The vaporizer also includes an outlet extending from the inner tube for expelling vaporized fluid.
In certain aspects, the vaporizer includes one or more surfaces within the main tube; and at least one velocity limiter positioned within the main tube along a fluid path between the cryogenic fluid inlet and the cryogenic fluid outlet, the at least one velocity limiter including one or more surfaces arranged to limit velocity of fluid flowing within the main tube, the velocity limiter controlling an amount of heat transfer between the fluid and the one or more surfaces within the main tube.
In still further aspects, the second tube is positioned within the main tube. The vaporizer can include a heat exchange unit fluidically connected to the outlet. The heat exchange unit can include a counterflow tube-in-tube heat exchanger and a second heat exchanger having an inlet connected to the outlet and an outlet tube forming a gas feedback path to the counterflow tube-in-tube heat exchanger.
According to a further aspect, there is provided a liquid cryogenic vaporizer including a main tube; an inlet to the main tube for receiving cryogenic liquid; and an outlet from the main tube for expelling vaporized liquid. The velocity of the flow of the liquid in the main tube is controlled to increase heat transfer between the liquid and one or more surfaces within the main tube.
Preferably the main tube is dimensioned to reduce said velocity of the flow of the liquid. The vaporizer may further include one or more inner tubes located within the main tube such that a space is formed between an inner surface of the main tube and an outer surface of said one or more inner tubes. The one or more inner tubes may have said inlet to receive said cryogenic liquid to flow in said one or more inner tubes.
Preferably the liquid is vaporized upon leaving an outlet to said one or more inner tubes, said expelled vaporized liquid is expelled within the main tube and acts as the heat transfer to the liquid remaining in said one or more inner tubes, the expelled vaporized liquid eventually being expelled from said main tube.
According to a still further aspect of the invention, there is provided a liquid cryogenic vaporizer including a main tube; an inlet to the main tube for receiving cryogenic liquid; and an outlet from the main tube for expelling vaporized liquid. The main tube is dimensioned to reduce the velocity of the flow of the liquid through the main tube in order to increase heat transfer between the liquid and the inner surface of the main tube.
The vaporizer may also include an inner tube located within the main tube, such that a space is formed between the outside surface of the inner tube and the inner surface of the main tube. The inner tube can have a first end for receiving the fluid and said outlet is at a second end of the inner tube. Preferably the liquid flows from said inlet to said outlet. The liquid may flow from said inlet, through said space to the first end of the inner tube, through the inner tube to be expelled at said outlet.
The vaporizer preferably further includes one or more plates located at predefined locations in said main tube, said one or more plates having at least one aperture for the fluid to flow through, said one or more plates acting to create turbulence to mix a vapor phase of the liquid and the liquid together to assist in making the liquid contact the inner surface of the main tube. The vaporizer may further include a fluid controller means or fluid motion generator located in said main tube through which the liquid passes, said generator imparting a swirling motion to the fluid such that the liquid contacts the inner surface of the main tube in order to further increase the heat transfer between the liquid and the inner surface of the main tube. The fluid motion generator may have a base or disc and at least one upstanding curved portion. Examples of fluid controller means may include a control valve, an orifice having one or more apertures, a pitot tube, a flow nozzle, a venturi meter, an elbow tap, a wedge meter, or an averaging pitot. A fluid controller means may also include a fluid velocity limiter having one or more surfaces arranged to limit the liquid flow velocity within the main tube. The fluid velocity limiter may also control the amount of heat transfer between the fluid and the one or more surfaces within the main tube. There may be one or more fluid control means, fluid velocity limiters, or fluid motion generators.
According to a further aspect of the invention, there is provided a method of vaporizing a cryogenic liquid including providing a main tube having an inlet and an outlet; receiving cryogenic liquid at said inlet to travel through the main tube; expelling vaporized liquid from the outlet; and reducing the velocity of the flow of the liquid through the main tube by predetermined dimensions of the main tube, such that the heat transfer between the liquid and an inner surface of the main tube is increased.
BRIEF DESCRIPTION OF THE DRAWINGS
A preferred embodiment of the invention will hereinafter be described, by way of example only, with reference to the drawings in which:
FIG. 1A is a side and plan view of a conventional heat exchanger;
FIG. 1B is a side view of another conventional heat exchanger;
FIG. 2 is a side view and plan view of a cryogenic fluid vaporizer according to a first embodiment of the invention;
FIG. 3 is a side view of a tube that forms part of the vaporizer and shows the flow of fluid within the tube;
FIG. 3A is a block diagram of a further embodiment of the cryogenic fluid vaporizer;
FIGS. 4A and 4B are respectively a side view and plan view of a further embodiment of the vaporizer including a series of plates located within the tube of the vaporizer;
FIGS. 4C and 4D are respectively a side view and plan view of yet a further embodiment of the vaporizer including a series of plates located within the tube of the vaporizer;
FIGS. 5A and 5B show side views of a further embodiment of the invention which uses the ambient temperature;
FIG. 6 shows a side view of the vaporizer where additional heating is provided to vaporize the fluid;
FIGS. 7A and 7B are side views of a further embodiment of the vaporizer showing the use of plates in FIG. 7A and the use of a fluid motion generator in FIG. 7B;
FIGS. 8A and 8B are side views of a vaporizer having an exit at the bottom thereof with FIG. 8A showing plates located in the tube and FIG. 8B using a fluid motion generator;
FIGS. 9A to 9E are a series of side views of a vaporizer having a variable number of plates located within the tube and either having a bottom exit or a top exit;
FIGS. 10A to 10F are a series of side views of the vaporizer with a fluid motion generator located in the lower part of the tube and showing various alternatives with a bottom exit and a top exit;
FIGS. 11A, 11B and 11C show alternative arrangements of the vaporizer according to another embodiment;
FIG. 12 is a photograph showing the tube of the vaporizer, the inlet and the outlet being frosted over while the heat exchanger it is attached to is not frosted over;
FIG. 13 is a top schematic view of an embodiment of the cryogenic vaporizer shown in FIG. 2; and
FIG. 14 shows one alternative embodiment of the cryogenic vaporizer having a feedback loop to optimize heat transfer.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Referring to FIG. 1A, there is shown a conventional heat exchange unit (2) having a series of tube sections (4), with fins, in which fluid flows to either be cooled or heated. A further example of a conventional heat exchanger is shown in FIG. 1B where an exchanger (6) has looped sections (8) joining upright tube sections (10) that are each covered by a series of fins.
In FIG. 2, there is shown a first embodiment of the present invention in which a cryogenic fluid vaporizer (12) includes a vaporizer tube (14) of variable length which has an input (16) to receive a super-cooled liquid, such as nitrogen, oxygen, argon, carbon dioxide, liquid natural gas (LNG), and many other liquids. The liquid flows through the tube (14) and eventually is output at outlet (18) to a first tube (20) of a heat exchange unit (22). The tube (14) is preferably made from stainless steel and not aluminum, as sometimes it is not desirable to have too much heat transferred to the liquid or liquid/gas flow, which can cause the Leidenfrost effect.
Referring to FIG. 3, the vaporizer tube (14) is preferably cylindrical and has an inner, preferably cylindrical, core tube (24) which is surrounded by a space, such as an annulus (26), that is an open area extending from the outside surface/wall of the inner tube (24) to the inside surface or inner wall of the vaporizer tube (14). Fluid is input through inlet (16) and travels upwards in the annulus (26) around the core inner tube (24) until it reaches the top (28) of the tube (14). It then enters an inlet (30) at the top of the inner tube (24) and travels downwardly through to the outlet (18) to then travel to the heat exchange unit. On its upward path, a number of plates or baffles (32) are positioned at predefined intervals and surround the inner tube (24). Each of the plates or baffles (32) may have any number of small apertures through which the fluid passes. This is designed to slow down the velocity of the fluid.
Referring to FIG. 3A, there is shown a further embodiment of a liquid cryogenic vaporizer 43 which has a main tube 31. Located within the main tube 31 is a series of inner tubes 33 that can be connected to one another or, alternatively, there can be one continuous inner tube 33. The series of tubes or the single tube 33 has an inlet 35 through which cryogenic liquid is input through the whole vaporizer 43. It travels initially upward, then downward, and then upward again to exit the series of tubes 33 at outlet 37. Here, it is in the form of a vaporized liquid or gas which exits the outlet 37 and then flows downwardly, as indicated by the arrows, in a space 39 of the main tube 31. The gas that flows downwardly within the space 39 contacts the outer surface of the tube or tubes 33 and acts as a heat transfer medium to the liquid that is flowing within those tubes. In this way a more accurate flow system is achieved that offers a precise flow rate of the liquid and a precise pressure drop within the main tube 31. The number of tubes and the length of the tubes relate directly to the maximum flow rate capacity of each vaporizer 43; both are therefore variable. Eventually, the gas that is expelled from the outlet 37, after travelling downwardly in the space 39, exits at an outlet 41 from the main tube 31 and can be input to a separate heat exchanger or the like.
It may still be beneficial to have the expelled gas exiting from the top of the main tube 31 or shell such that the outlet 37 is co-located with the top of the main tube 31. As with other embodiments, any number of plates or baffles 32 can be positioned at pre-defined intervals in the main tube 31 and/or within the tubes 33.
The loss of efficiency described in the background part of the invention is overcome by redistributing the two phases, that is gas and liquid, which have segregated from each other as stated above. The tube (14) is generally of a larger diameter than existing conventional heat exchange pipes, and this assists in slowing down the velocity and suppressing any unhelpful flow characteristics. As mentioned previously, the fluid is slowed down even further with any number of orifice plates or baffles, containing one or more apertures, placed in the flow path of the two-phase flow. The large diameter tubing (14) slows down the velocity of the two-phase flow within the initial stage of the vaporizer (12), that is, as it enters inlet (16) and is just about to move upwardly as shown in FIG. 3. At this point the temperature differential is at its greatest, at around 200° C.; there is no requirement for extended surface areas, which would in fact create the segregated effect stated above. The plates or baffles (32) are distributed along the flow path of the larger tube (14) and create disturbance or turbulence which mixes the vapor and liquid phases together in such a way that it reduces the segregation described above. The liquid phase is caused to collide with the inside wall of tube (14) in such a way that it improves the heat transfer coefficient. The number and spacing of the plates (32) can be optimized to give the best heat transfer coefficient based on the flow rate and the liquefied gas being vaporized.
The larger diameter tube (14) is generally used without fins or an extended surface area. In the particular case of vaporizing liquid nitrogen, the temperature differential between ambient temperature and the liquid temperature will be in excess of 200° C. This is where free energy is absorbed in order to vaporize the liquid nitrogen.
Including the plates (32), as mentioned previously, reduces the flow rate and enables better mixing between the two phases. Slugging can otherwise take place, whereby the liquid is not all converted into gas. It is similar to a champagne bottle opening, where it is all bubbles and liquid with no clear separation of vapor from the liquid. All of the gas needs to be uniformly converted from the liquid phase. The optimum length of the tube (14) is dependent on the flow rate and can be 12 meters or higher; standard sizes are about 6 meters.
Referring to FIGS. 4A and 4B, the tube (14) is shown in this example as being about 6 meters in length. Each of the tubes (34) of the heat exchanger, into which the tube (14) is connected, is about 5 meters in length. The outer tube (14) has a cross-sectional area of about 5,384 mm², an internal diameter of 82.8 mm, and an outside or external diameter of 88.9 mm. The inner or core tube (24) also has a length of 6 meters, a cross-sectional area of 506 mm², an internal diameter of 23 mm, and an external diameter of 25.4 mm. The aluminum finned tubes (34) of the heat exchanger are stainless steel lined, provide 1.64 m² of surface area per linear meter of tubing, have an internal diameter of 23 mm, and are 20 meters in overall length, that is, 4 × 5 meters. The plates (32) inside the tube (14) in this example have up to ten holes of 7 mm diameter each, with the holes equally spaced around an internal aperture in which the core tube (24) fits and protrudes through. Each plate (32) has a total hole area of 384 mm² and, in this example, there are nineteen internal plates (32). The spacing between each adjacent plate (32) is about 300 mm, although this can be varied.
Referring to FIGS. 4C and 4D, there is shown a further example of the vaporizer (12) in which the tube (14) is only 2 meters in total length, being a 2.5 inch pipe having an internal diameter of 66.9 mm and an external diameter of 73 mm. Its total cross-sectional area is 3,515 mm². The inner tube or core tube (24) is also 2 meters in length and has a cross-sectional area of 506 mm², an internal diameter of 23 mm, and an external diameter of 25.4 mm. The aluminum tubing (34) of the heat exchanger again is a stainless steel lined tube, having 1.65 m² of aluminum external surface area per linear meter of tube and an internal diameter of 23 mm. It is 8 meters in total length, that is, four runs of 2 meters in length each. There are nineteen internal plates (32), each having up to ten holes of 7 mm diameter and a total hole area of 384 mm², as in FIG. 4A.
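The stated cross-sectional areas in both examples follow from the stated diameters via A = πd²/4. A quick consistency check (assuming, as the numbers suggest, that the 5,384 mm² and 3,515 mm² outer-tube figures are computed from the internal diameter, while the 506 mm² core-tube figure is computed from the 25.4 mm external diameter; the plate figure is ten 7 mm holes):

```python
import math

def circle_area(diameter_mm: float) -> float:
    """Cross-sectional area in mm^2 of a circle with the given diameter in mm."""
    return math.pi * diameter_mm ** 2 / 4.0

# FIG. 4A/4B outer tube (14): 82.8 mm internal diameter -> ~5,384 mm^2
outer_area = circle_area(82.8)

# FIG. 4C/4D outer tube (14): 66.9 mm internal diameter -> ~3,515 mm^2
outer_area_2in5 = circle_area(66.9)

# Core tube (24): the stated 506 mm^2 matches the 25.4 mm external diameter
core_area = circle_area(25.4)

# Plate (32) open area: up to ten 7 mm holes -> ~384 mm^2
plate_open_area = 10 * circle_area(7.0)

print(round(outer_area), round(outer_area_2in5), round(core_area), round(plate_open_area))
```

All four stated areas agree with the stated diameters to within a square millimeter, which supports reading the core-tube area as external-diameter based.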
Referring to FIG. 5A, there is shown a further embodiment of a vaporizer (36) that has a tube (38) connected to a heat exchange unit (40). The tube (38) has an inner core tube (42) and an annulus (44) surrounding the inner core tube (42). Near the bottom of the tube (38) is a fluid controller means or fluid movement generator (46), which is a disc or base (45) having an upstanding portion (47) extending from the disc; this portion is a curved reducer whose aperture narrows from its connection with the disc or base to its open end. The base has an aperture in the middle through which the inner tube (42) passes. The generator is shaped to impart a swirling motion to the fluid that comes through inlet (48), then through the generator (46), more particularly through the upstanding curved portion, and then upwardly through the annulus (44). The velocity of the two-phase mixture thereby increases, which forces the liquid flow to the inside wall of the tube (38) and therefore maximizes the HTC so that the fluid is efficiently converted into a gas. The liquid travels up the annulus (44) to the top (50) of the core tube (42), and is then forced down the inner tube (42) to exit at the outlet (52) and into the heat exchanger (40).
In FIG. 5B, a vaporizer (54) has an outlet (56) at the top of its tube (58). A fluid movement generator (60) is located adjacent the bottom of the tube (58) to impart a motion to the fluid entering through inlet (62). It has the same effect on the fluid as in FIG. 5A and is likely more effective at moving the liquid to the inside wall of the tube (58), as there is more space within the interior of the tube. The generator (60) also has a base/disc (61) to which is connected a hollow upstanding portion, shown as a reducing diameter portion (63), through which the fluid (two-phase flow) flows and is directed, by the curved shape of that reducing diameter portion (63), towards the inner surface or wall of tube (58). The swirl or motion generator (60) is angled and either concentric or eccentric with respect to the base (61). If there is an inner tube going through the base (61), then the generator is generally eccentrically located on the base (61). There can be more than one generator and more than one upstanding portion on each generator, depending on the rate of flow of the fluid. The angle at which the generator directs the fluid flow is approximately 10 degrees to the horizontal plane, as seen by the arrows in FIG. 5B. The reducer or generator will increase the velocity of the fluid as it passes therethrough, but will cause very little pressure drop. The speed is increased to build momentum in order to achieve the swirl effect.
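The velocity increase through the reducing-diameter portion follows from continuity: for roughly incompressible flow, v₂ = v₁ · (A₁/A₂). A minimal sketch with hypothetical inlet velocity and diameters (these numbers are illustrative assumptions, not dimensions from this disclosure):

```python
import math

def exit_velocity(v_in_m_s: float, d_in_mm: float, d_out_mm: float) -> float:
    """Exit velocity through a reducer by continuity: v2 = v1 * (A1 / A2)."""
    a_in = math.pi * d_in_mm ** 2 / 4.0
    a_out = math.pi * d_out_mm ** 2 / 4.0
    return v_in_m_s * a_in / a_out

# Hypothetical example: fluid entering at 0.5 m/s through a reducer that
# narrows from 60 mm to 30 mm quadruples in velocity (area scales with d^2).
v_out = exit_velocity(0.5, 60.0, 30.0)
print(v_out)
```

Because the area ratio goes as the square of the diameter ratio, even a modest reduction in the upstanding portion's aperture produces the momentum needed for the swirl effect with little accompanying pressure drop.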
It has been noted that the swirl generated in the flow path, which forces the liquid flow against the interior wall of the tube (38) or (58), removes the blanketing effect alluded to in the background of the invention part of the description. This is one of the features that aids in reducing the Leidenfrost effect.
In FIG. 6, there is shown a further embodiment wherein tube (66) is connected to a heat exchange unit (68) which is heated by a heat source such as steam or hot water at inlet (70). An outlet (72) to the tube (66), and more particularly, an inner core tube (74), is connected to the heat exchange unit (68) through a separate tube (76). A fluid movement generator (78) is located at the bottom of the tube (66) near an input (80).
In FIGS. 7A and 7B, there is shown an alternative vaporizer (82). FIG. 7B differs from FIG. 7A by the replacement of the plates (84) in FIG. 7A with a fluid movement generator (86) in FIG. 7B. This arrangement is called an open rack vaporizer (82), in which an outlet pipe (88) extends from the top of the tube (90) at its outlet (92) and has three portions, being a riser tube (89), a horizontal tube (91), and a down tube (94). The down tube (94) has water flowing down either side from a unit (96) as the heated gas expels from outlet (98). In FIG. 7A, cryogen liquid is initially input at inlet (100) and moves upwardly through a series of plates or baffles (84), each having a single aperture (102) in the middle, and the liquid then slowly vaporizes on its path to the outlet (98). In FIG. 7B, the only difference from FIG. 7A is the inclusion of the fluid movement generator (86), which is more clearly shown in the plan view and is termed a top exit quad swirl, with four upstanding portions.
Referring to FIGS. 8A and 8B, there is shown an alternate vaporizer arrangement (104) which again is an open rack vaporizer unit where a tube (106) has a central inner tube (108) from which the fluid exits at an outlet (110) and goes through a horizontal portion (112) of an outlet pipe (111) and then vertically upwards through an upward portion (114) of that pipe (111) and out through exit (116). The liquid is input at inlet (118). Water flows down the outside of the vertical portion (114) from a unit (120). The only difference between the two figures is that, in FIG. 8A, there is a series of plates (122) spaced at predetermined positions that each have a central aperture (124) that surrounds the inner tube (108) and a series of smaller apertures (126) through which the fluid flows. Initially the fluid in FIG. 8A flows upwardly in an annular part (107) of the tube (106) until it reaches the top of the inner tube (108) at (128) and then flows downwardly inside tube (108) to the outlet (110). In FIG. 8B, instead of plates being used, there is a fluid movement generator (129) located in the bottom part of the tube (106) to circulate and move the fluid as it enters the inlet (118) in the annular part (107) of tube (106). The particular generator (129) is a quad swirl with a bottom exit.
Referring to FIG. 9, there is shown a series of respective side and plan views of different embodiments of a vaporizer using either no plates or plates located at predefined spaces within the main tube of each vaporizer. In FIG. 9A, no plates are used in main tube (131). The liquid cryogen enters at a bottom inlet (133) and travels up an annulus part of tube (131) until it reaches a top (135) of an inner tube (132). There are no plates that slow the movement of the fluid in this particular arrangement; this is performed solely by the larger diameter of the main tube (131). The liquid or gas mixture then flows down the inner tube (132) and exits at outlet (134). In FIG. 9B, a vaporizer (140) includes a tube (141) that has an annulus surrounding an inner tube (142) and includes five plates or baffles (146) separated by a distance W. The plates (146) have a central aperture and six smaller holes through which the fluid flows. It is the same arrangement of the gas or fluid flow as in FIG. 9A, where the fluid enters inlet (143) and travels upwardly through an annulus part and through the plates (146) until it reaches a top part (145) of the inner tube (142). The fluid then flows downwardly through the inner tube (142) and out of outlet (144). In FIG. 9C, only one plate (156) is used that has a central aperture with six small holes around the central aperture. The plate (156) is located midway in a tube (151) a distance V from each end thereof. Again, the fluid travels through inlet (153) and goes upwardly through an annulus of the tube (151) through the plate (156) until it reaches a top portion (155) of an inner tube (152). It then travels downwardly and out of outlet (154). In the arrangement of FIG. 9D, a vaporizer (160) does not have an inner tube but just has a main tube (161) and five plates (166) each separated by distance W. The plates (166) have a similar arrangement of apertures as plate (146).
The cryogen liquid travels through inlet (163), then upwardly through a tube (161) and through the plates (166) to exit at outlet (164) at the top of the tube (161). Lastly in FIG. 9E, the vaporizer (170) has a main tube (171) which has a central plate (176) located at the middle of the tube (171) a distance Z from the top and bottom ends. Again the plate (176) has the same arrangement of apertures as plates (166, 156, and 146). The liquid to be vaporized enters at inlet (173) and travels upwardly through the plate (176) and out through outlet (174).
As seen in FIG. 9 generally, FIGS. 9A to 9C have an outlet and inlet at the bottom of the main tube, while in FIGS. 9D and 9E, the inlet is located at the bottom of the main tube and the outlet is located at the top of the main tube. By way of comparison, and referring to FIG. 10, there is a series, from FIGS. 10A to 10F, of different embodiments of a vaporizer having a fluid motion generator located at the bottom of the main tube.
Referring to FIGS. 10A to 10C, they depict, respectively, vaporizers (180, 190, 200). Each has a main tube (181, 191, 201) with an inner tube (182, 192, 202). An inlet (183, 193, 203) is respectively located at the bottom, as are outlets (184, 194, 204). At, or located near, the bottom of each of the respective tubes is a fluid motion generator (186, 196, 206). In the embodiment of vaporizer (180), the generator (186) is a disc or plate that has two curved upstanding portions around a central aperture through which the inner tube (182) protrudes. In this example, the generator (186) can be referred to as a double swirl generator. In FIG. 10B, the generator (196) is a triple swirl generator having three curved upstanding portions located around a central hole through which the inner tube (192) extends. In FIG. 10C, the generator (206) is a quad swirl having four curved upstanding portions to pass the fluid, formed around a central hole through which the inner tube (202) extends. In all of these embodiments (FIGS. 10A to 10C), fluid arrives at the inlet at the bottom of the respective tube and travels upwardly until it respectively arrives at the top (185, 195, 205) of the inner tubes (182, 192, 202). From there, it travels downwardly through the inner tube and out of the respective outlet (184, 194, 204) at the bottom of the respective inner tube.
Referring to FIGS. 10D, 10E, and 10F, they show vaporizers (210, 220, 230) that each have a main tube (211, 221, 231). There is no inner tube located inside the main tubes. An inlet exists at the bottom, designated (213, 223, 233) respectively, and an outlet is at the top of each tube, being (214, 224, 234). Located near the bottom of each main tube is a fluid motion generator (216, 226, 236, respectively). Fluid to be vaporized is applied at the inlet at the bottom of each main tube and travels upwardly through the respective fluid motion generator and then out through the outlet at the top of each of the main tubes. The fluid motion generator (216) has two upstanding portions that are curved in shape and eccentrically located. The generator (226) has three upstanding reducing diameter portions, termed a triple swirl, and generator (236) is a quad swirl having four separate upstanding portions through which the fluid flows. In all of the generators disclosed in FIGS. 10A to 10F, fluid travels through these generators, and their effect on the fluid is to force liquid to the inside wall of each of the main tubes and therefore maximize the heat transfer co-efficient.
With reference to FIG. 11A, there is shown a further embodiment where a vaporizer (240) has a main tube (242) having an inlet (244), an outlet (246), and a plate (248) having a series of apertures therein. The outlet (246) is joined through a segment of pipe (250) to a heat exchanger apparatus (252) that includes three vertical sections of tubing (254) all joined one to the other and having an outlet at (256). Cryogenic fluid flows through the inlet (244) and upwardly through the plate (248) of the main tube (242) where it starts to vaporize into a gas and then is forced to move into the first upright portion (254) of the heat exchange unit (252).
In FIG. 11B there is shown a further embodiment of a vaporizer (260) which has a main tube (262), an inner tube (264), an inlet (266), an outlet (268), and a single plate (270). It is connected through piping (272) to a heat exchange unit (274) that has a pair of upright portions of tubes (276) and a further outlet (278). The motion of cryogenic fluid changing from the liquid phase to the gas phase is similar to that described in relation to FIGS. 3 and 4.
Shown in FIG. 11C is a further embodiment of a vaporizer (280) having a main tube (282), an inner tube (284), an inlet (286), an outlet (288), and a series of spaced apart plates (290). This is connected through a tubing arrangement (292) to heat exchange unit (294) that includes four vertical sections of piping (296) and an outlet (298). Again the flow of cryogenic fluid from the liquid phase to gas phase is similar to that described in FIGS. 3 and 4.
Finally, in relation to FIG. 12, example advantages of the present invention are illustrated. In particular, FIG. 12 illustrates a main pipe or tube (300) that is connected to a heat exchange unit (302) in a similar fashion to that described in FIGS. 11A to 11C. As can be seen, the main tube (300) is frosted up through the heat transfer between the inside of the main pipe and the external ambient atmosphere. Notably, there is no frosting up shown on the heat exchanger (302) or any of the fins (304) of the heat exchanger.
Additional features may be added to the above-described embodiments. FIG. 13 shows a top schematic view of a cryogenic vaporizer (12) as shown in FIG. 2. This embodiment may be ideally suited for a flowrate of 200 to 4,000 Nm³/hr. For a higher flowrate, an alternative embodiment may be employed, as illustrated in the top schematic view shown in FIG. 14, which represents an alternative configuration of such a cryogenic vaporizer, and which is applicable to any of the cryogenic vaporizer designs illustrated in FIGS. 2-12. In particular, an outlet (18) of the vaporizer tube (14) may be routed to a tube-side of a separate shell-and-tube (e.g., tube-in-tube) heat exchanger (400). Such a heat exchanger (400) will generally be located “downstream” of the vaporizer tube (14), e.g., after the outlet (18). A tube-side outlet (404) of this heat exchanger operates as a feedback loop and is routed to the inlet at its own shell side. This shell-side inlet (402) has a diameter that is larger than the tube-side outlet diameter in order to minimize the potential increase in fluid velocity and thereby minimize ice formation. The fluid stream on the shell side of the shell-and-tube heat exchanger exchanges heat with the fluid stream passing through the tube side of the shell-and-tube heat exchanger (400). Passing the fluid through a first vaporizer, such as vaporizer (12), ensures that fluid in the gaseous phase enters the shell-and-tube heat exchanger tube-side inlet and minimizes two-phase flow. This heat integration configuration provides for enhanced heat recovery and promotes heat exchange efficiency. The resulting temperature differential for the shell-and-tube heat exchanger will be in the range of about 40 to 50 degrees. The shell-side gas exit temperature will be closer to ambient temperature than that of a conventional configuration. 
Such conditions are ideal to maximize heat transfer efficiency and will result only in minimal ice formation and no clogging in the vaporizer tubes.
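As a rough order-of-magnitude check on what the downstream shell-and-tube exchanger must handle, the sensible-heat duty to warm the gas through the stated 40 to 50 degree differential can be estimated as Q = ṁ·cp·ΔT. The sketch below assumes the fluid is nitrogen and uses standard handbook values (cp ≈ 1.04 kJ/(kg·K), density ≈ 1.25 kg/Nm³); neither figure comes from this disclosure:

```python
def nitrogen_heat_duty_kw(flow_nm3_per_hr: float, delta_t_k: float) -> float:
    """Sensible-heat duty Q = m_dot * cp * dT for nitrogen gas, in kW.

    Assumes cp ~ 1.04 kJ/(kg K) and a normal-condition density of
    ~1.25 kg/Nm^3; both are standard handbook values, not patent figures.
    """
    density_kg_per_nm3 = 1.25
    cp_kj_per_kg_k = 1.04
    mass_flow_kg_per_s = flow_nm3_per_hr * density_kg_per_nm3 / 3600.0
    return mass_flow_kg_per_s * cp_kj_per_kg_k * delta_t_k

# At the upper end of the first embodiment's range (4,000 Nm^3/hr),
# a 45 K differential corresponds to roughly 65 kW of exchanger duty.
duty = nitrogen_heat_duty_kw(4000.0, 45.0)
print(round(duty, 1))
```

At the higher flow rates quoted for the alternative embodiment, the duty scales linearly with flow, which is why the heat-recovery feedback loop becomes increasingly attractive there.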
The benefits of the alternative embodiment are readily observable when comparing heat exchange performance with a conventional ambient vaporizer, and these benefits are best observed at a flow rate of 3,000 to 50,000 Nm³/hr. The alternative embodiment of FIG. 14 results in a higher degree of heat transfer efficiency at a reduced materials cost. In another embodiment, multiple cryogenic vaporizers are connected in series. The feedback loop must always be implemented downstream of at least one cryogenic vaporizer.
In still further embodiments, other types of enhanced operational features may be incorporated into the cryogenic vaporizers disclosed herein. For example, in some embodiments, a forced-air feature can be included, in which forced air is introduced to ambient portions of the cryogenic vaporizer designs of FIGS. 1-14. This involves, for example, arranging a fan or other forced-air system to pass air along the exposed tubes, such as the vaporizer tube (14), the heat exchanger (22), and/or the shell-and-tube heat exchanger (400). Any of a variety of forced air systems may be used, only one example of which is a fan-forced air system.
Referring to FIGS. 1-14 overall, it is noted that the present invention provides a number of advantages relative to existing cryogenic vaporizer designs. For example, the present invention improves the heat transfer coefficient between the liquid to be vaporized and the wall of the main tube of the vaporizer. It reduces the raw material costs of making a heat exchanger or vaporizer by up to 45% and reduces labor costs by about a third. The present invention will not clog with ice or snow and smaller extrusions can be used on the “gas side” as there will be no clogging. Different combinations of heat exchangers can be utilized for warming the gas, expelled by this system, in the last 30-40° C. that can be even more economical than aluminum extrusions.
While the disclosure has been described in detail with reference to the specific embodiments thereof, these are merely examples, and various changes, arrangements and modifications may be applied therein without departing from the spirit and scope of the disclosure.
1. A liquid cryogenic vaporizer including: a main tube; a cryogenic fluid inlet positioned proximate a first end of the main tube for receiving cryogenic fluid; a second tube having a diameter smaller than the main tube, the second tube being in fluid communication with the main tube at a second end of the main tube opposite the cryogenic fluid inlet; and an outlet extending from the second tube for expelling vaporized fluid.
2. The liquid cryogenic vaporizer of claim 1, further comprising: one or more surfaces within the main tube; and at least one velocity limiter positioned within the main tube along a fluid path between the cryogenic fluid inlet and the outlet, the at least one velocity limiter including one or more surfaces arranged to limit velocity of fluid flowing within the main tube, the velocity limiter controlling an amount of heat transfer between the fluid and the one or more surfaces within the main tube.
3. The liquid cryogenic vaporizer of claim 1, wherein the second tube is positioned within the main tube.
4. The liquid cryogenic vaporizer of claim 3, further comprising a heat exchange unit fluidically connected to the outlet.
5. The liquid cryogenic vaporizer of claim 4, wherein the heat exchange unit includes a counterflow tube-in-tube heat exchanger and a second heat exchanger having an inlet connected to the outlet and an outlet tube forming a gas feedback path to the counterflow tube-in-tube heat exchanger.
6. A liquid cryogenic vaporizer including: a main tube; an inlet to the main tube for receiving cryogenic liquid; an outlet from the main tube for expelling vaporized liquid; wherein the velocity of the flow of the liquid in the main tube is controlled to increase heat transfer between the liquid and one or more surfaces within the main tube.
7. The liquid cryogenic vaporizer according to claim 6, wherein the main tube is dimensioned to reduce the velocity of the flow of the liquid.
8. The liquid cryogenic vaporizer according to claim 7, further including one or more inner tubes having an inlet to receive cryogenic liquid and located within the main tube such that a space is formed between an inner surface of the main tube and an outer surface of the one or more inner tubes.
9. The liquid cryogenic vaporizer according to claim 8, wherein the liquid is vaporized upon leaving an outlet of the one or more inner tubes, and the vaporized liquid is expelled within the main tube and acts as the heat transfer fluid to the liquid remaining in the one or more inner tubes, the expelled vaporized liquid eventually being expelled from the main tube.
10. The liquid cryogenic vaporizer according to claim 6, further including an inner tube located within the main tube, such that a space is formed between the outside surface of the inner tube and the inner surface of the main tube.
11. The liquid cryogenic vaporizer according to claim 10, wherein the inner tube has a first end for receiving the fluid and the outlet is at a second end of the inner tube, and the liquid flows from the inlet, through the space formed between the outside surface of the inner tube and the inner surface of the main tube.
12. The liquid cryogenic vaporizer according to claim 6, further including one or more plates located in the main tube, the one or more plates having at least one aperture for the fluid to flow through, the one or more plates acting to create turbulence to mix a vapor phase of the liquid and the liquid together to assist in making the liquid contact the inner surface of the main tube.
13. The liquid cryogenic vaporizer according to claim 12, further including a fluid motion generator located in the main tube through which the liquid passes, wherein the fluid motion generator imparts a swirling motion to the fluid such that the liquid contacts the inner surface of the main tube in order to further increase the heat transfer between the liquid and the inner surface of the main tube.
14. The liquid cryogenic vaporizer according to claim 13, wherein the fluid motion generator has a base or disc and at least one upstanding curved portion.
15. The liquid cryogenic vaporizer according to claim 6, wherein the vaporizer outlet is routed to a heat exchanger having a first inlet and a first outlet and a second inlet and a second outlet, wherein the fluid from the vaporizer outlet enters the first inlet of the heat exchanger and exchanges heat with the fluid from the first outlet of the heat exchanger, and the fluid from the first outlet of the heat exchanger travels through the second inlet of the heat exchanger and exits through the second outlet of the heat exchanger.
16. The liquid cryogenic vaporizer according to claim 6, wherein at least one of the one or more fluid limiters comprises an orifice plate having one or more apertures.
17. A method of vaporizing a cryogenic liquid including: providing a main tube having an inlet and an outlet; receiving cryogenic liquid at the inlet to travel through the main tube; expelling vaporized liquid from the outlet; and controlling the velocity of the flow of the liquid through the main tube to increase heat transfer between the liquid and one or more surfaces within the main tube.
18. The method of vaporizing a cryogenic liquid according to claim 17, further including providing one or more inner tubes located within the main tube such that a space is formed between an inner surface of the main tube and an outer surface of the one or more inner tubes.
19. The method of vaporizing a cryogenic liquid according to claim 18, wherein the one or more inner tubes has the inlet to receive the cryogenic liquid to flow in the one or more inner tubes.
20. The method of vaporizing a cryogenic liquid according to claim 19, wherein the liquid is vaporized upon leaving an outlet to the one or more inner tubes, the expelled vaporized liquid is expelled within the main tube and acts as the heat transfer medium to the liquid remaining in the one or more inner tubes, and the expelled vaporized liquid is eventually expelled from the main tube.
21. The method of vaporizing a cryogenic liquid according to claim 17, further comprising one or more plates located at predefined locations in the main tube, the one or more plates having at least one aperture for the fluid to flow through, the one or more plates acting to create turbulence to mix a vapor phase of the liquid and the liquid together to assist in making the liquid contact the inner surface of the main tube.
22. The method of vaporizing a cryogenic liquid according to claim 17, further comprising a fluid motion generator located in the main tube through which the liquid passes, wherein the generator imparts a swirling motion to the fluid such that the liquid contacts the inner surface of the main tube in order to further increase the heat transfer between the liquid and the inner surface of the main tube.
# coding=utf-8
# noinspection PyPackageRequirements
from pyromancy import pyromq
import torch
from torch.nn import functional as F


class LossGroup(pyromq.PublisherGroup):
    def __init__(self, optimizer, grad_clip_norm=-1, **kwargs):
        super(LossGroup, self).__init__(**kwargs)
        self.optimizer = optimizer
        self._accumulated_loss = None
        self.grad_clip_norm = grad_clip_norm

    def zero_grad(self):
        """Zero the optimizer's gradients and reset the accumulated loss."""
        self.optimizer.zero_grad()
        self._accumulated_loss = None

    def _accumulate_loss(self, computed_loss):
        """Add a newly computed loss onto the running total for this step."""
        if self._accumulated_loss is None:
            self._accumulated_loss = computed_loss
        else:
            self._accumulated_loss += computed_loss

    def step(self, post_zero_grad=True):
        """Backpropagate the accumulated loss, optionally clip gradients
        by global norm, and advance the optimizer."""
        self._accumulated_loss.backward()
        if self.grad_clip_norm > 0:
            for group in self.optimizer.param_groups:
                # clip_grad_norm was deprecated in PyTorch 0.4;
                # the in-place variant is clip_grad_norm_.
                torch.nn.utils.clip_grad_norm_(group['params'],
                                               self.grad_clip_norm)
        self.optimizer.step()
        if post_zero_grad:
            self.zero_grad()

    def compute(self, in_dict, out_dict, data_type):
        """Evaluate every member loss registered for data_type and
        accumulate the results."""
        for loss in self.members_by_target[data_type]:
            loss_result = loss(in_dict, out_dict, data_type)
            self._accumulate_loss(loss_result['pytorch_value'])


class Loss(pyromq.ComputePublisher):
    def __init__(self, name, weight=1.0, **kwargs):
        super(Loss, self).__init__(name, **kwargs)
        self.weight = weight


class NegativeLogLikelihood(Loss):
    def __init__(self, name, target_name, output_name, **kwargs):
        super(NegativeLogLikelihood, self).__init__(name, **kwargs)
        self.target_name = target_name
        self.output_name = output_name

    def compute(self, in_dict, out_dict):
        """Weighted cross-entropy between the named prediction and target."""
        y_true = in_dict[self.target_name]
        y_pred = out_dict[self.output_name]
        computed_loss = self.weight * F.cross_entropy(y_pred, y_true)
        return computed_loss


class SiameseLoss(Loss):
    def __init__(self, name, output1_name, output2_name, labels_name, **kwargs):
        super(SiameseLoss, self).__init__(name, **kwargs)
        self.output1_name = output1_name
        self.output2_name = output2_name
        self.labels_name = labels_name

    def compute(self, in_dict, out_dict):
        """Weighted cosine-embedding loss between the two named outputs."""
        y1 = out_dict[self.output1_name]
        y2 = out_dict[self.output2_name]
        labels = in_dict[self.labels_name]
        return self.weight * F.cosine_embedding_loss(y1, y2, labels)
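The gradient clipping in `LossGroup.step` rescales gradients whose global L2 norm exceeds `grad_clip_norm`. The arithmetic itself is simple and can be sketched without PyTorch; the following is a pure-Python stand-in illustrating the behavior, not the library implementation:

```python
import math

def clip_by_global_norm(grads, max_norm):
    """Scale a flat list of gradient values so their L2 norm is at most max_norm.

    Mirrors the effect of torch.nn.utils.clip_grad_norm_: if the total norm
    exceeds max_norm, every gradient is multiplied by max_norm / total_norm,
    preserving direction while bounding magnitude.
    """
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm <= max_norm or total_norm == 0.0:
        return list(grads)
    scale = max_norm / total_norm
    return [g * scale for g in grads]

# A gradient vector with norm 5.0 clipped to max norm 1.0 keeps its
# direction but shrinks to unit length.
clipped = clip_by_global_norm([3.0, 4.0], 1.0)
print(clipped)
```

Clipping the accumulated (summed) loss's gradients once per `step`, rather than per individual loss, is what keeps the relative weighting between the member losses intact.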
1. Introduction {#sec1-antioxidants-09-00397}
Fatty acid esters of hydroxy fatty acids (FAHFAs) are a new class of endogenous lipids with beneficial biological effects, including antidiabetic and anti-inflammatory properties \[[@B1-antioxidants-09-00397]\]. To date, their biosynthesis and metabolism have not been well elucidated \[[@B2-antioxidants-09-00397]\]. They are found in blood and in many mammalian tissues, such as adipose tissue, liver, heart, kidneys, and pancreas, and have been identified in several food materials \[[@B3-antioxidants-09-00397]\]. In particular, palmitic acid esters of hydroxy stearic acid (PAHSAs) are the major esters in humans and mice. PAHSAs have received considerable attention because their levels are significantly decreased in the serum of insulin-resistant patients \[[@B4-antioxidants-09-00397]\] and breast cancer patients \[[@B5-antioxidants-09-00397]\], and in the milk of obese women compared to lean mothers \[[@B6-antioxidants-09-00397]\], and because they suppress inflammatory markers \[[@B7-antioxidants-09-00397]\], decrease T-cell activation in ulcerative colitis \[[@B8-antioxidants-09-00397]\], and improve glucose tolerance by stimulating insulin secretion via GPR120 signaling \[[@B1-antioxidants-09-00397]\]. Supplementation with ω-3 fatty acids such as docosahexaenoic acid (DHA) in diabetic patients and obese mice increased the circulating levels of DHA-esterified hydroxy linoleic acid FAHFAs, which play a protective role \[[@B9-antioxidants-09-00397]\].
Cellular defense against reactive oxygen species (ROS) is mediated by direct and indirect antioxidants. Direct antioxidants are often small molecules that scavenge ROS through redox reactions, whereas indirect antioxidants induce a series of antioxidant enzymes via Nrf2 activation and catalyze ROS breakdown in cells \[[@B10-antioxidants-09-00397]\]. Under normal conditions, Nrf2 is constantly degraded in a Keap1-dependent manner via the ubiquitin-proteasome pathway \[[@B11-antioxidants-09-00397]\]. However, the presence of oxidants and electrophiles can disrupt the Keap1-Nrf2 interaction \[[@B12-antioxidants-09-00397],[@B13-antioxidants-09-00397]\]. Recently, there has been growing interest in promising Nrf2 activators because many physiological studies have shown that Nrf2 activation alleviates pathological conditions, including chronic kidney disease, pulmonary hypertension, neurodegenerative disorders, and cancer \[[@B14-antioxidants-09-00397],[@B15-antioxidants-09-00397]\]. In other words, the activation of Nrf2 can lead to the upregulation of antioxidant enzymes and protect cells against oxidative stress-induced damage. Despite the diverse functions of FAHFAs, studies revealing their potential antioxidant activity have not been reported. A recent study showed that the Nrf2-mediated antioxidant defense mechanism may be linked to the biosynthesis of PAHSAs \[[@B16-antioxidants-09-00397]\]; however, there are no studies evaluating the intrinsic antioxidant capabilities of FAHFAs. In this study, we aimed to explore the antioxidant potential of novel synthetic fatty acid esters of 12-hydroxy stearic acid (12-HSA) and 12-hydroxy oleic acid (12-HOA), and their possible mode of action in the Nrf2 defense mechanism and lipid droplet oxidation in cultured human hepatocytes, by reporter gene assay and fluorescence imaging analysis. We also interrogated their possible biosynthesis in liver cells.
2. Materials and Methods {#sec2-antioxidants-09-00397}
12-Hydroxystearic acid, methyl ricinoleate, oleoyl chloride, linoleoyl chloride, dry pyridine, anhydrous dichloromethane, CDCl~3~, and 4-dimethylaminopyridine were purchased from Tokyo Chemical Industry (Tokyo, Japan). Dicyclohexylcarbodiimide, lithium hydroxide, methanol for LC/MS, and all other reagents of synthetic grade were obtained from Wako Pure Chemical Corporation (Tokyo, Japan). Docosahexaenoic acid and eicosapentaenoic acid, with purity ≥98%, were purchased from Cayman Chemicals (Ann Arbor, MA, USA). Thin-layer chromatography (TLC) Silica gel 60G F~254~ glass plates (20 × 20 cm) were obtained from Merck (Tokyo, Japan), and spots were visualized by spraying with 5% methanolic H~2~SO~4~. Column chromatography was performed using spherical-type Silica Gel N60 (particle size 40--50 μm) obtained from Kanto Chemical Industry (Tokyo, Japan). ^1^H and ^13^C NMR spectra were acquired on a 400 MHz JNM-ECX400P spectrometer (JEOL, Japan). The spectra were processed using ACD/NMR software, and the chemical shift (*δ*) values are given in ppm. Accurate mass measurements were performed on an LTQ Orbitrap XL (Thermo Fisher Scientific). Low-resolution electrospray ionization mass spectra (LR-ESI-MS) were recorded on an LXQ (Thermo Fisher Scientific). The methyl ester of 12-hydroxystearic acid (12-HSA) was prepared by direct methylation of 12-HSA by refluxing at 80 °C in methanolic 2N HCl.
2.1. General Procedure for Synthesis of Oleoyl and Linoleoyl Esters of 12-Hydroxy Fatty Acids {#sec2dot1-antioxidants-09-00397}
To a solution of 12-HSA-OMe or 12-hydroxyoleic acid methyl ester (12-HOA-OMe) (100 mg, 0.318 mmol) in dry dichloromethane (4 mL) and dry pyridine (125.6 μL, 5 equivalents (eq)) at 0 °C, oleoyl chloride or linoleoyl chloride (1.1 eq) was added slowly, and the mixture was stirred for 24 h at room temperature under a nitrogen atmosphere. The reaction mixture was extracted with dichloromethane by successive washing with 0.5 M HCl, saturated NaHCO~3~, and saturated NaCl, and dried over sodium sulfate. The crude extract was concentrated under reduced pressure, and the resulting mixture was purified by flash column chromatography (hexane/ethyl acetate = 99/1\~95/5) to give the methyl esters of 12-OAHSA, 12-LAHSA, 12-OAHOA, and 12-LAHOA in about 65--70% yield. The methyl esters were then dissolved in THF and saponified using 1 M aqueous LiOH (2 eq) at room temperature for 24 h. The reaction was neutralized with 2N HCl in an ice bath, followed by extraction with diethyl ether, concentration in vacuo, and purification by silica gel column chromatography using hexane/ethyl acetate/AcOH (93/6/1) to give the FAHFAs 12-OAHSA (**2**), 12-LAHSA (**3**), 12-OAHOA (**2a**), and 12-LAHOA (**3a**) in 65--70% yield.
### General Procedure for Synthesis of Eicosapentaenoic Acid Esters of 12-Hydroxy Fatty Acids
To a solution of 12-HSA-OMe or 12-hydroxyoleic acid methyl ester (12-HOA-OMe) (200 mg, 0.63 mmol) in dry dichloromethane (15 mL) with DMAP (38.7 mg, 0.5 eq) and DCC (327.5 mg, 2.5 eq) at 0 °C under a nitrogen atmosphere, EPA (264.7 µL, 1.3 eq) was added slowly, and the mixture was stirred for 24 h at room temperature under nitrogen. The reaction mixture was extracted with dichloromethane by successive washing with 0.5 M HCl, saturated NaHCO~3~, and saturated NaCl, and dried over sodium sulfate. The crude extract was concentrated under reduced pressure, and the resulting mixture was purified by flash column chromatography (hexane/ethyl acetate = 98/2\~90/10) to give the methyl esters of 12-EPAHSA and 12-EPAHOA in about 60--65% yield. The methyl esters were then subjected to alkali hydrolysis as described above, and the reaction was neutralized with 2N HCl in an ice bath, followed by extraction with ethyl acetate, concentration in vacuo, and purification by silica gel column chromatography (hexane/ethyl acetate/AcOH = 95/5/0\~80/20/1) to give the FAHFAs 12-EPAHSA (**4**) and 12-EPAHOA (**4a**) in 70--75% yield.
2.2. Cell Culture and Assays {#sec2dot2-antioxidants-09-00397}
Cell culture and assays were performed according to a previously published report with minor modifications \[[@B17-antioxidants-09-00397]\]. C3A cells (a derivative of human HepG2, CRL-10741) were purchased from ATCC (Manassas, VA, USA) and kept at 37 °C in an incubator under a humidified atmosphere of 5% CO~2~ in minimum essential medium (MEM) with 10% (*v/v*) fetal bovine serum (FBS) and 1% Penicillin--Streptomycin--Neomycin mixture (all obtained from Thermo Fisher Scientific, Japan). Cell cytotoxicity was evaluated using the CCK-8 assay kit (Dojindo Molecular Technologies). GraphPad Prism 6.03 software was used to determine the half-maximal inhibitory concentration (IC~50~) of each FAHFA. The reporter gene assay was performed according to the protocol previously established in our laboratory \[[@B17-antioxidants-09-00397]\]. First, cultured C3A cells were seeded into 96-well plates and incubated for 24 h, and then transfected using FuGENE HD Transfection Reagent. The vectors pGL4.37\[luc2p/ARE/Hygro\] and pGL4.75\[hRluc/CMV\] (Promega), at a mass ratio of 20:1, were used to co-transfect the cells, which were incubated for 24 h. The transfection reagent was then washed off, FAHFAs pre-dissolved in MEM were applied to the cells, and the cells were incubated for 24 h. The Dual-Glo Luciferase Assay System (Promega) was used to measure luciferase activity in control and FAHFA-treated samples. The fold change in relative luciferase activity was calculated by dividing the luminescence intensity of FAHFA-treated samples by that of the control.
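The normalization arithmetic described above can be sketched as follows. This is a minimal illustration with hypothetical luminescence counts (none of these numbers come from the study); it assumes the standard dual-reporter scheme in which each well's ARE-driven firefly signal is divided by its constitutive Renilla signal to correct for transfection efficiency before the treated/control ratio is taken.

```python
# Sketch of dual-luciferase fold-change normalization (hypothetical values).
from statistics import mean

def normalized_ratios(firefly, renilla):
    """Per-well firefly/Renilla ratios, correcting for transfection efficiency."""
    return [f / r for f, r in zip(firefly, renilla)]

def fold_change(treated, control):
    """Mean treated ratio relative to the mean control ratio."""
    return mean(treated) / mean(control)

# Hypothetical raw luminescence counts for three control and three treated wells:
control = normalized_ratios([1200, 1100, 1300], [400, 360, 420])
treated = normalized_ratios([9500, 10200, 9800], [390, 410, 400])
print(round(fold_change(treated, control), 1))  # -> 8.1
```

Dividing by the Renilla signal first means that a well that simply took up more plasmid does not register as a stronger ARE response.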
### 2.2.1. Nuclear Protein Extraction and Western Blot Analysis {#sec2dot2dot1-antioxidants-09-00397}
Nuclear Nrf2 accumulation was determined by nuclear protein extraction and Western blot analysis. Cells were incubated with the FAHFAs in MEM without FBS for 24 h. Nuclear protein was fractionated using a Nuclear Extraction Kit (Active Motif). Samples (10 µg protein/well) were electrophoresed on NuPAGE 4--12% Bis-Tris gels and transferred to Immobilon-P membranes. The membranes were blocked in 5% dry milk in TBS containing 0.05% (*v/v*) Tween-20. The membranes were then exposed overnight to either rabbit anti-Nrf2 antibodies (1:5,000, ab31163; Abcam, Cambridge, UK) or rabbit anti-Lamin B1 antibodies (1:50,000, ab16048; Abcam, Cambridge, UK). After washing in Tris-buffered saline (with 0.05% Tween-20), the membranes were incubated for 2 h with horseradish peroxidase-labelled anti-rabbit IgG antibodies (1:10,000). Signals were detected using a ChemiDoc MP Imaging System and the LumiGLO Chemiluminescent Substrate System (Bio-Rad Laboratories Inc., Tokyo, Japan), and the bands were quantified with ImageJ software. The relative Nrf2 level was determined as the ratio of the intensity of the Nrf2 band to that of the Lamin B1 band, and data are expressed as fold change compared to the control.
### 2.2.2. First-Strand cDNA Synthesis and Real-Time PCR {#sec2dot2dot2-antioxidants-09-00397}
Approximately 4.0 × 10^5^ cultured C3A cells/well were seeded into 24-well plates. After 24 h of incubation, FAHFAs dissolved in MEM were added to the cells, which were incubated for a further 24 h. The cells were harvested by scraping, and total RNA was extracted using the ReliaPrep RNA Cell Miniprep System (Promega). Extracted RNA samples were treated with DNase, and cDNA was synthesized using the GoScript Reverse Transcription System (Promega, Japan) following procedures established earlier in our laboratory \[[@B17-antioxidants-09-00397]\]. The relative expression of Nrf2 target genes was determined by real-time PCR (RT-PCR).
The PCR mixture contained 10 µL of SsoFast EvaGreen Supermix (Bio-Rad Laboratories Inc.), 1:5-diluted cDNA (2 µM), and 0.5 µM of each primer in a total volume of 20 µL. The reaction comprised a holding stage of 30 s at 95 °C, followed by cycles of 5 s at 60 °C and a 10 s extension at 72 °C. The amount of PCR product formed was monitored by the fluorescence observed at the end of each cycle using a CFX Connect (Bio-Rad Laboratories Inc.). The expression level of each target gene was normalized to glyceraldehyde-3-phosphate dehydrogenase (GAPDH). The primer sequences of the target genes heme oxygenase-1 (HO-1), NAD(P)H:quinone oxidoreductase 1 (NQO1), glutamate--cysteine ligase modulatory subunit (GCLM), glutamate--cysteine ligase catalytic subunit (GCLC), catalase (CAT), and superoxide dismutase 1 (SOD1) were identical to those in our earlier report \[[@B17-antioxidants-09-00397]\].
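Normalization to GAPDH as described above is commonly computed with the comparative Ct (2^−ΔΔCt^) method. The text does not state the exact formula used, so the sketch below assumes that method, with entirely hypothetical Ct values.

```python
# Relative gene expression by the 2^-ddCt method (assumed; Ct values hypothetical).

def relative_expression(ct_target_treated, ct_gapdh_treated,
                        ct_target_control, ct_gapdh_control):
    """Fold change of a target gene in treated vs. control cells,
    normalized to the GAPDH reference gene."""
    d_ct_treated = ct_target_treated - ct_gapdh_treated   # dCt, treated
    d_ct_control = ct_target_control - ct_gapdh_control   # dCt, control
    dd_ct = d_ct_treated - d_ct_control                   # ddCt
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: the target amplifies 2 cycles earlier after
# treatment while GAPDH is unchanged -> ~4-fold upregulation.
print(relative_expression(22.0, 18.0, 24.0, 18.0))  # -> 4.0
```

Because each PCR cycle roughly doubles the product, a shift of one Ct cycle corresponds to a two-fold change in starting template.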
2.3. Fluorescent Imaging Analysis {#sec2dot3-antioxidants-09-00397}
Fluorescent imaging of lipid droplets (LDs) and oxidized lipid droplets (oxLDs) was performed according to a method previously established in our laboratory \[[@B18-antioxidants-09-00397]\]. Briefly, the cells were precultured in a 0.1% gelatin-coated glass-bottom dish (6 × 10^5^ cells in 3 mL of medium) for 24 h. The cells were then treated with 400 µM linoleic acid (LA) together with 0--125 µM 12-EPAHSA. After 8 h of incubation, the cells were stained for 30 min at 37 °C with 5 µM SRfluor 680-phenyl (Funakoshi Co. Ltd., Tokyo, Japan), a fluorescent probe for neutral lipids; 10 µM Liperfluo, a probe for lipid peroxides; and 10 µg/mL Hoechst 33342 (Dojindo Laboratories, Kumamoto, Japan), a nuclear stain. Fluorescence was recorded using a BZ-9000 Keyence fluorescence microscope with the following filter sets: excitation 360/40 nm, emission 460/50 nm, dichroic mirror 400 nm (blue); excitation 470/40 nm, emission 525/50 nm, dichroic mirror 495 nm (green); excitation 620/60 nm, emission 700/75 nm, dichroic mirror 660 nm (red). Acquired images were processed with ImageJ 1.50i software. Intersectional images were obtained from binarized images of the bright field, SRfluor, and Liperfluo channels in the same visual field. Statistical analyses were performed using GraphPad Prism 6 software. Data derived from three or more replicates are shown as mean values ± SEM. Student's *t*-test (two-tailed) was used to assess statistically significant differences between groups, and a *p*-value of \<0.05 was considered statistically significant.
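The intersection analysis above can be sketched in miniature: treat LDs as connected components of the binarized SRfluor (neutral-lipid) mask, and count an LD as oxidized if any of its pixels overlaps the binarized Liperfluo (lipid-peroxide) mask. The masks below are toy data, and the pixel-count size would stand in for the paper's 3 µm² small/large cutoff; this is an illustration of the counting logic, not the actual ImageJ pipeline.

```python
# Toy sketch of LD / oxLD counting from binarized masks (data hypothetical).

def label_components(mask):
    """4-connected components of a binary grid; returns a list of pixel sets."""
    rows, cols = len(mask), len(mask[0])
    seen, components = set(), []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                stack, comp = [(r, c)], set()
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    if not (0 <= y < rows and 0 <= x < cols) or not mask[y][x]:
                        continue
                    seen.add((y, x))
                    comp.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
                components.append(comp)
    return components

# Binarized masks (1 = signal): two droplets, one overlapping the Liperfluo mask.
srfluor   = [[1, 1, 0, 0, 0],
             [1, 1, 0, 1, 0],
             [0, 0, 0, 1, 0],
             [0, 0, 0, 0, 0]]
liperfluo = [[0, 1, 0, 0, 0],
             [0, 1, 0, 0, 0],
             [0, 0, 0, 0, 0],
             [0, 0, 0, 0, 0]]

lds = label_components(srfluor)
ox_lds = [c for c in lds if any(liperfluo[y][x] for y, x in c)]
print(len(lds), len(ox_lds), len(ox_lds) / len(lds))  # -> 2 1 0.5
```

The degree of oxidation is then simply the fraction of LDs whose footprint intersects the peroxide mask, computed per size class in the real analysis.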
3. Results {#sec3-antioxidants-09-00397}
The synthesis of fatty acid esters of 12-hydroxy stearic acid (12-HSA) and 12-hydroxy oleic acid (12-HOA) is described in [Scheme 1](#antioxidants-09-00397-sch001){ref-type="scheme"}. The methyl esters of the respective fatty acids were subjected to acylation under two different conditions. The FAHFAs of oleic acid (OA) and linoleic acid (LA) were synthesized by direct acylation using the respective acyl chlorides, followed by mild alkali hydrolysis using lithium hydroxide, to afford 12-oleic acid hydroxy stearic acid (12-OAHSA (**2**)), 12-linoleic acid hydroxy stearic acid (12-LAHSA (**3**)), 12-oleic acid hydroxy oleic acid (12-OAHOA (**2a**)), and 12-linoleic acid hydroxy oleic acid (12-LAHOA (**3a**)). 12-Eicosapentaenoic acid hydroxy stearic acid (12-EPAHSA (**4**)) and 12-eicosapentaenoic acid hydroxy oleic acid (12-EPAHOA (**4a**)) were synthesized by direct coupling of EPA with the methyl esters of 12-HSA and 12-HOA, followed by mild alkali hydrolysis. The ^1^H-NMR and ^13^C-NMR chemical shifts for each compound are provided below, and their spectra are given in the [Supporting Information](#app1-antioxidants-09-00397){ref-type="app"}.
12-OAHSA **(2):** R~f~ = 0.25 (hexane/ethyl acetate = 7/1); ^1^H-NMR (400 MHz, CDCl~3~) *δ* 5.36--5.33 (m, 2 H), 4.90--4.84 (m, 1 H), 2.35 (t, 2H, *J* = 7.3,15.1 Hz), 2.28 (t, 2H, *J* = 7.3,15.1 Hz), 2.03--1.98 (m, 4 H), 1.65--1.59 (m, 4 H), 1.51--1.50 (m, 4 H), 1.30--1.26 (m, 42 H), 0.90--0.86 (m, 6 H). ^13^C-NMR (100 MHz, CDCl~3~) *δ* 180.08, 174.06, 130.28, 130.05, 74.42, 35.04, 34.47, 34.32, 32.21, 32.06, 30.07, 30.01, 29.82, 29.78, 29.69, 29.62, 29.50, 29.47, 29.44, 29.35, 27.52, 27.47, 25.61, 25.58, 25.47, 24.97, 22.98, 22.87, 14.40, 14.36. The HRMS calculated for C~36~H~67~O~4~ \[M-H\]^−^, 563.50448, found 563.50334 (−2.02 ppm).
12-OAHOA **(2a):** R~f~ = 0.25 (hexane/ethyl acetate = 7/1); ^1^H-NMR (400 MHz, CDCl~3~) *δ* 5.49--5.29 (m, 4 H), 4.90--4.87 (m, 1H), 2.36--2.24 (m, 6H), 2.10--1.99 (m, 6 H), 1.67--1.60 (m, 4 H), 1.54--1.52 (m, 2 H), 1.31--1.26 (m, 36 H), 0.90--0.86 (m, 6 H); ^13^C-NMR (100 MHz, CDCl~3~) *δ* 180.10, 173.93, 132.79, 130.27, 130.04, 124.65, 74.01, 34.98, 34.31, 33.93, 32.28, 32.20, 32.04, 30.06, 30.00, 29.80, 29.61, 29.49, 29.43, 29.37, 29.31, 27.61, 27.51, 27.46, 27.44, 25.64, 25.41, 24.95, 22.97, 22.86, 14.39, 14.34. HR-ESIMS calculated for C~36~H~65~O~4~ \[M-H\]^−^, 561.48883, found 561.48771 (−1.99 ppm).
12-LAHSA **(3):** R~f~ = 0.25 (hexane/ethyl acetate = 7/1); ^1^H-NMR (400 MHz, CDCl~3~) *δ* 5.40--5.31 (m, 4 H), 4.88--4.85 (m, 1 H), 2.75 (t, 2H, *J* = 6.8,13.3 Hz), 2.32 (t, 2H, *J* = 7.3,15.1 Hz), 2.26 (t, 2H, *J* = 7.8,15.1 Hz), 2.07--2.02 (m, 4 H), 1.67--1.59 (m, 4 H), 1.51--1.50 (m, 4 H), 1.30--1.26 (m, 36 H), 0.91--0.86 (m, 6 H). ^13^C-NMR (100 MHz, CDCl~3~) *δ*179.87, 173.92, 132.80, 130.52, 128.22, 124.66, 74.03, 34.99, 34.27, 33.95, 32.29, 32.05, 31.83, 29.92, 29.81, 29.65, 29.45, 29.38, 29.33, 29.31, 27.62, 27.50, 27.48, 25.93, 25.65, 25.41, 24.96, 22.98, 22.87, 14.40, 14.36. HR-ESIMS calculated for C~36~H~65~O~4~ \[M-H\]^−^, 561.48883, found 561.48786 (−1.7 ppm).
12-LAHOA **(3a):** R~f~ = 0.25 (hexane/ethyl acetate = 7/1); ^1^H-NMR (400 MHz, CDCl~3~) *δ* 5.48--5.31 (m, 6 H), 4.90--4.85 (m, 1 H), 2.37--2.25 (m, 6H), 2.08--1.99 (m, 6H), 1.65--1.60 (m, 4 H), 1.54--1.51 (m, 2 H), 1.38--1.26 (m, 32 H), 0.91--0.86 (m, 6 H). ^13^C-NMR (100 MHz, CDCl~3~) *δ* 179.87. HR-ESIMS calculated for C~36~H~63~O~4~ \[M-H\]^−^, 559.47318, found 559.47279 (−0.69 ppm).
12-EPAHSA **(4):** R~f~ = 0.2 (hexane/ethyl acetate = 5/1); ^1^H-NMR (400 MHz, CDCl~3~) *δ* 5.41--5.30 (m, 10 H), 4.90--4.84 (m, 1 H), 2.86--2.80 (8H, m), 2.36--2.28 (m, 4H), 2.14--2.06 (m, 4 H), 1.74--1.59 (m, 4 H), 1.51--1.50 (m, 4 H), 1.26 (m, 22 H), 0.98 (t, 3H, J = 7.3, 15.1 Hz), 0.88 (t, 3H, J = 6.9, 13.7 Hz). ^13^C-NMR (100 MHz, CDCl~3~) *δ*179.49, 173.54, 132.10, 129.15, 128.80, 128.65, 128.33, 128.19, 127.96, 127.10, 74.33, 34.22, 34.04, 31.82, 29.56, 29.46, 29.28, 29.12, 26.74, 25.70, 25.62, 25.41, 25.37, 25.11, 24.75, 22.66, 20.63, 14.34, 14.13. HR-ESIMS calculated for C~38~H~63~O~4~ \[M-H\]^−^, 583.47318, found 583.47218 (−1.7 ppm).
12-EPAHOA **(4a):** R~f~ = 0.2 (hexane/ethyl acetate = 5/1); ^1^H-NMR (400 MHz, CDCl~3~) *δ* 5.47--5.30 (m, 12 H), 4.90--4.87 (m, 1 H), 2.84--2.81 (8H, m), 2.36--2.27 (6H, m), 2.11--2.01 (m, 6H), 1.73--1.61 (m, 4 H), 1.31--1.26 (m, 18 H), 0.99--0.95 (m, 3H), 0.89--0.86 (m, 3H). ^13^C-NMR (100 MHz, CDCl~3~) *δ*179.65, 173.33, 132.51, 132.0, 130.19, 129.04, 129.0, 128.71, 128.54, 128.23,128.14, 128.08, 128.06,127.86, 127.63, 127.0, 124.28, 73.82, 34.06, 33.97, 33.60, 33.32, 31.94, 31.71, 29.48, 29.11, 29.05, 28.99, 27.29, 26.61, 26.44, 25.60, 25.52, 25.33, 24.95, 24.64, 22.55, 20.53, 14.23, 14.02. HR-ESIMS calculated for C~38~H~61~O~4~ \[M-H\]^−^,581.45753, found 581.45655 (−1.6 ppm).
The cytotoxicity of all the synthetic FAHFAs and authentic hydroxy fatty acids was evaluated against human hepatoma (C3A) cells by CCK-8 assay. The IC~50~ values of representative species (mean ± SD, *n* = 6) were: 12-OAHSA, 499 ± 2.9; 12-LAHOA, 673 ± 1.7; 12-EPAHSA, 415 ± 2.5; 12-EPAHOA, 358 ± 2.5; EPA, 117 ± 2.5; and 12-HSA, 161 ± 2.7. The cell viability logarithmic plots for compounds **4** and **4a** are provided in [Figure 1](#antioxidants-09-00397-f001){ref-type="fig"}A. The results show that FAHFAs are less cytotoxic than their respective free fatty acids. The ability of each FAHFA, at non-cytotoxic concentrations, to activate Nrf2 was examined with a reporter gene assay using the Dual-Glo Luciferase Reporter Assay System (Promega), in which the antioxidant response element (ARE) drove transcription of the luciferase reporter gene, according to the protocol previously established in our laboratory, with minor modifications \[[@B17-antioxidants-09-00397]\].
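The IC~50~ values above were obtained with GraphPad Prism's dose-response fit. As a rough stand-in for that fit, the sketch below estimates an IC~50~ by linearly interpolating log-dose at 50% viability between the two bracketing doses; all doses and viabilities here are hypothetical, and a real analysis would fit a four-parameter logistic model instead.

```python
# Crude IC50 estimate by log-dose interpolation (hypothetical dose-response data).
import math

def ic50_interpolated(doses, viability_pct):
    """Interpolate the dose giving 50% viability on a log10(dose) scale.
    Assumes viability decreases with dose and crosses 50% once."""
    points = list(zip(doses, viability_pct))
    for (d1, v1), (d2, v2) in zip(points, points[1:]):
        if v1 >= 50 >= v2:  # 50% viability is crossed in this interval
            frac = (v1 - 50) / (v1 - v2)
            log_ic50 = math.log10(d1) + frac * (math.log10(d2) - math.log10(d1))
            return 10 ** log_ic50
    raise ValueError("50% viability not bracketed by the tested doses")

doses = [50, 100, 200, 400, 800]   # hypothetical concentrations
viab = [98, 92, 70, 45, 12]        # hypothetical % viability of control
print(round(ic50_interpolated(doses, viab)))  # -> 348
```

Interpolating on the log scale rather than the linear scale matches the sigmoidal shape of typical dose-response curves, which is why serial dilutions are plotted logarithmically in the first place.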
Among the screened compounds, **2**, **2a**, **3**, and **3a** did not show any significant Nrf2 activation ([Figure S1](#app1-antioxidants-09-00397){ref-type="app"}). In contrast, the EPA-derived compounds **4** and **4a** activated Nrf2 in a dose-dependent manner ([Figure 1](#antioxidants-09-00397-f001){ref-type="fig"}B). The relative luciferase activity increased drastically, to more than twenty-fold, at a concentration of 250 µM of each of compounds **4** and **4a**. Further, the effect of 12-EPAHSA (**4**) on the accumulation of Nrf2 protein in the nuclear fraction was examined using a nuclear protein extraction kit and Western blot analysis, and the results are shown in [Figure 1](#antioxidants-09-00397-f001){ref-type="fig"}C,D. 12-EPAHSA significantly increased nuclear Nrf2 levels with increasing concentration.
Next, the relative expression of Nrf2 target antioxidant genes was determined using real-time PCR, and the data for significantly altered genes are shown in [Figure 1](#antioxidants-09-00397-f001){ref-type="fig"}E. 12-EPAHSA significantly increased the expression levels of Nrf2-targeted cytoprotective genes, such as *NQO1, GCLM, GCLC,* and *SOD-1,* in a dose-dependent manner. The expression levels of *CAT* and *HO-1* showed an increasing tendency with concentration. GAPDH was used as an internal control in each experiment. 12-EPAHSA treatment significantly reduced the relative expression of the oxidative stress biomarker DDIT3/CHOP; however, the effect was not concentration-dependent ([Figure S2](#app1-antioxidants-09-00397){ref-type="app"}). Furthermore, the antioxidant potential of 12-EPAHSA was evaluated by fluorescent imaging of lipid droplets (LDs) and oxidized lipid droplets (oxLDs) by the method established earlier in our laboratory \[[@B18-antioxidants-09-00397]\]. After 8 h of incubation, small and a few large LDs were formed and oxidized in the linoleic acid (LA)-treated C3A cells ([Figure 2](#antioxidants-09-00397-f002){ref-type="fig"}A). The oxidized LDs were significantly reduced with increasing concentrations of 12-EPAHSA. The number of total LDs was unchanged ([Figure 2](#antioxidants-09-00397-f002){ref-type="fig"}B1). On the other hand, the number of small oxLDs decreased significantly in a dose-dependent manner ([Figure 2](#antioxidants-09-00397-f002){ref-type="fig"}B2). The degree of oxidation in small LDs also showed a decreasing trend ([Figure 2](#antioxidants-09-00397-f002){ref-type="fig"}B3) in response to 12-EPAHSA treatment compared to untreated cells. Not only were the neutral lipids of the LDs detected; the lipid hydroperoxides were also stained by the green fluorescent probe, and this staining was significantly reduced by 12-EPAHSA treatment.
Next, to examine whether C3A cells could synthesize FAHFAs, we added equimolar concentrations of EPA and 12-HSA to C3A cells and incubated them for 12 h at 37 °C. The cells were then washed with phosphate-buffered saline, and the total lipids (\~1 × 10^5^ cells) were extracted for LC-MS/MS analysis. [Figure 3](#antioxidants-09-00397-f003){ref-type="fig"} shows the schematic representation of 12-EPAHSA biosynthesis, with the concentration-dependent production of 12-EPAHSA in C3A cells. The concentration of each fatty acid (16 or 32 µM) was less than its IC~50~ value. The results show 6-fold higher levels of 12-EPAHSA at 32 µM compared to 16 µM-treated samples.
4. Discussion {#sec4-antioxidants-09-00397}
Lipids are signaling molecules with roles in membrane structure and as sources of energy. However, recently discovered classes of lipids, such as FAHFAs, are known to protect against diabetes and inflammation via the activation of G-protein-coupled receptors \[[@B1-antioxidants-09-00397]\]. These lipids are found in appreciable amounts in mammalian tissues, such as adipose tissue, liver, and heart, and in daily foods \[[@B3-antioxidants-09-00397]\]. Despite the structural diversity and biological activity of these endogenous lipids, their antioxidant potential through induction of the Nrf2 activation pathway has remained unexplored. Nrf2 is a transcription factor that regulates the gene expression of antioxidant enzymes. In this research, we synthesized and screened 12-HSA- and 12-HOA-derived FAHFAs for Nrf2 activation and found that EPA-esterified FAHFAs increased Nrf2 transactivation in a dose-dependent fashion compared to untreated cells ([Figure 1](#antioxidants-09-00397-f001){ref-type="fig"}B). 12-EPAHSA (**4**) led to a considerable increase in nuclear Nrf2 protein levels ([Figure 1](#antioxidants-09-00397-f001){ref-type="fig"}C,D). These results are consistent with a previous study, which showed that free n-3 fatty acids such as EPA and DHA can protect astrocytes against oxidative stress via Nrf2-dependent signaling \[[@B19-antioxidants-09-00397]\]. Nrf2 translocation is one of the key events in the regulation of the Keap1-Nrf2 pathway and is considered evidence of the activation of the system. Our results showed greater accumulation of Nrf2 in the nuclei of cells treated with 12-EPAHSA compared to the control, suggesting activation of nuclear Nrf2.
Besides this, Nrf2 target antioxidant enzymes such as *NQO1, GCLM, GCLC, SOD-1,* and *HO-1* were increased by 12-EPAHSA treatment ([Figure 1](#antioxidants-09-00397-f001){ref-type="fig"}E), suggesting that EPA-derived FAHFAs are involved in the upregulation of these antioxidant defense enzymes, possibly via Nrf2 activation, as is evident in the literature \[[@B10-antioxidants-09-00397],[@B17-antioxidants-09-00397]\]. Moreover, the knockdown of Nrf1 can increase the expression of antioxidant genes including NQO1, and the activation of NQO1 is induced by Nrf2 much more strongly than by Nrf1Δ30 in a reporter gene assay system \[[@B20-antioxidants-09-00397]\]. Considering these facts, we assume that the translocated nuclear Nrf2, rather than Nrf1, plays the major role in the FAHFA-induced expression of antioxidant enzymes. However, a limitation of our study is that the ARE reporter is not specific for Nrf2 but also reports Nrf1 activation \[[@B20-antioxidants-09-00397]\]. Hence, there is a possibility that Nrf1 is involved in the upregulation of antioxidant enzymes upon 12-EPAHSA treatment, which requires further experimental studies, such as specific knockdown of Nrf2 via siRNA. Furthermore, studies are required to confirm the effect of FAHFAs on Nrf2 activation in multiple cell lines. The overall antioxidant effects of 12-EPAHSA, including the activation of antioxidant enzymes and their cytoprotective mechanisms against oxidative-stress-induced damage, are summarized in [Figure 4](#antioxidants-09-00397-f004){ref-type="fig"}. The cellular transporters of FAHFAs are still unknown, but our results revealed that 12-EPAHSA can activate nuclear Nrf2 and induce the activation of Nrf2-targeted genes to protect cells against oxidative-stress-related damage.
We tested the activities of the free fatty acids (EPA and 12-HSA) to determine which structural part of 12-EPAHSA is active; the results suggested that EPA, but not 12-HSA, is a potent activator of Nrf2 ([Figure S1](#app1-antioxidants-09-00397){ref-type="app"}). However, its activity is much lower than that of the parent 12-EPAHSA, which is also less toxic than EPA, suggesting a possible effect of the ester on cytoplasmic Nrf2 signaling. The antioxidant effect of 12-EPAHSA was further evaluated by inducing oxidative stress in HepG2 cells with H~2~O~2~. The results showed a decreasing tendency in oxidative-stress-induced reactive oxygen species levels with increasing concentrations of 12-EPAHSA ([Figure S3](#app1-antioxidants-09-00397){ref-type="app"}), which further supports its possible action as an antioxidant.
Further, we examined the biosynthesis of FAHFAs in C3A cells by spiking equimolar concentrations of the respective fatty acids (EPA and 12-HSA) at different levels into C3A cells, with an incubation period of 12 h, and analyzed the total lipid extracts by LC-MS/MS. The results revealed that FAHFAs are biosynthesized in a dose-dependent manner ([Figure 3](#antioxidants-09-00397-f003){ref-type="fig"}), indicating the existence of hydroxy fatty acid acyltransferases in liver cells. However, 12-EPAHSA was not detected in the control samples, suggesting that this type of FAHFA is a non-endogenous lipid that may be produced when the fatty acid and hydroxy fatty acid sources are available endogenously. FAHFAs are assumed to be biosynthesized by acyltransferases in adipocytes; however, there is no experimental evidence to confirm their involvement \[[@B1-antioxidants-09-00397]\]. Our results using cultured C3A cells suggest that FAHFAs could be biosynthesized in the liver and have beneficial health effects. Moreover, this type of in vitro technique could be employed for the large-scale production of FAHFAs of biological interest. With our expertise in lipid droplet (LD)-specific imaging analysis, we examined the effect of 12-EPAHSA on linoleic acid-induced LDs and oxidized LDs (including lipid peroxides). Our results showed a significant reduction in the number of small oxidized LDs ([Figure 2](#antioxidants-09-00397-f002){ref-type="fig"}B2), with the same tendency as the change in LDs of DHMBA-treated HepG2 cells, suggesting that 12-EPAHSA also has an antioxidant effect on LD oxidation \[[@B18-antioxidants-09-00397]\]. Identifying new functions of lipids is a critical step toward understanding their endogenous roles.
In this study, we utilized a combination of chemical and biochemical techniques to identify a novel function of FAHFAs as Nrf2 activators that induce the expression of antioxidant target genes; the resulting antioxidant effects are of great significance for the treatment of oxidative-stress-related disorders.
5. Conclusions {#sec5-antioxidants-09-00397}
In summary, our findings demonstrate that EPA-derived FAHFAs are a novel class of lipids that are less cytotoxic than their free fatty acids and are potent Nrf2 activators. They also efficiently suppressed the oxidation of small lipid droplets and the oxidative stress induced by H~2~O~2~, and enhanced the gene expression of cytoprotective antioxidant enzymes. The detailed mechanisms of the biosynthesis and transport of FAHFAs, and the molecular mechanisms of Nrf2 activation, will be of future interest.
S.G.B.G. thanks the Faculty of Health Sciences, Hokkaido University, for providing a start-up grant (87011Q). This work was supported by JSPS Kakenhi grants (17H01879, 18K07434, 19H03117, 19K07861). We are thankful to Toshihiro Sakurai for helpful discussions and to Eddy for cell culture support.
The following are available online at <https://www.mdpi.com/2076-3921/9/5/397/s1>. Figure S1: Reporter gene assay results of oleic acid and linoleic acid esters of 12-HSA and 12-HOA, and of the free fatty acids (EPA and 12-HSA); Figure S2: Relative expression of DDIT3 (CHOP) in response to 12-EPAHSA treatment; Figure S3: Relative intensity folds of oxidative stress induced by H~2~O~2~ and the effect of 12-EPAHSA treatment; and ^1^H and ^13^C-NMR spectra of FAHFAs.
Conceptualization, S.G.B.G.; Data curation, S.G.B.G., H.F. and T.T.; Funding acquisition, H.F. and S.-P.H.; Methodology, S.G.B.G., H.F. and T.T.; Resources, H.C. and S.-P.H.; Supervision, H.C. and S.-P.H.; Visualization, H.F.; Writing---original draft, S.G.B.G.; Writing---review and editing, H.F., T.T., H.C. and S.-P.H. All authors have read and agreed to the published version of the manuscript.
This research received no external funding.
The authors declare no conflict of interest.
Figures and Scheme
::: {#antioxidants-09-00397-sch001 .fig}
Schematic representation of the chemical synthesis of fatty acid esters of hydroxy fatty acids (FAHFAs).
::: {#antioxidants-09-00397-f001 .fig}
Evaluation of the antioxidant potential of eicosapentaenoic acid (EPA)-derived FAHFAs in C3A cells. (**A**) Cell viabilities (IC~50~) of 12-EPAHSA and 12-EPAHOA. (**B**) Reporter gene assay results of 12-EPAHSA and 12-EPAHOA. (**C**) Relative nuclear Nrf2 protein expression induced by 12-EPAHSA. (**D**) Protein levels of Nrf2 by Western blot analysis after treatment with 12-EPAHSA. (**E**) Relative expression of antioxidant enzymes in response to 12-EPAHSA treatment. \* *p* \< 0.05, \*\* *p* \< 0.01, \*\*\* *p* \< 0.001, ns: not significant (one-way ANOVA) (*n* = 6). (NQO1: NAD(P)H quinone oxidoreductase 1, HO1: heme oxygenase-1, GCLM: glutamate--cysteine ligase modulatory subunit, GCLC: glutamate--cysteine ligase catalytic subunit, CAT: catalase, and SOD1: superoxide dismutase 1).
::: {#antioxidants-09-00397-f002 .fig}
Effect of 12-EPAHSA on oxidation of LA-induced LDs in C3A cells. Fluorescent images showed neutral lipid (red), lipid peroxide (green) and nuclei (blue) (**A**). The scale bar shown in each image is 10 µm. To quantify the number of total LDs and oxLDs, fluorescent images were analyzed with ImageJ software. The results are presented as the number of total LDs (**B1**), the number of oxLDs (**B2**), and degree of oxidation (**B3**) for small (\< 3 µm^2^) and large (≥ 3 µm^2^) LDs. Columns and bars represent the mean ± SEM (*n* = 3). \*\* *p* \< 0.01, \*\*\* *p* \< 0.001.
::: {#antioxidants-09-00397-f003 .fig}
Biosynthesis of 12-EPAHSA in C3A cells spiked with equimolar mixtures of EPA and 12-HSA at different concentrations.
::: {#antioxidants-09-00397-f004 .fig}
Summary of plausible actions of 12-EPAHSA which could lead to its antioxidant effects.
High speed printer
\[Patent drawing sheets omitted: U.S. Patent 3,012,232, "High Speed Printer," John Presper Eckert, Jr. and John C. Sims, Jr., inventors; filed Jan. 27, 1953; patented Dec. 5, 1961; 11 sheets of drawings.\]
United States Patent Office

This application is a continuation in part of the pending application Serial Number 221,362, and the invention disclosed herein relates to an electromagnetic apparatus adapted to produce original typographical images.
In many cases it is desired to create typographical images which are not to be reproduced from preexisting typographical originals, but which are to be translated from coded electric signals transmitted to the printer either from some remote point or from computing and business machines as the result of their manipulations and computations.
Accordingly, it is a primary object of this invention to provide an electromagnetic printing apparatus capable of translating coded electric signals into typographical images.
Another object of this invention is to provide a high speed printer whose rapidity permits the translation of electronic signals without any delay.
A further object of the invention is to provide an improved type of magnetic head structure which permits heads to be mounted closely enough to record the magnetic images of whole characters in one single scan.
Further objects of the invention will become apparent from the following specification in conjunction with the accompanying figures in which:
FIGURE 1 is a perspective view of a multiple gap electromagnetic recording head;
FIGURE 2 is a perspective and, in part, exploded view illustrating another type of a multiple gap electromagnetic recording head;
FIGURE 3 is a perspective view of a recording apparatus utilizing the device either shown in FIGURE 1 or in FIGURE 2;
FIGURE 4 is a fragmentary perspective view illustrating the mode of operation of the recording apparatus shown in FIGURE 3;
FIGURE 5 is a perspective view illustrating a method of obtaining prints from the image receiving member of the recording apparatus shown in FIGURE 3;
FIGURE 6 is a block diagram of various component parts and circuits connecting a coded signal input to the recording apparatus;
FIGURE 7 gives a side view of a cathode ray tube for use in connection with the invention and a front view of the face of such a tube;

FIGURE 8 is an enlarged view of a selected part of the face of the cathode ray tube;

FIGURE 9 illustrates schematically the circuits for positioning and deflecting the cathode ray beam;

FIGURE 10 shows partly schematically and partly in block diagram the circuits for energizing the coils on the electromagnetic head;

FIGURE 11 illustrates a perspective view of a timing disc connected to the image receiving member of the recording apparatus;

FIGURE 11A gives a fragmentary front view of such a timing disc;

FIGURE 12 shows schematically the circuits which are energized in connection with the rotation of said timing disc;

FIGURE 13 is a schematic representation of the circuits used for obtaining overall synchronization of signal distribution and operation; and

FIGURE 14 gives a diagram illustrating the time relationship of the various signals produced in the operation of the electromagnetic recording device.

Like reference characters identify like parts throughout. For convenient reference, all voltage supply buses bear reference characters corresponding to their operating supply potential. An odd number indicates a negative supply potential, an even number a positive supply potential.

Electromagnetic printing

FIGURE 1 shows a multiple gap electromagnetic head which may be used in conjunction with a magnetic surface to develop letter shaped magnetic images. The head consists of a comb-like member 42 of high magnetic permeability, which might be made, for example, of soft iron, or of the well known alloys such as those sold under the trademarks Hypersil or Permalloy. The member 42 may be provided with a plurality of notches leaving gaps 43. For purpose of illustration, ten gaps 43 are shown. The portions of the member 42 corresponding to the respective gaps 43 may be wound with separate energizing coils 44. The leads from the ends of said separate energizing coils 44 are connected in proper combinations to a suitable letter forming signal source. In response to the various excitation patterns derived from this letter forming signal source, any combination of energizing coils 44 may be activated. The number of turns per coil 44 and the electric current required is dependent upon the material used in the printing cylinder 59 and the flux density required for proper magnetization on the surface of cylinder 59 (FIGURE 3).

FIGURE 2, in part in exploded view, represents a laminated type electromagnetic recording head 82 which may be used in place of the comb-type electromagnetic recording head 40 shown in FIGURE 1. This ganged magnetic transducer comprises alternating laminations of relatively diamagnetic conductors and relatively paramagnetic dielectrics. Accordingly, the head is built of copper conductors 84 interspaced and laminated between soft iron spacers 83 of a material similar to comb 42. Each copper conductor 84 contains at its two outer extremities holes 85 to which leads may be attached and connected to a letter forming signal source. All but one of the soft iron spacers 83 show a protrusion 83A. After the spacers 83 and conductors 84 are laminated together, the spacers 83 will be in contact with one another through these protrusions and provide the means for a complete magnetic path. It should be obvious that no such protrusion 83A is required on the first spacer 83.
The laminated type ganged head 82 may be used as a substitute for the comb-type head 40, the individual copper conductor 84 on head 82 performing the function of the individual coil 44 on head 40.
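The excitation patterns mentioned above, in which any combination of the ten energizing coils 44 may be activated by the letter forming signal source, can be pictured as bit patterns. The following is an illustrative sketch only; the function name and bit ordering are assumptions, not taken from the patent.

```python
# Illustrative model (not from the patent): the ten energizing coils 44
# treated as bits of an excitation pattern from a hypothetical letter
# forming signal source. Bit 0 stands for coil 1, bit 9 for coil 10.
NUM_GAPS = 10  # the patent shows ten gaps 43 for purposes of illustration

def energized_coils(pattern: int) -> list[int]:
    """Return the 1-based coil numbers activated by a bit pattern."""
    return [i + 1 for i in range(NUM_GAPS) if pattern & (1 << i)]

# Example: a pattern that activates coils 1 through 7 simultaneously.
print(energized_coils(0b0001111111))  # [1, 2, 3, 4, 5, 6, 7]
```

Any of the 2¹⁰ possible patterns corresponds to one allowable combination of energized gaps.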
The head 40 is mounted on a head adapter 56 which in turn is attached to an arm 57 fixed to a block 58, as shown in FIGURE 3. It reposes on, or slightly above, a magnetic cylinder 59, and by the method previously described in the pending application Serial Number 221,362, is translated parallel to the axis of the cylinder 59. As a result of the rotation of the cylinder and the linear motion of the head, information elements are recorded in a helical path 60. The width of the helical recording 60 is determined by the width of the ganged head from its top most gap to its bottom most gap. This same width determines the height of the recorded characters.

Referring now both to FIGURES 3 and 4, the exact character desired is recorded upon the cylinder by having the correct gap 43 activated at the correct time in relation to the motion of the head and the rotation of the cylinder. More precisely, to record a given character A, as shown in FIGURE 3 under 61, there may be considered to be superimposed upon this A a matrix or grid 76, where one axis of the grid is time and the other is the number of channels (FIGURE 4). The character to be recorded determines which channels shall be activated in a given time zone. Thus, it can be seen in the case of the letter A that in time zone number one gaps number one, two, three, four, five, six and seven would be activated. As the cylinder rotates, time zone number two occurs, and gaps number five and number eight are activated. In time zone number three the cylinder has again rotated slightly, and gaps number five and number nine are activated. For the crossbar of letter A gap number five would remain activated from time zone number one to time zone number N.

The number of gaps 43 within a given width or distance determines the proximity of the graphic spots which compose the character outlines in the final print. It is, therefore, a compromise between the degree of definition desired in the resulting print and the cost of forming and operating the gaps 43 that leads to the decision of how many gaps are to be provided. An inspection of FIGURE 4 makes it evident that the height of the recorded character is in accordance with the width across the head from its left-most gap to the gap 43N at its extreme right.

Special attention should be given to the direction of the flux lines which are produced by each of the magnetic transducers of the head. The projections of these flux lines upon the magnetic cylinder (FIGURE 3) are parallel to the channel axis of the superimposed grid 76 and perpendicular to its time axis (FIGURE 4). This ensures a proper placing of the magnetic ink which, as has been explained in the earlier application Ser. No. 221,362, can be expected to adhere only to the poles of the magnets. If the position of the transducers and their flux lines were rotated by 90 degrees, a similar adhesion effect could be obtained only through application of modulated carrier waves. The positioning of the transducers and their flux lines eliminates, therefore, the need for the use of such modulated carrier waves and, thus, simplifies the circuitry involved.

It should be stressed that, although the recording appears in parallel lines of information on the cylinder 59 at the helix angle, these lines do not form one continuous path. Whenever the final print is supposed to shift from one line to the next, a corresponding interruption of the path formed by the helical lines on the cylinder is necessary, as will be easily understood. As to the spacing of the lines in the final print, it follows the spacing of the lines on the cylinder, where a unit would correspond to the height of the characters recorded plus the spacing desired between recorded lines. As a result, the lines of recording on the cylinder are at the helix angle indicated in FIGURE 3. Such a position of the head will ensure a proper position of the character lines in relation to the helical recording line. It is self-evident, however, that there could be numerous other arrangements to accomplish the same purpose.

In order to produce a final print in which the writing is in horizontal lines, the paper is fed tangentially to the cylinder 59 at an angle equal to the helix angle φ, as shown in FIGURE 5. Thus, straight lines of print are produced on the paper. This is the difference between the printing process disclosed in the earlier application Serial Number 221,362 and the one described here. Otherwise, all other details of the printing process shown in the earlier application, such as powder dusting, color printing and so on, are equally applicable here.

Means for rotating the cylinder, for moving the magnetic head along the cylinder and for feeding the printing paper, as well as means for the synchronization of such motions, are known. Any of the known devices may be used for the aforementioned purposes without specific explanation.

The interrelationship of circuits and functions

The specification has dealt so far with the printing process as such, and with the electromagnetic and mechanical means used for this purpose. The remainder of the specification shall describe and explain the various component parts and circuits which connect the coded signal input terminal to the electromagnetic recording apparatus. As a primary effect of the operation of this apparatus, the electron beam in the cathode ray tube 124 is deflected to the area occupied by a specific character on the tube's face 131, which operation will be discussed in connection with FIGURE 7.

As soon as this primary shift is accomplished, the scanning of the specific character begins (FIGURE 8). The vertical motion within this scanning process is caused by the operation of the vertical scanning circuits, shown on the right side bottom of FIGURE 10, from which pulses are sent into the beam deflection circuits 123 (FIGURE 9). The horizontal motion within the scanning process is brought about by the operation of the horizontal scanning circuits, illustrated in FIGURE 12, which likewise send pulses into the aforementioned beam deflection circuits. The driving pulses which are produced through the rotation of timing disc 191 (FIGURES 11 and 11A) are used to initiate the individual horizontal scanning steps (FIGURE 12), to set the flip-flop circuit 550 (FIGURE 13) and to cause the transmission of a new set of information signals from the information input 122 as soon as the last set of such signals has been transformed into print. The flip-flop circuit 550 stimulates, when set, the oscillator circuit 589 to emit, for every individual driving pulse which initiates a horizontal scanning step, a number of clock pulses. The number of these clock pulses may be, preferably, equal to the number of vertical scanning steps to be performed within one single horizontal position and also equal to the number of channels on the magnetic transducer. The clock pulses are consecutively distributed by distributor 126 among a corresponding number of output lines leading both to the vertical scanning circuits and to the individual channels on the magnetic transducer (FIGURE 10).

Any light emissions from the cathode ray tube are observed by the photocell 125. The signals emanating from the photocell are then amplified by the amplifier circuit 158 and travel to the second inputs of all gates 244 which form a part of the output circuits of distributor 126. They pass through these gates only when there is a coincidence of signals coming both from the photocell and from the distributor, the second group of signals arriving at the first input to these gates. These signals are then applied to the coils on the magnetic transducer which, in turn, magnetizes the surface of the magnetic cylinder 59.
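The grid 76 scheme described earlier for the letter A (a set of channels activated in each time zone) can be sketched as a small data model. This is an illustrative sketch only; the dictionary layout and function name are assumptions, not taken from the patent, and only the three time zones spelled out in the text are shown.

```python
# Hedged sketch: a character represented on grid 76 as a mapping from
# time zone to the set of gaps 43 activated in that zone, following the
# letter A example in the text. Zone numbering here is illustrative.
LETTER_A = {
    1: {1, 2, 3, 4, 5, 6, 7},  # time zone one: gaps 1 through 7
    2: {5, 8},                 # time zone two: gaps 5 and 8
    3: {5, 9},                 # time zone three: gaps 5 and 9
}

def gaps_for_zone(char_grid: dict, zone: int) -> set:
    """Which head gaps to energize while the cylinder passes this zone."""
    return char_grid.get(zone, set())

# The crossbar: gap 5 stays activated across every listed time zone.
assert all(5 in gaps for gaps in LETTER_A.values())
print(sorted(gaps_for_zone(LETTER_A, 2)))  # [5, 8]
```

Recording a character thus reduces to stepping through the time zones in synchronism with the cylinder and energizing the listed gaps in each.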
The characters on the face of the tube

FIGURE 7 gives a side view of the cathode ray tube 124 and a front view of its face 131 upon which various characters 128 have been applied. Such application may be either directly to the face itself or to a stencil which could be fitted over the face. In the latter case, it would be quite easy to remove the stencil and rearrange the characters in any desired form and order before putting it back again.
There are two ways of presenting these characters. Either the characters may be opaque in comparison with the background or vice versa. It also may be possible to use fluorescent material either for the characters or for the background. In either case the difference between black and white or between dark and light spots will be observed by the photo-electric cell 125 which would send the corresponding signals to the recording apparatus.
It should also be noted that the application of fluorescent paint either to the characters or to the background will create two areas of different sticking potential, because there would be a greater flow of secondary electrons from the fluorescent area than from the plain glass area. It might be possible, therefore, to eliminate the photocell and place, instead, a screened capacitor over the face of the tube. Such a device could be used, as is well known to anyone versed in the art, to detect the variations in voltage due to the variations in sticking potentials and to emit the signals which, otherwise, would be sent out by the photocell. It should be obvious that, for this purpose, any painting material other than fluorescent substance could be used which would give rise to a sticking potential different from that of plain glass.
In yet another method, the characters could be cut out of a sheet of conducting material and said sheet placed in the target area of the cathode ray beam. Through a suitable circuit a lead can be taken from said sheet and sent through an applicable amplifier to the distributor 126. Thus, during the scanning process signals would result from the background, but not from the character itself. Conversely, the characters could be composed of a conducting material, and not so the background. The characters could then be linked together, and any resulting signals from the scanning operation could be sent through an amplifier to the distributor 126.
It is to be understood that the characters on the face of the cathode ray tube are set up in the normal horizontal position. They are reproduced on cylinder 59 as a mirror image and rotated 90 degrees plus the helix angle φ due to the direction of scanning, the motion of the cylinder and the positioning and motion of the electro-magnetic head. During the printing operation indicated in FIGURE 5 the characters are reversed again and put back into the normal horizontal position on the printed paper, thus producing legible information.
FIGURE 8 represents an enlarged sample character A upon which, for purposes of explanation, a matrix or grid 129 is assumed to be superimposed. Said grid 129, similar to grid 76 shown in FIGURE 4, has time as one axis and channel numbers as the other. A cathode ray beam is then caused, by the action of the beam positioning and scanning apparatus 123, to scan the A by first passing in time zone number one from channel number one to channel number N, then, advancing to time zone number two, by passing from channel number one to channel number N, then advancing to time zone number three and so on to time zone number N. The modulated light is observed by the photo-electric cell 125, and the resulting signals, which denote either a black spot or a white spot, are transmitted through amplifier 158 to the gates 244 in the output circuits of distributor 126 (FIGURE 10). It should be noted that, for best results, the characters which are applied to the face of cathode ray tube 124 should be of such a shape as to best conform to the assumed grid 129. This will give rise to smoother and more uniform characters in the resulting magnetic recording on cylinder 59. It should further be noted that the movement of the cathode ray beam from one time zone to another is synchronized with the rotation of this cylinder. The movement of the cathode ray beam within a given time zone to all the channels is assumed to constitute one time unit while the corresponding sweep within the subsequent time zone is considered to be another time unit. The number of time zones of which a character may be composed is predetermined and derived from a compromise between the precision desired and the limitations and cost of the equipment necessary to perform the task. It has been found that the use of a sufficient number of time zones per inch leads to advantageous results.
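The scan order just described (within each time zone the beam passes from channel one to channel N, then the next time zone begins) can be sketched as a simple generator. This is an illustrative sketch only; the function name is an assumption, and the zone and channel counts are free parameters rather than values given in the patent.

```python
# Hedged sketch of the beam's raster order over grid 129: channel 1..N
# within time zone 1, then channel 1..N within time zone 2, and so on.
def scan_order(time_zones: int, channels: int):
    """Yield (zone, channel) pairs in the order the beam visits them."""
    for zone in range(1, time_zones + 1):
        for channel in range(1, channels + 1):
            yield zone, channel

steps = list(scan_order(3, 4))  # e.g. 3 time zones, 4 channels
print(steps[:5])  # [(1, 1), (1, 2), (1, 3), (1, 4), (2, 1)]
```

Each full pass over the channels within one zone corresponds to one of the time units mentioned in the text.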
Deflection and scanning

A method of positioning a beam within a cathode ray tube was disclosed in the pending application Serial Number 98,178 of Messrs. John Presper Eckert, Jr. and Herman Lukoff, filed on June 10, 1949, now Patent No. 2,969,478. Its practical application, for the purposes of this invention, is shown in FIGURE 9.
In this method, a combination, for example, of six possible signals is used to determine the beam position desired. That is, each different combination of these six signals represents a specific character. The first three signals (the top three signals in the example of FIGURE 9) deflect the cathode ray beam in a horizontal direction, and the last three signals (the bottom three signals in the example) deflect the beam in a vertical direction. As a result, a different and independent position is allocated to each character. There is no danger that two different characters would possess the same location on the face of the tube, since no two characters possess the same combination of signals.
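This addressing scheme, in which three signals select the horizontal position and three the vertical, can be sketched as follows. This is an illustrative sketch only; the function name and the bit ordering of the six signals are assumptions, not taken from the patent.

```python
# Hedged sketch of the six-signal addressing scheme: three binary signals
# give the horizontal deflection and three give the vertical, so every
# distinct combination selects one of 8 x 8 = 64 character locations.
def beam_position(signals):
    """signals: six booleans (illustratively, lines 120a..120f).
    Returns (horizontal, vertical) in deflection units, each 0..7."""
    assert len(signals) == 6
    h = sum(bit << i for i, bit in enumerate(signals[:3]))  # 2^0 + 2^1 + 2^2
    v = sum(bit << i for i, bit in enumerate(signals[3:]))
    return h, v

# Every distinct signal combination yields a distinct location:
positions = {beam_position([(n >> b) & 1 for b in range(6)]) for n in range(64)}
print(len(positions))  # 64
```

No two combinations collide, which mirrors the patent's observation that no two characters can occupy the same location.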
The beam positioning and deflecting circuits as well as the horizontal and the vertical scanning circuits are operated upon the same principles. In each case, resistors are arranged in series and used to produce predetermined voltage drops, if and when a valve assigned to each individual resistor begins to operate. First, such voltage drops are used to selectively deflect the cathode ray beam, both horizontally and vertically, from its original position (FIGURE 9). In the same way, the grid potentials in the horizontal scanning tube 377 and in the vertical scanning tube 359 are gradually decreased, due to the increasing voltage drops occurring within the resistor arrangements shown in FIGURES 10 and 12, respectively. As a result of this decrease in grid potential, both scanning tubes draw less and less current from their respective positive potential supply terminals 86 and 88, thus gradually decreasing in very small degrees the already existing voltage drop within the resistor arrangements shown both on top and at the bottom of FIGURE 9. The effect of this gradual decrease in the voltage drop is that the voltages both on the horizontal and vertical beam deflection plates 385 and 384 gradually increase in very small degrees and cause the beam to sweep in a direction which is opposite to the direction of increasing deflection. Scanning consists, therefore, in a gradual reduction of deflection in intermediate steps, thus producing a gradual and partial retrogression of the beam through a plurality of intermediate positions.
The circuit for obtaining horizontal deflection of the beam is shown in the top half of FIGURE 9 while a practically similar circuit for obtaining vertical deflection is illustrated in the bottom half of the diagram. Through each of the six input lines 120a to 120f either a positive or a negative signal is sent, depending upon the combination which determines the character. For purposes of explanation, consider input line 120a, which leads to the signal input grid 367 of triode 365a and also to the anode of diode 374a. The cathode 368 of said triode 365a is linked through a resistor to a zero potential terminal, while the cathode of said diode 374a is linked to a plus 10 volt potential.
If a negative potential is sent through input line 120a, the minus 11 volt potential will have no effect upon diode 374a, nor will it cause triode 365a to operate. Similarly, there will be no effect on diode 374b and triode 365b or on diode 374c and triode 365c, if a negative signal is arriving through input lines 120b and 120c, respectively.
Assuming now that all three signals coming through input lines 120a, 120b and 120c are negative, then none of the three triodes 365a, 365b and 365c will operate. In consequence, none of these three triodes will draw any current from the positive potential terminal 86 and thereby cause a voltage drop across the resistors 370a, 370b, etc. Under this condition, and provided that triode 377 is not operating, the electron beam of the cathode ray tube may be deflected to any desired position at the extreme right of the face of the tube. This may be accomplished through the application of an appropriate voltage to terminal 86 and from there to the horizontal deflection plate 385. Any voltage drop or combination of voltage drops, on the other hand, will result in a lesser degree of deflection. Such a voltage drop or voltage drops may be brought about by the arrival of positive signals in the input lines 120a, 120b and 120c.
A positive signal arriving through input line 120a, on the other hand, causes triode 365a to operate, overcoming the clamping effect of diode 374a and producing a voltage drop across resistor 370a.
In a similar manner the triode 365b will be caused to operate whenever a positive signal arrives through input line 120b. In this case, however, the anode 366 of triode 365b is connected to the positive potential terminal 86 through resistor 370a plus resistor 370b, thus creating a greater voltage drop than was caused by the operation of triode 365a. It can similarly be seen that a positive signal in input line 120c will, through the consequent operation of triode 365c, cause a still larger drop across resistors 370a, 370b and 370c in the line connecting positive potential terminal 86 and deflection plate 385.
The distance between horizontal deflection plates 385 and 383 may be subdivided into eight segments. The value of the resistors 370a, 370b and 370c can be so chosen that the operation of triodes 365a, 365b and 365c, either singly or in combination, will produce the following results: If neither one of these tubes operates, the beam will be in the original position. If triode 365a operates, the signal impressed upon deflection plate 385 will be such as to move the beam 2⁰ or 1 unit to the left. If triode 365b operates, the beam will move 2¹ or 2 units from the original position. If both triodes 365a and 365b operate, the beam will move 2⁰ plus 2¹ or 3 units from the original position. If triode 365c operates, the beam will move 2² or 4 units from the original position. Similarly, the operation of valves 365a and 365c will result in a motion of 2⁰ plus 2² or 5 units, the operation of valves 365b and 365c will produce a motion of 2¹ plus 2² or 6 units, and the operation of tubes 365a, 365b and 365c will result in a motion of 2⁰ plus 2¹ plus 2² or 7 units. This arrangement makes it possible to use eight different horizontal positions on the face of the cathode ray tube 124 where the individual characters may be placed.
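The binary weighting of the three triodes (one, two and four deflection units) can be sketched as follows. This is an illustrative model only; the data-structure layout and function name are assumptions, not component values from the patent.

```python
# Hedged sketch of the series-resistor arrangement: each conducting triode
# contributes a binary-weighted displacement, so any sum from 0 through 7
# units can be produced, giving eight horizontal positions.
UNIT_WEIGHTS = {"365a": 1, "365b": 2, "365c": 4}  # 2^0, 2^1, 2^2 units

def deflection_units(conducting: set) -> int:
    """Total beam displacement, in units, for a set of operating triodes."""
    return sum(UNIT_WEIGHTS[t] for t in conducting)

print(deflection_units({"365a", "365b"}))           # 3
print(deflection_units({"365a", "365b", "365c"}))   # 7
```

In modern terms the arrangement behaves as a 3-bit binary-weighted digital-to-analog converter driving the deflection plate.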
Due to the symmetry of the circuits connected to the first three input lines and those connected to the other three input lines we can, by using the last or bottom three input lines 120d, 120e and 120f, cause the beam to move to any one of eight vertical positions. Thus, there are eight times eight or sixty-four locations on the face of the cathode ray tube 124 to be allocated to the individual characters.

Once the original beam position is secured, the next process is to scan the character. This is accomplished by triode 359 in the vertical deflection circuit and by triode 377 in the horizontal deflection circuit. Triode 359 causes the vertical scanning due to the action of an incoming signal, similar to the one shown above the input line, to grid 361. This signal operates triode 359 in such a manner as to cause successively lower voltage drops across the series resistors 340a, 340b, 340c and 340d. As a result, the cathode ray beam rises slightly with each successively lower voltage drop. The total amount of the beam rise during scanning is equal to the height of a standard character 128 as applied to the face of the tube. The rise is also caused to occur in N steps, the information scanned in each step being transmitted, in the manner previously discussed, to the corresponding gap on the magnetic head. When the Nth or last step is reached, the signal is removed and the beam drops correspondingly to the original level at which it was before the signal was applied. The cathode 360 of said triode 359 is connected to a zero potential terminal through resistor 363 which is variable for purposes of scanning adjustment.
While this is occurring, triode 377 in the horizontal deflection circuit is receiving a signal, similar to the one shown below the input line, to its grid 379. The cathode 380 of this triode is similarly connected to a zero potential terminal through variable resistor 381, said resistor being variable for purposes of scanning adjustment. That is, as shown in FIGURE 8, the horizontal traverse of the beam is accomplished in N steps, each step occurring in one corresponding time zone, as was previously discussed. It should be noted that the horizontal rightward beam movement does not occur until after the beam has reached the Nth step in its vertical climb.

Thus, in any given square or rectangle allotted to any individual character on the face of the tube, the cathode ray beam will start at the bottom left hand corner, and rise vertically to the top of the square; the beam will then drop back to the bottom of the square, be displaced slightly to the right and then again rise vertically to the top of the square, at which time it will again drop down and be displaced slightly to the right before beginning its next climb. This will continue until the entire square has been scanned.

Vertical scanning

FIGURE 10 illustrates schematically and partly in block diagram circuits which may be used both to energize the selected coils on the magnetic head and to operate the triode 359 which, in turn, supervises the vertical scanning action of the beam in the cathode ray tube.
The distributor 126 emits signals which are sent out in consecutive order from output lines one to N. Thus, only one output line is transmitting a signal at a time. The number of output lines N is equal to the number of channels on the assumed grids 76 and 129 which also equals the number of gaps provided in the magnetic head.
The consecutive signal distribution from output lines one to N is synchronized with the vertical sweep of the cathode ray beam within a given time zone. The time involved in going from output line N back to output line one is synchronized to occur slightly before or just during the time involved in going from one given time zone to the next time zone on the assumed grids 76 and 129. Furthermore, both these actions are in turn synchronized with the rotation of the cylinder 59 and with the linear motion of the magnetic head.
All output lines from the distributor 126 involve similar circuits but, for the sake of simplicity, only the complete circuit for the first output line is shown.
The signal from output line one as depicted to the left of output line one is fed to the input grid 233 of triode 235, said input grid being returned through a resistor to ground. The anode 234 of said triode 235 is connected through a resistor to a suitable positive potential terminal. In a similar manner, the cathode is connected to a suitable positive potential terminal through a third resistor. The resulting signal, which is now reversed, as depicted above the anode output line of triode 235, is sent through a suitable resistor 237 to the first control electrode 241 of gate 244. Said first control electrode 241 is connected to a negative potential terminal through resistor 238. The cathode 242 of said gate is linked directly to a zero potential terminal while the anode 240 is connected to a suitable positive voltage through resistor 239.
The signals which are produced by the photo-electric cell 125 are simultaneously sent through amplifier 158 to the second control electrodes 243 of the gates 244 in each of the output line circuits of the distributor 126. These gates in the different output line circuits will, however, not conduct unless a positive gating signal is also present upon their first control electrodes. Since the signal to the first control electrode is sent out by the distributor 126 consecutively into the output lines one to N, the signals from the photo-electric cell 125, which result from the vertical scanning of a character on the cathode ray tube 124, will be passed just as consecutively through these gates.
If, for the purpose of explanation, the number N is considered to be equal to 10, then the following series of events will occur:
(1) The beam of the cathode ray tube will vertically scan a given segment (i.e. within a given time zone) of a character in ten even steps.
(2) The photo-electric cell 125 will observe this scanning and send signals through amplifier 158 to the second control electrodes of the ten gates 244, one gate 244 existing in each of the ten output line circuits of the distributor 126. The signals which the photo-electric cell sends out substantially represent either a black spot or a white spot.
(3) At the same time that the photo-electric cell emits its first of the ten information signals, the distributor will send a gating pulse to the first control electrode of gate 244 in the first output line circuit, allowing the information signal to pass through the gate of output line one, and of output line one only. Then, when the photo-electric cell emits its second signal, the distributor will activate the first control electrode of gate 244 in output line two, thus permitting the information signal to pass through the gate of output line two. In this way, the ten information signals will be distributed consecutively among the ten output lines.
(4) Each output line circuit is connected in a suitable manner to one of the ten coils 44 which exist on the magnetic head. Thus, having passed the gate 244, the signal energizes the coil in the corresponding channel of the head. As a result, a magnetic dot is produced in sequence on the previously described cylinder 59 for every black spot observed by the photo-electric cell as the beam of the cathode ray tube makes one vertical sweep.
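The series of events in items (1) through (4) can be sketched as a small routing function: the distributor's consecutive gating pulses line up with the photocell's consecutive readings, so the kth black-spot signal energizes the kth coil. This is an illustrative sketch only; the function name and the representation of signals as a list of bits are assumptions, not taken from the patent.

```python
# Hedged sketch of the gates 244: an information signal from the photocell
# reaches coil k only on coincidence with the distributor's gating pulse
# on output line k. The distributor's consecutive stepping is modeled
# implicitly by iterating the readings in channel order.
def route_scan(photocell_bits):
    """photocell_bits: black(1)/white(0) readings for one vertical sweep,
    in channel order. Returns the 1-based coil numbers energized."""
    energized = []
    for line, bit in enumerate(photocell_bits, start=1):
        # Gate 244 on this line conducts only when its gating pulse is
        # present (this iteration step) AND the photocell reports black.
        if bit:
            energized.append(line)
    return energized

# One sweep over ten channels: black spots on channels 1, 2, 5 and 10.
print(route_scan([1, 1, 0, 0, 1, 0, 0, 0, 0, 1]))  # [1, 2, 5, 10]
```

Each returned coil number corresponds to one magnetic dot recorded on cylinder 59 during that vertical sweep.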
After leaving the anode 240 of the selected gate 244, the information signal is sent through capacitor 245 to resistor 246. Said resistor 246 terminates at a suitable negative potential. Diode 247 acts as a D.C. restorer to keep the signal at the desired voltage level by preventing the average voltage from shifting in the presence of an unsymmetrical wave. The anode of said diode 247 terminates at the same negative potential terminal as does resistor 246. The signal is then sent through resistor 248 to the input grid 250 of the power tube 252. The anode 249 of said power tube 252 is linked to a suitable positive potential terminal while the cathode 251 is connected through coil 44 on the magnetic head to ground.
The very same gating pulse which consecutively activates the first control electrodes 241 of the gates 244 in each of the individual output line circuits also activates consecutively the input grids 354 of triodes 356, one of which exists in connection with each individual output line circuit, as also shown in FIGURE 10. These triodes 356 are used to operate the aforementioned triode 359 (FIGURE 9) which, in turn, supervises the vertical scanning action of the beam in the cathode ray tube.
The cathodes 355 of all N triodes 356 are connected to a zero potential terminal while the anodes are connected through suitable resistors 357 to a positive potential terminal and also to the input grid 361 of the triode 359. When the first output line is operated, a signal is sent to the input control grid 354 of triode 356a. The triode 356a then operates with a resulting voltage drop across the resistor 357a which action sends, at the same time, a signal to the input grid 361 of triode 359. By the action of the distributor 126, the second output line circuit will next be operated and a signal sent to the corresponding input grid of triode 356b. In this case, however, a higher voltage drop will result, due to the action of the additional resistor 357b and, thus, a correspondingly lower voltage will arrive at the input grid of triode 359. As each one of the triodes 356 is operated successively, there will be higher voltage drops and accordingly lower voltages at the input grid of triode 359. In this way, the desired signals are sent to triode 359 not only to cause the vertical scanning action, but the scanning operation itself is also synchronized with the action of the distributor 126, so that any information observed by the photo-electric cell during the scanning operation will always be sent to the correct and corresponding channel 43 on the magnetic head.
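The cumulative drops across the series resistors 357 form a descending voltage staircase at the grid of triode 359, one step per output line, which is what keeps the vertical sweep in lockstep with the distributor. A rough numerical model follows; the supply voltage and per-step drop are assumed values for illustration only, not figures from the patent.

```java
// Staircase model of the vertical-deflection grid voltage: each successive
// conducting triode 356 adds one more series resistor 357 to the current
// path, so the voltage at the grid of triode 359 falls by one step.
public class VerticalStaircase {
    // Grid voltage after 'step' output lines have fired (step = 1..N).
    // Assumes equal resistors, hence a linear staircase.
    public static double gridVoltage(double supply, double dropPerStep, int step) {
        return supply - dropPerStep * step;
    }

    public static void main(String[] args) {
        double supply = 200.0, drop = 15.0; // assumed illustrative values
        for (int step = 1; step <= 10; step++) {
            System.out.printf("step %2d -> %.1f V%n",
                    step, gridVoltage(supply, drop, step));
        }
    }
}
```

Each of the ten steps thus corresponds to one distributor position, giving the monotonically decreasing grid voltage the text describes.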
Horizontal scanning
FIGURE 11 illustrates a timing disc 191 which may be attached to the same shaft to which the magnetizable cylinder 59 is fixed. Said disc 191 contains at its outer extremities three circuit lines 196, 197, and 198, as indicated in FIGURE 11A, each of them being grounded in a suitable manner through a fourth circuit line and brush 187. Two columns of contacts 180 and 181 are connected to the circuit line 196, while one column of such contacts 182 and 183, respectively, is attached to circuit lines 197 and 198, respectively. There are four brushes 192, 193, 194, and 195 to make connections to the columns of contacts 180, 181, 182, and 183, respectively, as the disc rotates.
The contacts in these columns are arranged in repetitive groups along the circumferential lines 196, 197, and 198, each group comprising an equal number of rows as, for example, ten rows in the given illustration. The space occupied by any one such group along the circumferential lines on the cylinder represents the distance required for the printing of the width of a character plus the space desired between this and the subsequent character. The first seven rows of contacts, which occur in columns 180, 181, and 182, cause the horizontal scanning operation to develop in seven equal steps, while the last contact, which is three rows long and occurs in column 183, is used both to supply the necessary spacing between subsequent characters and to send a signal to the information source 122 to set up the signals for the next character. It should be remembered that, for purposes of explanation, the number of time zones N on both assumed grids 76 and 129 has been chosen equal to seven. This, however, is not a limiting case.
It should further be noted that the repetitive groups of contacts do not occur continuously around the circuit lines 196, 197, and 198. A suitable vacant space is necessary in order that the resulting magnetic printing will not occur in a continuous helix on the cylinder 59. The reason for desiring this was explained in the previously discussed method of print transfer.
The circuits required in connection with the operation of the timing disc 191 are shown in FIGURE 12. The brushes 192, 193, and 194 are connected, respectively, to input grids 212, 219, and 226 of triodes 210, 217, and 224. Said grids 212, 219, and 226 are each returned to a suitable negative potential terminal. When brush 192 makes its first connection with a contact, the resulting signal will reach the input grid 212 of triode 210 and cause the triode to conduct. The anode of said triode 210 is connected to a positive potential terminal through resistor 215a and also to the signal input grid 379 of the aforementioned triode 377 which,
in turn, supervises the horizontal scanning action of the beam in the cathode ray tube (FIGURE 9). Thus, operation of triode 210 will cause a voltage drop across resistor 215a, and an appropriate signal will be sent to the input grid 379 of triode 377.
The next connection with a contact will be made by brush 193. This, in turn, will cause triode 217 to operate with a resulting voltage drop across resistances 215a and 215b. After this, both brushes 192 and 193 will make connections, and next, only brush 194 will make contact. This is followed by combinations of brushes 192 and 194, 193 and 194, and 192 and 193 and 194.
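The seven brush combinations just listed (192; 193; 192+193; 194; 192+194; 193+194; 192+193+194) step through the pattern of a 3-bit binary counter, so a weighted choice of resistors 215a, 215b and 215c yields seven distinct, monotonically increasing voltage drops. The sketch below uses a 1:2:4 weighting; that specific ratio is an assumption for illustration, since the patent only requires that the seven drops be distinct.

```java
// Models the seven brush combinations on the timing disc as a 3-bit code.
// With resistors weighted 1:2:4 (assumed), each combination produces a
// distinct drop, stepping the horizontal deflection voltage seven times.
public class HorizontalSteps {
    // b192/b193/b194: whether that brush is touching a contact.
    public static int drop(boolean b192, boolean b193, boolean b194) {
        return (b192 ? 1 : 0) + (b193 ? 2 : 0) + (b194 ? 4 : 0);
    }
}
```

Applying the combinations in the order the disc presents them produces drops 1 through 7 in sequence, matching the seven equal horizontal steps described in the text.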
The value of the resistors 215a, 215b and 215c can be so chosen that the operation of triodes 210, 217 and 224, either singly or in combination, will result in seven different forms of voltage drops, as was previously explained in connection with the horizontal deflection circuits. These gradually increasing voltage drops will lead, in turn, to a gradual decrease in the grid potential of the horizontal scanning tube 377, the effect of which has already been discussed hereinabove in reference to FIGURE 9.
Only the first seven rows of contacts, in a given group of ten rows, are used to produce the horizontal scanning action in the cathode ray tube. After the seventh row of contacts has passed the brushes 192, 193 and 194, the contact in column 183, which is three rows long, will connect only with brush 195. Said brush 195 is connected to a minus 21 volt potential through resistor 200.
Upon connection of brush 195, the resulting signal is sent through a differentiating capacitor 201 to the input grid 206 of triode 204. Both the capacitor 201 and the input grid 206 are connected to a minus 21 volt potential through resistor 202. The anode of said triode 204 is connected to a suitable positive potential terminal, so that, at the first connection of brush 195 with the contact in column 183, a sharp positive cathode signal will be sent and, when the brush breaks connection with this contact, a sharp negative cathode signal will result. Since no scanning occurs during the passing of the last three rows, a proportional space will be effected on the surface of the cylinder 59 without any magnetic printing. This distance represents the space between the characters to be recorded. The sharp positive and negative cathode signals are sent to the information source 122 shown in FIGURE 9. The first positive pulse causes the apparatus 122 to remove the combination of signals which represents the character just scanned and also to set up the signal combination which represents the next character to be printed. The following negative pulse causes the apparatus to emit the new signal combination. Thus, time is provided, equal to the time necessary for three rows of contacts to pass a given point, which is used to remove the cathode ray beam from the preceding character and to position the beam at the next character to be scanned.
Synchronization
FIGURE 13 represents a circuit which may be used to synchronize the horizontal and vertical scanning operations. Some appropriate wave forms are shown above the segments of the circuit in which they occur.
The series of signals which is sent to triode 210 from brush 192 in FIGURE 12 is also transmitted to one side of differentiating capacitor 430 in FIGURE 13. There is a positive signal from capacitor 430 whenever brush 192 connects to one of the contacts in column 180, and a negative signal whenever this brush disconnects (FIGURE 11A). Since there are four such contacts within each repetitive group of column 180, capacitor 430 emits four positive and four negative signals during the passage of any one of these groups.
The other side of capacitor 430 is linked to a zero potential terminal through a suitable resistor 431. The series of signals emanating from this capacitor is then directed into two paths. The negative signals of the series are effective on the cathode 434 of diode 432. The anode 433 of said diode is linked to a zero potential terminal through resistor 435 and also connected to the input grid 438 of triode 436. Said triode 436 has its cathode 437 linked through a resistor to ground potential and its anode 439 connected to a suitable positive potential terminal through resistor 440. The normally conducting triode 436 is shut off by the signal from diode 432, and the resulting signal is sent through anode 439 to the condenser 442. The other end of condenser 442 is linked to a negative potential terminal through resistor 443 and also to the first control electrode 445 of dual control grid tube 444. The cathode 447 of tube 444 is linked directly to ground potential. Thus, the signal which is sent out by diode 432 is amplified and reversed by triode 436, then, after passing through condenser 442, finally amplified and again reversed by tube 444.
This action of tube 444 occurs only when there is no blocking or opposing signal present on the second control electrode 446. Assuming for the instant that there is no opposing signal present on this second control electrode, the resulting negative signal is sent out through anode 441 into the set line of flip-flop circuit 550.
The positive signals in the aforementioned series, which were not accepted by diode 432, are, however, effective on triode 448. They are applied to its input grid 450 through a resistance-capacitance coupling arrangement including capacitor 452. The cathode 451 of said triode 448 is linked directly to ground potential, while its anode 449 is connected through a suitable resistor 554 to a positive potential terminal. The positive signals arriving at said input grid 450 are inverted and amplified and then applied to the set line of flip-flop circuit 550.
Thus, in the line connecting triode 448 and tube 444 with the flip-flop circuit 550, there occurs a series of consecutive negative signals. Each signal represents the fact that the cathode ray beam has moved horizontally from one time zone to another. Thus, the vertical scanning operation should begin at the arrival of one signal and be completed before the arrival of the next signal. This is accomplished by the flip-flop circuit 550 and the oscillator circuit 589, shown in FIGURE 13, as follows:
The negative signal sent either by tube 444 or by triode 448 sets the flip-flop. Thus, the triode 562, which is normally on, is shut off by the incoming negative signal, and, consequently, triode 556 begins to conduct.
A signal which results from the operation of said triode 556 is sent to the cathode of diode 572. The anode of said diode is connected through a resistor to ground potential and also to the input grid of triode 576 of the oscillator circuit 589. This circuit is of the conventional multivibrator type, its operation being dependent upon the operation of diode 572. Said clamp apparatus 572, when not operating, causes the oscillator circuit to stop in a predetermined position, so that, when the diode 572 again operates and, thus, causes the circuit to function, the resulting signals to the distributor 126 will always start at the same position and follow the very same pattern. The main requirement for the oscillator apparatus is that it be fast enough to furnish the desired number of signals in the desired time. That is, from the time one negative pulse reaches the flip-flop circuit 550 until the time at which the next negative pulse arrives, the oscillator 589 must send out N signals, where, for our illustration, N equals ten. In this way, the tenth step in the vertical scanning process is assured of occurring before the cathode ray beam moves to the next horizontal position.
The last and final pulse emanating from the oscillator, which is the tenth pulse in the given example, is not only used, however, to initiate the last and final vertical scanning step, but serves also as the pulse which restores the flip-flop. It has been explained above that the signals from the oscillator circuit 589 are sent to the distributor 126 through which they are distributed consecutively among the N output lines and, thereby, cause the vertical scanning action as previously described. The signal which is allocated to the last and final output line arrives at the triode 356N in FIGURE 10, from which it brings the vertical scanning action to the topmost point desired. It is indicated both in FIGURE 6 and in FIGURE 10 that the same pulse which reaches the triode 356N is also sent to the input grid of triode 567, as shown in the upper left of FIGURE 13. The cathode of said triode is linked directly to ground potential while the anode is connected to a positive potential terminal through resistor 561. This resistor is, however, also a part of the flip-flop circuit 550. Thus, the operation of triode 567, by the action of the incoming signal, will cause the flip-flop circuit 550 to restore with the result that triode 556 ceases conducting, and, consequently, triode 562 begins to conduct. This action also causes diode 572 to stop conduction and, thus, also cuts off the oscillator. That condition remains until the arrival of the next pulse either from tube 444 or from triode 448, at which time the process repeats itself.
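The set/restore cycle just described can be summarized: each horizontal-step pulse sets the flip-flop, the oscillator then runs while the flip-flop is set, and the N-th oscillator pulse (routed back through triode 567) restores the flip-flop and cuts the oscillator off. The following Java model captures that behavior; the class and method names are illustrative, not taken from the patent.

```java
// Behavioral model of flip-flop 550 gating oscillator 589: exactly N
// vertical-scan pulses are emitted per horizontal step.
public class ScanSync {
    private boolean running = false; // state of flip-flop 550
    private int pulsesSent = 0;
    private final int n;             // pulses per horizontal step (ten here)

    public ScanSync(int n) { this.n = n; }

    // A negative pulse from tube 444 or triode 448 sets the flip-flop,
    // starting the oscillator from its rest position.
    public void horizontalStepPulse() {
        running = true;
        pulsesSent = 0;
    }

    // One oscillator cycle. While the flip-flop is reset, diode 572 clamps
    // the oscillator and no pulse is emitted. The N-th pulse restores the
    // flip-flop (via triode 567) but is itself still delivered.
    public boolean oscillatorTick() {
        if (!running) return false;
        pulsesSent++;
        if (pulsesSent == n) running = false; // final pulse restores flip-flop
        return true; // a vertical scanning pulse was sent to the distributor
    }
}
```

Driving the model shows that however many oscillator cycles elapse between horizontal steps, exactly ten pulses reach the distributor per step, which is the synchronization property the text asserts.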
It has already been stated that the original series of signals coming from capacitor 430 consists of eight consecutive positive and negative pulses, the four negative pulses going to tube 444 and the four positive pulses going to triode 448. This would give rise to a total of eight pulses sent from tube 444 and triode 448 to the flip-flop circuit which, consequently, would establish eight rows (time zones) of the horizontal scanning process. Since only seven rows are desired, the last negative pulse must be removed before it reaches the flip-flop circuit.
This is accomplished in the following manner:
For the first three negative pulses out of each group of four signals which are sent to the first control electrode 445 of tube 444 there is no opposing signal on the second control electrode 446, and the signals are, therefore, transmitted to the flip-flop circuit. Before the arrival of the fourth pulse, which represents the pulse to be removed, an opposing negative signal is sent to the second control electrode 446 which is large enough to completely block out the positive signal on the first control electrode. As a result, tube 444 will not operate, and no signal will be sent by anode 441 to the flip-flop circuit. Such a blocking signal can be obtained from brush 195 on the timing disc (FIGURES 11A and 12). This signal from brush 195 is sent through a differentiating circuit, as shown in the bottom section of FIGURE 13, which consists of a capacitor 489 and a resistor 490 connected to a negative potential terminal. The resulting signal is then transmitted to the input grid of triode 491, said grid being returned to a negative potential terminal through the aforementioned resistor 490. The cathode of said triode is linked directly to ground potential while the anode is connected through a resistor to a suitable positive potential terminal. Said triode 491 operates only upon receipt of a positive pulse. The resulting signal, which is inverted and amplified, is sent through the anode of this triode to the cathode of diode 496. The anode of diode 496 is connected to a positive potential terminal, the purpose of this being to cut off the signal at a desired level. The signal then proceeds to the pulse lengthening condenser 499. Said condenser 499 is connected through a resistor to ground potential and also linked to the second control electrode 446 of the aforementioned tube 444. In this way, when the negative signal is finally impressed upon the second control electrode 446, it is widened and occurs slightly before the positive signal activates the first control electrode 445, so that the positive signal will arrive during the duration of the widened negative signal. As a result, the final signal from capacitor 430 will be blocked out and tube 444 will not operate, and, thus, only the desired seven signals will arrive at the flip-flop circuit.
The contacts in column 183 are so placed on the timing disc 191 (FIGURE 11A) that they reach brush 195 before brush 192 loses connection with the fourth contact in column 180. As a result, the first signal from brush 195 is emitted before the final negative pulse leaves brush 192. This is necessary, if we are to be assured that the blocking signal from brush 195 will be present at the second control electrode of tube 444 before and while the final signal stimulated by brush 192 arrives at the first control electrode.
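The timing requirement above can be checked in miniature: the blocking window must be widened and must open before the fourth (final) pulse from brush 192 arrives, so that tube 444 passes only three of its four pulses per group (triode 448 contributes the other four, giving the seven desired signals). The following toy model uses illustrative time units that are not taken from the patent.

```java
// Toy model of the pulse-blocking gate: tube 444 passes a pulse only if no
// blocking window (from brush 195, widened by condenser 499) overlaps it.
public class PulseBlocking {
    // Counts the pulses that fall outside the blocking window.
    public static int passedPulses(double[] pulseTimes,
                                   double blockStart, double blockEnd) {
        int passed = 0;
        for (double t : pulseTimes) {
            boolean blocked = (t >= blockStart && t <= blockEnd);
            if (!blocked) passed++;
        }
        return passed;
    }
}
```

With four pulses per group and a window that opens just before the fourth, three pulses pass, matching the behavior the patent requires of tube 444.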
FIGURE 14 is a timing diagram of the complete process of scanning a given character. A review of this diagram shows the results of the synchronization of all the movements. Thus, the first three lines illustrate the pulses generated by the brushes 192, 193 and 194 which ultimately cause the horizontal scanning action to occur, as indicated in line eleven. Line four shows the pulse arriving from brush 195 just slightly before the end of the letter generation time to produce the blocking signal previously discussed and also to cause the coded information input terminal 122 to set up the signals for the next following character. Line five indicates the operation of flip-flop 550 which causes the oscillator circuit 589 to emit the ten vertical scanning signals (line six) during each individual horizontal position. Lines seven, eight, and nine show, in part, how these ten signals are distributed by the distributor among the ten circuits which cause the vertical scanning. This corresponds to the distribution of the same ten pulses among the ten circuits leading to the ten channels on the magnetic transducer.
Line ten shows, as the result, the vertical scanning action accomplished in ten consecutive steps. Line eleven illustrates the horizontal scanning operation accomplished in seven consecutive steps, while the last three time units are used for the spacing between consecutive characters.
What is claimed is:
1. In a system for magnetically recording symbolic information, in combination, scanning means for scanning the symbol to be recorded along one line in one scanning direction, means for consecutively applying N clock pulses to said scanning means so as to produce N consecutive scanning steps along that line, each of said scanning steps covering a specific portion of said symbol, a rotatable magnetic recording medium, a plurality of magnetic heads arranged in one line and communicating with said recording medium at an angle skew to the axis of rotation thereof, said means for consecutively applying N clock pulses being also connected to said plurality of magnetic heads so as to consecutively and separately render each head operable in synchronism with said scanning steps, a pick-up means associated with said scanning means for developing electrical signals characteristic of the intelligence in the symbol portion being scanned, means for applying said signals to said magnetic heads, means synchronized with the rotation of said recording medium for displacing said scanning means in a direction at right angles to said one scanning direction so as to inaugurate a new series of N consecutive scanning steps along a different line in the same scanning direction, thereby synchronizing the displacement of said scanning means with the motion of said recording medium.
2. In a system for magnetically recording symbolic information, in combination, scanning means for the stepwise scanning of the symbol to be recorded along one line in one scanning direction, each of the steps in said stepwise scanning covering a specific portion of said symbol, a rotatable magnetic recording medium, a plurality of magnetic heads arranged in one line and communicating with said recording medium at an angle skew to the axis of rotation thereof, synchronizing means for controlling the steps in said stepwise scanning of said scanning means and for consecutively and separatively rendering operable each head in said plurality of heads in synchronism with the steps in said stepwise scanning, a pick-up means associated with said scanning means for developing electrical signals characteristic of the intelligence in the symbol portion being scanned, means for applying said signals to said heads, displacing means synchronized with the rotation of said recording medium for displacing said scanning means in a direction at right angles to said one scanning direction, thereby synchronizing the displacement of said scanning means with the rotation of said recording medium, and a connection between said displacing means and said synchronizing means for inaugurating the operation of said synchronizing means through the application of signals from said displacing means.
3. The combination according to claim 1 wherein said symbol is positioned upon the face of a cathode ray tube, and wherein said scanning means includes an electron beam.
4. The combination according to claim 2 wherein said pick-up means are photoelectric means.
5. The combination according to claim 2 wherein said pick-up means are electrostatic means.
6. The combination according to claim 2 wherein said plurality of magnetic heads comprises a plurality of independent electric coils spaced apart from each other and positioned upon a single magnetic structure.
7. The combination as claimed in claim 6 wherein said magnetic structure comprises active surface areas in form of gaps, and a part of the windings of said coils being positioned within said gaps.
11. In a system for magnetically recording symbolic information, in combination, a symbol bearing medium containing a plurality of symbols to be selectively recorded, scanning means adjacent to said symbol bearing medium for the stepwise scanning of the symbol to be recorded along one line in one scanning direction, positioning means including a source of coded electrical signals connected to said scanning means for selectively positioning said scanning means upon the symbol to be recorded, each of the steps in said stepwise scanning covering a specific portion of said symbol, a rotatable magnetic recording medium, a plurality of magnetic heads arranged in one line and communicating with said recording medium at an angle skew to the axis of rotation thereof, synchronizing means for controlling the steps in said stepwise scanning of said scanning means and for consecutively and separatively rendering operable each head in said plurality of heads in synchronism with the steps in said stepwise scanning, a pick-up means associated with said scanning means for developing electrical signals characteristic of the intelligence in the symbol portion being scanned, means for applying said signals to said heads, displacing means synchronized with the rotation of said recording medium for displacing said scanning means in a direction at right angles to said one scanning direction, thereby synchronizing the displacement of said scanning means with the rotation of said recording medium, and a connection between said displacing means and said synchronizing means for inaugurating the operation of said synchronizing means through the application of signals from said displacing means.
12. In a system for magnetically recording symbolic information, in combination, a symbol bearing medium containing a plurality of symbols to be selectively recorded, scanning means including an electron beam, said scanning means adjacent to said symbol bearing medium for the stepwise scanning of the symbol to be recorded along one line in one scanning direction, each of the steps in said stepwise scanning covering a specific portion of said symbol, positioning means including a source of coded electrical signals connected to said scanning means for selectively positioning said scanning means upon the symbol to be recorded, a first and a second beam deflection circuit, each of said deflection circuits including a series arrangement of a multitude of impedance members, circuit closing members, each of said impedance members being connected to a specific circuit closing member, and each of said circuit closing members being operative only when activated through the application of a specific signal from said source of coded electrical signals, a rotating magnetic recording medium, a plurality of magnetic heads arranged in one line and communicating with said recording medium at an angle skew to the axis of rotation thereof, synchronizing means for controlling the steps in said stepwise scanning of said scanning means and for consecutively and separatively rendering operable each head in said plurality of heads in synchronism with the steps in said stepwise scanning, a pick-up means associated with said scanning means for developing electrical signals characteristic of the intelligence in the symbol portion being scanned, means for applying said signals to said heads, displacing means synchronized with the rotation of said recording medium and responsive to its motion for displacing said scanning means in a direction at right angles to said one scanning direction, thereby synchronizing the displacement of said scanning means with the rotation of said recording medium, and a connection between said displacing means and said synchronizing means for inaugurating the operation of said synchronizing means through the application of signals from said displacing means.
13. In a system for magnetically recording symbolic information, in combination, a symbol bearing medium containing a plurality of symbols to be selectively recorded, scanning means adjacent to said symbol bearing medium for the stepwise scanning of the symbol to be recorded along one line in one scanning direction, positioning means including a source of coded electrical signals connected to said scanning means for selectively positioning said scanning means upon the symbol to be recorded, each of the steps in said stepwise scanning covering a specific portion of said symbol, a movable magnetic recording medium, a plurality of magnetic heads arranged in one line and communicating with said recording medium, synchronizing means for controlling the steps in said stepwise scanning of said scanning means and for consecutively and separatively rendering operable each head in said plurality of heads in synchronism with the steps in said stepwise scanning, a pick-up means associated with said scanning means for developing electrical signals characteristic of the intelligence in the symbol portion being scanned, means for applying said signals to said heads, displacing means synchronized with the motion of said recording medium for displacing said scanning means in a direction at right angles to said one scanning direction, thereby synchronizing the displacement of said scanning means with the motion of said recording medium, and a connection between said displacing means and said synchronizing means for inaugurating the operation of said synchronizing means through the application of signals from said displacing means.
14. The combination according to claim 11 wherein said displacing means include means for inaugurating the operation of said positioning means after a predetermined motion of said recording medium.
15. The combination according to claim 12 wherein each of said deflection circuits comprises an additional circuit closing member, and wherein said additional circuit closing member in said first deflection circuit is connected to said scanning means and said additional circuit closing member in said second deflection circuit is connected to said displacing means.
16. The combination according to claim 2 wherein said synchronizing means include means for terminating the operation of said scanning means after a predetermined number of steps.
17. The combination according to claim 2 comprising, a plurality of symbols to be selectively recorded, positioning means including a source of coded electrical signals connected to said scanning means for selectively positioning said scanning means upon the area of the symbol to be recorded, and a connection between said source of coded electrical signals and said displacing means wherein signals from said displacing means condition said source for the emission of a new set of coded electrical signals.
18. The combination according to claim 1 comprising a distributor and a plurality of gating members, each of said gating members being connected to a specific head in said plurality of magnetic heads and all gating members being connected to said pick-up system, said distributor receiving said clock pulses and applying same to said gating members in sequence.
19. The combination according to claim 2 wherein said scanning means include a circuit comprising a series arrangement of a multitude of impedance members, a multitude of circuit closing members, each of said impedance members being connected to a specific circuit closing member, and each of said circuit closing members being operative only when activated through the application of a specific electrical signal, and wherein said synchronizing means comprise, a flip-flop circuit connected to said displacing means and set by signals from said displacing means, an oscillator circuit connected to said flip-flop circuit, said oscillator circuit being activated by signals from said flip-flop circuit when set and producing a number of clock pulses, the number of clock pulses emitted during said set state of said flip-flop circuit being equal to the number of circuit closing members in said scanning means and to the number of heads in said plurality of magnetic heads, a distributor interposed between said oscillator circuit, on one hand, and said circuit closing members in said scanning means and said heads, respectively, on the other hand, for distributing said clock pulses consecutively and separately among all said circuit closing members and among all said heads, and a connection between said distributor and said flip-flop circuit for restoring said flip-flop circuit with the final pulse in said number of clock pulses.
20. In a system for magnetically recording symbolic information; the combination of a scanning system for scanning the symbol to be recorded along one line in one scanning direction, a pick-up means associated with said scanning system for developing electrical signals characteristic of the intelligence in the symbol portion being scanned, a rotatable magnetic recording medium, a plurality of magnetic heads arranged in a line and communicating with said magnetic recording medium at an angle skew to the axis of rotation thereof, means for applying the output of said pick-up means to said heads, control means connected to said line scanning system for inaugurating a line scanning cycle thereby and further connected to said heads to render said heads operable in succession in synchronism with the line scanning action of said scanning system, and displacement means connected to said line scanning system and responsive to the rotation of said rotatable recording medium to displace said scanning line at right angles to said one scanning direction in accordance with the rotation of said rotatable recording medium.
21. A mechanism for forming a character pattern on a moving magnetic surface comprising in combination a magnetic head having a plurality of individual juxtapositioned head elements which are operable to form a character pattern when selectively energized during alignment of the head with the plural increments of a character position on said magnetic surface, a character generator having electrical subdivisions corresponding in number to the number of increments of a character position on said magnetic surface, each of said electrical subdivisions having individual selectively conditionable means corresponding in number to the number of said magnetic head elements, means conditioning said conditionable means to form a pattern corresponding to the characters to be formed on said magnetic surface, means responsive to the alignment of increments of a character position with said magnetic head for pulsing the corresponding electrical subdivision of the character generator, and means responsive to the pulsing of said electrical subdivisions for energizing said head elements corresponding to the conditioned conditionable means in each pulsed electrical subdivision.
22. A mechanism for forming a character pattern on a moving magnetic surface comprising in combination a magnetic head having a plurality of individual juxtapositioned head elements selectively energizable as said head is aligned with a subdivision of a character position, a timing track having bits corresponding in number to the subdivisions of the character positions on said magnetic surface, a character generator having electrical subdivisions corresponding to the number of subdivisions
|
package me.tatarka.gsonvalue;
import com.squareup.javapoet.*;
import me.tatarka.gsonvalue.annotations.GsonBuilder;
import me.tatarka.gsonvalue.annotations.GsonConstructor;
import javax.annotation.processing.*;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.*;
import javax.lang.model.element.Modifier;
import javax.lang.model.type.DeclaredType;
import javax.lang.model.type.TypeKind;
import javax.lang.model.type.TypeMirror;
import javax.lang.model.util.ElementFilter;
import javax.lang.model.util.SimpleTypeVisitor6;
import javax.lang.model.util.Types;
import javax.tools.Diagnostic;
import javax.tools.JavaFileObject;
import java.io.IOException;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.io.Writer;
import java.lang.reflect.*;
import java.util.*;
@SupportedSourceVersion(SourceVersion.RELEASE_7)
public class GsonValueProcessor extends AbstractProcessor {
private Messager messager;
private Filer filer;
private Types typeUtils;
private List<ClassName> seen;
private SearchUtils searchUtils;
@Override
public synchronized void init(ProcessingEnvironment processingEnv) {
super.init(processingEnv);
messager = processingEnv.getMessager();
filer = processingEnv.getFiler();
typeUtils = processingEnv.getTypeUtils();
seen = new ArrayList<>();
searchUtils = new SearchUtils(messager, typeUtils);
}
@Override
public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
for (TypeElement annotation : annotations) {
Set<? extends Element> elements = roundEnv.getElementsAnnotatedWith(annotation);
for (Element element : elements) {
if (element.getAnnotation(GsonConstructor.class) != null || element.getAnnotation(GsonBuilder.class) != null) {
try {
process(element);
} catch (IOException e) {
StringWriter stringWriter = new StringWriter();
e.printStackTrace(new PrintWriter(stringWriter));
messager.printMessage(Diagnostic.Kind.ERROR, "GsonValue threw an exception: " + stringWriter.toString(), element);
}
}
}
}
return false;
}
private void process(Element element) throws IOException {
SearchUtils.Search search = searchUtils.forElement(element);
ExecutableElement executableElement = search.findConstructorOrFactory();
if (executableElement == null) {
return;
}
boolean isConstructor = search.isConstructor();
boolean isBuilder = search.isBuilder();
TypeElement classElement = search.findClass();
ClassName className = ClassName.get(classElement);
if (seen.contains(className)) {
// Don't process the same class more than once.
return;
} else {
seen.add(className);
}
ClassName creatorName = ClassName.get((TypeElement) executableElement.getEnclosingElement());
ClassName typeAdapterClassName = ClassName.get(className.packageName(), Prefix.PREFIX + StringUtils.join("_", className.simpleNames()));
Names names = new Names();
// constructor params
for (VariableElement param : executableElement.getParameters()) {
names.addConstructorParam(param);
}
// builder params
TypeElement builderClass = null;
ExecutableElement buildMethod = null;
if (isBuilder) {
TypeMirror builderType;
if (isConstructor) {
builderClass = (TypeElement) element.getEnclosingElement();
builderType = builderClass.asType();
} else {
builderType = executableElement.getReturnType();
builderClass = (TypeElement) typeUtils.asElement(builderType);
}
if (builderClass == null) {
messager.printMessage(Diagnostic.Kind.ERROR, "Cannot find builder " + builderType + " in class " + classElement, element);
return;
}
for (ExecutableElement method : ElementFilter.methodsIn(builderClass.getEnclosedElements())) {
names.addBuilderParam(builderType, method);
if (method.getReturnType().equals(classElement.asType()) && method.getParameters().isEmpty()) {
buildMethod = method;
}
}
if (buildMethod == null) {
messager.printMessage(Diagnostic.Kind.ERROR, "Missing build method on " + builderType + " in class " + classElement, builderClass);
return;
}
}
addFieldsAndGetters(names, classElement);
try {
names.finish();
} catch (ElementException e) {
e.printMessage(messager);
return;
}
TypeName classType = TypeName.get(classElement.asType());
List<TypeVariableName> typeVariables = new ArrayList<>();
if (classType instanceof ParameterizedTypeName) {
ParameterizedTypeName type = (ParameterizedTypeName) classType;
for (TypeName typeArgument : type.typeArguments) {
typeVariables.add(TypeVariableName.get(typeArgument.toString()));
}
}
TypeSpec.Builder spec = TypeSpec.classBuilder(typeAdapterClassName.simpleName())
.addTypeVariables(typeVariables)
.addModifiers(Modifier.PUBLIC)
.superclass(ParameterizedTypeName.get(GsonClassNames.TYPE_ADAPTER, classType));
// TypeAdapters
for (Name name : names.names()) {
TypeName typeName = TypeName.get(name.getType());
TypeName typeAdapterType = ParameterizedTypeName.get(GsonClassNames.TYPE_ADAPTER, typeName.box());
spec.addField(FieldSpec.builder(typeAdapterType, Prefix.TYPE_ADAPTER_PREFIX + name.getName())
.addModifiers(Modifier.PRIVATE, Modifier.FINAL)
.build());
}
// Test_TypeAdapter(Gson gson, TypeToken<Test> typeToken)
{
MethodSpec.Builder constructor = MethodSpec.constructorBuilder()
.addModifiers(Modifier.PUBLIC)
.addParameter(GsonClassNames.GSON, "gson")
.addParameter(ParameterizedTypeName.get(GsonClassNames.TYPE_TOKEN, classType), "typeToken");
for (Name<?> name : names.names()) {
String typeAdapterName = Prefix.TYPE_ADAPTER_PREFIX + name.getName();
DeclaredType typeAdapterClass = findTypeAdapterClass(name.annotations);
CodeBlock.Builder block = CodeBlock.builder()
.add("this.$L = ", typeAdapterName);
if (typeAdapterClass != null) {
if (isInstance(typeAdapterClass, GsonClassNames.TYPE_ADAPTER.toString())) {
block.add("new $T(", typeAdapterClass);
} else if (isInstance(typeAdapterClass, GsonClassNames.TYPE_ADAPTER_FACTORY.toString())) {
block.add("new $T().create(gson, ", typeAdapterClass);
appendFieldTypeToken(block, name, typeVariables, /*allowClassType=*/false);
} else {
messager.printMessage(Diagnostic.Kind.ERROR, "@JsonAdapter value must be a TypeAdapter or TypeAdapterFactory reference.", name.element);
}
} else {
block.add("gson.getAdapter(");
appendFieldTypeToken(block, name, typeVariables, /*allowClassType=*/true);
}
block.add(");\n");
constructor.addCode(block.build());
}
spec.addMethod(constructor.build());
}
// @Override public void write(JsonWriter out, T value) throws IOException
{
CodeBlock.Builder code = CodeBlock.builder();
code.beginControlFlow("if (value == null)")
.addStatement("out.nullValue()")
.addStatement("return")
.endControlFlow();
code.addStatement("out.beginObject()");
for (Name name : names.fields()) {
code.addStatement("out.name($S)", name.getSerializeName())
.addStatement("$L.write(out, value.$L)", Prefix.TYPE_ADAPTER_PREFIX + name.getName(), name.getCallableName());
}
for (Name name : names.getters()) {
code.addStatement("out.name($S)", name.getSerializeName())
.addStatement("$L.write(out, value.$L())", Prefix.TYPE_ADAPTER_PREFIX + name.getName(), name.getCallableName());
}
code.addStatement("out.endObject()");
spec.addMethod(MethodSpec.methodBuilder("write")
.addAnnotation(Override.class)
.addModifiers(Modifier.PUBLIC)
.addParameter(GsonClassNames.JSON_WRITER, "out")
.addParameter(classType, "value")
.addException(IOException.class)
.addCode(code.build())
.build());
}
// @Override public T read(JsonReader in) throws IOException
{
CodeBlock.Builder code = CodeBlock.builder();
code.beginControlFlow("if (in.peek() == $T.NULL)", GsonClassNames.JSON_TOKEN)
.addStatement("in.nextNull()")
.addStatement("return null")
.endControlFlow();
Iterable<Name> params = names.params();
boolean isEmpty = true;
for (Name name : params) {
isEmpty = false;
code.addStatement("$T $L = $L", name.getType(), Prefix.ARG_PREFIX + name.getName(), getDefaultValue(name.getType()));
}
if (isEmpty) {
code.addStatement("in.skipValue()");
} else {
code.addStatement("in.beginObject()")
.beginControlFlow("while (in.hasNext())")
.beginControlFlow("switch (in.nextName())");
for (Name name : params) {
code.add("case $S:\n", name.getSerializeName()).indent();
code.addStatement("$L = $L.read(in)", Prefix.ARG_PREFIX + name.getName(), Prefix.TYPE_ADAPTER_PREFIX + name.getName())
.addStatement("break").unindent();
}
code.add("default:\n").indent()
.addStatement("in.skipValue()")
.unindent();
code.endControlFlow()
.endControlFlow()
.addStatement("in.endObject()");
}
if (isBuilder) {
String args = StringUtils.join(", ", names.constructorParams(), TO_ARGS);
if (isConstructor) {
code.add("return new $T($L)", builderClass, args);
} else {
code.add("return $T.$L($L)", creatorName, element.getSimpleName(), args);
}
code.add("\n").indent();
for (Name name : names.builderParams()) {
code.add(".$L($L)\n", name.getCallableName(), Prefix.ARG_PREFIX + name.getName());
}
code.add(".$L();\n", buildMethod.getSimpleName()).unindent();
} else {
String args = StringUtils.join(", ", params, TO_ARGS);
if (isConstructor) {
code.addStatement("return new $T($L)", classType, args);
} else {
code.addStatement("return $T.$L($L)", creatorName, executableElement.getSimpleName(), args);
}
}
spec.addMethod(MethodSpec.methodBuilder("read")
.addAnnotation(Override.class)
.addModifiers(Modifier.PUBLIC)
.returns(classType)
.addParameter(GsonClassNames.JSON_READER, "in")
.addException(IOException.class)
.addCode(code.build())
.build());
}
Writer writer = null;
boolean threw = true;
try {
JavaFileObject jfo = filer.createSourceFile(typeAdapterClassName.toString());
writer = jfo.openWriter();
JavaFile javaFile = JavaFile.builder(className.packageName(), spec.build())
.skipJavaLangImports(true)
.build();
javaFile.writeTo(writer);
threw = false;
} finally {
try {
if (writer != null) {
writer.close();
}
} catch (IOException e) {
if (!threw) {
throw e;
}
}
}
}
private void addFieldsAndGetters(Names names, TypeElement classElement) {
// getters
for (ExecutableElement method : ElementFilter.methodsIn(classElement.getEnclosedElements())) {
names.addGetter(classElement, method);
}
// fields
for (VariableElement field : ElementFilter.fieldsIn(classElement.getEnclosedElements())) {
names.addField(field);
}
for (TypeMirror superInterface : classElement.getInterfaces()) {
addFieldsAndGetters(names, (TypeElement) typeUtils.asElement(superInterface));
}
TypeMirror superclass = classElement.getSuperclass();
if (superclass.getKind() != TypeKind.NONE && !superclass.toString().equals("java.lang.Object")) {
addFieldsAndGetters(names, (TypeElement) typeUtils.asElement(classElement.getSuperclass()));
}
}
private void appendFieldTypeToken(CodeBlock.Builder block, Name<?> name, List<TypeVariableName> typeVariables, boolean allowClassType) {
TypeMirror type = name.getType();
TypeName typeName = TypeName.get(type);
if (isComplexType(type)) {
TypeName typeTokenType = ParameterizedTypeName.get(GsonClassNames.TYPE_TOKEN, typeName);
List<? extends TypeMirror> typeParams = getGenericTypes(type);
if (typeParams.isEmpty()) {
block.add("new $T() {}", typeTokenType);
} else {
block.add("($T) $T.getParameterized($T.class, ", typeTokenType, GsonClassNames.TYPE_TOKEN, typeUtils.erasure(type));
for (Iterator<? extends TypeMirror> iterator = typeParams.iterator(); iterator.hasNext(); ) {
TypeMirror typeParam = iterator.next();
int typeIndex = typeVariables.indexOf(TypeVariableName.get(typeParam.toString()));
block.add("(($T)typeToken.getType()).getActualTypeArguments()[$L]", ParameterizedType.class, typeIndex);
if (iterator.hasNext()) {
block.add(", ");
}
}
block.add(")");
}
} else if (isGenericType(type)) {
TypeName typeTokenType = ParameterizedTypeName.get(GsonClassNames.TYPE_TOKEN, typeName);
int typeIndex = typeVariables.indexOf(TypeVariableName.get(name.getType().toString()));
block.add("($T) $T.get((($T)typeToken.getType()).getActualTypeArguments()[$L])",
typeTokenType, GsonClassNames.TYPE_TOKEN, ParameterizedType.class, typeIndex);
} else {
if (allowClassType) {
block.add("$T.class", typeName);
} else {
block.add("TypeToken.get($T.class)", typeName);
}
}
}
@Override
public Set<String> getSupportedAnnotationTypes() {
return new LinkedHashSet<>(Arrays.asList(
GsonConstructor.class.getCanonicalName(),
GsonBuilder.class.getCanonicalName()
));
}
private static String getDefaultValue(TypeMirror type) {
switch (type.getKind()) {
case BYTE:
case SHORT:
case INT:
case LONG:
case FLOAT:
case CHAR:
case DOUBLE:
return "0";
case BOOLEAN:
return "false";
default:
return "null";
}
}
private DeclaredType findTypeAdapterClass(List<? extends AnnotationMirror> annotations) {
for (AnnotationMirror annotation : annotations) {
String typeName = annotation.getAnnotationType().toString();
if (typeName.equals(GsonClassNames.JSON_ADAPTER.toString()) || typeName.equals(GsonClassNames.JSON_ADAPTER_METHOD.toString())) {
Map<? extends ExecutableElement, ? extends AnnotationValue> elements = annotation.getElementValues();
if (!elements.isEmpty()) {
AnnotationValue value = elements.values().iterator().next();
return (DeclaredType) value.getValue();
}
}
}
return null;
}
private boolean isComplexType(TypeMirror type) {
Element element = typeUtils.asElement(type);
if (!(element instanceof TypeElement)) {
return false;
}
TypeElement typeElement = (TypeElement) element;
return !typeElement.getTypeParameters().isEmpty();
}
private boolean isGenericType(TypeMirror type) {
return type.getKind() == TypeKind.TYPEVAR;
}
private List<? extends TypeMirror> getGenericTypes(TypeMirror type) {
DeclaredType declaredType = asDeclaredType(type);
if (declaredType == null) {
return Collections.emptyList();
}
ArrayList<TypeMirror> result = new ArrayList<>();
for (TypeMirror argType : declaredType.getTypeArguments()) {
if (argType.getKind() == TypeKind.TYPEVAR) {
result.add(argType);
}
}
return result;
}
private static DeclaredType asDeclaredType(TypeMirror type) {
return type.accept(new SimpleTypeVisitor6<DeclaredType, Object>() {
@Override
public DeclaredType visitDeclared(DeclaredType t, Object o) {
return t;
}
}, null);
}
private boolean isInstance(DeclaredType type, String parentClassName) {
if (type == null) {
return false;
}
TypeElement element = (TypeElement) type.asElement();
for (TypeMirror interfaceType : element.getInterfaces()) {
if (typeUtils.erasure(interfaceType).toString().equals(parentClassName)) {
return true;
}
}
TypeMirror superclassType = element.getSuperclass();
if (superclassType != null) {
if (typeUtils.erasure(superclassType).toString().equals(parentClassName)) {
return true;
} else {
return isInstance(asDeclaredType(superclassType), parentClassName);
}
}
return false;
}
private static final StringUtils.ToString<Name> TO_ARGS = new StringUtils.ToString<Name>() {
@Override
public String toString(Name value) {
return Prefix.ARG_PREFIX + value.getName();
}
};
}
|
Thread:<IP_ADDRESS>/@comment-22439-20120306003352
Hi, welcome to Dunderpedia: The Office Wiki! Thanks for your edit to the Zach Woods page.
Please leave me a message if I can help with anything!
|
Add CLI
This would be useful for people not using this add-on as a JS hook in electron-builder. Similar to how electron-osx-sign has them.
Someone has made a separate electron-notarize-cli package, but it's inconvenient because of being tied to whatever version of this package it specifies as a dependency. I asked them to open a PR here instead but it seems like they're not interested: https://gitlab.com/fozi/electron-notarize-cli/issues/1
I've generally come around to the opinion that CLI scripts should not be in tooling that function as libraries, because of the additional dependencies that are added. (As an example, Electron Packager currently depends on a number of libraries that depend on yargs, minimist, commander, and yargs-parser, despite the Packager CLI only using yargs-parser.) I'd be OK with converting this into a monorepo and adding a CLI package. The main roadblocks I see currently are:
semantic-release does not yet support monorepos (although there are people who are trying to get it working).
Someone to actually do the work.
Where would I subscribe to get (low noise!) notifications about releases?
@f0zi I suppose the RSS feed of the repo releases/tags is the closest you'll get: https://github.com/electron/electron-notarize/releases.atom
@malept fair enough, one or two small extra dependencies doesn't seem like a lot when talking about electron though haha
I personally don't think this library should have a CLI and didn't add one when it was originally written, it is very lightweight by itself and adding something like yargs or commander would be very unnecessary, writing your own CLI wrapper for notarize is trivial
|
A Possible Link between Dysregulated Mitochondrial Calcium Homeostasis and Citrullination in Rheumatoid Arthritis
Rheumatoid arthritis (RA) is characterized by citrullination of peptides and proteins mediated by calcium-dependent peptidyl arginine deiminase enzymes (PADs). Various mechanisms of intracellular and extracellular protein citrullination have been elucidated so far. Here, we highlight one more important possible mechanism that could lead to intracellular citrullination: dysregulated mitochondrial Ca2+ homeostasis, i.e., the inability of mitochondria to take up Ca2+ during physiological signaling, triggering the generation of autoimmune disease. Spontaneous secretion of intracellular PAD may be the result of high cytosolic Ca2+ due to a disturbance in mitochondrial calcium homeostasis secondary to oxidative damage, without any compromise to the cell membrane. Various environmental triggers like air pollutants also induce oxidative stress (OS), which may compromise mitochondrial integrity and disturb Ca2+ homeostasis. Adoption of simple lifestyle modifications like yoga and meditation optimizes reactive oxygen species (ROS) levels and maintains mitochondrial integrity by increasing COX activity, and thus may curb the citrullination process and its sequelae.
Introduction
Citrullination is an essential contributor to the pathogenesis of rheumatoid arthritis (RA). This post-translational conversion of arginine to citrulline residues requires peptidyl arginine deiminase enzymes (PADs), which are calcium-dependent. Under physiological conditions, calcium stimulation is required to activate the mostly inactive PADs, resulting in citrullination of a number of structural proteins, e.g., vimentin, fibrinogen, filaggrin, and keratin, and of proteins involved in the regulation of gene transcription, e.g., histones [1]. Hence, the enzymatic activity of PADs may impact gene expression by directly citrullinating transcription factors or by regulating histones, and thus impact the epigenome. Recently, it has been reported that hypoxia promotes citrullination and PAD production in human fibroblast-like synoviocytes [2]. Although the mechanisms promoting citrullination in RA are not fully elucidated, heightened citrullination propagates inflammation through NF-κB-dependent expression of IL-1β and TNFα [3]. Zhou Y et al. highlight novel pathways of extracellular protein citrullination, the cellular localization of calcium (Ca2+)-dependent peptidyl arginine deiminases (PADs), and their activity in physiological and pathological conditions. This work also sheds light upon mechanisms responsible for intracellular protein citrullination which are distinct from those mediated by PAD activation via Ca2+ influx through a compromised cell membrane [4]. Various other pathologies have also been associated with abnormal protein citrullination, like multiple sclerosis, Alzheimer's disease, prion diseases, psoriasis, as well as cancer [5][6][7][8].
Millimolar Ca2+ levels are required for intracellular PAD activity, but how citrullination takes place even in the presence of an intact cell membrane remains uncertain. One possibility could be dysregulation of mitochondrial Ca2+ homeostasis, i.e., the inability of mitochondria to take up Ca2+ during physiological signaling [9]. Mitochondria play a crucial role in different intracellular pathways of signal transduction, ranging from energy production to cell death. The fine modulation of mitochondrial Ca2+ homeostasis plays a fundamental role in various pathophysiological processes. It regulates different cell processes by buffering cytosolic Ca2+, and it exerts a positive role on oxidative metabolism within mitochondria by regulating Ca2+-dependent Krebs cycle enzymes [10]. However, excessive mitochondrial Ca2+ entry consequent to stress stimuli causes opening of the mitochondrial permeability transition pore (mPTP) and release of proapoptotic factors, which eventually lead to cell death [11]. The mPTP is a voltage-gated inner membrane channel which is activated by matrix calcium overloading and reactive oxygen species (ROS) [12]. Full opening of the mPTP results in increased production of mitochondrial reactive oxygen species (mROS) and release of most matrix metabolites, including mROS, calcium, NAD+, and glutathione. As a result, the mitochondrial membrane potential collapses, oxidative phosphorylation and mitochondrial metabolism are inhibited, the matrix swells, and on prolonged opening the outer membrane ruptures, releasing intermembrane space proteins [13]. Prolonged pore opening in a large number of mitochondria in the cell can lead to cell death by necrosis or similar pathways.
Discussion
Journal of Molecular and Genetic Medicine
Activation of the mPTP by mitochondrial calcium overloading and mROS is enhanced during aging and in aging-associated degenerative diseases like depression, osteoarthritis, and RA [14]. The oxidative damage to mitochondria is more severe and persistent than damage to nuclear DNA. The process of immunosenescence is accelerated and occurs prematurely in RA [15]. There is accelerated telomere length attrition in antigen-unprimed naive T cells of RA patients due to inefficient telomere maintenance, resulting in loss of telomeres, unraveling of heterochromatin, low expression of DNA repair nucleases like MRE11A, and altered expression of cell-cycle regulators, which leads to accumulation of aged T cells [16]. Oxidative stress (OS) is responsible for the fragmentation of nuclear/mitochondrial DNA, increased oxidative DNA damage, an inefficient DNA mismatch repair system, and accumulation of DNA adducts like 8-oxo-hydroxy-7,8-dihydro-2'-deoxyguanosine, 1,N(6)-etheno-2'-deoxyadenosine, and heptanone-etheno-2'-deoxycytidine, which results in progression of autoimmune diseases like RA [17]. ROS generation during respiratory bursts that exceeds the physiological buffering capacity results in OS. There is perturbation in mitochondrial homeostasis leading to excessive ROS production, impaired mitochondrial dynamics, electron transport chain defects, bioenergetic imbalance and increased AMPK activity, decreased mitochondrial NAD+ and altered metabolism, and mitochondrial calcium accumulation. Such mitochondrial signals activate p53/p21 and/or p16/pRb pathways, resulting in cellular senescence [18]. Oxidative damage to calcium transporters leads to calcium overload, inducing more frequent opening of the mPTP. Intracellular Ca2+ levels in normal cells are much lower than the optimal Ca2+ levels required for PAD activation. Hence, it has been proposed that PAD activation occurs only during cell death or necrosis, when PAD enzymes leak out into the extracellular matrix from dying cells, or vice versa, with the high Ca2+ concentration activating PADs [19].
Hence, spontaneous secretion of intracellular PAD may be the result of high cytosolic Ca2+ due to a disturbance in mitochondrial calcium homeostasis secondary to oxidative damage to mitochondria, and flares of RA are seen with increased environmental exposure, especially air pollution. Air pollutants also induce OS, which may compromise mitochondrial integrity and disturb Ca2+ homeostasis. Mitochondria are the powerhouse organelles where oxidative metabolism takes place, and they play a pivotal role in health and disease. Most mitochondrial activities are driven in a Ca2+-dependent manner, in turn affecting the activities of various Ca2+-dependent enzymes like PADs. Specific drugs that act on mitochondrial Ca2+ homeostasis are being developed, which may open the way to new biochemically designed therapeutic approaches in the treatment of several disabling disorders like RA. Studies in our laboratory have shown that yoga-based lifestyle intervention (YBLI) improves mitochondrial integrity, as evidenced by increased COX activity, reduced OS, and upregulation of total antioxidant capacity and telomerase enzymes, which aid in maintenance of telomere length and genomic stability, thus reducing and delaying the onset of age-related chronic diseases and complex lifestyle disorders [20][21][22][23]. YBLI also reduces the levels of acute-phase reactants like CRP, pro-inflammatory cytokines, and OS in RA [24].
Conclusion
The ongoing studies in our laboratory on the impact of yoga and meditation have shown significant upregulation in the expression of antioxidant and anti-inflammatory genes, genes of cell cycle control, and DNA repair genes, and elevation of tolerogenic molecules, which aid in immune modulation and reversal of autoimmunity. Improved mitochondrial integrity, with normal Ca2+ levels maintained by homeostasis, may thus prevent activation of PAD and citrullination of tissue fibrinogen. This may prevent numerous sequelae of excessive citrullination and dysregulated gene expression. Thus, we hypothesize that activation of PAD may be due to loss of mitochondrial integrity secondary to OS. OS can be induced by various exogenous and endogenous factors. The majority of factors that induce oxidative stress are modifiable: quitting smoking, decreasing intake of nutritionally depleted processed fast food, minimizing alcohol consumption, minimizing exposure to xenobiotics, insecticides, and pesticides, and adopting simple mind-body stress reduction techniques can delay accelerated aging and improve mitochondrial integrity. Thus, simple lifestyle intervention may maintain optimal ROS levels, mitochondrial integrity, and Ca2+ homeostasis, prevent activation of PADs, and thus reduce the incidence/severity of RA.
Author Contributions
RD and SG conceived the idea behind this commentary; SG wrote the manuscript; SG, AG and PC generated experimental data and; RD finalized the article.
|
Added an open link for approver to process request
The link is like: /api/approval/v#/stageaction, the views are based on PM's example, as follows:
The request page:
The response page:
@hsong-rh was just an image scaling matter? The first screenshot the Red Hat icon and Hybrid Cloud Manager looks smaller but Catalog Request and its content look larger.
@bzwei Yes, I had to zoom out on the request page so all elements could be captured in the screenshot.
@hsong-rh can you take a look at the Hakiri security warning?
|
Write the next sentence.
Kenneth put the furniture in the truck for Christopher because
Possible answers: A). Kenneth needed to move to a new house.; B). Christopher needed to move to a new house.;
Answer:
B).
|
Why can't I find this drawable in the android source?
When I grep the android source for divider_holo_light I get these results:
~/platform_frameworks_base/core/res/res master gg divider_holo_light .
./values/arrays.xml:137: <item>@drawable/list_section_divider_holo_light</item>
./values/arrays.xml:239: <item>@drawable/list_divider_holo_light</item>
./values/arrays.xml:240: <item>@drawable/list_divider_holo_light</item>
./values/styles.xml:2031: <item name="android:background">@android:drawable/list_section_divider_holo_light</item>
./values/themes.xml:1298: <item name="listDivider">@drawable/list_divider_holo_light</item>
./values/themes.xml:1316: <item name="listDividerAlertDialog">@android:drawable/list_divider_holo_light</item>
Yet when I grep the drawable directory for divider, this is all I see:
~/platform_frameworks_base/core/res/res/drawable master ls | grep divider
action_bar_divider.xml
Where can I find where list_divider_holo_light is defined?
It is defined in android-sdk/platforms/android-xx/data/res/drawable-xxx folders.
Thanks, it's also in the android source in ~/platform_frameworks_base/core/res/res/drawable-xxx
In which folder is your platform_frameworks_base located? I couldn't find that anywhere in my directories...
That's too bad. I got my sdk from Android developers website :\
I do have an sdk. That's the android source (it's different from the sdk).
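The answer can be illustrated with a tiny hypothetical directory sketch: `list_divider_holo_light` is a density-specific bitmap (a 9-patch PNG), so it lives in the density-qualified `drawable-hdpi`/`-mdpi`/`-xhdpi` folders, not in the plain `drawable/` folder that the question listed:

```shell
# Recreate a small slice of the res/ layout (hypothetical paths, for illustration only):
mkdir -p demo/res/drawable demo/res/drawable-hdpi
touch demo/res/drawable/action_bar_divider.xml
touch demo/res/drawable-hdpi/list_divider_holo_light.9.png

# Listing only the unqualified drawable/ folder misses the bitmap:
ls demo/res/drawable | grep divider

# Searching across every drawable* folder finds it:
find demo/res -name 'list_divider_holo_light*'
```

The same `find` pattern works against `platform_frameworks_base/core/res/res` or the SDK's `platforms/android-xx/data/res`.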
|
Talk:Magic Duel/@comment-<IP_ADDRESS>-20121203082220/@comment-5398923-20121204024609
Good point. Maybe there's some biological/physical limit. (in universe yeaaah)
|
A FREAKING BUTTON.
and you can tell he is TRYING because he is trying so hard to get an almost useless button.
There are people who don't care about opinions when they get promoted to Rollback.
Why..? BECAUSE IT'S A FREAKING BUTTON.
Please respect the fact he is trying so hard to get a useless button.
|
#include "String.h"
#include "TFunc.h"
#include "Utils.h"
#if DEBUG == 1
#define _CRTDBG_MAP_ALLOC
#include <cstdlib>
#include <crtdbg.h>
#ifdef _DEBUG
#define new new ( _NORMAL_BLOCK , __FILE__ , __LINE__ )
#endif
#endif // DEBUG == 1
size_t StrLen(const char* Str) {
size_t i = 0;
for (; Str[i] != 0; ++i);
return i;
}
void StrCopy(char * Dest, const char * Source) {
size_t i = 0;
for (; Source[i] != 0; ++i)
Dest[i] = Source[i];
Dest[i] = 0;
}
short StrCmp(const char* Str1, const char* Str2) {
	size_t i = 0;
	for (; Str1[i] != 0 && Str2[i] != 0; ++i) {
		if (Str1[i] < Str2[i])
			return -1;
		else if (Str1[i] > Str2[i])
			return 1;
	}
	if (Str1[i] == Str2[i]) // Both terminated: equal strings
		return 0;
	return (Str1[i] == 0) ? -1 : 1; // The shorter string compares as smaller
}
bool StrEquals(const char* Str1, const char* Str2, size_t Limit) {
for (size_t i = 0; i < Limit; ++i)
if (Str1[i] != Str2[i])
return 0;
else if (Str1[i] == 0 || Str2[i] == 0)
return Str1[i] == Str2[i];
return 1;
}
void RReverse(char* Str, size_t Len) {
if (Len < 2)
return;
swap(Str[0], Str[Len - 1]);
RReverse(Str + 1, Len - 2);
}
bool IsEmptySpace(char Ch) {
return Ch == ' ' || Ch == '\n' || Ch == '\t';
}
size_t String::calcSize(size_t StrLen) {
size_t i = 1;
for (; i <= StrLen; i *= 2);
return i;
}
void String::onlySet(const char* Str) {
size = StrLen(Str);
realsize = calcSize(size);
str = new char[realsize];
StrCopy(str, Str);
}
void String::onlySet(char Char) {
size = 1;
realsize = 2;
str = new char[realsize];
str[0] = Char;
str[1] = 0;
}
void String::grow() {
if (realsize > size + 1) // Check if a grow is needed i.e the string is full
return;
realsize *= 2;
char* newStr = new char[realsize];
StrCopy(newStr, str);
delete[] str;
str = newStr;
}
void String::shrink() {
size_t newLen = calcSize(size);
if (realsize <= newLen) // Check if a shrink is needed i.e the string is taking too much space
return;
realsize = newLen;
char* newStr = new char[realsize];
StrCopy(newStr, str);
delete[] str;
str = newStr;
}
String::String() {
onlySet("");
}
String::String(char Char) {
onlySet(Char);
}
String::String(const char* Str) {
onlySet(Str);
}
String::String(const String& Str) {
onlySet(Str.str);
}
String::~String() {
delete[] str;
}
String& String::Set(char Char) {
delete[] str;
onlySet(Char);
return *this;
}
String& String::Set(const char* Str) {
delete[] str;
onlySet(Str);
return *this;
}
String& String::Set(const String& Str) {
if (&Str != this) {
Set(Str.str);
}
return *this;
}
String& String::SetAt(char Char, size_t Position) {
if (Position < size)
str[Position] = Char;
return *this;
}
String& String::operator=(char Char) {
Set(Char);
return *this;
}
String& String::operator=(const char* Str) {
Set(Str);
return *this;
}
String& String::operator=(const String& Str) {
Set(Str);
return *this;
}
String& String::Append(char Char) {
grow();
str[size] = Char;
size++;
str[size] = 0;
return *this;
}
String& String::Append(const char* Str) {
size_t length = StrLen(Str);
for (size_t i = 0; i < length; ++i) {
Append(Str[i]);
}
return *this;
}
String& String::Append(const String& Str) {
const String& ref = (&Str == this) ? Clone() : Str;
for (size_t i = 0; i < ref.size; ++i) {
Append(ref.str[i]); // Append from the (possibly cloned) source, not the original, so self-append is safe
}
return *this;
}
String & String::Prepend(const String& Str)
{
return InsertAt(0, Str);
}
String& String::operator+=(const String& Str) {
return Append(Str);
}
String& String::InsertAt(size_t Pos, const String& Str) {
if (Pos > size) // If it's bigger return
return *this;
size_t newLen = size + Str.size;
size_t newSize = calcSize(newLen);
char* newStr = new char[newSize];
size_t curPos = 0;
for (; curPos < Pos; ++curPos) // Copy the beginning
newStr[curPos] = str[curPos];
for (size_t i = 0; i < Str.size; ++i, ++curPos) // Copy the addition
newStr[curPos] = Str[i];
for (size_t i = Pos; str[i] != 0; ++i, ++curPos) // Copy the remainder after the insertion point
newStr[curPos] = str[i];
newStr[curPos] = 0;
size = newLen;
realsize = newSize;
delete[] str;
str = newStr;
return *this;
}
String operator*(String Str, size_t Number) {
String tmp("");
for (size_t i = 0; i < Number; ++i) {
tmp.Append(Str.str);
}
return tmp;
}
String operator*(size_t Number, String Str) {
return operator*(Str, Number);
}
std::ostream& operator<<(std::ostream& stream, const String& Str) {
stream << Str.str;
return stream;
}
std::istream& operator>>(std::istream& stream, String& Str) {
Str.Clear();
char ch = '\n';
while (stream.get(ch) && ch != '\n')
Str.Append(ch);
return stream;
}
String operator+(const char * Str1, const String & Str2) {
String tmp(Str1);
tmp.Append(Str2);
return tmp;
}
String & String::ReadStream(std::istream & stream)
{
Clear();
char ch = 0;
while (stream.get(ch))
Append(ch);
return *this;
}
char String::operator[](const size_t Position) const {
if (Position < size)
return str[Position];
return 0;
}
const char* String::Get() const {
return str;
}
bool String::operator<(const String& Str) const {
return StrCmp(str, Str.str) < 0;
}
bool String::operator>(const String& Str) const {
return StrCmp(str, Str.str) > 0;
}
bool String::operator<=(const String& Str) const {
return StrCmp(str, Str.str) <= 0;
}
bool String::operator>=(const String& Str) const {
return StrCmp(str, Str.str) >= 0;
}
bool String::operator==(const String& Str) const {
return StrCmp(str, Str.str) == 0;
}
bool String::operator!=(const String& Str) const {
return !operator==(Str);
}
size_t String::Size() const {
return size;
}
size_t String::RealSize() const {
return realsize;
}
size_t String::Lines() const
{
return IndexAll("\n").Size() + 1;
}
String String::Concat(const String& Str) const {
String newStr = *this;
newStr.Append(Str);
return newStr;
}
String String::operator+(const String& Str) const {
return Concat(Str);
}
String String::Substr(size_t From, size_t To) const {
if (To < From)
swap(To, From);
if (To >= size)
To = size;
String newStr;
for (; From < To && From < size; ++From)
newStr.Append(str[From]);
return newStr;
}
String String::GetLine(size_t Index) const {
if (Index > size)
return String();
String newStr;
for (; str[Index] != '\n' && str[Index] != 0; ++Index)
newStr.Append(str[Index]);
return newStr;
}
String String::Clone() const {
return String(*this);
}
String& String::Clear() {
delete[] str;
onlySet("");
return *this;
}
String& String::ToLower() {
for (size_t i = 0; i < size; ++i)
if (str[i] >= 'A' && str[i] <= 'Z')
str[i] += 'a' - 'A';
return *this;
}
String& String::ToUpper() {
for (size_t i = 0; i < size; ++i)
if (str[i] >= 'a' && str[i] <= 'z')
str[i] -= 'a' - 'A';
return *this;
}
String& String::TrimStart() {
size_t removeTill = 0;
for (size_t i = 0; i < size; ++i) {
if (str[i] == ' ' || str[i] == '\t' || str[i] == '\n')
++removeTill;
else
break;
}
RemoveRange(0, removeTill);
return *this;
}
String& String::TrimEnd() {
size_t removeFrom = size;
for (size_t i = size; i-- > 0;) { // size_t is unsigned, so count down without underflow
if (str[i] == ' ' || str[i] == '\t' || str[i] == '\n') {
removeFrom = i;
}
else
break;
}
RemoveRange(removeFrom, size);
return *this;
}
String& String::Trim() {
TrimStart();
TrimEnd();
return *this;
}
String& String::RemoveAt(size_t Index) {
if (Index < size) {
size_t newLen = size - 1;
size_t newSize = calcSize(newLen);
char* newStr = new char[newSize];
for (size_t i = 0; i < Index; ++i) {
newStr[i] = str[i];
}
StrCopy(newStr + Index, str + Index + 1);
delete[] str;
str = newStr;
size = newLen;
realsize = newSize;
shrink();
}
return *this;
}
String& String::RemoveRange(size_t From, size_t To) {
if (To < From)
swap(To, From);
if (From >= size)
return *this;
if (To > size)
To = size;
size_t newSize = size - (To - From);
size_t newRealSize = calcSize(newSize + 1);
char* newStr = new char[newRealSize];
size_t curPos = 0;
for (; curPos < From; ++curPos) // Copy the beginning
newStr[curPos] = str[curPos];
for (; To < size; ++To, ++curPos) // Copy the ending
newStr[curPos] = str[To];
newStr[curPos] = 0;
delete[] str;
str = newStr;
size = newSize;
realsize = newRealSize;
return *this;
}
String& String::Replace(const String& What, const String& With, size_t AfterIndex, size_t BeforeIndex) {
size_t index = Index(What, AfterIndex, BeforeIndex);
if (index != (size_t)-1) {
size_t newLen = size - What.size + With.size;
size_t newSize = calcSize(newLen + 1);
char* newStr = new char[newSize];
size_t curPos = 0;
for (; curPos < index; ++curPos) // Copy the beginning
newStr[curPos] = str[curPos];
for (size_t i = 0; i < With.size; ++i, ++curPos) // Copy the replacement
newStr[curPos] = With[i];
for (size_t i = index + What.size; str[i] != 0; ++i, ++curPos) // Copy the remainder after the replaced substring
newStr[curPos] = str[i];
newStr[curPos] = 0;
delete[] str;
str = newStr;
size = newLen;
realsize = newSize;
}
return *this;
}
String& String::ReplaceAt(const String & What, const String & With, size_t IndexAt)
{
if (IndexAt != (size_t)-1) {
size_t newLen = size - What.size + With.size;
size_t newSize = calcSize(newLen + 1);
char* newStr = new char[newSize];
size_t curPos = 0;
for (; curPos < IndexAt; ++curPos) // Copy the beginning
newStr[curPos] = str[curPos];
for (size_t i = 0; i < With.size; ++i, ++curPos) // Copy the replacement
newStr[curPos] = With[i];
for (size_t i = IndexAt + What.size; str[i] != 0; ++i, ++curPos) // Copy the remainder after the replaced substring
newStr[curPos] = str[i];
newStr[curPos] = 0;
delete[] str;
str = newStr;
size = newLen;
realsize = newSize;
}
return *this;
}
String& String::ReplaceAll(const String& What, const String& With, size_t AfterIndex, size_t BeforeIndex) {
size_t index = Index(What, AfterIndex, BeforeIndex);
while (index != (size_t)-1) {
ReplaceAt(What, With, index);
index = Index(What, AfterIndex, BeforeIndex);
}
return *this;
}
String& String::ReplaceAllSafe(const String& What, const String& With) {
Vector<size_t> indexes = IndexAll(What);
return ReplaceAllSafe(What, With, indexes);
}
String& String::ReplaceAllSafe(const String & What, const String & With, Vector<size_t>& Indexes) {
if (Indexes.Size() == 0)
return *this;
size_t newSize = size + Indexes.Size() * With.Size() - Indexes.Size() * What.Size();
size_t newRealSize = calcSize(newSize);
char* newStr = new char[newRealSize];
size_t replaceIndex = 0;
size_t newStrIndex = 0;
for (size_t i = 0; i < size; i++) {
if (replaceIndex < Indexes.Size() && i == Indexes[replaceIndex]) { // Bounds check prevents reading past the last index
for (size_t j = 0; j < With.Size(); j++) {
newStr[newStrIndex + j] = With[j];
}
newStrIndex += With.Size();
i += What.Size() - 1;
if (replaceIndex < Indexes.Size())
replaceIndex++;
}
else {
newStr[newStrIndex] = str[i];
newStrIndex++;
}
}
size = newSize;
realsize = newRealSize;
newStr[newSize] = 0;
delete[] str;
str = newStr;
return *this;
}
String& String::ReplaceAllOutsidePairs(const String & What, const String& With, Vector<Pair<size_t, size_t>> Pairs) {
if (Pairs.Size() == 0)
return *this;
Vector<size_t> indexes = IndexAll(What);
size_t j = 0;
bool shouldRemove = false;
for (size_t i = 0; i < indexes.Size(); ++i) {
shouldRemove = false;
for (size_t j = 0; j < Pairs.Size(); ++j) {
if (Pairs[j].Key() < indexes[i] && indexes[i] < Pairs[j].Value()) {
shouldRemove = true;
break;
}
}
if (shouldRemove) {
indexes.RemoveAt(i);
--i;
}
}
return ReplaceAllSafe(What, With, indexes);
}
String & String::ReplaceAllInsidePairs(const String & What, const String& With, Vector<Pair<size_t, size_t>> Pairs) {
if (Pairs.Size() == 0)
return *this;
Vector<size_t> indexes = IndexAll(What);
bool shouldRemove = true;
for (size_t i = 0; i < indexes.Size(); ++i) {
shouldRemove = true;
for (size_t j = 0; j < Pairs.Size(); ++j) {
if (Pairs[j].Key() <= indexes[i] && indexes[i] <= Pairs[j].Value()) {
shouldRemove = false;
break;
}
}
if (shouldRemove) {
indexes.RemoveAt(i);
--i;
}
}
return ReplaceAllSafe(What, With, indexes);
}
String& String::Remove(const String& What) {
return Replace(What, "");
}
String & String::ReplaceFromUntil(size_t From, const String & To, const String& With) {
size_t nextIndex = Index(To);
RemoveRange(From, nextIndex);
InsertAt(From, With);
return *this;
}
String & String::ReplaceFromTo(const String& From, const String& To, const String& With) {
Pair<size_t, size_t> pair = IndexPair(From, To);
ReplaceFromTo(pair, With);
return *this;
}
String & String::ReplaceFromTo(Pair<size_t, size_t> Indexes, const String & With) {
RemoveRange(Indexes.Key(), Indexes.Value());
InsertAt(Indexes.Key(), With);
return *this;
}
String& String::RemoveAll(const String& What) {
return ReplaceAll(What, "");
}
String& String::Reverse() {
RReverse(str, size);
return *this;
}
Vector<String> String::Split(const String & Str) {
Vector<String> strings;
Vector<size_t> indexes = IndexAll(Str);
if (indexes.Size() == 0) {
strings.Add(*this);
return strings;
}
String buffer;
size_t currentIndex = 0;
for (size_t i = 0; i < indexes.Size(); ++i) {
buffer = Substr(currentIndex, indexes[i]).RemoveAll(Str);
if (buffer != "") {
strings.Add(buffer);
currentIndex = indexes[i] + Str.size;
}
}
buffer = Substr(currentIndex, size).RemoveAll(Str);
if (buffer != "")
strings.Add(buffer);
return strings;
}
String& String::RemoveEmptyLines(bool NewLines, bool Spaces, bool Tabs, bool Trimstart, bool Trimend) {
Vector<char> spaces;
if (NewLines)
spaces.Add('\n');
if (Spaces)
spaces.Add(' ');
if (Tabs)
spaces.Add('\t');
Pair<size_t, size_t> pair(0, 0);
while (true) {
pair = IndexPairAdv("\n", "\n", spaces);
if (pair.Key() == (size_t)-1 || pair.Value() == (size_t)-1)
break;
Replace(Substr(pair.Key(), pair.Value()), "\n");
}
if (Trimstart)
TrimStart();
if (Trimend)
TrimEnd();
return *this;
}
bool String::Contains(const String& Str) const {
return Index(Str) != (size_t)-1;
}
bool String::StartsWith(const String& Str) const {
return StrEquals(str, Str.str, Str.size);
}
bool String::EndsWith(const String& Str) const {
if (Str.size > size)
return 0;
size_t start = size - Str.size;
return StrEquals(str + start, Str.str, Str.size);
}
size_t String::Index(const String& Str, size_t AfterIndex, size_t BeforeIndex) const {
if (Str.size == 0 || Str.size > size) // Guard: avoids size_t underflow when computing the cap below
return -1;
size_t Cap = size - Str.size + 1 > BeforeIndex ? BeforeIndex : size - Str.size + 1;
for (size_t i = AfterIndex; i < Cap; ++i) {
if (StrEquals(str + i, Str.str, Str.size))
return i;
}
return -1;
}
size_t String::IndexNotPrecededBy(const String & Str, size_t AfterIndex, char NotPrecededBy) const {
size_t tmpIndex = Index(Str, AfterIndex);
while (tmpIndex != (size_t)-1) {
if (tmpIndex == 0 || str[tmpIndex - 1] != NotPrecededBy) // A match at position 0 has no preceding character
break;
tmpIndex = Index(Str, tmpIndex + Str.size);
}
return tmpIndex;
}
size_t String::IndexNotFollowedBy(const String & Str, size_t AfterIndex, char NotFollowedBy) const {
size_t tmpIndex = Index(Str, AfterIndex);
while (tmpIndex != (size_t)-1) {
if (str[tmpIndex + Str.size] != NotFollowedBy) // Check the character immediately after the match
break;
tmpIndex = Index(Str, tmpIndex + Str.size);
}
return tmpIndex;
}
size_t String::LastIndex(const String& Str, size_t BeforeIndex) const {
if (Str.size == 0 || Str.size > size)
return -1;
size_t Cap = size - Str.size; // Last valid start position for a match
if (Cap > BeforeIndex)
Cap = BeforeIndex;
for (size_t i = Cap + 1; i-- > 0;) { // Unsigned-safe countdown that includes index 0
if (StrEquals(str + i, Str.str, Str.size))
return i;
}
return -1;
}
size_t String::LastIndexNonEmpty(size_t BeforeIndex) const {
if (size == 0)
return -1;
size_t Cap = size - 1;
if (Cap > BeforeIndex)
Cap = BeforeIndex;
for (size_t i = Cap + 1; i-- > 0;) { // Unsigned-safe countdown that includes index 0
if (!IsEmptySpace(str[i]))
return i;
}
return -1;
}
Vector<size_t> String::IndexAll(const String& Str, size_t AfterIndex, size_t BeforeIndex) const {
Vector<size_t> indexes;
size_t index = 0;
while (true) {
index = Index(Str, AfterIndex);
if (index == (size_t)-1 || index > BeforeIndex)
break;
AfterIndex = index + Str.size;
indexes.Add(index);
}
return indexes;
}
Vector<size_t> String::IndexOverlapping(const String & Str, size_t AfterIndex, size_t BeforeIndex) const {
Vector<size_t> indexes;
size_t index = 0;
while (true) {
index = Index(Str, AfterIndex);
if (index == (size_t)-1 || index > BeforeIndex)
break;
++AfterIndex;
indexes.Add(index);
}
return indexes;
}
Pair<size_t, size_t> String::IndexPair(const String & From, const String & To, size_t AfterIndex, size_t BeforeIndex) const {
Pair<size_t, size_t> pair(-1, -1);
size_t key = Index(From, AfterIndex);
size_t val = Index(To, key + From.size);
if (key == (size_t)-1 || val == (size_t)-1 || val > BeforeIndex) {
pair.Key(-1);
pair.Value(-1);
}
else {
pair.Key(key);
pair.Value(val + To.size);
}
return pair;
}
Pair<size_t, size_t> String::IndexPairAdv(const String & From, const String & To, Vector<char>& CharsBetween, size_t AfterIndex, size_t BeforeIndex) const {
Pair<size_t, size_t> potential(0, 0);
bool valid = true;
while (potential.Key() != (size_t)-1 && potential.Value() != (size_t)-1) {
valid = true;
potential = IndexPair(From, To, AfterIndex, BeforeIndex);
if (potential.Key() == (size_t)-1 || potential.Value() == (size_t)-1)
return potential; // No further pair found; avoid scanning with wrapped indices
for (size_t i = potential.Key() + From.size; i < potential.Value() - To.size; ++i) {
if (!CharsBetween.Contains(str[i])) {
AfterIndex = potential.Key() + From.size;
valid = false;
break;
}
}
if (valid)
return potential;
}
return potential;
}
Pair<size_t, size_t> String::IndexPairExcludingNested(const String & From, const String & To, size_t AfterIndex, size_t BeforeIndex) const {
Pair<size_t, size_t> potential(0, 0);
potential.Key(Index(From, AfterIndex, BeforeIndex));
if (potential.Key() == (size_t)-1) {
potential.Value(-1);
return potential;
}
int fromNumber = 0;
AfterIndex = potential.Key() + From.size;
while (true) {
size_t tmpFromIndex = Index(From, AfterIndex, BeforeIndex);
size_t tmpToIndex = Index(To, AfterIndex, BeforeIndex);
if (tmpFromIndex < tmpToIndex) {
++fromNumber;
AfterIndex = tmpFromIndex + From.size;
}
else if (tmpToIndex == (size_t)-1) {
break;
}
else {
if (fromNumber == 0) {
potential.Value(tmpToIndex + To.size);
return potential;
}
--fromNumber;
AfterIndex = tmpToIndex + To.size;
}
}
return Pair<size_t, size_t>(-1, -1);
}
Vector<Pair<size_t, size_t>> String::IndexAllPairs(const String & From, const String & To, size_t AfterIndex, size_t BeforeIndex) const {
Vector<Pair<size_t,size_t>> indexes;
Pair<size_t, size_t> index;
while (true) {
index = IndexPair(From, To, AfterIndex);
if (index.Key() == (size_t)-1 || index.Value() == (size_t)-1 || index.Value() > BeforeIndex)
break;
AfterIndex = index.Value() + To.size;
indexes.Add(index);
}
return indexes;
}
Vector<Pair<size_t, size_t>> String::IndexAllPairsOutsidePairs(const String & From, const String & To, size_t AfterIndex, size_t BeforeIndex) const {
// TODO
return Vector<Pair<size_t, size_t>>();
}
Vector<Pair<size_t, size_t>> String::IndexAllPairsExcludingNested(const String & From, const String & To, size_t AfterIndex, size_t BeforeIndex) const {
Vector<Pair<size_t, size_t>> indexes;
Pair<size_t, size_t> index;
while (true) {
index = IndexPairExcludingNested(From, To, AfterIndex, BeforeIndex);
if (index.Key() == (size_t)-1 || index.Value() == (size_t)-1 || index.Value() > BeforeIndex)
break;
AfterIndex = index.Value() + To.size;
indexes.Add(index);
}
return indexes;
}
|
package escalonador;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
public class GeradorNumerosAleatorios {
private List<Integer> valores;
private int semente;
public GeradorNumerosAleatorios(int semente) {
this.valores = new ArrayList<Integer>();
this.semente = semente;
}
public List<Integer> metodoMisto(int a, int c, int mod) {
valores = new ArrayList<Integer>();
valores.add(semente);
if (a < mod && c < mod) {
for (int i = 0; i < mod - 1; i++) {
int xn = valores.get(i);
int xn1 = ((a * xn) + c) % mod;
valores.add(xn1);
}
}
return valores;
}
public List<Integer> metodoMultiplicativo(int a, int mod) {
return metodoMisto(a, 0, mod);
}
public List<Integer> metodoAditivo(List<Integer> sequenciaInicial, int mod) {
valores.clear();
valores.addAll(sequenciaInicial);
for (int i = 0; i <= sequenciaInicial.size(); i++) {
int novoNum = (valores.get(valores.size() - 1) + valores.get(i)) % mod;
valores.add(novoNum);
}
return valores;
}
public void setSemente(int novaSemente){
this.semente = novaSemente;
}
public static void gerarArquivo(String metodo, String valores) {
// try-with-resources closes the writer and avoids a NullPointerException
// when the file cannot be created
try (PrintWriter writer = new PrintWriter("saida/" + metodo, "UTF-8")) {
writer.println(valores);
} catch (Exception e) {
e.printStackTrace();
}
}
public static void main(String[] args) {
GeradorNumerosAleatorios gerador = new GeradorNumerosAleatorios(7);
gerarArquivo("metodo misto1",Arrays.toString(gerador.metodoMisto(1, 1, 10).toArray()));
gerarArquivo("metodo misto2",Arrays.toString(gerador.metodoMisto(1, 1, 5).toArray()));
gerarArquivo("metodo multiplicativo",Arrays.toString(gerador.metodoMultiplicativo(7, 10).toArray()));
gerarArquivo("metodo aditivo",Arrays.toString(gerador.metodoAditivo(new ArrayList<Integer>(Arrays.asList(25, 55, 78, 87, 32)), 100).toArray()));
}
}
|
SPHERE ** (out of four) -by Bill Chambers<EMAIL_ADDRESS>(For more lame-ass reviews visit my scum-hearted website: FILM FREAK CENTRAL! http://www.geocities.com/Hollywood/Set/7504 Lots to read, and a special section where you can tell me (and others) what to see. Visit it, you damn filthy apes!)
starring Dustin Hoffman, Sharon Stone, Samuel L. Jackson, Liev Schreiber; screenplay by Stephen Hauser and Paul Attanasio, based on Michael Crichton's novel; directed by Barry Levinson
I have a bit of a history with Sphere. In May of 1996, I began writing a screenplay adaptation of Crichton's novel with the naive hope that nice letters to Mr. Levinson would coax him into taking a look at it. After scripting 96 pages, I broke my arm, couldn't type for a while... I also hadn't received any word back from Barry Baby after firing off two notices. I wasn't aware that he had already put his assistant/script reader to work at it. In the end it was great practice--I was teaching myself the art of adaptation. Still, Mr. Baltimore could have mailed me a form letter, a polite screw you.
Hoffman stars as Norman, a psychologist summoned to the bottom of the ocean to investigate a 300-year-old spacecraft as part of the "ULF", a team he proposed during the Bush administration when asked to write a report on dealing with the possibility of alien contact. He is joined by Stone as Beth, a neurotic biochemist, Schreiber as Ted, a neurotic astrophysicist, and Jackson as Harry, a curiously un-neurotic mathematician. What they discover (along with Peter Coyote as the obligatory military hard-ass) in their exploration is that the ship may not be alien at all but, in fact, American. And they encounter the titular sphere, a shiny golden ball-bearing the size of a house that reflects people only as they are about to get sucked into it. Soon the proverbial giant squid attacks the ship.
Surprisingly, the first third of the film does the novel justice: the pacing is good and the dynamic between the characters is quickly established. Now, I try always to judge a film adaptation and its source material separately, mostly due to my not being well-read and having no basis for comparison. But in this case, I must. Levinson and his writers make a wrongheaded left turn in the plot away from the Crichton novel at the end of act one, a misjudgment that ultimately robs the climax of what could have been a visually delightful and enlightening revelation. In other words, the filmmakers play their big card too early in a throwaway moment that is only the first of several head-scratching bits to come. The acting is strong, Jackson in particular, and they're basically working from nothing; one would be hard pressed to determine what these characters' occupations are if they entered the cinema after the opening sequences. The art direction and cinematography are generally lazy; we've seen these steaming pipes and rusted catwalks before. Did the engineers of the spaceship refer back to '80's science fiction films before designing their vessel?
The movie also, unfortunately, accentuates the book's flaws, logic-holes that one looks over in a page-turner. I don't want to discuss them here, now, lest I give away the (goofy, curious) ending, which stays true to the novel in many ways. One could do worse than spend a night at Sphere, but it proves that the not-bad Wag The Dog, last month's Levinson picture, was a blip on the radar. His career is headed the way of Sphere's spaceship.
-Bill Chambers; February, 1998
The review above was posted to the
rec.arts.movies.reviews newsgroup (de.rec.film.kritiken for German reviews).
The Internet Movie Database accepts no responsibility for the contents of the
review and has no editorial control. Unless stated otherwise, the copyright
belongs to the author.
Please direct comments/criticisms of the review to relevant newsgroups.
Broken URLs in the reviews are the responsibility of the author.
The formatting of the review is likely to differ from the original due
to ASCII to HTML conversion.
Related links: index of all rec.arts.movies.reviews reviews
|
EVERYTHING YOU CAN'T DENY! (starts dancing)"
- Gregory to Simon
Personality
He's also an excellent liar: deceptive, unpredictable and deeply manipulative.
Virginia
Post-Apocalypse
"Knots Untie"
"Go Getters"
"Hearts Still Beating"
"Rock in the Road"
"The Other Side"
"Something They Need"
"Mercy"
"Monsters"
"The Big Scary U"
"The King, the Widow, and Rick"
"How It's Gotta Be"
"Dead or Alive Or"
"Do Not Send Us Astray"
He is spared during Simon's exposure and later tries to sneak away during the fight between Negan and Simon, but Dwight gives him the plans to warn Rick. He later arrives back at Hilltop, and despite giving them the map to warn them of the attack, he is placed back in the cell, now alone.
"Wrath"
"A New Beginning"
Gregory will appear in this episode.
Relationships
Negan
Paul Rovia
Rick Grimes
Maggie Rhee
Sasha Williams
Simon
Gabriel Stokes
Kal
Appearances
Season 6
* "Knots Untie"
Season 7
* "Go Getters"
* "Hearts Still Beating"
* "Rock in the Road"
* "The Other Side"
* "Something They Need"
Season 8
* "Mercy"
* "Monsters"
* "The Big Scary U" (Flashback)
* "The King, the Widow, and Rick"
* "How It's Gotta Be"
* "Dead or Alive Or"
* "Do Not Send Us Astray"
* "Worth"
Season 9
* "A New Beginning"
Trivia
* The casting call name for this character was "Rich".
|
Outdoor studio flash
FIG. 1 is a Perspective view of an outdoor studio flash showing my new design;
FIG. 2 is a Front view thereof;
FIG. 3 is a Rear view thereof;
FIG. 4 is a Right-side view thereof;
FIG. 5 is a Left-side view thereof;
FIG. 6 is a Top view thereof; and,
FIG. 7 is a Bottom view thereof.
The broken lines immediately adjacent to the shaded areas depict the bounds of the claimed design, while all other broken lines are directed to environment. The broken lines form no part of the claimed design.
The “NEEWER” lettering and icon forming part of the disclosure is a registered trademark of SHENZHEN XING YING DA INDUSTRY CO. LTD.
CLAIM We claim the ornamental design for an outdoor studio flash, as shown and described.
|
Populating a dropdown list from an SQL database
So I'm creating a photo gallery for a social network. At the minute I'm creating a pre-upload page. I have set up 5 fields to enter images with appropriate captions. However, my main issue is populating the drop-down list. The drop-down list should contain categories of photos from a database (photo_category), but the drop-down list is turning up empty. Any suggestions on where I'm going wrong?
Edit: I've now changed what's in $row by adding category_id and category_name, but it now says "Parse error: syntax error, unexpected T_ENCAPSED_AND_WHITESPACE, expecting T_STRING or T_VARIABLE or T_NUM_STRING on line 16". The line in question is preceded by 2 stars.
<?php
require 'configphot.inc.php';
$_photo_upload_fields='';
$count=1;
$nofields=(isset($_GET['number_of_fields']))?
(int) ($_GET['number_of_fields']):5;
$result = mysqli_query($link,"SELECT category_id, category_name FROM photo_category");
while($row=mysqli_fetch_array($result)){
$_photo_category_list .= <<<__HTML_END
**<option value="$row['category_id']">$row['category_name']**</option>\n
__HTML_END;
}
mysqli_free_result($result);
while($counter<=$nofields)
{
$photo_upload_fields .= <<<__HTML_END
<tr><td>
Photo{$counter}:
<input name = "photo_filename[]" type= "file" />
</td></tr>
<tr><td>
Caption:
<textarea name ="photo_caption[]" cols = "30" rows ="1"></textarea>
</td></tr>
__HTML_END;
$counter++;
}
//End output
echo <<<__HTML_END
<html>
<head>
<title> Upload photos here</title>
</head>
<body>
<form enctype="multipart/form-data" action="upload.php" method="post" name= "upload_form">
<table width="90%" border="0" align="center" style="width: 90%;">
<tr>
<td>Select Category
<select name="category">
$photo_category_list</select></td>
</tr>
<!image fields here -->
$photo_upload_fields
<tr><td>
<input type ="submit" name="submit" value= "Add photos">
</td></tr>
</table>
</form>
</body>
</html>
__HTML_END;
?>
You only select category_name, so the array won't have a $row[1]. Try changing the query to
SELECT id_category, category_name FROM photo_category
so that index 0 of the array is id_category and index 1 is category_name.
I did as you said and it's still empty
Try using the column names instead: $row['id_Category'], $row['category_name']
Just posted a re-edited version according to your recommendations. See above for new error
First of all: please format your code. Some tabs really help to keep track of the code.
In your SQL statement you are selecting just one column: SELECT category_name FROM photo_category
Two lines later you reference two columns: $row[0] (that's the category_name from your SQL query) and $row[1]. $row[1] does not exist and is therefore empty (and should produce at least a PHP Notice).
I changed it to "$result = mysqli_query($link,'SELECT category_id, category_name FROM photo_category');" however nothing still comes up unfortunately.
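For the follow-up T_ENCAPSED_AND_WHITESPACE error: inside a heredoc, PHP cannot parse an array access with a quoted key unless the whole expression is wrapped in braces. A minimal sketch of the loop with that fix (assuming the same $link mysqli connection and photo_category table from the question):

```php
<?php
// Hypothetical minimal version of the category loop; assumes $link is an open
// mysqli connection and photo_category has category_id / category_name columns.
$photo_category_list = '';
$result = mysqli_query($link, "SELECT category_id, category_name FROM photo_category");
while ($row = mysqli_fetch_assoc($result)) {
    // In a heredoc, $row['category_id'] with a quoted key is a parse error;
    // the {$...} syntax makes the expression unambiguous.
    $photo_category_list .= <<<__HTML_END
<option value="{$row['category_id']}">{$row['category_name']}</option>
__HTML_END;
}
mysqli_free_result($result);
```

Also note the variable name: the loop in your posted code appends to $_photo_category_list (with a leading underscore) while the echoed page prints $photo_category_list, so even with the parse error fixed the dropdown would stay empty until the two names match.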
|