The S.T.E.M. department hosts three major events throughout the school year. These events aim to further pique student interest in STEM-related fields. Many scientists from our community donate their time to mentor, judge, and/or teach our IUSD students.
If you or someone you know is interested in volunteering as a scientist for any of these events, please fill out the IUSD STEM Events Volunteer Registration form. A STEM background is required.
For more information, please call the STEM department at (949) 936-5057 or email [email protected]
Ask-A-Scientist Night
Sponsored annually by Irvine Public Schools Foundation, in partnership with Broadcom Foundation, this event prepares students in grades 6-12 for their science fair projects. During Ask-A-Scientist Night, students can ask scientists, engineers, and other STEM professionals questions and get advice about their ideas for their science fair projects.
Date: Wednesday, September 29, 2021
Time: 6:00PM – 7:30PM
Location: Learning Center next to Creekside High School
21st Century Career Conference
Select middle school students have the opportunity to attend four break-out sessions, hearing from a variety of dynamic speakers about the many exciting facets of scientific, technological, and engineering careers. Members of the community volunteer to speak or conduct demonstrations about their STEM-related careers. Because space for this event is limited, which students may attend is at the discretion of the teacher.
Date: Tuesday, December 14, 2021
Time: 9:30 AM -12:15 PM
Location: Claire Trevor School of the Arts, UCI
Science Fair
2021-2022 IUSD Science Fair Description
2021-22 IUSD Science Fair, sponsored by Irvine Public Schools Foundation in partnership with Broadcom Foundation.
This year IUSD 6th-12th grade students will be provided the opportunity to explore their STEM passions through participation in IUSD’s annual Science Fair. Students who choose to participate will submit a Google Site containing a digital presentation, video and brief abstract.
A virtual fair, in which all projects will be available for public viewing, will be held January 24-February 4, 2022. During this time, projects will be judged by community members in STEM professions. An awards ceremony celebrating all participants is scheduled to be held in-person at University High School on February 15, 2022. Projects that are chosen to advance to the 2022 Orange County Science and Engineering Fair (OCSEF) will be notified by February 4, 2022. Registration for projects advancing to OCSEF must be completed by February 15, 2022.
For all students in Grades 6-12, participation in the Science Fair will be optional and completed outside of school hours. Support will be provided by mentor teachers at each site. Students may complete individual projects or work in groups of up to 3 people.
A Canvas course providing detailed instructions, resources, and examples is available to all students. Teacher mentors and Science Specialists will provide the link to students.
Prior to beginning their project, students are required to complete the OCSEF Initial Project Screening, including all necessary pre-approval forms. Digital project submissions for all students participating in the IUSD District Science Fair are due no later than January 5, 2022 at 4 p.m.
Please see the Science Fair Rules for additional information.
Important Dates
Aug. 1, 2021: OCSEF Initial Project Screening opens - required for ALL projects
September 13, 2021, 6:00-7:00 p.m.: IUSD Science Fair Parent/Student Information Night
If you missed the Information Night, use the links below to access the presentations. Also, students should join the IUSD Science Fair Canvas Course for the most up-to-date Science Fair information and resources to support students.
September 16 – December 18, 2021, 3:00-4:00 p.m.: IUSD Science Fair Office Hours
A weekly opportunity for 6th grade students to get answers to their questions and guidance on their projects.
September 29, 2021, 6:00-7:30 p.m.: Ask a Scientist Night
- Location: Creekside Education Center
- Registration is not required
- Masks are required for all attendees regardless of vaccination status.
October 1, 2021: Deadline for 6th Grade Students to submit Intent to Participate to Science Specialists
January 5, 2022: IUSD Digital Project Submission Deadline
January 24-February 4, 2022: IUSD Virtual Science Fair
February 15, 2022: IUSD Science Fair Awards Ceremony
OCSEF Resources
OCSEF provides additional support for students interested in participating in a Science & Engineering Fair.
OCSEF How to Pick a Science/Engineering Project Topic Workshop
Sat., Sept. 25, 2021, 10-11 a.m.
Designed for students who have never completed a science or engineering project, or who want to review basic tips and resources that will spark their imagination to create an attention-grabbing project. We will guide you through a process to identify your interests and introduce you to a variety of resources to explore.
Click here to register.
OCSEF Research and Engineering Design Academy
Based on the highly successful 2020 programs, OCSEF is offering the OCSEF "Research and Engineering Design Academy" in October-November, 2021, followed by weekly "Ask a Scientist/Engineer" sessions from November, 2021 through January, 2022.
- Oct. 2–Nov. 6, 2021, 6 sessions Science Fair Academy, Beginners (1.5 hrs./session)
- Oct. 2–Nov. 6, 2021, 6 sessions Science Fair Academy, Advanced (1.5 hrs./session)
- Nov.13–Jan. 15, 2022, 7 sessions Ask a Scientist/Engineer Program
- (11/13/21, 11/20, 12/4, 12/11, 12/18, 1/8/22, 1/15/22)
Check the OCSEF website (www.ocsef.org) in the fall for updates and details on Academy workshop registration.
Ulrike Passe, co-author of Designing Spaces for Natural Ventilation, discusses some of the misconceptions of natural ventilation and current trends in the field.
Why did Designing Spaces for Natural Ventilation need to be written?
Natural ventilation is a key design strategy for sustainable buildings. While a growing number of contemporary architects want to integrate natural ventilation flows into architectural design concepts, the evaluation and prediction of natural air movement and the related energy performance are still difficult to accomplish because of the complexity of the underlying physics. We perceived a clear gap between engineering knowledge of natural ventilation and its implementation in spatial design strategies by architects. The book is a first step toward closing this gap.
How is it different from other books in the field?
This is the first design guide on natural ventilation for architects, or at least the first complete book dedicated to natural ventilation and written for architects. Quite a few books on environmental forces, like Sun, Wind & Light, of course discuss natural ventilation, but they do not go much beyond basic rules of thumb. There are also several recent books on mechanical and natural ventilation written for engineers, discussing current engineering research, but no in-depth guidelines exist for designing architects. The existing literature communicates in a very technical manner and contains mathematical formulae many architects are ill equipped to incorporate for lack of knowledge, patience, or time. Although integrative design is on the rise, and the first sketch, which sites the building in the urban context or landscape, determines the success of the design for natural ventilation, very little guidance is available for doing this beyond looking at prevailing wind directions.
Are there any key messages you’d like to highlight?
Natural ventilation is a design issue: it requires spatial thinking and architectural understanding. Natural ventilation cannot be discussed in HVAC or environmental controls courses alone; it has to be intrinsically embedded in the design of the building and cannot be added at a later stage.
When used properly, natural ventilation can provide a large share of the required energy for free, without harm to the climate and atmosphere. Naturally ventilated buildings need to be designed, operated, and controlled differently from mechanically ventilated buildings. The book discusses and illustrates many proportional relationships useful for understanding natural ventilation and applying it to design projects. While building envelope characteristics are important, the space of the building and the multiple connections and networks of spaces are the key parameters of a well-designed naturally ventilated building.
What are some of the misconceptions of Natural Ventilation?
Quite a few misconceptions exist about natural ventilation. While much of natural ventilation is about ‘opening windows’, that in and of itself is often not sufficient. Especially in urban contexts or warmer climates, the most successful natural ventilation strategies often include some component of stack ventilation. But stack ventilation requires spatial composition strategies that are not commonly used among architects: for example, positioning the ‘neutral plane’ and raising the top outlet well beyond the upper floors, so that the rising hot air does not flow out through the upper spaces, which would otherwise overheat and generate occupant complaints.
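The proportional relationships behind stack ventilation can be sketched with the standard stack-effect relation Q = Cd·A·sqrt(2·g·h·ΔT/Ti). The short Python sketch below is ours, not from the book, and the discharge coefficient, opening size, and temperatures are assumed illustrative values:

```python
import math

def stack_flow_rate(area_m2, height_m, t_indoor_c, t_outdoor_c, cd=0.6):
    """Buoyancy-driven volumetric airflow (m^3/s) through an opening of
    area_m2, with height_m between inlet and outlet (the neutral-plane
    separation). Uses the standard relation Q = Cd*A*sqrt(2*g*h*dT/Ti)."""
    g = 9.81                       # gravitational acceleration, m/s^2
    t_i = t_indoor_c + 273.15      # indoor temperature in kelvin
    dt = t_indoor_c - t_outdoor_c  # indoor-outdoor temperature difference
    if dt <= 0:
        return 0.0                 # no stack-driven outflow if indoors is cooler
    return cd * area_m2 * math.sqrt(2.0 * g * height_m * dt / t_i)

# Raising the outlet from 3 m to 9 m above the inlet (same 0.5 m^2 opening,
# 24 C inside / 16 C outside) increases the flow by a factor of sqrt(3):
print(stack_flow_rate(0.5, 3.0, 24.0, 16.0))
print(stack_flow_rate(0.5, 9.0, 24.0, 16.0))
```

Note how the flow grows with the height of the outlet above the neutral plane, which is why raising the top outlet well beyond the upper floors is so effective.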
Another misconception is that opening windows always helps to cool a space. There are obviously moments during the summer season, on a hot summer day, when one should not open a window. Many climates require a fairly intricate operation and control schedule. For example, the Harm A. Weber Academic Center at Judson University in Elgin, Illinois, one of the cases studied in the book, has at least 15 different control strategies, yet can be naturally ventilated in the challenging climate of the Chicago area for about 60% of the year.
The effect of night-time cooling is often under-utilized. For example, here in Iowa I once heard an engineer say that Iowa has only two seasons: heating and cooling. But when you look in more detail at the Iowa climate, there are about 4 to 5 months when natural ventilation can easily be achieved with the addition of thermal mass for night-time cooling, properly positioned windows and doors, and a well-insulated envelope. We are currently studying these strategies with our 2009 entry into the US DOE Solar Decathlon, the Interlock House.
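A back-of-the-envelope estimate of that night-time cooling potential can be made from the sensible-heat relation Q = m·cp·ΔT. This sketch is ours, not from the book; the slab dimensions, material properties, and temperature swing are assumed illustrative values:

```python
def stored_heat_kwh(mass_kg, specific_heat_j_per_kg_k, delta_t_k):
    """Sensible heat (kWh) a thermal mass absorbs or releases over a
    temperature swing delta_t_k, via Q = m * cp * dT."""
    joules = mass_kg * specific_heat_j_per_kg_k * delta_t_k
    return joules / 3.6e6  # convert J to kWh

# Illustrative case: a 10 m^2 exposed concrete slab, 0.15 m thick
# (density ~2300 kg/m^3), flushed 4 K cooler by night ventilation.
slab_mass = 10 * 0.15 * 2300            # ~3450 kg
cp_concrete = 880                       # J/(kg*K), typical for concrete
print(stored_heat_kwh(slab_mass, cp_concrete, 4.0))  # roughly 3.4 kWh
```

Even this modest slab absorbs a few kilowatt-hours of heat the next day, which is the cooling that night ventilation "recharges" for free.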
What are some current trends in the field?
Computational fluid dynamics (CFD) simulations are improving the understanding of natural ventilation, and growing expertise in CFD will soon enable more precise design predictions. In addition, improved knowledge of the near-building environment, the microclimate, and urban wind patterns are current research topics that will advance the field of natural ventilation. A team at the Center for Building Energy Research (CBER) at Iowa State University is currently working on the validation of CFD models with measured data in our Iowa test bed.
The complexity of thermodynamics has been a major challenge in the prediction of natural ventilation. The interaction of natural ventilation flow with the thermal heat-transfer properties of solid materials is computationally very intensive and thus not yet well integrated into engineering and design prediction tools. Air velocity and the thermal capacity of materials are difficult to simulate in one equation system, and turbulence in larger spaces cannot be predicted with certainty. But turbulence is the basis for the mixing of air and thus for good natural ventilation. It is very complex in CFD to model flows between solid materials and air currents,[i] and further research into architectural design and fluid dynamics is needed.
What experience led you to write this book?
The concept for the research started in my own architecture practice in Berlin, Germany. My partner Thomas Kaelber and I were designing House Marxen,[ii] a single-family home, with my father as the client. We designed the house in 2000-01 with spatially interconnected volumes to support air flow to such a degree that the temperature was kept within an acceptable range. In this approach, architecture serves as a passive-energy device. Anecdotal information confirmed that natural ventilation flows in this house support the cooling and heating of the building. Yet, at the time of design and construction, there were no easily accessible design tools to quantify the spatial effect on this flow. The structure is overlaid by a spatial composition using volumetric proportions of the Fibonacci sequence, which connects rhythms and sequences of space on three different levels, opening up spatial connections for vision and movement, as intuitively predicted by the architect and experienced by the user during occupancy over the last 10 years.
This sparked the decision to embark on a ten-year journey to reveal the hidden physics of natural ventilation to the designing architect. I teamed up with Francine Battaglia, an expert in computational fluid dynamics, to address this challenge.
What suggestions would you make for change, future research or interventions?
The next big challenge for sustainable design strategies is the adaptation of passive design strategies like natural ventilation to climate change, and further research is needed on the near-building environment as well as on buildings' adaptability to future climate scenarios. New technologies for opening and control mechanisms for occupant interaction need to be developed, much in the same way as dynamic shading strategies. And beyond the concepts and strategies laid out in great detail in the book, we need more refined design tools. A lot has already been achieved over the past 10 years, and new computational design tools are more readily available. We at Iowa State University are currently working on a design plug-in for natural ventilation, but it is still in its infancy.
[i] Preston Stoakes, Ulrike Passe, and Francine Battaglia, "Predicting Natural Ventilation Flows in Whole Buildings. Part 1: The Viipuri Library," Building Simulation 4, no. 3 (2011).
[ii] Ulrike Passe and Thomas Kaelber, "Casa a Marxen," L'architettura naturale: International Review on Sustainable Architecture, year IX, no. 30 (March 2006), Edicom Edizioni, Milano, pp. 62-65.
---
abstract: '[In this paper, we characterize bounded ancient solutions to the time-dependent Stokes system with zero boundary value in various domains, including the half-space.]{}'
author:
- 'Hao Jia, Gregory Seregin, Vladimír Šverák'
title: 'Liouville theorems in unbounded domains for the time-dependent Stokes System '
---
*Dedicated to Professor Peter Constantin on the occasion of his 60th birthday.*
[Introduction]{} In this paper, we show that any solution $u(x,t)\in L^{\infty}(\Omega\times (-\infty,0))$ to $$\begin{aligned}
\left.\begin{array}{rl}
\partial_t u-\Delta u+\nabla p &=0\\
\mbox{div}~~u&=0
\end{array}\right\}&& \quad \mbox{in $\Omega\times (-\infty,0)$, and}\label{eq:main1}\\
u|_{\partial \Omega}=0&& \label{eq:main2}\end{aligned}$$ satisfies $$\begin{aligned}
\label{eq:result}
u(x,t)=\left\{\begin{array}{ll}
0 &~~~ \mbox{if $\Omega\subset R^n$ is a bounded domain and $n\ge 2$,}\\
u(t) &~~~ \mbox{if $\Omega=R^n$ and $n\ge 2$,}\\
u(t,x_n) ~~~\mbox{with}~~~u_n=0 &~~~ \mbox{if $\Omega$ is half space $x_n>0$ and $n\ge 2$,}\\
a(t)+O(\frac{1}{|x|^{n-2}}) ~~~\mbox{as}~~~|x|\to \infty &~~~\mbox{if $\Omega\subset R^n$ is an exterior domain with $n\ge 3$. }
\end{array}\right.\end{aligned}$$
Throughout the paper we assume that the domains $\Omega$ have smooth boundary. One has to be somewhat careful with the definition of the boundary condition $u|_{\partial\Omega}=0$, since a priori we only assume $u$ to be bounded, with no further regularity assumptions. The usual definition is the following:
We call $u(x,t)\in L^{\infty}(\Omega\times (-\infty,0))$ a very weak ancient solution to equations (\[eq:main1\]), (\[eq:main2\]) if $$\label{eq:main3}
\int_{-\infty}^0\int_{\Omega}u(x,t)(\partial_t \phi+\Delta \phi)(x,t)dxdt=0$$ for any $\phi\in C_c^{\infty}(\overline{\Omega}\times (-\infty,0))$ with $\mbox{div}~~\phi=0$, $\phi|_{\partial \Omega\times (-\infty,0)}=0$; and $$\label{eq:main4}
\int_{-\infty}^0\int_{\Omega}u(x,t)\nabla \psi(x,t)dxdt=0$$ for any $\psi\in C_c^{\infty}(\overline{\Omega}\times (-\infty,0))$.
Solutions defined in this way are often called very weak solutions in the literature and we also use this terminology. For smooth solutions the definition coincides with the usual one, as one can easily check by integration by parts. Our main result is as follows:
Let $u$ be a bounded very weak ancient solution to equations (\[eq:main1\]),(\[eq:main2\]), in the sense that $u$ satisfies equations (\[eq:main3\]),(\[eq:main4\]). Then $u$ is given by (\[eq:result\]).
**Remarks**: The results are essentially sharp. This is obvious in the cases when $\Omega$ is $R^n$ or a bounded domain. In the case of a half space, one can take $u=(u_1(t,x_n),\dots,u_{n-1}(t,x_n),0)$, where $u_i$ satisfies $\partial_tu_i-\partial_n^2u_i=f_i(t), ~~u_i|_{x_n=0}=0$ for $1\leq i\leq n-1$, with $f_i\in C_c^{\infty}(-\infty,0)$. Then $u$ is a solution to equations (\[eq:main1\]),(\[eq:main2\]). This is the example given in [@SV10]. In an exterior domain, the decay rate we obtain is as good as that of the fundamental solution of the steady Stokes equations.\
Our work is motivated by boundary regularity for the Navier-Stokes equations. The main interest is in the case of the half-space; the other cases are included for completeness. The connection between regularity and Liouville-type theorems is of course classical. In the context of the Navier-Stokes equations it is discussed, for example, in [@KNSS; @SS]. In a recent note [@SV10] a bounded shear flow for the unsteady Stokes equations is constructed which is not fully regular, although its boundary value is zero. This example simplifies earlier constructions of [@kK01]. The lack of boundary regularity in the time-dependent case is in contrast with the case of the steady Stokes equations, see e.g. [@Kang]. For the time-dependent Stokes and Navier-Stokes equations, one usually treats the pressure as an auxiliary variable determined by $u$. This treatment is valid as long as we have some decay of $u$ at spatial infinity. On the other hand, it is known that in unbounded domains, if we do not assume decay of $u$, the pressure may act as an external force ‘driving’ the fluid motion, as in the example of [@SV10]. In such situations, we lose boundary regularity even with vanishing boundary conditions. In this context, our result can be understood as showing that the solutions in [@SV10] are in some sense the only obstacle to full boundary regularity (in suitable solution classes).
The paper is organized as follows. In section 2 we introduce some technical lemmas to be used below. Section 3 deals with the simple cases when $\Omega$ is a bounded domain or the whole space. Sections 4 and 5 deal with the more subtle cases when $\Omega$ is a half space or an exterior domain. For exterior domains, we use a standard extension argument together with some estimates for the linear Stokes system. For the half space, which is the most interesting case, we use the Fourier transform. There is also a proof based on duality arguments, which requires some additional point-wise estimates of solutions to the linear Stokes system in the half space. These estimates may be of independent interest, but the calculations are somewhat lengthy. This alternative proof will appear elsewhere.\
**Notation**. We will use standard notations. For example, $\Omega$ will be one of the four types of domain in $R^n$ mentioned above. $B_r(x_0):=\{x\in R^n||x-x_0|<r\}$, $Q_r(x_0,t_0):=B_r(x_0)\times (t_0-r^2,t_0)$, $Q_r:=Q_r(0,0)$. $C$ denotes an absolute positive number. $C(\alpha, \lambda,\cdots)$ denote a positive number depending only on $\alpha,\lambda,\cdots$, $\overline{A}$ denotes the closure of $A$, $A\Subset \mathcal{O}$ means the closure of $A$ is a compact subset of $\mathcal{O}$, $\partial_i:=\frac{\partial}{\partial x_i}$.
[Some technical lemmas]{} In the sequel we will make use of standard mollifications. For completeness we include the following standard lemma:
\[lm:mollification\] Let $\Omega$ be as above, $u\in L^{\infty}_{x,t}(\Omega\times(-\infty,0))$. Take a standard smooth cutoff function $\eta(t)$ with $supp~~ \eta\Subset(0,1)$ and $\int\eta=1$. For each $\epsilon>0$, we define $u^{\epsilon}$ as a distribution in $\Omega\times (-\infty,0)$ in the following way, $$(u^{\epsilon},\phi)=\int_{-\infty}^0\int_{\Omega}u(x,t)\int_{-\infty}^0\frac{1}{\epsilon}\eta(\frac{s-t}{\epsilon})\phi(x,s)dsdxdt$$ for any smooth $\phi$ with $supp~~\phi\Subset\Omega\times(-\infty,0)$.\
Then $u^{\epsilon}$ is a bounded function with bounded distributional derivatives $\partial_t^ku^{\epsilon}$, $k=0,1,2,\cdots$. Moreover, we have the following estimates: $$\|\partial_t^ku^{\epsilon}\|_{L^{\infty}(\Omega\times(-\infty,0))}\leq C\epsilon^{-k}\|u\|_{L^{\infty}(\Omega\times(-\infty,0))}\mbox{.}$$
**Proof and Remarks**: The proof follows immediately from well-known properties of convolution. We note that due to our special choice of the support of $\eta$, the mollified function $u^{\epsilon}$ is still defined in $\Omega\times(-\infty,0)$. It is clear from the definition that $u^{\epsilon}$ converges weakly$^\ast$ to $u$ in $L^{\infty}(\Omega\times(-\infty,0))$. It is also clear that, after possibly changing the value of $u^{\epsilon}$ on a set of measure zero, the map $t\to u^{\epsilon}(\cdot,t)$ is continuous from $(-\infty,0)$ to $L^p(K)$ for any $K\Subset\Omega$, $1<p<\infty$.\
Let $u$ be a bounded distributional solution to the linear Stokes equations (\[eq:main1\]) in $Q_1$ with some distribution $p$. It is well known that we then have regularity of $u$ in $x$, for almost every $t$, in $Q_{1/2}$. We cannot, however, expect any regularity of $u$ in $t$, or any reasonable estimate on $p$ in general, assuming only that $u$ is bounded in $Q_1$. This point is usually illustrated by the example $u(t,x)=f(t)$, $p=-f'(t)\cdot x$: here $f(t)$ is bounded, but $f'(t)$ can be arbitrarily large. On the other hand, if we assume some estimate on $\partial_tu$, then we can improve the estimates on $p$. The following lemma summarizes this discussion.
\[lm:linearregularity\] Let $u$ be a bounded distributional solution to linear unsteady Stokes equations in $Q_1$. Let $\|u\|_{L^{\infty}_{x,t}(Q_1)}\leq 1$. Then for any multi-index $\alpha$ with $|\alpha|\ge 0$, $\|\partial_x^{\alpha}u\|_{L^{\infty}_{x,t}(Q_{1/2})}\leq C(n,\alpha)$. If in addition $\|\partial_tu\|_{L^{\infty}_{x,t}(Q_1)}\leq M$, then there exists a pressure field $p(x,t)$ such that (\[eq:main1\]) is satisfied and $\|\partial_x^{\alpha}p\|_{L^{\infty}_{x,t}(Q_{1/2})}\leq C(\alpha,M,n)$ for any multi-index $\alpha$ with $|\alpha|\ge 0$.
**Proof**: For the first part of the lemma, note that the vorticity $\omega_{ij}:=\partial_iu_j-\partial_ju_i$, $1\leq i,j\leq n$, satisfies the heat equation $\partial_t\omega_{ij}-\Delta \omega_{ij}=0$ in $Q_1$. Thus $\omega_{ij}$ is smooth in $Q_{3/4}$, with all derivatives bounded by constants depending only on $n$. From the divergence-free condition, we get $\Delta u_i=-\sum_{j=1}^n\partial_j\omega_{ij}$. The first part of the lemma then follows from interior estimates for the Laplace equation. For the second part, note that $\|\nabla p\|_{L^{\infty}_{x,t}(Q_{3/4})}\leq C(n,M)$ by the assumption on $\partial_tu$ and the first part of the lemma. Since we also have $\Delta p=0$, the estimate follows.\
**Remark**: The pressure is only determined up to an arbitrary function of $t$. (If we change $p$ to $p+c(t)$, equation (\[eq:main1\]) is not affected.) In estimates below we will usually assume a suitable choice of $c(t)$.\
We shall need the following extension result (which is interesting in its own right) below.
[(Extension of divergence-free vector field)]{}\[lm:extension\] For any smooth compactly supported vector field $g=(g_1,\cdots,g_{n-1},0)$ in $R^{n-1}$, there exists a smooth divergence free vector field $\phi=(\phi_1,\cdots,\phi_n)$ with ${\rm supp~~}\phi\Subset \overline{R^n_{+}}$ such that $\phi|_{x_n=0}=0$ and $\frac{\partial \phi}{\partial x_n}|_{x_n=0}=g$.
**Proof**: We seek $\phi$ in the form of $\phi_i=\sum_{j=1}^n\partial_j w_{ij}$, with some $w_{ij}\in C^{\infty}_c(\overline{R^n_{+}})$ and $w_{ij}=-w_{ji}$ for $1\leq i,j\leq n$. Note that under such conditions on $w_{ij}$, $\mbox{div~~}\phi=0$ is automatically satisfied. To satisfy boundary conditions for $\phi$, we need: $$\label{eq:boundary1}
\sum_{j=1}^n\frac{\partial w_{ij}}{\partial x_j}|_{x_n=0}=0\quad\mbox{for $1\leq i\leq n$,}$$ and $$\label{eq:boundary2}
\sum_{j=1}^n\frac{\partial^2w_{ij}}{\partial x_n\partial x_j}|_{x_n=0}=g_i\quad \mbox{for $1\leq i\leq n$, $g_n$=0.}$$ It is easy to verify that the $n$-th equation in (\[eq:boundary2\]) is also automatically satisfied once the rest of the equations in the above are satisfied. To satisfy equations (\[eq:boundary1\])(\[eq:boundary2\]), we first require $w_{ij}|_{x_n=0}=0$, for all $1\leq i,j\leq n$. Then equations (\[eq:boundary1\]) become $\frac{\partial w_{in}}{\partial x_n}|_{x_n=0}=0$ for $1\leq i\leq n$. We further require that $w_{ij}=0$ if $1\leq i,j\leq n-1$, then equations (\[eq:boundary2\]) reduce to $\frac{\partial^2w_{in}}{\partial^2x_n}|_{x_n=0}=g_i$ for $1\leq i\leq n-1$. Summarizing the above analysis, it is sufficient to find $w_{in}\in C_c^{\infty}(\overline{R^n_{+}})$ for $1\leq i\leq n-1$, $w_{in}=-w_{ni}$ satisfying $$\left.\begin{array}{rl}
w_{in}|_{x_n=0}&=0\\
\frac{\partial w_{in}}{\partial x_n}|_{x_n=0}&=0\\
\frac{\partial^2w_{in}}{\partial^2x_n}|_{x_n=0}&=g_i
\end{array}\right\}\quad\mbox{for $1\leq i\leq n-1$.}$$ It is clear that we can always find such $w_{in}$. Thus $\phi$ satisfying conditions in the lemma exists.\
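For instance (this explicit choice is ours, for illustration), one can take $$w_{in}(x',x_n)=\frac{x_n^2}{2}g_i(x')\chi(x_n)\quad\mbox{for $1\leq i\leq n-1$,}$$ where $\chi\in C_c^{\infty}([0,\infty))$ with $\chi\equiv 1$ near $x_n=0$. Indeed $w_{in}|_{x_n=0}=0$, $\partial_nw_{in}|_{x_n=0}=0$, $\partial_n^2w_{in}|_{x_n=0}=g_i$, and $w_{in}$ is compactly supported in $\overline{R^n_{+}}$ since $g_i$ is.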
We collect some facts about the operator $|\nabla|$ which will be used in our proofs.\
For $f\in \mathcal{S}(R^n)$, we define $|\nabla|f(x)=(|\xi|\hat{f}(\xi))^{\vee}(x)$, where we have used the Fourier transform $\hat{f}(\xi)=\frac{1}{(2\pi)^{n/2}}\int_{R^n}e^{-ix\cdot \xi}f(x)dx$ and the inverse Fourier transform $\check{f}(x)=\frac{1}{(2\pi)^{n/2}}\int_{R^n}e^{ix\cdot \xi}f(\xi)d\xi$. One can write $|\nabla|f=\sum_{j=1}^n-\frac{\partial_j}{|\nabla|}\partial_jf=\sum_{j=1}^n R_j\partial_j f$, where $R_j$ denotes the Riesz transform. Clearly $|\nabla|$ can be considered as a continuous operator from $\mathcal{S}(R^n)$ to $L^1(R^n)$. By duality, we can extend $|\nabla|$ to act on $L^{\infty}(R^n)$ according to the usual formula $\langle|\nabla|f,\phi\rangle=\langle f,|\nabla|\phi\rangle$ for any $\phi\in \mathcal{S}(R^n)$.\
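For completeness, the representation $|\nabla|f=\sum_{j=1}^nR_j\partial_jf$ can be checked on the Fourier side: since $\widehat{R_jf}(\xi)=-\frac{i\xi_j}{|\xi|}\hat{f}(\xi)$, we have for $f\in\mathcal{S}(R^n)$ $$\sum_{j=1}^n\widehat{R_j\partial_jf}(\xi)=\sum_{j=1}^n\left(-\frac{i\xi_j}{|\xi|}\right)(i\xi_j)\hat{f}(\xi)=\frac{|\xi|^2}{|\xi|}\hat{f}(\xi)=|\xi|\hat{f}(\xi)=\widehat{|\nabla|f}(\xi)\mbox{.}$$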
We recall the following obvious continuity result.
\[L:lemmano1\] Let $|\nabla|$: $L^{\infty}(R^n)\longmapsto \mathcal{S}'(R^n) $ be defined as above. If $u_m\in L^{\infty}$ and $u_m$ converges weakly$^\ast$ to $u$ in $L^{\infty}$ (viewed as the dual of $L^1(R^n)$), then $|\nabla|u_m$ converges to $|\nabla|u$ in $\mathcal{S}'(R^n)$.
**Proof**: This follows directly from the definitions.\
Recall the definition of the Hölder norm in $R^n$: $\|u\|_{C^{m,\alpha}(R^n)}:=\sum_{|\beta|\leq m}\sup_{x\in R^n}|\partial^{\beta}u(x)|+\sup_{|\beta|=m}\sup_{x,y\in R^n,~~x\neq y}\frac{|\partial^{\beta}u(x)-\partial^{\beta}u(y)|}{|x-y|^{\alpha}}$ for any $m\ge 0$, $0<\alpha<1$. The Hölder space $C^{m,\alpha}(R^n)$ consists of all $u$ with $\|u\|_{C^{m,\alpha}}<\infty$. We will use the following estimate:
\[L:lemmano2\] $|\nabla|:C^{m+1,\alpha}(R^n)\mapsto C^{m,\alpha}(R^n)$ is bounded, for $m\geq 1,~~0<\alpha<1$.
**Proof**: This follows from the representation $|\nabla|=\sum_j R_j\partial_j$ and the Schauder estimates for the Riesz transform.
We will denote by $|\nabla'|$ the analogue of $|\nabla|$ acting only on the variables $x_1,\dots,x_{n-1}$. For a Schwartz function $f$, $|\nabla'|f(x',x_n)=(|\xi'|\hat{f}(\xi',x_n))^{\vee}(x')$, where the Fourier transform and inverse Fourier transform are both with respect to the first $n-1$ variables. From the definition, it is clear that if $f(x',x_n,t)\in L^{\infty}(R^{n-1}\times(x_1,x_2)\times(t_1,t_2))$, then $|\nabla'|f\in \mathcal{D}'(R^{n-1}\times(x_1,x_2)\times(t_1,t_2))$. Moreover, if $\partial_{x'}^l\partial_n^k\partial_t^mf\in L^{\infty}$, then $|\nabla'|\partial_{x'}^l\partial_n^k\partial_t^mf=\partial_{x'}^l\partial_n^k\partial_t^m|\nabla'|f$ in $\mathcal{D}'(R^{n-1}\times(x_1,x_2)\times(t_1,t_2))$.\
For bounded harmonic functions in the upper half spaces, we have the following result (see also [@sU87], for example).
\[lm:lemmano3\] Let $f$ be a bounded harmonic function in the upper half space $R^n_{+}$. Then $(\partial_n f+|\nabla'|f)(x)=0$ in the sense of distributions in $R^n_{+}$.
**Remarks:** By classical regularity for harmonic functions and lemma \[L:lemmano2\], we see that both $\partial_nf$ and $|\nabla'|f$ are smooth functions in the interior of $R^n_+$.
**Proof**: Let $P(x,y)$ be the Poisson kernel. By classical representation results there exists $g\in L^{\infty}(R^{n-1})$ such that $f(x)=\int_{R^{n-1}} P(x,y)g(y)\,dy.$ By approximation and the continuity properties of $|\nabla'|$ we can assume without loss of generality that $g$ is smooth and compactly supported. Applying the Fourier transform in the $x_1,\dots,x_{n-1}$ variables, we have $\hat f(\xi',x_n)=\hat g(\xi')e^{-|\xi'|x_n}$ and the result follows.
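For the reader's convenience, the last step can be spelled out: differentiating $\hat{f}(\xi',x_n)=\hat{g}(\xi')e^{-|\xi'|x_n}$ in $x_n$ gives $$\partial_n\hat{f}(\xi',x_n)=-|\xi'|\hat{g}(\xi')e^{-|\xi'|x_n}=-|\xi'|\hat{f}(\xi',x_n)=-\widehat{|\nabla'|f}(\xi',x_n)\mbox{,}$$ which is exactly the Fourier transform in $x'$ of the identity $\partial_nf+|\nabla'|f=0$.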
[The cases $\Omega=R^n$ or a bounded domain]{} In this section, we first deal with the (easy) cases when $\Omega=R^n$ or $\Omega$ is a bounded domain. Recall that our goal is to show that bounded very weak ancient solutions to (\[eq:main1\])(\[eq:main2\]) are given by (\[eq:result\]).\
1. $\Omega=R^n$.\
In this case, it is not difficult to see that equations (\[eq:main3\])(\[eq:main4\]) are equivalent to $$\left.\begin{array}{rl}
\partial_tu-\Delta u+\nabla q&=0\\
\mbox{div}~~u&=0
\end{array}\right\}\quad\mbox{in $R^n\times(-\infty,0)$}$$ in the sense of distributions for some $q\in \mathcal{D}'(R^n\times(-\infty,0))$.\
For $1\leq i,j\leq n$, let $\omega_{ij}=\partial_ju_i-\partial_iu_j$; then clearly $\partial_t\omega_{ij}-\Delta \omega_{ij}=0$ in $R^n\times(-\infty,0)$. Since the $\omega_{ij}$ are bounded in some negative Sobolev space, parabolic regularity immediately implies that the $\omega_{ij}$ are bounded functions. Thus the $\omega_{ij}$ are so-called bounded ancient solutions to the heat equation, and consequently each $\omega_{ij}$ is a constant $c_{ij}$. Since $u$ is divergence free, we get $\Delta u_i=\sum_{j=1}^n\partial_j\omega_{ij}=0$ in $R^n\times(-\infty,0)$. Therefore $u(x,t)=f(t)$ for some bounded measurable $f$ and a.e. $t$. This completes the proof when $\Omega=R^n$.\
2. $\Omega$ is a bounded domain.\
In this case our goal is to show that bounded very weak ancient solutions $u$ to (\[eq:main1\])(\[eq:main2\]) are identically 0. We use a duality argument as follows. For any $f\in C_c^{\infty}(\Omega\times(0,+\infty))$, let $\tilde{\phi}$ solve $$\begin{aligned}
\left.\begin{array}{rl}
\partial_t\tilde{\phi}-\Delta \tilde{\phi}+\nabla q&=f\\
\mbox{div}~~\tilde{\phi}&=0
\end{array}\right\}&&\quad\mbox{in $\Omega\times(0,\infty)$,}\\
\tilde{\phi}(\cdot,t)|_{\partial\Omega}=0\mbox{.}&&\end{aligned}$$ The existence, uniqueness and regularity of such solutions are well known; see e.g. [@Galdi]. Moreover, we have $\lim_{t\to \infty}\|\tilde{\phi}(\cdot,t)\|_{L^2(\Omega)}=0$ (the decay is actually exponential). Take a standard smooth cutoff function $\eta(t)$ with $\eta(t)=0$ for $t>2$. For any $R>0$, let $\phi_R(x,t)=\eta(-\frac{t}{R})\tilde{\phi}(x,-t)$ for $t\in (-\infty,0)$. Then from equations (\[eq:main3\])(\[eq:main4\]) we obtain $$\begin{aligned}
&&0=\int_{-\infty}^0\int_{\Omega}u(x,t)(\partial_t\phi_R+\Delta\phi_R)dxdt\\
&&=\int_{-\infty}^0\int_{\Omega}u(x,t)(-\partial_t\tilde{\phi}+\Delta\tilde{\phi})(x,-t)\eta(-\frac{t}{R})dxdt\\
&&-\frac{1}{R}\int_{-\infty}^0\int_{\Omega}u(x,t)\eta'(-\frac{t}{R})\tilde{\phi}(x,-t)dxdt\\
&&=-\int_{-\infty}^0\int_{\Omega}u(x,t)f(x,-t)\eta(-\frac{t}{R})dxdt-\frac{1}{R}\int_{-\infty}^0\int_{\Omega}u(x,t)\eta'(-\frac{t}{R})\tilde{\phi}(x,-t)dxdt\\
&&+\int_{-\infty}^0\int_{\Omega}u(x,t)\eta(-\frac{t}{R})\nabla q(x,-t)dxdt\mbox{.}\end{aligned}$$ Using the fact that $f$ is compactly supported in $t$, that $q(\cdot,-t)$ is smooth in $x$ so that the $\nabla q$ term vanishes by the divergence-free condition (\[eq:main4\]), that $u$ is bounded, and that $\lim_{t\to \infty}\|\tilde{\phi}(\cdot,t)\|_{L^1(\Omega)}=0$ (since $\Omega$ is bounded), we can send $R\to \infty$ and obtain $$\int_{-\infty}^0\int_{\Omega}u(x,t)f(x,-t)dxdt=0\mbox{.}$$ Since $f$ is arbitrary, we must have $u\equiv 0$.
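The exponential decay of $\tilde{\phi}$ used above follows from a standard energy estimate; a sketch (our addition), assuming $f$ vanishes for $t\ge T_f$ and writing $\lambda_1>0$ for the Poincaré constant of the bounded domain $\Omega$:

```latex
% Testing the dual Stokes system with \tilde\phi itself: the pressure term
% drops since div \tilde\phi = 0 and \tilde\phi|_{\partial\Omega} = 0, so for t >= T_f
\frac{1}{2}\frac{d}{dt}\|\tilde\phi(\cdot,t)\|_{L^2(\Omega)}^2
  = -\|\nabla\tilde\phi(\cdot,t)\|_{L^2(\Omega)}^2
  \le -\lambda_1\|\tilde\phi(\cdot,t)\|_{L^2(\Omega)}^2 ,
% and Gronwall's inequality then gives
\|\tilde\phi(\cdot,t)\|_{L^2(\Omega)}
  \le e^{-\lambda_1 (t-T_f)}\,\|\tilde\phi(\cdot,T_f)\|_{L^2(\Omega)} .
```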
[The case $\Omega=R^{n}_{+}$]{} Now let us deal with the more subtle case when $\Omega$ is a half space. In fact one can still use the duality idea from the case of bounded domains. In this case, however, one has to study the decay properties of solutions to the linear Stokes equations quite carefully. One also has to appropriately localize $\tilde{\phi}$ (keeping the notation of the last section), since in (\[eq:main3\]) the test function $\phi$ is required to have compact support. The authors have obtained a proof using such a method, which will appear elsewhere.\
Here we take a different approach based on the Fourier transform in which the calculations are simpler.\
Let $u$ be as above, and take a smooth mollifier $\eta(x',t)$ with $supp~~\eta\Subset B_1(0)\times(0,1)\subseteq R^{n-1}_{x'}\times R_t$ and $\int \eta=1$. We define the mollified vector field $u^{\epsilon}$ similarly as before, again by duality: for any smooth $\phi$ with $supp~~\phi\Subset R^n_{+}\times(-\infty,0)$, $$(u^{\epsilon},\phi):=\int_{-\infty}^0\int_{R^n_{+}}u(x,t)\int_{R^{n-1}}\int_{-\infty}^0\epsilon^{-n}\eta(\frac{y'-x'}{\epsilon},\frac{s-t}{\epsilon})\phi(y',x_n,s)dy'dsdxdt\mbox{.}$$ Similarly as before, one can show that $u^{\epsilon}$ is bounded with bounded distributional derivatives $|\partial_t^k\partial_{x'}^{\alpha}u^{\epsilon}|\leq C(k,\alpha,n)\epsilon^{-k-|\alpha|}\|u\|_{L^{\infty}_{x,t}}$. We have the following result:
Let $u$ be a bounded very weak ancient solution to (\[eq:main1\])(\[eq:main2\]) in $R^n_{+}\times(-\infty,0)$, $u^{\epsilon}$ is defined as above. Then $u^{\epsilon}$ is smooth with all derivatives bounded in $\overline{R^n_{+}}\times (-\infty,0)$. Moreover, $u^{\epsilon}$ still satisfies equations (\[eq:main3\]),(\[eq:main4\]) and $u^{\epsilon}(\cdot,t)|_{x_n=0}=0$.
**Proof**: From equations (\[eq:main3\])(\[eq:main4\]) and definition of $u^{\epsilon}$ we see $$\label{eq:no1}
\int_{-\infty}^0\int_{\Omega}u^{\epsilon}(x,t)(\partial_t \phi+\Delta \phi)(x,t)dxdt=0$$ for any $\phi\in C_c^{\infty}(\overline{\Omega}\times (-\infty,0))$ with $\mbox{div}~~\phi=0$, $\phi|_{\partial \Omega\times (-\infty,0)}=0$; and $$\label{eq:no2}
\int_{-\infty}^0\int_{\Omega}u^{\epsilon}(x,t)\nabla \psi(x,t)dxdt=0$$ for any $\psi\in C_c^{\infty}(\overline{\Omega}\times (-\infty,0))$. These clearly imply $$\left.\begin{array}{rl}
\partial_tu^{\epsilon}-\Delta u^{\epsilon}+\nabla q&=0\label{eq:mollified}\\
\mbox{div}~~u^{\epsilon}&=0
\end{array}\right\} \quad\mbox{in $\mathcal{D}'(R^n_{+}\times(-\infty,0))$.}$$ We first show that $u^{\epsilon}$ is smooth up to the boundary $\{x_n=0\}$. From the differentiability properties of $u^{\epsilon}$ in $x',t$, we see that $q$ is well defined for each $t\in(-\infty,0)$ modulo some $c(t)$. Moreover, from $\Delta q=0$ and elliptic estimates, we know that $q$ is smooth in $x$ away from the boundary $\{x_n=0\}$. Now let us rewrite the $n$-th equation of (\[eq:mollified\]) as $$0=\partial_tu^{\epsilon}_n-\Delta u^{\epsilon}_n+\partial_nq=\frac{\partial}{\partial x_n}(q-\partial_nu^{\epsilon}_n)-\Delta_{x'}u^{\epsilon}_n+\partial_tu^{\epsilon}_n\mbox{.}$$ Note that $\partial_n u^{\epsilon}_n=-\sum_{i=1}^{n-1}\partial_i u^{\epsilon}_i$ is bounded up to $x_n=0$. We see that $q-\partial_nu^{\epsilon}_n$ is bounded up to the boundary, and thus $q$ is bounded up to the boundary $\{x_n=0\}$. The same argument also shows that $\nabla_{x'}^{\alpha}q$ is bounded up to the boundary. Using $\Delta q=0$ we obtain that $q$ is smooth in the spatial variables up to $x_n=0$. Then from $\partial_n^2u^{\epsilon}=\partial_tu^{\epsilon}-\Delta_{x'}u^{\epsilon}+\nabla q$ we get $u^{\epsilon}\in C^2$. By differentiating the equations in $x_n$ and applying similar arguments we obtain smoothness of $u^{\epsilon}$. Next we show $u^{\epsilon}|_{x_n=0}=0$. Since $u^{\epsilon}$ is smooth in $\overline{R^n_{+}}$, we can use equations (\[eq:mollified\]) and integration by parts in equations (\[eq:no1\])(\[eq:no2\]) to obtain: $$\begin{aligned}
\int_{-\infty}^0\int_{R^{n-1}}u^{\epsilon}_n\psi\, dx'dt&=&0\mbox{,}\\
\int_{-\infty}^0\int_{R^{n-1}}u^{\epsilon}\cdot\partial_n\phi\, dx'dt&=&0\mbox{.}\end{aligned}$$ Clearly $\psi|_{x_n=0}$ can be an arbitrary smooth compactly supported function, thus $u^{\epsilon}_n|_{x_n=0}\equiv 0$. By lemma \[lm:extension\], $\partial_n\phi|_{x_n=0}$ can be any smooth compactly supported vector field with zero $n$-th component, thus $u^{\epsilon}_i|_{x_n=0}\equiv 0$ for $1\leq i\leq n-1$. Therefore, $u^{\epsilon}|_{x_n=0}\equiv 0$.\
Now we can prove our main theorem in this section.
Let $u$ be a bounded very weak ancient solution to equations (\[eq:main1\])(\[eq:main2\]) in $R^n_{+}\times (-\infty,0)$. Then $u$ depends only on $x_n$ and $t$, i.e. $u(x,t)=u(x_n,t)$, and $u_n\equiv 0$.
**Proof**: By the above results, it is clear that we only need to prove our theorem in the case when $u(x,t)$ is smooth up to the boundary, with all derivatives bounded and $u|_{x_n=0}=0$. Then we see that $\frac{\partial p}{\partial x_i}$ is bounded for $1\leq i\leq n$. Since $\Delta \frac{\partial p}{\partial x_i}=0$, from lemma \[lm:lemmano3\] we get $(\partial_n+|\nabla'|)\frac{\partial p}{\partial x_i}=0$. Applying the operator $(\partial_n+|\nabla'|)$ to the $n$-th equation of (\[eq:main1\]), and noting the commutativity of the various Fourier multipliers (since $u$ is smooth), we infer that $(\partial_n+|\nabla'|)u_n$ satisfies the heat equation in $R^n_{+}\times (-\infty,0)$, with $(\partial_n+|\nabla'|)u_n$ bounded and $(\partial_n+|\nabla'|)u_n|_{x_n=0}=0$ (since $\partial_nu_n=-\sum_{i=1}^{n-1}\partial_iu_i$ and $u|_{x_n=0}=0$). Thus by Liouville’s theorem for the heat equation in a half space, we get $(\partial_n+|\nabla'|)u_n\equiv 0$, and consequently $\Delta u_n=0$. Since $u_n$ is also bounded with $u_n|_{x_n=0}=0$, we see $u_n\equiv 0$. Therefore $\frac{\partial p}{\partial x_n}=0$, and thus $|\nabla'|\frac{\partial p}{\partial x_i}=0$ for $1\leq i \leq n$. Again applying the operator $|\nabla'|$ to the first $n-1$ equations of (\[eq:main1\]), we get that $|\nabla'|u'$ satisfies the heat equation in $R^n_{+}\times (-\infty,0)$ with $(|\nabla'|u')|_{x_n=0}=0$ and $|\nabla'|u'$ bounded. Using Liouville’s theorem for the heat equation in a half space again, we obtain $|\nabla'|u'\equiv 0$. Thus $u'(x,t)=u'(x_n,t)$. Summarizing the above, we obtain $u(x,t)=u(x_n,t)$ and $u_n(x,t)\equiv 0$.
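To illustrate the lemma invoked above (our addition; the computation is standard): on the Fourier side in $x'$, the bounded harmonic functions in the half space are built from the modes $e^{i\xi'\cdot x'}e^{-|\xi'|x_n}$, and each such mode is annihilated by $\partial_n+|\nabla'|$, since $|\nabla'|$ acts on it as multiplication by $|\xi'|$. A SymPy check of one real-valued mode:

```python
# Check that the bounded harmonic mode e^{-|xi'| x_n} cos(xi'. x') on the
# half space is annihilated by the operator (d_n + |grad'|).
import sympy as sp

x1, x2, xn, k1, k2 = sp.symbols("x1 x2 xn k1 k2", real=True)
k = sp.sqrt(k1**2 + k2**2)              # |xi'|
h = sp.exp(-k * xn) * sp.cos(k1 * x1 + k2 * x2)

lap = sp.diff(h, x1, 2) + sp.diff(h, x2, 2) + sp.diff(h, xn, 2)
assert sp.simplify(lap) == 0            # h is harmonic in the half space

# On this mode the multiplier |grad'| acts as multiplication by k = |xi'|:
assert sp.simplify(sp.diff(h, xn) + k * h) == 0
print("mode annihilated")
```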
[The case $\Omega$ is an exterior domain]{} Let $u$ be a bounded very weak ancient solution to (\[eq:main1\])(\[eq:main2\]) in an exterior domain $\Omega$ (i.e., the complement of $\Omega$ is homeomorphic to a ball). In this section we show that, for $n\ge 3$, $u(x,t)=f(t)+O(\frac{1}{|x|^{n-2}})$ for some bounded $f$. More precisely, we have the following theorem:
Let $u$ be a bounded very weak ancient solution to equations (\[eq:main1\])(\[eq:main2\]) in $\Omega\times(-\infty,0)$, where $\Omega\subset R^n$ ($n\ge 3$) is an exterior domain with $\Omega^c\subset B_R$ for some $R>0$, and assume $\|u\|_{L^{\infty}_{x,t}}\leq 1$. Then there exists a function $a(t)$ with $|a(t)|\leq 1$ for almost every $t$ such that $$|u(x,t)-a(t)|\leq \frac{C(n,R)}{|x|^{n-2}} \quad\mbox{for almost every $|x|\ge 4R$ and $t<0$.}$$
For this purpose, we first mollify $u$ in the $t$ variable as in lemma \[lm:mollification\]; it is clear that the $u^{\epsilon}$ thus obtained still satisfies equations (\[eq:main3\])(\[eq:main4\]). Our first goal is to show that $u^{\epsilon}$ is smooth in $\overline{\Omega}\times (-\infty,0)$ and $u^{\epsilon}|_{\partial\Omega,t<0}=0$.
Let $u$ and $u^{\epsilon}$ be as above. Then $u^{\epsilon}$ is smooth in $\overline{\Omega}\times (-\infty,0)$ and $u^{\epsilon}|_{\partial\Omega,t<0}=0$.
**Proof**: Clearly $u^{\epsilon}$ verifies $$\left.\begin{array}{rl}
\partial_tu^{\epsilon}-\Delta u^{\epsilon}+\nabla q&=0\\
\mbox{div}~~u^{\epsilon}&=0
\end{array}\right\}\quad\mbox{in $\Omega\times(-\infty,0)$.}$$
In equations (\[eq:main3\])(\[eq:main4\]), take test functions as $\eta(t)\phi(x)$, $\eta(t)\psi(x)$ respectively for smooth $\phi$, $\psi$ with ${\rm supp}~~\phi$, ${\rm supp}~~\psi\Subset \overline{\Omega}$, $\mbox{div}~~\phi=0$, $\phi|_{\partial\Omega}=0$ and $\eta\in C_c^{\infty}(-\infty,0)$. We obtain by integration by parts (and definition of $u^{\epsilon}$): $$\begin{aligned}
\int_{-\infty}^0\int_{\Omega}(-\partial_tu^{\epsilon}\phi+u^{\epsilon}\Delta\phi)\eta(t)dxdt&=&0\mbox{,}\\
\int_{-\infty}^0\int_{\Omega}u^{\epsilon}\cdot\nabla\psi\eta(t)dxdt&=&0\mbox{.}\end{aligned}$$ Since $\eta$ is arbitrary, we get for any $t\in (-\infty,0)$, $$\begin{aligned}
\int_{\Omega}(-\partial_tu^{\epsilon}\phi+u^{\epsilon}\Delta\phi) dx &=&0\mbox{,}\label{eq:no3}\\
\int_{\Omega}u^{\epsilon}\cdot\nabla\psi dx &=&0\mbox{.}\label{eq:no4}\end{aligned}$$ Take $R>0$ sufficiently large such that $\Omega^{c}\subset B_R(0)$. For fixed $t<0$, we can find $v\in C^{1,1/2}(\Omega\cap B_R)$ satisfying $$\begin{aligned}
&&\left.\begin{array}{rl}
-\Delta v+\nabla p&=-\partial_tu^{\epsilon}(\cdot,t)\label{eq:no5}\\
\mbox{div}~~v&=0
\end{array}\right\}\quad \mbox{in $\Omega\cap B_R$,}\\
&& v|_{\partial \Omega}=0, \quad v|_{\partial B_R}=u^{\epsilon}(\cdot,t)|_{\partial B_R}\mbox{.}\end{aligned}$$ Note that in the interior of $\Omega$, $u^{\epsilon}$ is smooth by lemma \[lm:linearregularity\] and the definition of $u^{\epsilon}$. The existence of $v$ follows from well-known results on the steady Stokes system; we only remark here that the usual no-outflow condition required by the existence theory is satisfied in our situation, as can easily be seen by setting $\psi$ to be 1 in a neighborhood of $\partial \Omega$ in equation (\[eq:no4\]). Set $w=u^{\epsilon}(\cdot,t)-v$; we claim $w\equiv 0$ in $\Omega\cap B_R$. To prove the claim, take any $\phi\in C^{\infty}(\overline{\Omega\cap B_R})$ with $\mbox{div}~~\phi=0$ and $\phi|_{\partial (\Omega\cap B_R)}=0$, and any $\psi\in C^{\infty}(\overline{\Omega\cap B_R})$. We write $\phi=\phi_1+\phi_2$, $\psi=\psi_1+\psi_2$ with the following properties: $\mbox{div}~~\phi_1=\mbox{div}~~\phi_2=0$; $\phi_1,~~\phi_2$, $\psi_1,~~\psi_2$ are smooth; $\phi_1$ and $\psi_1$ equal $\phi$ and $\psi$ respectively in a neighborhood of $\partial \Omega$ and vanish in a neighborhood of $\partial B_R$. The existence of such a decomposition of $\psi$ is clear. To obtain this decomposition for $\phi$, one can localize $\phi$ by a standard cutoff function vanishing in a neighborhood of $\partial B_R$ and then use Bogovskii’s theorem to deal with the divergence-free condition; we omit the details here. With these decompositions, equations (\[eq:no3\])(\[eq:no4\]), the definition of $v$ and the fact that $u^{\epsilon}$ is smooth away from $\partial \Omega$, we easily obtain $\int_{\Omega}w\Delta\phi dx=0$ and $\int_{\Omega}w\nabla \psi dx=0$. Thus by the result in section 3, $w=0$. Therefore, $u^{\epsilon}(\cdot,t)\in C^{1,1/2}(\overline{B_R\cap \Omega})$ and $u^{\epsilon}|_{\partial\Omega}=0$. A simple bootstrapping argument gives smoothness of $u^{\epsilon}$. The lemma is proved.\
**Proof of main result of this section**\
Let us first summarize the above results as follows:\
$u^{\epsilon}$ is in $C^{\infty}(\overline{\Omega}\times(-\infty,0))$ with all derivatives bounded (with bounds depending on $\epsilon$) and, $u^{\epsilon}$ satisfies $$\begin{aligned}
\left.\begin{array}{rl}
\partial_tu^{\epsilon}-\Delta u^{\epsilon}+\nabla q&=0\\
\mbox{div}~~u^{\epsilon}&=0
\end{array}\right\}&&\quad\mbox{in $\Omega\times(-\infty,0),$}\\
u^{\epsilon}(\cdot,t)|_{\partial \Omega}=0\mbox{.}&&\end{aligned}$$ We extend $u^{\epsilon}$ to $R^n$ by setting $u^{\epsilon}=0$ in $\Omega^c$. It is not hard to see the extended $u^{\epsilon}$ satisfies $$\left.\begin{array}{rl}
\partial_tu^{\epsilon}-\Delta u^{\epsilon}+\nabla q&=\mu\\
\mbox{div}~~u^{\epsilon}&=0
\end{array}\right\}\quad\mbox{in $R^n\times(-\infty,0)$}$$ for the measure $\mu=f^{\epsilon}(x,t)d\sigma$, where $f^{\epsilon}=\frac{\partial u^{\epsilon}}{\partial \nu}-q\nu$ on $\partial \Omega$, $\nu$ is the outward unit normal of $\partial\Omega$, and $d\sigma$ is the surface measure of $\partial\Omega$. We set $$v^{\epsilon}(x,t)=\int_{-\infty}^tPe^{\Delta (t-s)}\mu(\cdot,s)ds=\int_{-\infty}^t\frac{1}{(t-s)^{n/2}}\int_{\partial \Omega}k(\frac{x-y}{\sqrt{t-s}})f^{\epsilon}(y,s)d\sigma(y)ds\,\,,$$ where $P$ is the Helmholtz projection onto divergence-free vector fields and $k(\cdot)$ is the kernel of $Pe^{\Delta}$; thus $|k(y)|\leq \frac{C(n)}{(1+|y|)^n}$. Simple calculations show that $v^{\epsilon}$ verifies the following estimates: $\|v^{\epsilon}(\cdot,t)\|_{L^1(B_{2R})}\leq C(R,n,\epsilon)$, $|v^{\epsilon}(x,t)|\leq \frac{C(n,\epsilon,R)}{|x|^{n-2}}$ and $|\nabla v^{\epsilon}(x,t)|\leq \frac{C(n,\epsilon,R)}{|x|^{n-1}}$ for $|x|\ge 2R$. In these calculations the assumption $n\ge3$ is important; one can check that for $n=2$ the integral may diverge. Clearly $w^{\epsilon}:=u^{\epsilon}-v^{\epsilon}$ satisfies $$\left.\begin{array}{rl}
\partial_tw^{\epsilon}-\Delta w^{\epsilon}+\nabla q&=0\\
\mbox{div}~~w^{\epsilon}&=0
\end{array}\right\}\quad\mbox{in $R^n\times(-\infty,0)$.}$$ Thus, by the case $\Omega=R^n$ treated above, $w^{\epsilon}=a^{\epsilon}(t)$, and consequently $u^{\epsilon}=v^{\epsilon}+a^{\epsilon}(t)$. At this stage, we would like to pass $\epsilon$ to zero. The decay estimate for $v^{\epsilon}$, however, depends on $\epsilon$ (since the bounds on $f^{\epsilon}$ depend on $\epsilon$). Thus we must first remove this dependence. To do this, let us consider the vorticity $\omega^{\epsilon}_{ij}=\partial_iu^{\epsilon}_j-\partial_ju^{\epsilon}_i$ for $1\leq i,j\leq n$. The $\omega^{\epsilon}_{ij}$ satisfy $\partial_t\omega^{\epsilon}_{ij}-\Delta \omega^{\epsilon}_{ij}=0$ in $(R^n\backslash B_{2R})\times (-\infty,0)$. By interior regularity for the heat equation, the scaling invariance $\omega^{\epsilon}_{ij}(x,t)\to \omega^{\epsilon}_{ij}(Mx,M^2t)$, and the $L^{\infty}$ bound on $u$, we easily conclude:\
*$\omega^{\epsilon}_{ij}$ is smooth in $(R^n\backslash B_{3R})\times (-\infty,0)$ with $|\partial_t\omega^{\epsilon}_{ij}(x,t)|\leq \frac{C(n,R)}{|x|^3}$, $|x|\ge 3R$, the estimate being independent of $\epsilon$.*\
Now fix $\epsilon=\epsilon_0>0$. From the estimates of $u^{\epsilon_0}$ above, we know $|\omega^{\epsilon_0}_{ij}(x,t)|\leq \frac{C(n,\epsilon_0,R)}{|x|^{n-1}}$, $|x|\ge 3R$. By the estimates of $\partial_t \omega^{\epsilon}_{ij}$ and the definition of $\omega^{\epsilon}_{ij}$, we see $|\omega^{\epsilon}_{ij}(x,t)-\omega^{\epsilon_0}_{ij}(x,t)|\leq \frac{C(n,\epsilon_0,R)}{|x|^3}$, $|x|\ge 3R$. Therefore, $|\omega^{\epsilon}_{ij}(x,t)|\leq C(n,\epsilon_0,R)(\frac{1}{|x|^3}+\frac{1}{|x|^{n-1}})$, $|x|\ge 3R$. In fact, by a bootstrap argument (a better decay estimate of $\omega^{\epsilon}_{ij}$ improves the estimate of $\partial_t\omega^{\epsilon}_{ij}$, which in turn implies a better decay estimate of $\omega^{\epsilon}_{ij}$), one can upgrade this estimate to $|\omega^{\epsilon}_{ij}(x,t)|\leq \frac{C(n,\epsilon_0,R)}{|x|^{n-1}}$, now independent of $\epsilon$, for $|x|\ge 3R$. For each fixed $t\in (-\infty,0)$, $u^{\epsilon}_i$ solves $$-\Delta u^{\epsilon}_i=-\sum_{j=1}^n\partial_j\omega^{\epsilon}_{ij} \quad\mbox{in $R^n\backslash B_{3R}$.}$$ From this and the boundedness of $u$, it is not hard to see that $u^{\epsilon}_i$ verifies the following bound: $|u^{\epsilon}(x,t)-a^{\epsilon}(t)|\leq \frac{C(n,\epsilon_0,R)}{|x|^{n-2}}$ for $|x|\ge 4R$. Letting $\epsilon\to 0$, the conclusion of this section is reached.
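The role of the assumption $n\ge 3$ in the decay of the layer potential can be checked numerically (our addition; a rough sanity check, not a proof). Using only the kernel bound $|k(y)|\le C(1+|y|)^{-n}$ and boundedness of $\partial\Omega$, the $s$-integral controlling $|v^{\epsilon}(x,t)|$ scales like $|x|^{2-n}$, so for $n=3$ doubling $|x|$ should halve the bound:

```python
import math

def layer_potential_bound(r, n=3, T=1e8, steps=200000):
    """Approximate int_0^T s^(-n/2) (1 + r/sqrt(s))^(-n) ds, the integral
    controlling |v^eps| at distance r = |x|, by a log-spaced midpoint rule."""
    lo, hi = math.log(1e-8), math.log(T)
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        s = math.exp(lo + (i + 0.5) * h)
        # extra factor s is the Jacobian of the substitution s = e^w
        total += s ** (-n / 2.0) * (1.0 + r / math.sqrt(s)) ** (-n) * s * h
    return total

ratio = layer_potential_bound(20.0) / layer_potential_bound(10.0)
print(ratio)  # close to 2^(2-n) = 0.5 for n = 3
```

(For $n=3$ the substitution $s=|x|^2 u$ shows the integral equals $|x|^{-1}$ exactly, which the numerical ratio reproduces.)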
Fefferman, C., Stein, E.M., *$\mathcal{H}^p$ spaces of several variables*, Acta Math. 129 (1972), no. 3-4, 137-193.\
Galdi, G.P., *An introduction to the mathematical theory of the Navier-Stokes equations, Vol. I: Linearized steady problems*, Springer Tracts in Natural Philosophy 38, Springer-Verlag, New York, 1994.\
Geissert, M., Heck, H., Hieber, M., *On the equation $\mbox{div}~~u=g$ and Bogovskii’s operator in Sobolev spaces of negative order*, Partial differential equations and functional analysis, 113-121, Oper. Theory Adv. Appl. 168, Birkhäuser, Basel, 2006.\
Kang, K., *On regularity of stationary Stokes and Navier-Stokes equations near boundary*, J. Math. Fluid Mech. 6 (2004), no. 1, 78-101.\
Kang, K., *Unbounded normal derivative for the Stokes system near boundary*, Math. Ann. 331 (2005), no. 1, 87-109.\
Koch, G., Nadirashvili, N., Seregin, G., Sverak, V., [*Liouville theorems for the Navier-Stokes equations and applications*]{}, Acta Math. 203 (2009), no. 1, 83-105.\
Seregin, G., Sverak, V., [*On type I singularities of the local axi-symmetric solutions of the Navier-Stokes equations*]{}, Comm. Partial Differential Equations 34 (2009), no. 1-3, 171-201.\
Seregin, G., Šverák, V., [*On a bounded shear flow in half-space*]{}, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 385 (2010), Kraevye Zadachi.\
Solonnikov, V.A., *On nonstationary Stokes problem and Navier-Stokes problem in a half space with initial data nondecreasing at infinity*, Journal of Mathematical Sciences 114 (2003), no. 5.\
Solonnikov, V.A., *Estimates for solutions of the nonstationary Stokes problem in anisotropic Sobolev spaces and estimates for the resolvent of the Stokes operator*, Russian Math. Surveys 58 (2003), no. 2, 331-365.\
Stein, E.M., *Singular integrals and differentiability properties of functions*, Princeton University Press, 1970.\
Ukai, S., *A solution formula for the Stokes equation in $R^n_{+}$*, Comm. Pure Appl. Math. 40 (1987), no. 5, 611-621.\
The present invention relates to an optical device (1), comprising a lens (10) having an adjustable focal length, the lens (10) comprising a container (11) that encloses a lens volume (V) and a reservoir volume (R) that is connected to the lens volume (V), wherein the two volumes (R, V) are filled with a transparent liquid (L), the container (11) further comprises a flat lateral wall structure (12) having a front side (12a) and a back side (12b), an elastically deformable and transparent membrane (20), a transparent cover element (30), and an elastically deformable wall portion (22), wherein the membrane (20) is connected to the back side (12b) of the lateral wall structure (12), the cover element (30) is connected to the front side (12a) of the lateral wall structure (12) such that the lens volume (V) is arranged between the cover element (30) and the membrane (20), and the wall portion (22) is arranged adjacent the reservoir volume (R), and the wall portion (22) comprises an inside (22a) and an outside (22b) facing away from said inside (22a), the inside (22a) contacts the liquid (L) residing in the reservoir volume (R), and the lens (10) further comprises a lens shaper (40) that is connected to the membrane (20) and defines an area (21) of the membrane (20), which area (21) has an adjustable curvature and contacts the liquid (L) in the lens volume (V), and the lens (10) further comprises a movable piston (50) connected to the outside (22b) of the wall portion (22) and configured to act on said outside (22b) to pump liquid (L) from the reservoir volume (R) into the lens volume (V) or from the lens volume (V) into the reservoir volume (R) so as to change the curvature of said area (21) of the membrane (20) and therewith the focal length of the lens (10).
---
abstract: 'The swimming of a pair of spherical bladders that change their volumes and mutual distance is efficient at low Reynolds numbers and is superior to other models of artificial swimmers. The change of shape resembles the wriggling motion known as [*metaboly*]{} of certain protozoa.'
author:
- |
J.E. Avron, O. Kenneth and D.H. Oaknin\
Department of Physics, Technion, Haifa 32000, Israel\
title: 'Pushmepullyou: An efficient micro-swimmer'
---
Swimming at low Reynolds numbers can be remote from common intuition because of the absence of inertia [@childress]. In fact, even the direction of swimming may be hard to foretell [@purcel]. At the same time, and not unrelated to this, it does not require elaborate designs: Any stroke that is not self-retracing will, generically, lead to some swimming [@wilczek]. A simple model that illustrates these features is the three linked spheres [@najafi], Fig. \[fig:swimmer\] (right), that swim by manipulating the distances $\ell_{1,2}$ between neighboring spheres. The swimming stroke is a closed, area enclosing, path in the $\ell_1-\ell_2$ plane. Another mechanical model that has actually been built is Purcell’s two hinge model [@blades].
Swimming efficiently is an issue for artificial micro-swimmers [@agk]. As we have been cautioned by Purcell not to trust common intuition at low Reynolds numbers [@purcel], one may worry that efficient swimming may involve unusual and nonintuitive swimming styles. The aim of this letter is to give an example of an elementary and fairly intuitive swimmer that is also remarkably efficient provided it is allowed to make large strokes.
The swimmer is made of two spherical bladders, Fig. \[fig:swimmer\] (left). The bladders are elastic bodies which impose no-slip boundary conditions. The device swims by cyclically changing the distance between the bladders and their relative volumes. For the sake of simplicity and concreteness we assume that their total volume, $v_0$, is conserved. The swimming stroke is a closed path in the $ v-\ell$ plane where $v$ is the volume of, say, the left sphere and $\ell$ the distance between them. We shall make the further simplifying assumption that the viscosity of the fluid contained in the bladders is negligible compared with the viscosity of the ambient fluid. For reasons that shall become clear below we call the swimmer pushmepullyou.
Like the three linked spheres, pushmepullyou is mathematically elementary only in the limit that the distance between the spheres is large, i.e. when ${\varepsilon}_i=a_i/\ell\ll 1$. ($a_i$ stands for the radii of the two spheres and $\ell$ for the distance between the spheres.) We assume that the Reynolds number $R={\rho av/\mu}\ll 1$, and that the distance $\ell$ is not too large: ${\ell v}\ll \mu/\rho$. The second assumption is not essential and is made for simplicity only. (To treat large $\ell$ one needs to replace the Stokes solution, Eq. (\[stokes\]), by the more complicated, but still elementary, Oseen-Lamb solution [@batchelor].)
Pushmepullyou is simpler than the three linked spheres: It involves two spheres rather than three; it is more intuitive and is easier to solve mathematically. It also swims a larger distance per stroke and is considerably more efficient [@movie]. If large strokes are allowed, it can even outperform conventional models of biological swimmers that swim by beating a flagellum [@lighthill]. If only small strokes are allowed then pushmepullyou, like all squirmers [@agk], becomes rather inefficient.
![*Five snapshots of the pushmepullyou swimming stroke (left) and the corresponding strokes of the three linked spheres (right). Both figures are schematic. After a full cycle the swimmers resume their original shape but are displaced to the right. Pushmepullyou is both more intuitive and more efficient than the three linked spheres.*[]{data-label="fig:swimmer"}](swimmer-z.eps "fig:"){width="5cm"}![*Five snapshots of the pushmepullyou swimming stroke (left) and the corresponding strokes of the three linked spheres (right). Both figures are schematic. After a full cycle the swimmers resume their original shape but are displaced to the right. Pushmepullyou is both more intuitive and more efficient than the three linked spheres.*[]{data-label="fig:swimmer"}](swimmer-i.eps "fig:"){width="5cm"}
The swimming velocity is defined by $\dot X= (U_1+U_2)/2$ where $U_i$ are the velocities of the centers of the two spheres. To solve a swimming problem one needs to find the (linear) relation between the (differential) displacement ${\kern-.1em{\raise.8ex\hbox{ -}}\kern-.6em{d}}X$, and the (differential) controls $(d\ell,dv)$. This relation, as we shall show, takes the form: $$\label{swim}
2\, {\kern-.1em{\raise.8ex\hbox{ -}}\kern-.6em{d}}X= \frac {a_1-a_2}{a_1+a_2}\ d\ell\ + \frac 1
{2\pi\ell^2} \ d v,$$ where $a_1,a_2$ are the radii of the left and right spheres respectively and $v$ is the volume of the left bladder. ${\kern-.1em{\raise.8ex\hbox{ -}}\kern-.6em{d}}X$ stresses that the differential displacement does not integrate to a function $X(\ell,v)$. Rather, the displacement $X(\gamma)$ depends on the stroke $\gamma$, defined as a closed path in $\ell-v$ plane. The first term says that increasing $\ell$ leads to swimming in the direction of the small sphere. It can be interpreted physically as the statement that the larger sphere acts as an anchor while the smaller sphere does most of the motion when the “piston” $\ell$ is extended. The second term says that when $\ell$ is held fixed, the swimming is in the direction of the contracting sphere: The expanding sphere acts as a source pushing away the shrinking sphere which acts as a sink to pull the expanding sphere. This is why the swimmer is dubbed pushmepullyou.
To gain further insight consider the special case of small strokes near equal bi-spheres. Using Eq. (\[swim\]) one finds, dropping sub-leading terms in ${\varepsilon}_i=a_i/\ell$: $$\label{curvature}
\delta X= \,\frac{1}{6}\, d\log v\wedge d\ell$$ The distance covered in one stroke scales like the area in the $\log v-\ell$ plane. Note that the swimming distance [*does not*]{} scale to zero with ${\varepsilon}$ when the spheres are far apart. This is in contrast with the three linked spheres, where the swimming distance of one stroke is proportional to ${\varepsilon}$. For a small cycle in the $\ell_1-\ell_2$ plane Najafi et al. find for a symmetric swimmer (Eq. (11) in [@najafi]): $$\label{curvature-iran}
\delta X=0.7 {\varepsilon}\,d\log \ell_2\wedge d\ell_1$$ When the swimmer is elementary (i.e. when ${\varepsilon}$ is small), it is also poor.
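As a numerical consistency check (our addition; the rectangle and parameter values are arbitrary illustrations), one can integrate Eq. (\[swim\]) around a small rectangular stroke near equal bi-spheres and compare the net displacement with the curvature formula (\[curvature\]):

```python
import math

V0 = 1.0                                  # conserved total volume (arbitrary units)

def radius(v):
    # radius of a spherical bladder of volume v
    return (3.0 * v / (4.0 * math.pi)) ** (1.0 / 3.0)

def two_dX(l, v, dl, dv):
    # Eq. (swim): 2 dX = (a1 - a2)/(a1 + a2) dl + dv/(2 pi l^2)
    a1, a2 = radius(v), radius(V0 - v)
    return (a1 - a2) / (a1 + a2) * dl + dv / (2.0 * math.pi * l ** 2)

def stroke_displacement(l0, v_lo, dl, dv, steps=200):
    # traverse the rectangle [l0, l0+dl] x [v_lo, v_lo+dv] and accumulate dX
    X, l, v = 0.0, l0, v_lo
    for sl, sv in [(1, 0), (0, 1), (-1, 0), (0, -1)]:
        for _ in range(steps):
            hl, hv = sl * dl / steps, sv * dv / steps
            X += 0.5 * two_dX(l + hl / 2.0, v + hv / 2.0, hl, hv)  # midpoint rule
            l, v = l + hl, v + hv
    return X

got = stroke_displacement(l0=50.0, v_lo=V0 / 2.0, dl=0.5, dv=0.02)
want = math.log((V0 / 2.0 + 0.02) / (V0 / 2.0)) * 0.5 / 6.0   # (1/6) dlog(v) dl
print(abs(got), want)   # agree to a few percent for this small stroke
```

The comparison is made in absolute value, since the sign of the displacement depends on the orientation in which the stroke is traversed.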
Consider now a large stroke associated with the closed rectangular path enclosing the box $\ell_s\leq\ell\leq\ell_L,\; v_s\leq v_1,
v_2 \leq v_L\equiv v_0-v_s$, where $v_1 = v$ and $v_2$ are, respectively, the volumes of the left and right bladders. If $a_s\ll a_L$ then from Eq. (\[swim\]), $X(\gamma)$ is essentially $\ell_L-\ell_s$: $$\label{step}
X(\gamma) = \left( \frac {a_L-a_s}{a_L+a_s}\right)\ (\ell_L-
\ell_s) \left(1+O({\varepsilon}^3)\right)
$$ This says that the distance covered in one stroke is of the order of the size of the swimmer, i.e. the distance between the balls $\ell$.
Certain protozoa and species of [*Euglena*]{} perform a wriggling motion known as [*metaboly*]{} where, like pushmepullyou, body fluids are transferred from a large spheroid to a small spheroid [@metaboly]. Metaboly is, at present, not well understood, and while some suggest that it plays a role in feeding, others argue that it is relevant to locomotion [@theriot]. The pushmepullyou model shows that, at least as far as fluid dynamics is concerned, metaboly is a viable method of locomotion. Racing tests made by R. Triemer [@triemer] show that Euglenoids swim 1-1.5 times their body length per stroke, in agreement with Eq. (\[step\]) for reasonable choices of stroke parameters. Since Euglena resemble deformed pears, for which there is no known solution to the flow equations, pushmepullyou is, at best, a biological over-simplification. It has the virtue that it admits complete analysis.
The second step in solving a swimming problem is to compute the power $P$ needed to propel the swimmer. By general principles, $P$ is a quadratic form in the velocities in the control space and is proportional to the (ambient) viscosity $\mu$. The problem is to find this quadratic form explicitly. If the viscosity of the fluid inside the bladders is negligible, one finds that in order to drive the controls $\ell$ and $v$, Pushmepullyou needs to invest the power $$\label{metric}
\frac P {6\pi\mu}= \left(\frac 1 {a_1} +\frac 1 {a_2}\right)^{-1}
\, \dot\ell^2 +\frac {2} {9\pi}\ \left(\frac 1 {v_1} +\frac 1
{v_2}\right) \dot{v}^2$$ Note that the dissipation associated with $\dot\ell$ is dictated by the [*small*]{} sphere and decreases as the radius of the small sphere shrinks. (The radius cannot get arbitrarily small: it must remain much larger than the atomic scale for the Stokes equations to hold.) The moral of this is that pushing the small sphere is frugal. The dissipation associated with $\dot v$ is also dictated by the small sphere. However, in this case, dilating a small sphere is expensive.
The drag coefficient is a natural measure to compare different swimmers. It measures the energy dissipated in swimming a fixed distance at fixed speed. (One can always decrease the dissipation by swimming more slowly.) Let $\tau$ denote the stroke period. The drag is formally defined by [@lighthill; @samuel]: $$\label{delta}\delta(\gamma)=\frac { \tau\int_0^\tau P dt}{6\pi\mu
X^2(\gamma)}\,.$$ Here $X(\gamma)$ is the swimming distance of the stroke $\gamma$. The smaller $\delta$, the more efficient the swimmer. $\delta$ has the dimension of length (in three dimensions) and is normalized so that dragging a sphere of radius $a$ with an external force has $\delta=a$.
To compute the dissipation for the rectangular path we need to choose rates for traversing it. The optimal rates are constant on each leg provided the coordinates are chosen as $(\ell,\arcsin\sqrt {v\over v_0})$. This can be seen from the fact that if we define $x=\arcsin\sqrt{v\over v_0}$, then $4v_0\dot{x}^2= \left(\frac 1 {v_1} +\frac 1 {v_2}\right)
\dot{v}^2$ and the Lagrangian associated with Eq. (\[metric\]) is quadratic in $(\dot \ell,\dot x)$ with constant coefficients, like the ordinary kinetic Lagrangian of non relativistic mechanics. It is a common fact that the optimal path of such a Lagrangian has constant speed.
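The change of coordinates used above can be checked symbolically (our addition):

```python
# Verify that with x = arcsin(sqrt(v/v0)) and v1 = v, v2 = v0 - v,
# 4 v0 x'(v)^2 = 1/v1 + 1/v2, so the metric is Euclidean in (l, x).
import sympy as sp

v, v0 = sp.symbols("v v0", positive=True)
x = sp.asin(sp.sqrt(v / v0))
lhs = 4 * v0 * sp.diff(x, v) ** 2
rhs = 1 / v + 1 / (v0 - v)
assert sp.simplify(lhs - rhs) == 0
print("identity holds")
```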
From Eq. (\[metric\]) we find, provided also $\ell_L^2\gg\ell_s^2, \ \ell_L/a_s\gg \sqrt{v_L/v_s}$ $$\label{dissipation}
\frac 1 {6\pi\mu} \int P dt\approx \frac{2
a_s\ell_L^2}{T_\ell}\left(1+O\left({\varepsilon}^2 \frac {v_L}{v_s} \,\frac
{T_\ell}{T_v}\right)\right),\quad T_\ell+T_v=\tau/2\,$$ where $T_\ell$ ($T_v$) is the time for traversing the horizontal (vertical) leg. (Here ${\varepsilon}^2$ is actually $(a_s/\ell_L)^2$ rather than the much larger $(a_L/\ell_s)^2$. Also note that the second term in Eq. (\[metric\]) contributed $O(v_L/T_\ell)$ rather than $O(v_L^2/(v_sT_\ell))$ as one may have expected from Eq. (\[metric\]), which is dominated by the small volume.) The optimal strategy, in this range of parameters, is to spend most of the stroke’s time on extending $\ell$. By Eqs. (\[delta\],\[step\],\[dissipation\]) this gives the drag $$\label{delta-best}
\delta \approx 4 a_s$$ where $a_s$ is the radius of the small bladder. [*This allows for the transport of a large sphere with the drag determined by the small sphere.*]{} To beat dragging, we need $a_s=a/4$, which means that most of the volume, $63/64$, must be shuttled between the two bladders in each stroke.
It is instructive to compare Pushmepullyou with the swimming efficiency of models of (spherical) micro-organisms that swim by beating flagella. These have been extensively studied by the school of Lighthill and Taylor [@lighthill; @blake] where one finds $\delta \ge 100\, a$. This is much worse than dragging. (We could not find estimates for the efficiency $\delta$ for swimming by ciliary motion [@cilia], but we expect that they are rather poor, as for other squirmers [@agk].) For models of bacteria that swim by propagating longitudinal waves along their surfaces Stone and Samuel [@samuel] established the (theoretical) lower bound $\delta \ge \frac{4}{3} a$. (Actual models of squirmers do much worse than the bound.) If the pushmepullyou swimmer is allowed to make large strokes, it can beat the efficiency of all of the above.
Eqs. (\[metric\],\[delta-best\]) do not strictly apply to metaboly because the viscosity of the fluid inside the organism cannot be neglected and presumably dominates the dissipation. Euglena are not as efficient as Pushmepullyou.
It is likely that some artificial micro-swimmers will be constrained to make only small (relative) strokes. Small strokes necessarily lead to large drag [@agk], but it is still interesting to see how large. Suppose $\delta\log
\ell\sim\delta\log v,\; a_1\sim a_2$. The dissipation in one stroke is then $$\label{dissipation-small-stroke} \frac {\int P
dt} {6\pi
\mu}=(\delta\ell)^2\left(\frac{a}{T_\ell}\right)
\left(1+O\left({\varepsilon}^2\frac{T_\ell}{T_v}\right)\right)$$ From Eq. (\[curvature\]) and noting that $T_\ell=\frac 1 2
\tau$, one finds $$\label{delta-squirmer}
\delta\approx \frac{72}{(\delta\log v)^2}\ a\ .$$
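For concreteness, a hedged numeric evaluation of Eq. (\[delta-squirmer\]); the 20% relative stroke is an assumed example value, not taken from the text:

```python
# Illustrative evaluation of delta ~ 72/(dlogv)^2 * a for small strokes.
# The relative stroke size dlogv = 0.2 is an assumed example value.
a = 1.0                           # mean sphere radius (arbitrary units)
dlogv = 0.2                       # relative stroke, delta log v
delta_small = 72 / dlogv**2 * a   # Eq. (delta-squirmer)
print(delta_small)                # -> 1800.0, far worse than dragging (delta = a)
```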
We shall now outline how the key results, Eqs. (\[swim\],\[metric\]), are derived. The flow around a pair of spheres is a classical problem in fluid dynamics which has been extensively studied [@JO; @KK]. We could have borrowed from the general results, e.g. in [@JO], and adapted them to the case at hand. However, it is both simpler and more instructive to start from scratch with the classical Stokes solution [@batchelor] describing the flow around a single sphere of radius $a$ dragged by a force $f$ and, in addition, dilated at rate $\dot v$: $$\label{stokes}4\pi \vec{u}({\vec x};a,f,\dot v) = \frac 1
{6\mu|x|}\left(\left(3+\frac{a^2}{x^2}\right)\vec{f}+\left( 1
-\frac {a^2}{x^2}\right) 3(\vec{f}\cdot\hat x)\hat x\right)+ \frac
{\dot v}{ x^2} \hat x.$$ $\vec{u}(\vec{x};a,f,\dot v)$ is the velocity field at a position $\vec{x}$ from the center of the sphere. The left term is the known Stokes solution. (A Stokeslet, [@batchelor], is defined as the Stokes solution for $a=0$.) The term on the right is a source term.
Since the Stokes equations are linear, a superposition of the solutions for two dilating spheres is a solution of the differential equations. However, it does not quite satisfy the no-slip boundary condition on the two spheres: there is an error of order ${\varepsilon}$. The superposition is therefore an approximate solution, valid provided the two spheres are far apart.
The (approximate) solution determines the velocities $U_i$ of the centers of the two spheres: $$\label{U}
U_i= \vec{u}(a_i
\hat{f};a_i,(-)^jf,0)+\,\vec{u}((-)^i\ell\hat{f};a_j,(-)^if,(-)^i\dot
v),\quad i\neq j\in\{1,2\}$$ The first term on the right describes how each sphere moves relative to the fluid according to Stokes law as a result of the force $\vec f$ acting on it. The second term (which is typically smaller) describes the velocity of the fluid surrounding the sphere (at distances $\gg a$ but $\ll\ell$) as a result of the movement of the other sphere. By symmetry, the net velocities of the two spheres and the net forces on them are parallel to the axis connecting the centers of the two spheres, and can be taken as scalars. To leading order in ${\varepsilon}$ Eq. (\[U\]) reduces to $$2\pi U_i=(-)^j \frac f \mu \left(\frac 1 {3 a_i}-\frac 1 {2
\ell}\right) +\frac {\dot v}{2 \ell^2}$$ Using $ \dot \ell =-U_1+U_2$ gives the force in the rod $$\label{force}
f =-6\pi\mu \left(\frac 1 {a_1} +\frac 1{a_2}\right)^{-1} \ \dot \ell$$ Dropping sub-leading terms in ${\varepsilon}$ gives Eq. (\[swim\]).
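The elimination of $U_1, U_2$ leading to Eq. (\[force\]) can be verified symbolically. The following sympy sketch (our own check, assuming the leading-order form of Eq. (\[U\]) quoted above) solves $\dot\ell = U_2 - U_1$ for $f$ and then drops the $O(a/\ell)$ correction:

```python
import sympy as sp

mu, a1, a2, l, ldot, vdot = sp.symbols('mu a1 a2 ell elldot vdot', positive=True)
f = sp.symbols('f')  # force in the rod (may be negative)

# Leading-order center velocities, Eq. (U):
#   2*pi*U_i = (-1)**j * (f/mu) * (1/(3*a_i) - 1/(2*ell)) + vdot/(2*ell**2)
U1 = ((f/mu)*(1/(3*a1) - 1/(2*l)) + vdot/(2*l**2)) / (2*sp.pi)   # i=1, j=2
U2 = (-(f/mu)*(1/(3*a2) - 1/(2*l)) + vdot/(2*l**2)) / (2*sp.pi)  # i=2, j=1

# Impose elldot = U2 - U1 (the vdot source terms cancel) and solve for f
f_sol = sp.solve(sp.Eq(ldot, U2 - U1), f)[0]

# Dropping the O(a/ell) correction (ell -> oo) recovers Eq. (force):
#   f = -6*pi*mu*(1/a1 + 1/a2)**(-1) * elldot
f_leading = sp.limit(f_sol, l, sp.oo)
print(sp.simplify(f_leading))
```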
We now turn to Eq. (\[metric\]). Consider first the case $\dot
v=0$. The power supplied by the rod is $-f(U_2-U_1)=-f\dot \ell$ which gives the first term. Now consider the case $\dot\ell=0$. The stress on the surface of the expanding sphere is given by $$\label{stress}
\sigma=-\frac{2\mu \dot v}{4\pi}\, \left(\frac 1
{x^2}\right)^\prime=\frac{\mu\dot v}{\pi a^3}$$ The power required to expand one sphere is then $$\label{dilating}
4\pi a^2\sigma\dot a=\sigma \dot v =\frac{4\mu}{3 v} (\dot v)^2$$ Since there are two spheres, this gives the second term in Eq. (\[metric\]).
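A short symbolic check (ours) that the stress of Eq. (\[stress\]) indeed yields the $\frac{4\mu}{3v}(\dot v)^2$ term, using $v = \frac{4}{3}\pi a^3$:

```python
import sympy as sp

mu, a, vdot = sp.symbols('mu a vdot', positive=True)
v = sp.Rational(4, 3) * sp.pi * a**3    # bladder volume
sigma = mu * vdot / (sp.pi * a**3)      # stress on the expanding sphere, Eq. (stress)
power = sigma * vdot                    # expansion power, Eq. (dilating)

# power == (4*mu/(3*v)) * vdot**2, i.e. mu*vdot**2/(pi*a**3)
assert sp.simplify(power - sp.Rational(4, 3) * mu * vdot**2 / v) == 0
print(sp.simplify(power))
```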
There are no mixed terms in the dissipation proportional to $\dot\ell\dot v$. This can be seen from the following argument. To leading order in ${\varepsilon}^0$, which is all we care about, the metric must be independent of $\ell$ (see Eq. (\[metric\])). Sending $\ell\to -\ell$ is equivalent to exchanging the two spheres. This cannot affect the dissipation, and hence the metric must be an even function of $\dot \ell$. In particular, there cannot be a term $\dot v \dot\ell$ in the metric. This completes the proof of Eq. (\[metric\]).
[**Acknowledgment**]{} This work is supported in part by the EU grant HPRN-CT-2002-00277. We thank H. Berg, H. Stone, and especially Richard Triemer for useful correspondence and for the Euglena racing tests.
[10]{} S. Childress, [*Mechanics of Swimming and Flying*]{}, (Cambridge University Press, Cambridge, 1981). E.M. Purcell, [*Life at low Reynolds number*]{}, Am. J. Phys. [**45**]{}, 3-11 (1977). A. Shapere and F. Wilczek, [*Geometry of self-propulsion at low Reynolds numbers*]{}, J. Fluid Mech. [**198**]{}, 557-585 (1989); [*Efficiency of self-propulsion at low Reynolds numbers*]{}, J. Fluid Mech. [**198**]{}, 587-599 (1989). A. Najafi and R. Golestanian, Phys. Rev. [**E69**]{}, 062901 (2004), cond-mat/0402070. L.E. Becker, S.A. Koehler, and H.A. Stone, J. Fluid Mech. [**490**]{}, 15 (2003); E.M. Purcell, Proc. Natl. Acad. Sci. [**94**]{}, 11307-11311 (1997). J. Avron, O. Kenneth and O. Gat, [*Optimal Swimming at Low Reynolds Numbers*]{}, Phys. Rev. Lett. [**93**]{}, 186001 (2004). G.K. Batchelor, [*An Introduction to Fluid Dynamics*]{}, (Cambridge University Press, Cambridge, 1967). A competition between the three linked spheres and pushmepullyou can be viewed at http://physics.technion.ac.il/ avron. The competition is made with the following rules: the spheres have the same (average) radii and the same (average) $\ell$. Furthermore, the strokes are similar rectangles in shape space with identical periods. Pushmepullyou is then both faster and spends considerably less energy. J. Lighthill, [*On the squirming motion of nearly spherical deformable bodies through liquids at very small Reynolds numbers*]{}, Comm. Pure Appl. Math. [**5**]{}, 109-118 (1952). Beautiful movies of metaboly can be viewed at the web site of Richard E. Triemer at http://www.plantbiology.msu.edu/triemer/Euglena/Index.htm
D.A. Fletcher and J.A. Theriot, [*An introduction to cell motility for the physical scientist*]{}, Physical Biology [**1**]{}, T1-T10 (2004). R.E. Triemer, private communication.
H.A. Stone and A.D. Samuel, [*Propulsion of micro-organisms by surface distortions*]{}, Phys. Rev. Lett. [**77**]{}, 4102-4104 (1996).
J.R. Blake, [*A spherical envelope approach to ciliary propulsion*]{}, J. Fluid Mech. [**46**]{}, 199-208 (1971).
C. Brennen and H. Winet, [*Fluid Mechanics of Propulsion by Cilia and Flagella*]{}, Annu. Rev. Fluid Mech. [**9**]{}, 339-398 (1977).
D.J. Jeffrey and Y. Onishi, J. Fluid Mech. [**139**]{}, 261 (1984).
S. Kim and S.J. Karrila, [*Microhydrodynamics*]{}, (Butterworth-Heinemann, Boston, 1991); B. Cichocki, B.U. Felderhof, and R. Schmitz, PhysicoChem. Hydrodyn. [**10**]{}, 383 (1988).
BACKGROUND OF THE INVENTION
1. Field of the Invention
The field relates generally to systems and methods for semiconductor processing and, more particularly, to systems and methods for scheduling processes and actions for a semiconductor processing system.
2. Description of the Related Art
Semiconductor devices are commonly used in electronic devices, power generation systems, etc. These semiconductor devices may be manufactured using semiconductor substrates, which may be processed in batches in order to provide high throughput. For example, semiconductor substrates (e.g., wafers made of silicon or other semiconductor material in various embodiments) may be processed in large numbers so that tens, hundreds, or even thousands of substrates may simultaneously undergo a particular processing step. In various embodiments, each substrate may form many individual devices, such as, e.g., integrated circuits and/or solar cells, or each substrate may form one device, such as, e.g., a solar cell.
Processing substrates in large batches allows manufacturers to produce many devices in a short amount of time. In a processing system, each substrate may typically undergo numerous transport steps and one or more processing steps that may occur at different processing stations or reactors. Scheduling the transport and processing steps in the appropriate order may significantly affect the number of substrates processed in a particular amount of time, e.g., the throughput of the processing system. Accordingly, there is a continuing need for improved methods and systems for increasing the throughput of various types of processing systems.
SUMMARY
The systems, methods and devices of the present disclosure each have several innovative aspects, no single one of which is solely responsible for the desirable attributes disclosed herein.
In one embodiment, a semiconductor processing system is disclosed. The semiconductor processing system may include a controller configured to schedule the operation of the processing system. The controller may be programmed to determine a current state of the system. The current state may be defined at least by a location of one or more substrates and a processing status of the one or more substrates. The controller may be further programmed to generate a search tree having one or more branches. Each branch may identify an action that is capable of being performed by the system in the current state and, when performed, brings the system into a next or subsequent state. Each branch may be provided with one or more further branches, each further branch defining an action capable of being performed by the system in the next state, and so on. The branches together can form one or more branch pathways and can define one or more consecutive actions that are capable of being performed by the system in the current state. Further, the controller may be programmed to score each branch pathway of the generated search tree. The controller may be programmed to select a branch pathway based at least in part on the score of the branch pathway.
In another embodiment, a method for semiconductor processing is disclosed. The method may comprise determining a current state of a semiconductor processing system. The current state may be defined at least by a location of one or more substrates and a processing status of the one or more substrates. The method may further comprise generating a search tree having one or more branches. Each branch may identify an action that is capable of being performed by the system in the current state and, when performed, brings the system into a next or subsequent state. Each branch may be provided with one or more further branches, each further branch defining an action capable of being performed by the system in the next state, and so on. The branches together can form one or more branch pathways and can define one or more consecutive actions that are capable of being performed by the system in the current state. Further, the method may include scoring each branch pathway of the generated search tree. The method may also include selecting a branch pathway of identified action(s) to be performed based at least in part on the score of the branch pathway.
Details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims.
DETAILED DESCRIPTION
The following detailed description of certain embodiments presents various descriptions of specific embodiments of the invention. However, the invention can be embodied in a multitude of different ways as defined and covered by the claims. In this description, reference is made to the drawings, where like reference numerals indicate identical or functionally similar elements.
It will be appreciated that processing systems may comprise multiple substrate locations, such as Input/Output locations, processing locations, storage locations, and intermediate locations. Further, a processing system may comprise multiple devices for moving the substrates between the various locations. Scheduling the progression of substrates through the system can be done using various sequences, some of which are efficient, resulting in a large number of processed substrates in a particular amount of time, i.e., high throughput of the processing system, while others are inefficient and may lead to congestion, bottlenecks, or even deadlocks. Various methods of scheduling have been used. In some methods, for example, a simulation is performed in advance, prior to the operation of the system, to determine the optimum scheduling sequence, and then this sequence is executed. During execution no further adjustments are made. In examples of other methods, a controller uses a pre-sequencer or look-ahead feature to identify congestion ahead of time, transfer substrates to a holding position, and reschedule these substrates at the earliest time for completion of processing. However, these methods are not very flexible and are not able to provide a processing sequence that maintains desired processing goals, e.g., high throughput, under varying circumstances, e.g., when processing circumstances unexpectedly change. Advantageously, various embodiments disclosed herein provide a scheduler for a processing system that can perform and adjust a sequence to maintain desired processing goals under varying circumstances. In some embodiments, the processing goals, and thus the sequence to be performed, may be changed in the midst of performing a sequence in response to changes in processing circumstances.
Embodiments disclosed herein relate to systems and methods for scheduling processes and/or actions of a processing system. As explained herein, a wafer or substrate may undergo numerous processes at different processing stations in order to manufacture a finished wafer or substrate. Electronic controllers or schedulers may be used to automate the process flow for substrates in one or more boats (also referred to herein as boat racks) in order to improve system throughput. Because it may be desirable to maintain high system throughput, there is a continuing need for controllers that are capable of scheduling the operation of the system in a way that increases throughput, including, e.g., scheduling the movement and processing of boats of substrates.
For example, in some embodiments, a current state of the system or part of a system, e.g., a reactor module in the system, may be determined. The current state may include the location and processing status of substrates in the system or part of the system, along with the status (e.g., position, task being performed, etc.) of equipment in the system or part of the system (e.g., robots, reactors, etc.). Based on the determined current state, a controller, e.g., a programmed processor, may generate a search tree having one or more branches of potential actions, with each branch identifying one or more subsequent actions that are capable of being performed by the system in the current state. Each branch may further branch off to one or more sub-branches including further subsequent actions. Each branch and its sub-branches, if any, may form one or more branch pathways that indicate alternative, consecutive actions that may be performed by the system. Each branch pathway of the generated search tree may be scored based on whether and how much the branch pathway improves system throughput (or other desired criteria), and a branch pathway of the tree may be selected based at least in part on the score of the branch pathway. The system may then perform the actions in the selected branch pathway of the search tree. Accordingly, in some embodiments, system throughput and efficiency may be improved by using knowledge of the current system state and the subsequent actions that may be performed when the system is in the current state.
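The branch-pathway search described above can be illustrated with a toy model. The single-boat state, the action set, and the scoring rule below are invented stand-ins for illustration; an actual controller state (multiple boats, robots, reactors) would be far richer.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass(frozen=True)
class State:
    boat_loc: str        # "io" | "carousel" | "reactor" | "done" (simplified)
    steps_used: int = 0  # actions performed so far

Action = Tuple[str, Callable[[State], bool], Callable[[State], State]]

# Each action: name, feasibility guard for the current state, state transition.
ACTIONS: List[Action] = [
    ("move_to_carousel", lambda s: s.boat_loc == "io",
     lambda s: State("carousel", s.steps_used + 1)),
    ("load_reactor", lambda s: s.boat_loc == "carousel",
     lambda s: State("reactor", s.steps_used + 1)),
    ("unload", lambda s: s.boat_loc == "reactor",
     lambda s: State("done", s.steps_used + 1)),
    ("wait", lambda s: True,
     lambda s: State(s.boat_loc, s.steps_used + 1)),
]

def pathways(state: State, depth: int) -> List[List[str]]:
    """Generate the search tree: every feasible action sequence up to `depth`."""
    if depth == 0:
        return [[]]
    out: List[List[str]] = []
    for name, feasible, step in ACTIONS:
        if feasible(state):
            out += [[name] + tail for tail in pathways(step(state), depth - 1)]
    return out

def score(start: State, path: List[str]) -> float:
    """Toy throughput score: reward finishing the boat, penalize elapsed steps."""
    table = {name: step for name, _, step in ACTIONS}
    s = start
    for name in path:
        s = table[name](s)
    return (1.0 if s.boat_loc == "done" else 0.0) - 0.01 * s.steps_used

start = State("io")
best = max(pathways(start, 3), key=lambda p: score(start, p))
print(best)  # -> ['move_to_carousel', 'load_reactor', 'unload']
```

The same skeleton extends naturally: a richer `State`, more actions, and a deeper search horizon, with the controller executing the first action of the best-scoring pathway and then re-planning from the new state.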
Processing Systems
FIGS. 1A and 1B illustrate an example of a semiconductor processing system 201 having two reactors, e.g., a dual reactor module (DRM), along with additional ancillary equipment, such as for moving and temporarily storing substrates. In particular, FIG. 1A is a schematic perspective view of the semiconductor processing system 201. FIG. 1B is a top plan view of the semiconductor processing system 201 of FIG. 1A. It should be appreciated that substrates may be processed in the system 201 in various orientations, including vertical or horizontal orientations, depending, e.g., on the type of substrate holder that is utilized. The substrates 113 are shown in FIG. 1A in a horizontal configuration. In some other embodiments, the substrate 113 may be processed oriented vertically.
The semiconductor processing system 201 comprises housing 102 and may in general have been installed in a so-called “clean room”. The housing 102 may include a reactor area or chamber 121 in which various processes may be performed on a substrate. An intermediate storage chamber 122 may be located in the housing 102 between partitions 103 and 104. An initial storage chamber 123 may be located between partitions 104 and 105 in the housing 102. An Input Output station (IO station) 133 may be provided adjacent the initial storage chamber to introduce the substrates into the processing system 201.
The processing system 201 of FIG. 1A illustrates a dual reactor module (DRM), e.g., the system 201 includes two reactors, a first reactor 106 and a second reactor 107, arranged in the reactor chamber 121. In the embodiment shown in FIG. 1A, the first reactor 106 and the second reactor 107 are furnaces, although it should be appreciated that the reactors 106 and 107 may be any suitable reactor or processing station, including, but not limited to, deposition chambers, lithography stations, etching stations, etc. The reactors 106, 107 are positioned vertically, and substrate boats 112 filled with substrates 113 may be introduced into the reactors 106, 107 in the vertical direction from below the reactors 106, 107. To this end each reactor may have an elevator 114, which is movable in the vertical direction. Only one elevator 114, associated with the second reactor 107, may be seen in FIG. 1A, although it should be appreciated that the first reactor 106 may also include an elevator 114. The boat 112 may be provided at the bottom with an insulating plug, which is not indicated in detail, which may provide a seal between the boat and the furnace.
As discussed herein, rotatable carousel 111, provided with cut-outs 115, may be positioned underneath the reactors 106, 107. The carousel 111 may include at least two carousel positions configured to support a boat of substrates. Further, the carousel 111 may be configured to transfer a boat 112 of substrates 113 between the first reactor 106 and the second reactor 107, or vice versa. For example, the carousel 111 may rotate by any suitable angle, e.g., 90 or 180 degrees, to move a boat 112 between the two reactors or to a boat transfer device 116, such as a robot arm. Those cut-outs 115 are shaped such that, if the cut-outs 115 have been brought into the correct position, the elevator 114 is able to move up and down through the cut-outs 115. On the other hand, the diameter of the bottom of the boat 112 may be such that the diameter is larger than the cut-out 115 in the carousel 111, so that when the elevator 114 moves downwards from the position shown in FIG. 1A the boat 112 may be placed on carousel 111 and may be removed therefrom again in a reverse operation.
The boats 112 may be fed to both the first reactor 106 and the second reactor 107, and various treatments may be performed therein. In some embodiments, parallel groups of boats 112 may be treated exclusively by the first reactor 106 and/or exclusively by the second reactor 107. The boats 112 may be provided with substrates 113. For example, substrates 113 may be supplied in transport cassettes 110 which, from the IO station 133, may be placed in storage station 108 through a closable opening 134 with the aid of arm 131. Arm 131 may include a bearing surface 132 which has dimensions smaller than those of a series of cut-outs 126 in a rotary platform 127. A number of such rotary platforms 127 may be provided one above the other in the vertical direction in storage station 108. The arm 131 may be movable in the vertical direction with the aid of height adjuster 135. Arm 131 may be mounted such that the arm 131 may pick up or remove cassettes 110 between the IO station 133 and the storage station 108. The arm 131 may also move cassettes between the storage station 108 and a rotary platform 130. The rotary platform 130 may be constructed such that on rotation the cassette 110 may be placed against partition 104 where an opening 137 has been made so that, after opening the cassettes 110, substrates 113 may be taken from the cassette 110 by substrate handling robot 124 and may be placed in the boat 112 located in the intermediate storage chamber 122. The boat 112, while located in intermediate storage chamber 122, may be supported by a boat transfer device 116 (e.g., a robotic arm) which may be provided with a bearing surface 117 at an end, the dimensions of which are once again somewhat smaller than those of cut-outs 115. The transfer device 116 may move the boat 112 through a closure 119 in partition 103 to place the boat 112 on the carousel 111 in the reaction chamber 121. The closure 119 is provided in order to be able to close off intermediate storage chamber 122 from the chambers 121 and 123.
To conduct various types of processing steps, an operator 140, shown diagrammatically in FIG. 1A, may load the storage station 108 by introducing a number of cassettes 110 and carrying out control operations on panel 136. Each of the cassettes 110 may be transferred from the IO station 133 with the aid of the arm 131 into storage compartments 109 made for these cassettes 110 in the storage station 108. This means that, starting from the position for removing the relevant cassette 110 from IO station 133 through the opening 134, the cassette 110 may then be moved upwards for moving into a higher compartment 109 of the storage station 108. By rotation of the storage station 108 it is possible to fill various compartments 109 with cassettes 110.
The cassettes 110 may then be removed from the storage station 108 by arm 131 and placed on rotary platform 130. The cassettes 110 may be rotated on the rotary platform 130 and placed against partition 104. With the aid of substrate handling robot 124, the substrates 113 may be removed and placed in substrate boat 112 placed on or near the boat transfer device 116. In the interim, as explained herein, the carousel 111 is able to move in the reactor chamber 121 with regard to the treatments to be carried out on the substrates 113 present inside the reactor chamber 121. After boat 112 has been filled in the intermediate storage chamber 122 and has become or becomes available to one of the reactors 106, 107, opening 119, which may be closed up to this time, is opened and the filled substrate boat 112 may be placed on carousel 111 using any suitable boat transfer device, such as the boat transfer device 116 (e.g., a robotic arm). The carousel 111 may then be rotated and the filled substrate boat 112 may be removed from the carousel 111 and moved up into one of the reactors 106 or 107. After treatment in the reactor(s) 106 and/or 107, the treated substrates 113 in a filled boat 112 may be removed from the reactors 106, 107 using movements opposite to those described above for loading the substrates 113 into the reactors 106, 107.
FIGS. 2A-2E illustrate another example of a semiconductor processing system 101 according to some embodiments. Thus, parts of the system of FIGS. 2A and 2B may be similar to or the same as the system 201 of FIG. 1A. FIG. 2A is a schematic perspective view of a semiconductor processing system 101. The processing system may include a housing 102 that generally encloses the system components. The system 101 of FIG. 2A includes two dual reactor modules (DRMs). A first DRM may be housed within a first housing 102a, and a second DRM may be housed within a second housing 102b. Within the first housing 102a, the system 101 may include one or more reactors in a reactor chamber. As illustrated in FIG. 2A, the system 101 may include a first reactor 106a and a second reactor 107a in the first housing 102a to form the first DRM. Further, the system 101 may also include another first reactor 106b and another second reactor 107b in the second housing 102b to form the second DRM. In FIG. 2A, the first reactors 106a, 106b and the second reactors 107a, 107b are furnaces, such as for performing deposition processes, yet it should be appreciated that the reactors 106a, 106b, 107a, and 107b may be any other suitable processing station, including but not limited to a single substrate reactor, an etching station, lithographic machines, etc. Rotatable carousels 111a and 111b may be disposed below each pair of reactors 106a and 107a, and 106b and 107b.
A manipulator 165, such as a multi-arm robot, may be configured to transport substrates between storage stations 108a, 108b, 108c, 108d, and boats 112a, 112b located at exchange positions 141a and 141b. Each boat 112 may hold a number of substrate holders 118, each substrate holder 118 accommodating a batch of substrates 113, as further illustrated in FIGS. 2C-2E. The manipulator 165 or other system machinery may move the substrate holders 118 with substrates between the storage stations 108a-108d and each respective boat 112a and 112b positioned at the exchange positions 141a and 141b. As shown in FIG. 2A, boats 112a and 112b holding a plurality of substrate holders 118 with batches of substrates (e.g., semiconductor substrates, including substrates for forming integrated circuits and/or solar cells and related devices) may be positioned in exchange positions 141a and 141b before being transported to the reactors for processing. In some embodiments, the boats 112a and 112b may hold 10 or more, 50 or more, or 100 or more substrates (see, e.g., FIGS. 2D and 2E). Once the substrate holders 118 with substrates 113 are loaded on the boats 112a and 112b, the boats 112a and 112b may be loaded onto the carousels 111a, 111b, and the carousels 111a and 111b may be rotated to position the boats 112a and 112b underneath the reactors 106a, 106b, 107a, or 107b. In various embodiments, a boat transfer device (not shown in FIG. 2A; see, e.g., FIGS. 1A and 4, robotic arm 116) may move the boats 112a and 112b between the exchange positions 141a, 141b and the carousels 111a, 111b. In other embodiments, the carousels 111a, 111b may include a carousel position beneath the exchange positions 141a, 141b, such that no separate boat transfer device is needed to move the boats 112a, 112b from the exchange positions 141a, 141b to the carousels 111a, 111b.
Elevators (not shown; see, e.g., FIG. 4) may be employed to raise the boats 112a and 112b through an opening into the reactors 106a, 106b, 107a, and 107b for processing. When processes in the reactors 106a, 106b, 107a, and 107b are completed, the elevators may lower the boats 112a and 112b to the carousels 111a and 111b, and the processed substrates in the boats 112a and 112b may be unloaded from the boats 112a and 112b for further processing. Once the substrates are added or removed from the boats 112a and 112b, the carousels 111a and 111b may rotate the boats 112a, 112b into a suitable position for processing.
An Input Output station (IO station) 133 comprising the storage stations 108c, 108d may be provided to introduce the substrates into the processing system 101. The system 101 may also include a controller 120, which may include a computer having a processor and memory. The controller 120 may be in electrical communication with the various processing system components described herein, or may be configured to communicate with the various components to provide operational instructions to those components. An operator may use an interface panel 136 to operate or provide instructions to the controller 120. In operation, in some embodiments, the operator may initiate a desired processing sequence using the interface panel. For example, the manipulator 165 may load a boat 112a, 112b with substrate holders 118 filled with unprocessed substrates 113, and/or the boat transfer device 116 may exchange a boat 112a, 112b of processed substrates to exchange stations 141a, 141b, where the processed substrates can be exchanged with unprocessed substrates. Once loaded with a boat 112a, 112b of unprocessed substrates, boat transfer device 116 may transfer the boat 112a, 112b from the exchange position to a front position of a carousel 111a, 111b. The carousel 111a, 111b may then be rotated by a suitable angle, e.g., 90 degrees, to position the boats 112a, 112b underneath the reactors 106a, 106b, 107a, or 107b. The elevators may then raise the boats through the openings and into the reactors 106a, 106b, 107a, or 107b for processing. After processing, the boats 112a, 112b may be removed from the reactors 106a, 106b, 107a, or 107b and the system 101 by lowering the processed boats 112a, 112b onto the carousels 111a, 111b using the elevator. The carousels 111a, 111b may rotate the boats 112a, 112b to the front position of the carousels 111a, 111b, and the boat transfer device may transfer the processed boats 112a, 112b from the carousels 111a, 111b to the exchange positions 141a and 141b, where the substrate holders with processed substrates may be unloaded from the boats 112a, 112b, by manipulator 165, to, e.g., storage positions 108c, 108d in the IO station 133. The substrate holders 118 with substrates may then be removed from the system 101 for further processing.
FIG. 2B is a top plan view of a DRM of the system 101 shown in FIG. 2A. As shown in FIG. 2B, the carousel 111 may include four carousel positions. For example, the carousel 111 may include a front carousel position 151 configured to receive a boat 112 of substrates from the boat transfer device 116. The boat transfer device 116 may include a robotic arm having any suitable number of degrees of freedom. The boat transfer device 116 may support the boat 112 of substrates and at the exchange position 141 may move the boat 112 into the front carousel position 151 from the exchange position 141. As explained herein, processed substrates may be exchanged for unprocessed substrates at the exchange position 141. The carousel 111 may also include a first reactor carousel position 155 and a second reactor carousel position 157. The first reactor carousel position 155 may be positioned underneath the first reactor 106, and the second reactor position 157 may be positioned underneath a second reactor 107, such as, e.g., the second reactor 107 disclosed above with respect to the embodiment of FIG. 2A. When the boat 112 is received in the front carousel position 151, the carousel 111 may be rotated by 90 degrees clockwise to position the boat 112 underneath the first reactor 106 or by 90 degrees counterclockwise to position the boat 112 underneath the second reactor 107. For example, if the boat 112 is rotated into the first reactor carousel position 155, a first boat elevator 114a may be used to raise the boat 112 into the first reactor 106. If the boat 112 is rotated into the second reactor carousel position 157, then a second boat elevator 114b may be used to raise the boat 112 into the second reactor 107 for processing. In various embodiments, the carousel 111 may include a rear carousel position 153 near the rear of the carousel 111. The rear carousel position 153 may be used to hold boats 112 in a standby position. In various embodiments, the boats 112 may be cooled while in the rear carousel position 153. In other embodiments, the carousel 111 may only include two or three positions, while in yet other embodiments, the carousel 111 may include more than four carousel positions.
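The rotation bookkeeping described above can be sketched with a toy four-position model; the clockwise ordering of the positions below is an assumption for illustration, not taken from the figures.

```python
# Toy model of a four-position carousel (front, first-reactor, rear,
# second-reactor positions). The clockwise ordering is assumed.
POSITIONS = ["front", "first_reactor", "rear", "second_reactor"]

def position_after(start_index: int, quarter_turns: int) -> str:
    """Position of a boat after rotating the carousel by quarter_turns * 90
    degrees (positive = clockwise under the assumed ordering)."""
    return POSITIONS[(start_index + quarter_turns) % 4]

front = POSITIONS.index("front")
print(position_after(front, 1))   # -> first_reactor (90 degrees clockwise)
print(position_after(front, -1))  # -> second_reactor (90 degrees counterclockwise)
```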
FIGS. 2C and 2D illustrate perspective views of a reactor 106 and a boat 112 that may be used in conjunction with the system 101 of FIG. 2A. For example, the reactor 106 may be implemented as the reactors 106a, 106b, 107a, or 107b in the system of FIG. 2A. The reactor 106 and the boat 112 may also be used in other embodiments disclosed herein. The reactor 106 of FIG. 2C may be used in various embodiments for processing solar cell substrates. In some other embodiments, the reactor 106 may be used to process integrated circuits. As shown in FIG. 2D, for example, the boat 112 may include multiple substrate holders 118 configured to support an array of substrates 113 (FIG. 2A). The boat 112 may be moved vertically through the opening 142 into the reactor 106 for processing.
FIG. 2E is a schematic perspective view of a batch of substrates 113 loaded onto the boat 112. The illustrated substrates 113 are square or rectangular in shape, but it should be appreciated that the substrates 113 may be any suitable shape, such as circular. In addition, the illustrated substrates 113 may be solar cell substrates, but they may also be integrated circuit substrates or any other suitable substrate. The substrates 113 may be vertically oriented on the substrate holder 118 in FIG. 2E.
Those skilled in the art will understand that numerous modifications to the above are possible. For instance, it is possible for one reactor to suffice, or for more than two reactors to be present. The storage station may be of different construction, and the various displacement mechanisms, transfer devices, and manipulators may likewise be adjusted depending on the system parameters.
Systems and Methods for Scheduling the Processing of Substrates
As shown with respect to FIGS. 1A-1B and 2A-2E, substrates at various stages of processing may be moved from station to station by various mechanisms. In order to provide a high throughput, it may be advantageous to organize the movement of substrates (e.g., substrates loaded in boats) between processing stations and related equipment in the system based on the current state of the system, to provide efficient processing. For example, when a boat is undergoing processing in a reactor, typically no other boats may be introduced into the reactor. However, in order to operate the system efficiently, other actions may be taken while the boat being processed is in the reactor. For example, a subsequent boat may be loaded onto the boat transfer device, or a recently processed boat may be rotated into the rear carousel position in order to allow substrates in that boat to cool down. In sum, throughput may be advantageously increased by efficiently organizing the movement and processing of the boats through the system.
As an overview, in various embodiments, the controller 120 (as shown, e.g., in FIG. 2A) may schedule the processing sequence and movement of substrates based on a current state of the processing system. In general, at any arbitrary moment in time during a processing sequence, the controller may determine what actions the system may take when it is in the current state, e.g., whether it is able to move a boat into the reactor or remove a boat from a reactor, or whether a boat of substrates may be exchanged in the exchange position. Additional actions may be possible, as explained herein. Each possible action may be simulated by the controller 120, and the controller 120 may create a search tree of all possible subsequent actions. The controller 120 shown in FIG. 2A may be programmed to control one or more DRMs, e.g., both DRMs shown in FIG. 2A, including the reactors in both housings 102a and 102b and the manipulator 165. In other arrangements, however, each DRM may have its own controller. For example, a first controller can control the operation of the reactors 106a, 107a in the first housing 102a, and a second controller can control the operation of the reactors 106b, 107b in the second housing 102b. In various embodiments, the controller may systematically investigate the search tree for a pre-determined time horizon, or "look-ahead time." Each branch of the search tree may be scored, and the branch with the best score may be selected. The actions in the selected branch may then be performed by the system. Thus, the system controller may schedule the processing sequence in real-time, based on the current state of the system and based on simulated subsequent states of the system, the simulation extending to states within a particular pre-determined time horizon.
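The generate-score-select cycle described above can be sketched in a few lines. The following is a minimal illustration only: the station names, durations, and reward values are invented for the sketch and are not taken from the specification.

```python
# Minimal sketch of look-ahead scheduling: enumerate all branch pathways
# reachable within a fixed time horizon, score each, and pick the best.
# All names, durations, and rewards below are hypothetical.

HORIZON = 15  # pre-determined look-ahead time, in minutes

# state -> list of (action_name, duration_minutes, next_state, reward)
TRANSITIONS = {
    "boat_at_front":       [("move_to_exchange", 2, "boat_at_exchange", 1),
                            ("rotate_to_reactor_pos", 1, "boat_at_reactor_pos", 1)],
    "boat_at_exchange":    [("exchange_load", 10, "boat_loaded", 5)],
    "boat_at_reactor_pos": [("raise_into_reactor", 3, "boat_in_reactor", 3)],
    "boat_loaded":         [("return_to_carousel", 2, "boat_at_reactor_pos", 1)],
    "boat_in_reactor":     [],
}

def pathways(state, elapsed=0, prefix=()):
    """Enumerate every branch pathway reachable within HORIZON minutes."""
    found = [prefix] if prefix else []
    for name, duration, next_state, _ in TRANSITIONS[state]:
        if elapsed + duration <= HORIZON:
            found += pathways(next_state, elapsed + duration, prefix + (name,))
    return found

def score(pathway):
    """Score a pathway by summing the per-action rewards along it."""
    rewards = {name: r for acts in TRANSITIONS.values() for name, _, _, r in acts}
    return sum(rewards[name] for name in pathway)

def best_pathway(state):
    """Build the search tree, score each branch pathway, pick the best."""
    return max(pathways(state), key=score)

print(best_pathway("boat_at_front"))
# ('move_to_exchange', 'exchange_load', 'return_to_carousel')
```

In this toy transition table, exchanging a fresh substrate load scores higher within the 15-minute window than raising the boat into a reactor, so that pathway is selected.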
FIG. 3 is a flowchart illustrating one method 300 for scheduling and controlling the operation of a processing system, e.g., the various components in the system associated with the reactor(s) and processing system. For example, the method 300 may begin in a block 302 to determine a current state of the system. In general, the state of the system may be defined at least by a location of one or more boats and a processing status of each of the one or more boats. In addition, in various embodiments, the state of the system may be further defined by the status of other equipment in the processing system and may also include a time component. For example, the time component may include an amount of time until a reactor is available to receive a boat and/or an amount of time until a boat is scheduled to leave one of the reactors (e.g., after processing). Further, the current state may be based on an amount of time in which a process is scheduled to continue in one of the reactors. For example, the state may be defined in part by the determination that Boat 1 in Reactor 1 has 10 minutes of processing time left, while Boat 2 in Reactor 2 has finished processing; for example, the substrates in Boat 2 may have 2 minutes of cool-down time left, or another boat may be in Boat 2's next position such that it prevents Boat 2 from moving for another 2 minutes. The state of the system may also be defined in part by the position of the elevator(s). For example, the elevator may be in an up position near the reactor, in a down position near the carousel, or in an intermediate position while it moves upward or downward. Further, the state of the system may be determined based on an amount of time that each boat is scheduled to cool down after a process, e.g., after unloading a hot boat from a reactor the boat is scheduled for a subsequent cool-down period before removal of the substrates from the boat.
Moreover, the state of the system may be defined in part by the orientation of the carousel, e.g., which carousel positions are currently supporting boats and/or the processing status of the boat(s) on the carousel. In other embodiments, e.g., an embodiment wherein the processing system is a single wafer cluster tool, the state may be defined by a location of one or more substrates and a processing status of the one or more substrates. In addition, in such embodiments, the state of the system may be further defined by the status of other equipment in the processing system, including reactors and moving parts such as robots, and may also include a time component related to the amount of remaining processing time, etc.
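A state record of the kind described above might be modeled as follows. This is a sketch under assumed names only: the field and position names are invented, and the text requires only that boat locations, processing status, equipment status, and time components be captured in some form.

```python
from dataclasses import dataclass

# Hypothetical state record for the scheduling controller. Field names and
# example values are illustrative, not taken from the specification.

@dataclass(frozen=True)
class BoatState:
    location: str           # e.g., "reactor_1", "front_carousel", "exchange"
    status: str             # e.g., "processing", "cooling", "empty"
    minutes_remaining: int  # time left in the current step (0 if none)

@dataclass(frozen=True)
class SystemState:
    boats: tuple            # one BoatState per boat
    elevator_position: str  # "up", "down", or an intermediate position
    carousel_orientation_degrees: int

state = SystemState(
    boats=(BoatState("reactor_1", "processing", 10),
           BoatState("reactor_2", "cooling", 2)),
    elevator_position="down",
    carousel_orientation_degrees=0,
)
# Frozen dataclasses are hashable, so a state can also serve directly as a
# key in a set of previously visited states.
```

Making the record immutable and hashable is a design choice that pays off later, when the controller needs to remember which states it has already visited.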
FIG. 4 schematically illustrates a processing system 401 in a particular current state. As in FIGS. 1A-1B and 2A-2E, the system 401 may include a boat transfer device 116 positioned between an exchange position 141 and the housing 102. A storage station 108 may be positioned adjacent the exchange position, such that substrates may be moved between the storage station 108 and the exchange position 141. Multiple boats 112 may be located within the system 401, but for the purposes of this example, only a first boat 112a and a second boat 112b will be considered. For example, the state of the system 401 illustrated in FIG. 4 may be defined by the location and processing status of the first boat 112a and the location and processing status of the second boat 112b. For the purposes of FIG. 4, the first boat 112a is undergoing a standby process in the first reactor 106 after completing a processing step, and the second boat 112b is an empty boat and is positioned in the front carousel position 151 of the carousel 111. As shown, both elevators 114a and 114b are in the "down" position.
The method 300 (FIG. 3) proceeds to a block 304 to generate a search tree having one or more branches. As explained herein, each branch of the tree may identify one or more subsequent actions that are capable of being performed when the system is in the current state. For the purposes of simplicity with respect to the example of FIG. 4, the presence of the second reactor 107 and its boat elevator 114b will be ignored. However, it should be appreciated that the disclosed scheduling processes may be carried out with multiple reactors, resulting in a larger tree with a greater number of branches and possible actions.
Ignoring the second reactor 107, FIG. 5 illustrates an example search tree 550 given the current state of the system 401 illustrated in FIG. 4. The controller may generate the search tree 550 for a pre-determined time horizon. Without a pre-determined time horizon, the controller may look ahead in the tree 550 for an exceptionally long period of time, such that the number of potential actions may grow exponentially with the time horizon, which may increase controller processing time. By limiting the time horizon, the controller may create a manageable time window within which to analyze the potential actions to be taken, which may both decrease processing time and facilitate the ability to perform real-time analysis that allows next actions to be selected in the midst of a processing sequence. The pre-determined time horizon may be any suitable amount of time. For example, the time horizon may be in a range of about 10 minutes to about 45 minutes. In other embodiments, the time horizon may be in a range of about 15 minutes to about 35 minutes. In some embodiments, the time horizon may be selected based upon the duration of a particular process, e.g., the time required to perform a particular process, such as a deposition of a layer of material on substrates in a boat. In the example of FIGS. 4 and 5, a time horizon of 15 minutes is used to generate the search tree 550. Thus, the controller will look ahead to all possible actions that may be initiated within the time horizon of 15 minutes. Note that some actions may not be capable of being performed, e.g., lowering a boat from a reactor while a boat is already present at the carousel position below the reactor, or inserting a boat into a reactor in which another boat is already present. Such actions are excluded from consideration in executing block 304. Further, note that, in some embodiments, useless or non-productive actions may not be considered in executing block 304 of the method 300.
Because one objective of the disclosed embodiments may be to improve system throughput, the controller may only consider actions that advance the processing of substrates, e.g., the controller may not consider trivial actions that, while theoretically possible to perform, do not advance the processing of substrates in the system. For example, rotating an empty carousel or placing a boat on the carousel immediately after removing it may not be considered. Generally, any actions that bring the system into a previously visited state are considered as non-productive. These actions, while theoretically possible in a current state, do not impact the processing of substrates.
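The pruning rule above can be illustrated with a toy expansion step. This is a sketch only: the state and action names are invented, and a real controller would prune against full system states rather than bare strings.

```python
# Toy sketch of pruning non-productive actions: while expanding the tree,
# actions whose resulting state was already visited are skipped.
# State and action names are hypothetical.

def productive_actions(state, visited, transitions):
    """Return the actions from `state` that do not revisit an earlier state."""
    return [(action, nxt)
            for action, nxt in transitions.get(state, [])
            if nxt not in visited]

transitions = {
    "boat_on_carousel": [("place_in_reactor", "boat_in_reactor")],
    "boat_in_reactor":  [("remove_to_carousel", "boat_on_carousel"),
                         ("start_process", "boat_processing")],
}

# Removing a boat from the reactor and immediately replacing it would only
# revisit "boat_on_carousel", so that action is dropped:
visited = {"boat_on_carousel", "boat_in_reactor"}
print(productive_actions("boat_in_reactor", visited, transitions))
# [('start_process', 'boat_processing')]
```

Dropping revisiting actions keeps the branching factor, and hence the tree size, manageable within the look-ahead window.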
In FIG. 5, block 552 illustrates the current state of the system, for example, the current state as shown in the system snapshot illustrated in FIG. 4. In the current state, the first boat 112a is undergoing a standby process in the first reactor 106 (FIG. 4) after having completed a processing step in the reactor 106. In addition, the second boat 112b is empty and positioned at the front carousel position 151. As shown in FIG. 5, there are various possible actions (e.g., five actions that are not useless or non-productive) that the controller could select given that the system is in the current state shown in FIG. 4. Each branch may be generated by associating each possible action with a branch of the one or more branches of the search tree 550. Thus, each possible action in the current state may correspond to one branch of the tree 550, e.g., Branch A1, Branch B1, Branch C1, Branch D1, Branch E1, or Branch F1. Each branch may further branch off to one or more sub-branches including one or more subsequent actions; alternatively, each branch may only include one action. Thus, as used herein, a branch may refer to the initial actions that may be performed when the system is in the current state. Each branch may include one or more sub-branches that correspond to one or more subsequent actions, or further actions based on those subsequent actions.
Each branch and possibly a string of consecutive sub-branch(es) may form one or more branch pathways, each branch pathway formed of a string, or sequence, of consecutive actions that may be performed by the system. Indeed, as shown in FIG. 5, each branch pathway can include multiple levels of consecutive actions. Each of the sub-branches and actions within a branch may have an associated duration, corresponding to the time needed to perform all actions in a branch or a particular action, respectively. Also, each branch may have a single associated action, or a plurality of actions if the actions are performed together. For example, a branch pathway may refer to a path of consecutive actions that may be taken during the pre-determined time horizon. For example, FIG. 5 illustrates six branches—Branches A1, B1, C1, D1, E1, and F1. An example branch pathway may begin in the current state, e.g., block 552, and may proceed to blocks 553, 559, and 562, at which point the time horizon is complete. It should be appreciated that many other branch pathways are possible for the current state shown in block 552.
In sum, as explained herein, a branch (e.g., Branches A1-F1) may describe an action that the system can take in the current state. Each branch may include one or more sub-branches extending from the branch. A branch pathway can be formed by a string of one or more consecutive actions. For example, one branch pathway may include the three consecutive actions given by blocks 553, 559, and 562. Another branch pathway may include the three consecutive actions given by blocks 553, 559, and 560. Still another branch pathway may include the two consecutive actions given by blocks 553 and 559, for example, in embodiments where the pre-determined time horizon does not extend beyond block 559. Indeed, yet another branch pathway may be defined by the single action taken in block 558, for example, in embodiments where the pre-determined time horizon does not extend beyond block 558. Thus, the branch pathways can define one or more consecutive actions that may be selected and taken by the system, and the tree 550 can define the collection of all possible branch pathways for a given time horizon.
As a further example, Branch A1 may begin with a block 553, in which the second boat 112b of FIG. 4 may be moved by the transfer device 116 to the exchange position 141. As shown in the block 553, transferring the boat 112b to the exchange position 141 in FIG. 4 may take about 2 minutes. For Branch B1, a block 554 may include rotating the carousel 111 by 90 degrees clockwise, which may take about 1 minute in various embodiments. For Branch C1, a block 555 may include rotating the carousel 111 by 90 degrees counterclockwise, which may also take about 1 minute. For Branch D1, a block 556 may include rotating the carousel 111 by 180 degrees, e.g., to move the second boat 112b to the rear carousel position 153, which may take about 2 minutes. Alternatively, in Branch E1 and in a block 557, the first boat 112a may be lowered by the elevator 114a, which may take about 3 minutes to complete. Finally, in Branch F1 and in block 558, the actions of Branch A1 and Branch E1 may be started simultaneously.
For each branch of the tree 550, the controller may be configured to determine a subsequent state of the system if the action associated with the branch is performed. The subsequent state of the system may be based on the factors explained above with respect to the current state, including, e.g., the location and processing status of the one or more boats of the system. As with the initial actions, the subsequent state may be analyzed to determine a set of actions (e.g., actions that are not useless or non-productive) that are capable of being performed when the system is in the subsequent state. For example, as shown in the tree 550 of FIG. 5, if Branch A1 were to be selected by the controller, then the second boat 112b would be moved by the transfer device 116 to the exchange position 141. For Branch A1, therefore, the subsequent state of the system associated with a branch pathway leading to block 553 would be such that the second boat 112b is empty and positioned at the exchange position 141, while the first boat 112a remains in the first reactor 106 undergoing the standby process.
In the subsequent state, e.g., after executing the step of block 553, there are multiple actions that can be performed by the system. One option, shown in a block 559, is to exchange the substrate load of the second boat 112b at the exchange position 141. As shown in FIG. 5, alternative actions to block 559 include: (a) return boat 112b from the exchange position to the carousel (block 563); (b) lower boat 112a (block 564); or (c) simultaneously exchange the substrate load of boat 112b and lower boat 112a. It should be noted that action (a) in block 563 is an example of a useless action, as it returns the system to a previous state without any progress. Useless actions may be excluded; thus, block 563 is not considered as a potential subsequent action in some embodiments.
As an example of a branch pathway of FIG. 5, a particular branch pathway may begin in block 553 and continue to block 559. Therefore, the controller may direct the system from block 553 of Branch A1 to block 559. Thus, in the branch pathway leading to block 559, a load of unprocessed substrates may be loaded into the second boat 112b at the exchange position 141, which may take about 10 minutes. So far, therefore, the actions in blocks 553 and 559 have taken a total of 12 minutes (e.g., 2 minutes to move the second boat 112b to the exchange position 141 and 10 minutes to exchange the substrate load). Because the time horizon or look-ahead time is 15 minutes, the controller may continue generating additional sub-branches off of the block 559 by analyzing the actions that are possible now that the system is in the next state, e.g., that the second boat 112b has been loaded with a new batch of unprocessed substrates.
In light of the subsequent state of the system after performing the steps of block 559, e.g., after traversing the branch pathway leading from block 553 to block 559, there may be three additional possible actions, or branch pathways. First, in the branch pathway traversing block 560, the second boat 112b in FIG. 4 may be moved by the boat transfer device 116 to the carousel 111, which may take about 2 minutes. Or, in the branch pathway leading to a block 561, the first boat 112a in the first reactor 106 of FIG. 4 may be lowered using the first elevator 114a, which may take about 3 minutes. Alternatively, in the branch pathway going to a block 562, the actions of blocks 560 and 561 may be performed in parallel or simultaneously. Thus, in block 562, the controller may direct the system to simultaneously move the second boat 112b to the carousel 111 and to lower the first boat 112a using the elevator 114a, which may take 3 minutes due to the 3 minutes used to lower the first boat 112a. If the steps in block 562 are executed, then the time horizon of 15 minutes is reached, because performing blocks 553 (2 minutes), 559 (10 minutes), and 562 (3 minutes) equals the pre-determined time horizon of 15 minutes total. Note, however, that the example tree 550 described with respect to FIG. 5 has only been analyzed herein with respect to a portion of Branch A1 and its sub-branches, e.g., to the various branch pathways that are possible given a time horizon of 15 minutes. It should be appreciated that the controller may also similarly analyze all possible subsequent actions and branch pathways for Branches B1-F1 for the pre-determined time horizon. These subsequent actions have been omitted for purposes of brevity.
In general, multiple actions may be performed in parallel, e.g., simultaneously, so long as there are no scheduling conflicts. Thus, a particular branch pathway may lead to a node of the tree that includes simultaneous and/or parallel actions. For example, as explained above with respect to block 561 and FIG. 4, the first boat 112a may be lowered from the first reactor 106 at the same time that the second boat 112b is moved to the carousel 111. However, it should be appreciated that parallel actions may lead to the partial execution of an action. For example, in various embodiments, when one action is finished before the other action, a decision point is reached in the tree. For example, after 2 minutes in block 562, moving the second boat 112b to the carousel 111 has been fully completed, while lowering the first boat 112a has only been partially completed, e.g., 1 minute remains until the first boat 112a is fully lowered to the carousel 111. Thus, at the end of moving the second boat 112b to the carousel 111, the position of the elevator 114a and the first boat 112a is in an undefined, intermediate position. In various embodiments, partial completion of an action will be accounted for in the method 300, such that the controller may account for even intermediate positions when analyzing branches of the search tree and/or the time needed to complete the partially completed action. For example, when analyzing potential subsequent actions, the controller or processor may store intermediate positions in memory and may incorporate the time needed to complete the partially-completed action when analyzing potential subsequent actions.
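The decision point created by parallel actions of unequal duration can be expressed compactly. This is a sketch only: action names are invented, and a real controller would also update the affected equipment positions.

```python
# Sketch of parallel actions with partial completion: when two actions run
# simultaneously, the next decision point comes when the shorter one
# finishes, and the remainder of the longer action is carried forward.
# Action names are hypothetical.

def first_decision_point(actions):
    """actions: dict mapping action name -> duration in minutes.
    Returns (elapsed, leftover), where `leftover` maps each unfinished
    action to the minutes still needed to complete it."""
    elapsed = min(actions.values())
    leftover = {name: duration - elapsed
                for name, duration in actions.items() if duration > elapsed}
    return elapsed, leftover

# Moving the second boat (2 min) while lowering the first boat (3 min):
elapsed, leftover = first_decision_point({"move_second_boat": 2,
                                          "lower_first_boat": 3})
print(elapsed, leftover)  # 2 {'lower_first_boat': 1}
```

The `leftover` entry corresponds to the intermediate elevator position described above: one minute of lowering still has to be charged against any subsequent action the controller evaluates.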
Typically, the completion of an action will not exactly coincide with the time horizon. An action might be completed earlier than the time horizon, which means that another action can be started, or an action may be partially completed at the time horizon. In order to allow a fair comparison between different branch pathways, all branch pathways may be truncated at the predetermined time horizon and the time required to complete a partially completed action is taken into account when scoring the branch pathways. Furthermore, as explained above, it may be desirable to reduce the size of the search tree in order to maintain a manageable set of actions to evaluate. The pre-determined time horizon or look-ahead time may advantageously reduce the size of the search tree. However, as explained herein, the size of the search tree may further be reduced by ignoring useless actions, such as rotating an empty carousel, rotating the carousel twice in a row without moving a boat, and picking up a boat from the carousel and replacing it without rotating the carousel or exchanging substrates in the boat. Moreover, in various embodiments, visited states may be remembered by the controller such that a previously-visited state is not executed again. If, for example, a new state was previously visited during another time period, then the controller may not explore returning to that state. As one example, if a boat is removed from the reactor and placed on the carousel in a new state, then the controller may not consider simply placing the boat back in the reactor without further intermediate processing or substrate exchange. In considering the state of the system at a particular time, the controller or processor may take into account the overall processing status of a set of substrates when determining the state, even where the controller or processor's internal clock indicates that the state properties are unequal. 
For example, at time t=0, a particular boat at the front carousel position with 10 minutes of cool-down time remaining is, at time t=5 minutes, in the same state as a boat in the same position observed with 5 minutes of cool-down time remaining. However, the system as a whole is in a different state at t=5 minutes when the boat still has a full 10 minutes of cool-down time left. Skilled artisans will appreciate that various other ways of generating and limiting the size of the search tree are possible.
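The equivalence above amounts to keying visited states on remaining time rather than on wall-clock observations. The function name below is an invented illustration of that design choice.

```python
# Sketch of state equivalence across time shifts: a boat seen at t=0 with
# 10 minutes of cool-down remaining is, 5 minutes later, in the same state
# as a boat seen with 5 minutes remaining. Keying on remaining time makes
# the two compare equal. Names are hypothetical.

def state_key(position, cooldown_remaining_minutes):
    """Key used to detect previously visited states."""
    return (position, cooldown_remaining_minutes)

# Boat observed at t=0 with 10 minutes left, re-examined 5 minutes later:
a = state_key("front_carousel", 10 - 5)
# Boat observed at t=5 with 5 minutes left:
b = state_key("front_carousel", 5)
assert a == b  # equivalent states despite different observation times
```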
Returning to FIG. 3, after generating the search tree in block 304, the method 300 may proceed to a block 306 to score each branch pathway of the generated search tree. In the block 304, multiple branches were generated for each action capable of being performed when the system is in the current state shown in the example embodiment of FIG. 4. As explained herein, each branch, e.g., Branches A1-F1 with each permutation of sub-branches off these branches, may include one or more branch pathways that can be scored based on any suitable factor or weight. In some embodiments, the entire branch pathway of the tree spanning from the current state to the last possible action within the pre-determined time horizon may be analyzed when scoring the branch pathway, e.g., Branch A1 of FIG. 5 may include a branch pathway spanning from block 553 to one of blocks 560, 561, and 562.
A scoring function can be applied to each generated branch pathway based on various suitable processing parameters. In various embodiments, desirable situations (e.g., actions that increase throughput and/or reduce processing times) may receive a positive score, while undesirable situations (e.g., actions that reduce throughput, such as actions that lead to bottlenecks) may receive a negative score. For example, without being limited by theory, the branch pathways of the tree 550 may be scored based on how far each boat has advanced in its processing sequence. Further, branch pathways that increase the idle time of the reactor(s) may be penalized (e.g., may receive a negative score), because reactor(s) that are not being used in a process may decrease the overall throughput of the system. Also, if performing actions along a certain branch would cause a boat to be stuck in a reactor when it may instead be moved to a subsequent processing step, the particular branch pathway may also be penalized in order to reduce bottlenecks in the system. For example, the scoring function may account for the amount of time that a boat waits in a reactor before being lowered. Actions that increase bottlenecks may receive negative scores, while actions that reduce bottlenecks and/or improve throughput may receive positive scores. Thus, each branch pathway may be scored based at least in part on the ability of each boat to proceed to a subsequent processing step.
To score the branch pathways, appropriate weights may be assigned to each action in each branch. As explained herein, actions that speed up or otherwise reduce bottlenecks may receive positive scores or weights, while actions that slow down or otherwise increase bottlenecks may receive negative scores or weights. In some embodiments, the overall score of a branch pathway may be determined based on a weighted sum of each action in the branch pathway, such that the overall processing time may be accounted for when weighing the benefits of executing the actions of a particular branch pathway of the tree. Actions that incur penalties, e.g., actions that increase the idle time of the reactor(s) or that cause bottlenecks, may receive a negative score or weight or a fractional weight in various embodiments. In addition, in various embodiments, certain weights may favor one particular boat over another, depending on the devices that are being manufactured in the boats and/or the priority of finishing one particular boat relative to another. For example, if the operator prioritizes the processing of Boat A over Boat B for reasons related to, e.g., the value of the boats, then Boat A may be assigned a higher or otherwise more favorable weight than Boat B. In general, however, in some embodiments, the branch pathways of the tree may be scored to increase the overall throughput of the system and/or to reduce the processing time for a particular boat. In some embodiments, the degree of occupancy of each reactor may be considered in addition to or separate from the weighting noted above. For example, high reactor occupancy can correspond to actions that result in a high number of reactors being used simultaneously, and low reactor occupancy can correspond to actions that result in only one or a few reactors being used simultaneously. 
Because high occupancy states can increase throughput, branch pathways that lead to high reactor occupancy may receive positive scores, while branch pathways that lead to low reactor occupancy may receive negative scores.
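A weighted-sum scorer of the kind described above might look like the following. The weight values and event names are invented for the sketch; the specification only requires that progress and occupancy be rewarded, and idle or bottlenecked time penalized, with optional per-boat priorities.

```python
# Sketch of a weighted scoring function. Positive weights reward progress
# and reactor occupancy; negative weights penalize idle reactors and boats
# stuck waiting. All weights and event names are hypothetical.

WEIGHTS = {
    "process_step_completed": 10.0,  # a boat advanced in its sequence
    "reactor_minutes_idle":   -1.0,  # idle reactor time hurts throughput
    "boat_minutes_waiting":   -2.0,  # boat stuck in a reactor (bottleneck)
    "reactor_occupancy":       3.0,  # reactors processing simultaneously
}

def score_pathway(events, boat_priority=1.0):
    """Weighted sum over the events produced by simulating one pathway.
    `boat_priority` can favor one boat's progress over another's."""
    return boat_priority * sum(WEIGHTS[name] * amount
                               for name, amount in events)

# A pathway that completes one step but leaves a reactor idle for 2 minutes:
print(score_pathway([("process_step_completed", 1),
                     ("reactor_minutes_idle", 2)]))  # 8.0
```

With a weighted sum, adjusting a single entry in the weight table is enough to shift the controller's preference, e.g., toward higher occupancy or toward finishing a high-value boat first.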
Turning to block 308 of the method 300, the controller may select a branch pathway to be performed by the system based at least in part on the score of the selected branch pathway. As explained herein, it may be desirable to select a branch pathway providing the highest throughput for the system, e.g., to increase the number of substrates that are processed in a given amount of time. In this situation the scoring function may be arranged such that the branch pathway having the highest throughput may be selected in various embodiments. In some aspects, the controller may select the branch pathway that provides the lowest cycle time of one or more boats through the system, and the scoring function may be arranged accordingly. In some embodiments, the selected branch pathway may be the branch pathway with the best score as determined by the block 306.
Therefore, as explained herein, a selected branch pathway may include several consecutive actions that begin with the current state and that end in the last action that can be performed before or when the time horizon is reached. In the example disclosed with respect to FIG. 5, for example, the controller may score every possible branch pathway within the time horizon and may select one of the scored branch pathways. For example, the controller may select the branch pathway beginning in the block 553 that proceeds to the block 559, and that terminates in the block 562. The scored and selected branch pathways may therefore include multiple, alternative pathways of consecutive actions that may be performed by the system in various states. It should also be appreciated that, even though a controller selects a particular branch pathway of consecutive actions, the controller may, after completion of an action, reconsider the selected branch pathway and repeat the method 300 of FIG. 3. If, for example, the system is performing the action in block 559 of the selected branch pathway (e.g., exchanging a wafer load in boat 112b), the system may recognize a system change, e.g., an unexpected delay in an action performed in parallel, or an error from time to time, such as misaligned or absent substrates in the exchange station. The controller can recognize such delays or errors in real-time and can adapt accordingly by recognizing the current state (e.g., including the error), generating a search tree, and scoring and selecting the appropriate branch pathway. Thus, the systems and methods disclosed herein can advantageously adapt to changing conditions in real-time. In addition, in some embodiments, the criteria by which a branch pathway is scored may change with changes in circumstance (e.g., selection of a branch pathway after some error conditions may be based on different criteria than selection at the start of a process sequence, when the system is assumed to be performing normally).
Once a branch pathway has been selected in block 308, the system may perform the actions in the selected branch pathway. The controller may communicate the actions of the selected branch pathway to the system, and the system may manipulate the boats as prescribed by the selected branch pathway. It should be appreciated that the method 300 may be performed repetitively such that the current state of the system may be determined repetitively by the controller. The controller may repetitively generate the search tree, score each branch pathway of the tree, select the branch pathway of actions to be performed, and cause the system to perform the one or more subsequent actions of the selected branch pathway. Thus, the controller may advantageously determine the desired, e.g., most efficient, action sequence based on the present and simulated subsequent states of the system in order to provide high throughput.
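The generate-score-select loop described above can be sketched in a few lines of Python. This is purely an illustrative sketch of the general idea, not the patented implementation: the action set, durations, wafer counts, throughput-based scoring criterion, and time horizon below are all invented for the example.

```python
from itertools import product

# Hypothetical action set: action -> (duration, wafers completed).
ACTIONS = {
    "load": (2, 0),
    "process": (5, 25),
    "unload": (2, 0),
}

def branch_pathways(horizon, max_depth=4):
    """Enumerate consecutive action sequences whose total duration fits the horizon."""
    for depth in range(1, max_depth + 1):
        for seq in product(ACTIONS, repeat=depth):
            if sum(ACTIONS[a][0] for a in seq) <= horizon:
                yield seq

def score(seq):
    """Score a branch pathway by throughput: wafers completed per unit of time."""
    duration = sum(ACTIONS[a][0] for a in seq)
    wafers = sum(ACTIONS[a][1] for a in seq)
    return wafers / duration if duration else 0.0

def select(horizon):
    """Pick the highest-scoring branch pathway within the time horizon."""
    return max(branch_pathways(horizon), key=score)

best = select(horizon=10)
```

In the scheme described above, the tree would be regenerated from the actual machine state after every completed action, so delays and errors feed back into the next selection rather than invalidating a fixed schedule.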
Batch Processing and Other Types of Systems
The scheduling processes disclosed above with respect to FIGS. 3-5 may be employed with any suitable processing system. For example, it should be appreciated that the method 300 of FIG. 3 may be scalable to any number or type of processing stations. Indeed, the controller 120 may generate a search tree for any suitable number of processing stations and may score and select a particular branch pathway based on the criteria explained herein. The disclosed embodiments may thereby advantageously enhance throughput for systems with any number and type of processing stations, including, e.g., the single DRM system 101 shown in FIG. 1A, the two DRM system 201 shown in FIG. 2A, or cluster tools where substrates may be processed in different processing stations, such as single substrate stations arrayed around a central transfer station.
FIG. 6 is a top plan view of a cluster tool processing system 601, according to some embodiments. The system 601 can include a first IO station 633a and a second IO station 633b. The first IO station 633a may be used to accommodate substrates (e.g., by housing a substrate holder) and provide unprocessed substrates into the system 601 or to receive processed substrates from the system 601, or the first IO station 633a may be configured to both provide and receive substrates into and from the system 601. Similarly, the second IO station 633b may be used to provide unprocessed substrates into the system 601 or to accept processed substrates from the system 601, or the second IO station 633b may be configured to both provide and receive substrates into and from the system 601.
A manipulator 616 may be configured to move substrates from the IO stations 633a and/or 633b to one of multiple processing stations 606a, 606b, 606c, and 606d. The processing stations 606a, 606b, 606c, and 606d may be any suitable type of processing station. The processing stations 606a-606d may be the same type of processing station, or they may be configured to apply different types of processes to the substrates. For example, the processing stations 606a-606d may include one or more of a deposition chamber, an etching chamber, a lithographic station, a cooling station, or any other suitable type of processing station. In various embodiments, the processing stations 606a-606d may be configured to process a single substrate at a time. In other embodiments, more than one substrate may be processed in the processing stations.
The method 300 explained above with respect to FIGS. 3-5 may similarly be used with respect to the cluster processing system 601 illustrated in FIG. 6. For example, a controller 620 may be programmed to schedule the various processes to be performed in the processing stations 606a-606d in order to improve throughput. The controller 620 can be programmed to reduce bottlenecks and to ensure that the processing stations 606a-606d are operated at improved capacity or efficiency. As with the examples shown in FIGS. 4 and 5, the controller 620 can generate numerous branches and sub-branches, forming various branch pathways that represent potential actions that may be taken when the system 601 is in the current state, and the controller 620 can score the potential branch pathways that may be taken by the system for a pre-determined time horizon. Thus, the disclosed scheduling methods may be used in many types of processing systems.
Features and methods described herein may be embodied in, and automated by, computer programs, including software modules, which may be executed by processors or integrated circuits of general purpose computers. The software modules may be stored in any type of non-transitory computer storage device or medium, which may be in electronic communication with a processor or integrated circuit, which in turn may be part of the controller 120 (FIG. 2A). Thus, the controller may be programmed to perform any of the methods described herein.
Although the various inventive features and services have been described in terms of certain preferred embodiments, other embodiments that are apparent to those of ordinary skill in the art, including embodiments which do not provide all of the benefits and features set forth herein and do not address all of the problems set forth herein, are also within the scope of this invention. All combinations and sub-combinations of the various embodiments and features described herein fall within the scope of the present invention. The scope of the present invention is defined only by reference to the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
Specific embodiments of the invention will now be described with reference to the following drawings, which are provided by way of example, and not limitation. Note that the relative dimensions of the following figures may not be drawn to scale.
FIG. 1A is a schematic perspective view of a semiconductor processing system, according to some embodiments.

FIG. 1B is a top plan view of the semiconductor processing system of FIG. 1A, according to some embodiments.

FIG. 2A is a schematic perspective view of a semiconductor processing system, according to some embodiments.

FIG. 2B is a plan view of a rotatable carousel and a boat transfer device, according to some embodiments.

FIG. 2C is a schematic perspective view of a reactor, according to some embodiments.

FIG. 2D is a schematic perspective view of a substrate boat rack, according to some embodiments.

FIG. 2E is a schematic perspective view of a cassette of substrates loaded onto a substrate boat, according to some embodiments.

FIG. 3 is a flowchart illustrating one method for scheduling and controlling the operation of a processing system, according to some embodiments.

FIG. 4 is a schematic perspective view of a processing system in a particular current state, according to some embodiments.

FIG. 5 illustrates an example search tree for the current state of the system illustrated in FIG. 4.

FIG. 6 is a top plan view of a cluster tool system, according to some embodiments.
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is related to commonly-assigned copending applications, Ser. No. 11/309,308, entitled, “ARTICLE WITH MULTILAYER DIAMOND-LIKE CARBON FILM”, filed Jul. 25, 2006, and “ARTICLE WITH MULTILAYER DIAMOND-LIKE CARBON FILM”, filed XXXX (Attorney Docket No. US9083). Disclosures of the above-identified applications are incorporated herein by reference.
TECHNICAL FIELD

The present invention relates to articles with multilayer diamond-like carbon film, and more particularly to an article with multilayer diamond-like carbon film that has high corrosion resistance, low friction coefficient and good wear resistance, and a method for manufacturing the article.
BACKGROUND

Diamond-like carbon films have characteristics similar to those of diamond, such as hardness, low friction coefficient, and high chemical stability. Therefore, diamond-like carbon films are used in articles such as molds, or as protective films for improving corrosion and wear resistance. The diamond-like carbon film on a mold is generally a single layer, and is formed by a direct current sputtering process. This kind of diamond-like carbon film has poor wear resistance. After repeated use, the diamond-like carbon film can easily be rubbed off from the mold surface, leaving the mold with low corrosion resistance and poor wear resistance.

What is needed, therefore, is an article with multilayer diamond-like carbon film that has high corrosion resistance, low friction coefficient and good wear resistance, and a method for manufacturing the article.
SUMMARY

In an embodiment, an article with multilayer diamond-like carbon film is provided. The article includes a substrate, an adhesive layer formed on the substrate, a multilayer doped diamond-like carbon film formed on the adhesive layer, and an undoped diamond-like carbon layer formed on the multilayer doped diamond-like carbon film. The adhesive layer is comprised of a material selected from the group consisting of chrome, titanium, silicon, chromium nitride, titanium nitride, and silicon carbide. The multilayer doped diamond-like carbon film includes a number of doped diamond-like carbon layers stacked one on another. Each doped diamond-like carbon layer is comprised of diamond-like carbon and an additive material selected from the group consisting of chrome, titanium, silicon, chromium nitride, titanium nitride, silicon carbide, silicon nitride, and any combination thereof. The content of the additive material in each doped diamond-like carbon layer gradually decreases with increasing distance away from the substrate.

In another embodiment, a method for manufacturing an article is provided. The method includes the steps of: providing a substrate; forming an adhesive layer on the substrate, the adhesive layer being comprised of a material selected from the group consisting of chrome, titanium, silicon, chromium nitride, titanium nitride, and silicon carbide; forming a multilayer doped diamond-like carbon film on the adhesive layer, the multilayer doped diamond-like carbon film comprising a plurality of doped diamond-like carbon layers stacked one on another, each doped diamond-like carbon layer being comprised of diamond-like carbon and an additive material selected from the group consisting of chrome, titanium, silicon, chromium nitride, titanium nitride, silicon carbide, silicon nitride, and any combination thereof, and the content of the additive in each doped diamond-like carbon layer gradually decreasing with increasing distance away from the substrate; and forming an undoped diamond-like carbon layer on the multilayer doped diamond-like carbon film.

Other advantages and novel features will become more apparent from the following detailed description of the present article with multilayer diamond-like carbon film and method for manufacturing the same when taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS

Many aspects of the article with multilayer diamond-like carbon film and method for manufacturing the same can be better understood with reference to the following drawings. The components in the drawings are not necessarily drawn to scale, the emphasis instead being placed upon clearly illustrating the principles of the present invention. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
FIG. 1 is a schematic view of a multilayer diamond-like carbon film formed on a substrate, in accordance with a preferred embodiment; and

FIG. 2 is a flowchart of a method for manufacturing the article in FIG. 1, in accordance with another preferred embodiment.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

Reference will now be made to the drawing figures to describe the preferred embodiments of the present article with multilayer diamond-like carbon film and method for manufacturing the same in detail.
Referring to FIG. 1, an article 1 in accordance with a preferred embodiment is shown. The article 1 includes a substrate 10, an adhesive layer 21, a multilayer doped diamond-like carbon film 22, and an undoped diamond-like carbon layer 23. The adhesive layer 21 is formed on the substrate 10, the multilayer doped diamond-like carbon film 22 is formed on the adhesive layer 21, and the undoped diamond-like carbon layer 23 is formed on the multilayer doped diamond-like carbon film 22.
A thickness of the adhesive layer 21 is in the range from 5 nanometers to 20 nanometers. The material of the adhesive layer 21 is selected from the group consisting of chrome, titanium, silicon, chromium nitride, titanium nitride, silicon carbide, and silicon nitride. The adhesive layer 21 adheres to the substrate 10. A thickness of the undoped diamond-like carbon layer 23 is in the range from 2 nanometers to 20 nanometers.
The multilayer doped diamond-like carbon film 22 is sandwiched between the adhesive layer 21 and the undoped diamond-like carbon layer 23. The multilayer doped diamond-like carbon film 22 is composed of N layers of doped diamond-like carbon layer, i.e. a first layer 221, a second layer 222 and so on to an Nth layer 223, stacked one on top of the other in that order, wherein N is an integer, preferably in a range from 5 to 30. The first layer 221 is formed on the adhesive layer 21, the second layer 222 is formed on the first layer 221, and the Nth layer 223 is formed on an (N−1)th layer. The Nth layer 223 is the outermost layer of the multilayer doped diamond-like carbon film 22 and is distant from the substrate 10. The undoped diamond-like carbon layer 23 is formed on the Nth layer 223. A thickness of each doped diamond-like carbon layer is in the range from 2 nanometers to 60 nanometers. Each doped diamond-like carbon layer of the multilayer doped diamond-like carbon film 22 is composed of diamond-like carbon and an additive material. The additive material is selected from the group consisting of chrome, titanium, silicon, chromium nitride, titanium nitride, silicon carbide, silicon nitride, and any combination thereof.
The content of the additive material in each doped diamond-like carbon layer gradually decreases from the first layer 221 to the Nth layer 223. For example, the molar percentage of the additive in the Mth doped diamond-like carbon layer is (N−M+1)X, wherein X is in the range from 0.2% to 1%, and M is in the range from 1 to N.
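For illustration only (the values of N and X below are arbitrary choices, not taken from the patent), the graded composition given by this formula can be tabulated with a short Python sketch:

```python
# Molar percentage of additive in the Mth doped diamond-like carbon layer,
# per the formula (N - M + 1) * X.
def additive_molar_percent(m, n, x):
    return (n - m + 1) * x

# Example: N = 10 layers, X = 0.5% per step (illustrative values).
N, X = 10, 0.5
for m in range(1, N + 1):
    print(f"layer {m}: {additive_molar_percent(m, N, X):.1f}% additive")
```

The first layer (M = 1) carries the highest additive content and the Nth, outermost layer the lowest, matching the graded structure described above.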
The molar percentage of the additive material in the first layer 221 is the greatest, and the Nth layer 223 has the least percentage of the additive material. The additive material can enhance the binding force of atoms of the doped diamond-like carbon layers. Therefore, the first layer 221 has higher corrosion resistance and a good binding force with the adhesive layer 21. With the gradually decreasing content of the additive material, the doped diamond-like carbon layers of the multilayer doped diamond-like carbon film 22 have a lower binding force, low friction coefficient, and good wear resistance.
Referring to FIG. 2, a method for manufacturing the article 1 in accordance with another preferred embodiment is shown.
In step 1, a substrate is provided. The material of the substrate is selected from the group consisting of iron-carbon-chrome alloy, iron-carbon-chrome-molybdenum alloy, and iron-carbon-chrome-vanadium alloy. The surface of the substrate undergoes mirror polishing. The roughness of the polished surface is less than 10 nanometers.
In step 2, an adhesive layer is formed on the substrate and the adhesive layer is composed of a material selected from the group consisting of chrome, titanium, silicon, chromium nitride, and silicon carbide. The adhesive layer is applied by ion beam sputtering.
In step 3, a first doped diamond-like carbon layer is formed on the adhesive layer. The doped diamond-like carbon layer is comprised of diamond-like carbon and an additive material. In this step, two targets are used. A first target is used to sputter the diamond-like carbon and at the same time a second target is used to sputter the additive material. The material of the first target is graphite or carbon. The material of the second target is selected from the group consisting of chrome, titanium, silicon, chromium nitride, titanium nitride, silicon carbide, silicon nitride, and a mixture thereof. A thickness of the first doped diamond-like carbon layer is in the range from 2 nanometers to 60 nanometers.
The gas used in sputtering the first doped diamond-like carbon layer is a mixture of a first gas and a second gas. The first gas is selected from the group consisting of argon and krypton. The second gas is selected from the group consisting of hydrogen, methane, and acetylene. The amount of the second gas is 5% to 20% of that of the first gas. The gas used in the sputtering of the additive material is argon or krypton.
Preferably, the substrate is rotated during the sputtering, so that a uniform doped diamond-like carbon layer is achieved and the diamond-like carbon and the additive are uniformly distributed in the first doped diamond-like carbon layer.
In step 4, step 3 is repeated, with the content of the additive material reduced for each repetition, so that a multilayer doped diamond-like carbon film composed of a plurality of doped diamond-like carbon layers is stacked on the adhesive layer. The first layer is formed on the adhesive layer, then a second layer, a third layer, and so on to an Nth layer. The additive material in each doped diamond-like carbon layer gradually decreases from the first layer to the Nth layer.
In step 5, an undoped diamond-like carbon layer is formed on the multilayer doped diamond-like carbon film. Thus, an article with multilayer diamond-like carbon film is achieved.
Although the present invention has been described with reference to specific embodiments, it should be noted that the described embodiments are not necessarily exclusive, and that various changes and modifications may be made to the described embodiments without departing from the scope of the invention as defined by the appended claims. | |
3 Untold Song Lyrics Secrets
It is common knowledge to most that song lyrics are written to be sung. They are different from other types of creative writing, such as poetry and narratives, even though they share many similarities with them.

In a nutshell, lyrics are written in short lines and organized into repeating sections. They contain rhymes and other poetic techniques that make them easy to remember. They can tell a story or express the emotions of a subject.
Made for Singing
Instead of being read, song lyrics are meant to be performed to the accompaniment of background music. How the lyrics are composed differs from person to person. Sometimes the writer will compose the melody first on a musical instrument such as the piano or guitar, and then come up with matching lyrics. Other times, the process is reversed. There is no one rule that lyricists must follow.

With that being said, each syllable of the lyrics must be connected to a musical note. Each note can also be held for a different length, which means that unlike in poetry, where each word is read in the same rhythm, a word in a song can stretch across several beats or be as short as a quarter of a beat. Rap lyrics are different, though: instead of being sung, the words are chanted over accompanying beats.
If you need some ideas about what the song should be about, here is a list of suggested song ideas to check out.
Written in Lines
Just like poetry, lyrics are composed in short lines. It is up to the lyricist to choose where the lines should break. Though there is no set rule, many lyricists tend to break a line at the end of each bar (or every few bars) of music. Also, the breaks should be consistent throughout the whole song. This way, the lyrics will roughly match the music they are sung to.

It should be noted that each line does not need to be a complete sentence. It can either continue past the next break or move directly into a new section. This means a line can even be a single word.
Lyrics are Organized into Sections With Poetic Techniques
Similar to poetry, lyrics are organized into stanzas. And just like in a narrative, each stanza (or paragraph) usually introduces a new idea to the song. This could be a feeling, or the explanation of that feeling. It can also be a repetition of the same idea or words over and over again to different musical notes. To keep the song interesting, several poetic techniques and devices can be employed, including similes, metaphors, hyperbole, and onomatopoeia.

Each stanza must belong to a type of song section, usually the verse, chorus, or bridge. In most cases, each of these sections has the same melody, tempo, and rhythm, but this does not need to be set in stone, especially if the writer decides to be a bit more experimental or creative to keep the listener glued.
It should be noted that not every song needs to follow the points in this article. Some people simply imagine the song in their heads and put onto paper whatever comes to mind. In the end, the song should be an expression of your feelings.
Geometry is a branch of mathematics that deals with spatial figures and shapes in a fairly abstract way. It is applied to measure distances, find the sizes of shapes, and solve real-world problems. Here in this article, let us see what an equilateral triangle is and what the area of an equilateral triangle is.
Geometry can be divided into two areas: Euclidean and non-Euclidean geometry. Euclidean geometry is named after the Greek mathematician Euclid, who was one of the first people to study the subject.
The term comes from the ancient Greek word “geometria,” which means “earth measurement.” Non-Euclidean geometry is basically any geometry that deviates from Euclid’s theories. Non-Euclidean geometries were discovered by different mathematicians such as Bolyai and Lobachevsky in the 19th century.
If we draw three straight lines intersecting at three points, we get the geometrical shape called a triangle. Triangles are classified into different types: equilateral, right isosceles, obtuse isosceles, acute isosceles, right scalene, obtuse scalene, and acute scalene.

An equilateral triangle is one that has all three sides equal; its angles are also equal, each measuring 60 degrees, which makes problems involving it easier to understand and solve. Geometry usually revolves around the study of different shapes and their properties.

The area of a triangle is the space that the triangle occupies in two-dimensional space. The triangle is the simplest polygon in geometry. In an equilateral triangle, the median, angle bisector, and altitude drawn from any vertex are the same line segment.

The area of an equilateral triangle is very easy to calculate because the triangle is symmetrical and every side measures the same. It is given by the formula (√3/4) × a², where a is the length of a side.
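The formula is easy to check with a couple of lines of Python:

```python
import math

def equilateral_area(a):
    """Area of an equilateral triangle with side length a: (sqrt(3)/4) * a^2."""
    return (math.sqrt(3) / 4) * a * a

# A triangle with side 2 has area (sqrt(3)/4) * 4 = sqrt(3).
print(equilateral_area(2))
```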
Triangles are one of the basic shapes of geometry. They are also considered one of the basic building blocks of mathematics, and so they have many uses. Hence, to help students easily understand the concept of triangles, they are advised to practice with triangle worksheets.

Studies show that this type of shape has a strong correlation with mathematical thinking, which is important for students. Students should not give up on using it as an educational tool, because it can help with many things, such as problem-solving and reasoning skills.

One of the uses of triangle worksheets is their importance in learning geometry. Triangles have well-defined geometrical properties, so they can be used as models for other shapes like quadrilaterals and hexagons. The sum of all the angles inside a triangle is 180 degrees.

Practicing with geometry worksheets helps students not only understand the problem being presented but also learn how to draw all the different basic kinds of triangles. This ensures they learn to take accurate measurements, an important skill when learning geometry.

Cuemath is one such website where teachers as well as students can find a variety of triangle worksheets. These are easy to download and printable too. Problems and concepts presented in visually appealing patterns and images, along with pictorial depictions of concepts, enable students to retain and recall them whenever required. Also, if at any point they get stuck, answer keys are provided along with these worksheets, with detailed explanations of the solutions prepared by the expert team at Cuemath.
Computational Thinking for Problem Solving

About this Course

Computational thinking is the process of approaching a problem in a systematic manner and creating and expressing a solution such that it can be carried out by a computer. But you don't need to be a computer scientist to think like a computer scientist! In fact, we encourage students from any field of study to take this course. Many quantitative and data-centric problems can be solved using computational thinking, and an understanding of computational thinking will give you a foundation for solving problems that have real-world, social impact.
Learner Career Outcomes
42%
32%
100% online
Flexible deadlines
Beginner level
Approx. 37 hours to complete
English
Skills you'll gain
Syllabus - What you will learn from this course
Pillars of Computational Thinking
Computational thinking is an approach to solving problems using concepts and ideas from computer science, and expressing solutions to those problems so that they can be run on a computer. As computing becomes more and more prevalent in all aspects of modern society -- not just in software development and engineering, but in business, the humanities, and even everyday life -- understanding how to use computational thinking to solve real-world problems is a key skill in the 21st century.
Expressing and Analyzing Algorithms
When we use computational thinking to solve a problem, what we’re really doing is developing an algorithm: a step-by-step series of instructions. Whether it’s a small task like scheduling meetings, or a large task like mapping the planet, the ability to develop and describe algorithms is crucial to the problem-solving process based on computational thinking. This module will introduce you to some common algorithms, as well as some general approaches to developing algorithms yourself. These approaches will be useful when you're looking not just for any answer to a problem, but the best answer. After completing this module, you will be able to evaluate an algorithm and analyze how its performance is affected by the size of the input so that you can choose the best algorithm for the problem you’re trying to solve.
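To make this concrete — the following is our own illustrative example, not taken from the course materials — here is how input size affects two common search algorithms in Python:

```python
def linear_search_steps(items, target):
    """Count comparisons made by linear search."""
    steps = 0
    for x in items:
        steps += 1
        if x == target:
            break
    return steps

def binary_search_steps(items, target):
    """Count comparisons made by binary search on a sorted list."""
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1_000_000))
# Searching for the last element: linear search scans every item,
# while binary search needs only about log2(n) comparisons.
print(linear_search_steps(data, 999_999), binary_search_steps(data, 999_999))
```

Both algorithms answer the same question, but their step counts grow very differently with the input size — exactly the kind of analysis this module teaches.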
Fundamental Operations of a Modern Computer
Computational thinking is a problem-solving process in which the last step is expressing the solution so that it can be executed on a computer. However, before we are able to write a program to implement an algorithm, we must understand what the computer is capable of doing -- in particular, how it executes instructions and how it uses data. This module describes the inner workings of a modern computer and its fundamental operations. Then it introduces you to a way of expressing algorithms known as pseudocode, which will help you implement your solution using a programming language.
Applied Computational Thinking Using Python
Writing a program is the last step of the computational thinking process. It’s the act of expressing an algorithm using a syntax that the computer can understand. This module introduces you to the Python programming language and its core features. Even if you have never written a program before -- or never even considered it -- after completing this module, you will be able to write simple Python programs that allow you to express your algorithms to a computer as part of a problem-solving process based on computational thinking.
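For a taste of what such a first program might look like — this is our own example, not an actual course assignment — here is a short Python program expressing a simple algorithm, finding the largest value in a list:

```python
def find_max(values):
    """Return the largest value in a non-empty list, checking each item in turn."""
    largest = values[0]
    for v in values[1:]:
        if v > largest:
            largest = v
    return largest

print(find_max([3, 41, 12, 9, 74, 15]))  # 74
```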
Reviews
Top reviews from COMPUTATIONAL THINKING FOR PROBLEM SOLVING
Excellent course for beginners with enough depth, programming and computational theory to increase their computer science knowledge to a higher level. It builds a good foundation of how computers work
The course is very well-designed and it helped me develop understand how to apply computational thinking in solving various types of problems as well as acquire basic skills of programming in Python.
The course is great. I learned a lot. The support for the course is SUPER slow. It's hard to get any direct help for any questions or issues that you are having. Beware of assignment 4.7!
Very well thought out. This course covers simple concepts while still being engaging and challenging. Examples from varying disciplines help illustrate concepts in a real-life context.
Course content is good, graded assignments are good, I just had problems with my assignments in week 4 as I easily became confused with the implementation of all the lessons combined.
Useful course taught at an adequate rate. I recommend it for people who are interested in learning the basics of computational thinking, i.e. a systematic approach to problem-solving.
Great course - the non-programming parts (making flow charts etc) were actually more difficult than the programming (simple Python programming - my first time programming in python)
The autograder can be somewhat of a pain and some of the solutions in the peer-review sections aren't the most efficient. Otherwise it helped teach the building blocks of coding.
An excellent bridge into introductory computer science topics. Professors Susan Davidson and Chris Murphy exposed learners to computer science concepts within everyday problems.
Thoroughly enjoyed the course. If you are new to computer science or need a refresher of the basic concepts you learned in high school / college, this is the perfect course
The course is generally good. However, the assignment content and the lecture are not really getting along, especially the Python part. I suggest more "bridging" materials.
Well taught with good examples and exercises that require thinking but still approachable. Very well laid out and taught. Definitely sparked an interest to go learn more.
Value, conherence content, life-long skill for problem solving. It the best in Computational Thinking, you can also consider as good Introduction to Computer Sciences.
I learned the methods to solve the problem with computational thinking and the course is really great. I will recommend this course to my friends and colleagues.
More than yet another programming crash course. The only requirements here are your brain and desire to solve real-world problems. Simple and clear for everyone.
A very thorough and engaging experience for student. Lots of video and very good explanation of computer science concept and practical problems involving python
This is a good short starter course for people leaping into IT and good refresher course for some. I could finish the audit (minus assignment) in two days
Great course for an introduction to Computer Science. Learned a lot and enjoyed even more. The quality of the material (video production) is really goof.
If you don't have programming experience, you may find the last module a bit challenge. But overall, it's a very good, structured, informative course.
Great course for people new to computing, but also helpful for those who have dabbled in various languages but want to understand the bigger picture.
About the University of Pennsylvania
Frequently Asked Questions
When will I have access to the lectures and assignments?
Once you enroll for a Certificate, you'll have access to all videos, quizzes, and programming assignments (if applicable). Peer review assignments can only be submitted and reviewed once your session has begun. If you choose to explore the course without purchasing it, you may not be able to access certain assignments.
What will I get if I purchase the Certificate?
When you purchase a Certificate you get access to all course materials, including graded assignments. Upon completing the course, your electronic Certificate will be added to your Accomplishments page, from which you can print your Certificate or add it to your LinkedIn profile. If you only want to read and view the course content, you can audit the course for free.
What is the refund policy?
Is financial aid available?
Do I need to know how to program or have studied computer science in order to take this course?
No, definitely not! This course is intended for anyone who has an interest in approaching problems more systematically, developing more efficient solutions, and understanding how computers can be used in the problem solving process. No prior computer science or programming experience is required.
How much math do I need to know to take this course?
Some parts of the course assume familiarity with basic algebra, trigonometry, mathematical functions, exponents, and logarithms. If you don’t remember those concepts or never learned them, don’t worry! As long as you’re comfortable with multiplication, you should still be able to follow along. For everything else, we’ll provide links to references that you can use as a refresher or as supplemental material.
Does this course prepare me for the Master of Computer and Information Technology (MCIT) degree program at the University of Pennsylvania?
This course will help you discover whether you have an aptitude for computational thinking and give you some beginner-level experience with online learning. In this course you will learn several introductory concepts from MCIT instructors, produced by the same team that brought the MCIT degree online.
If you have a bachelor's degree and are interested in learning more about computational thinking, we encourage you to apply to MCIT On-campus (http://www.cis.upenn.edu/prospective-students/graduate/mcit.php) or MCIT Online (https://onlinelearning.seas.upenn.edu/mcit/). Please mention that you have completed this course in the application.
Where can I find more information about the Master of Computer and Information Technology (MCIT) degree program at the University of Pennsylvania?
Use these links to learn more about MCIT:
MCIT On-campus: http://www.cis.upenn.edu/prospective-students/graduate/mcit.php
MCIT Online: https://onlinelearning.seas.upenn.edu/mcit/
Have more questions? Visit the Learner Help Center.
Ingredients
Servings: 9

Cake:
1/2 cup almond flour
1 cup shredded coconut
3/4 cup Swerve Sweetener
1/3 cup unflavoured whey protein powder (or white)
1 tbsp baking powder
1/4 tsp salt
1/2 cup coconut oil, melted
5 large egg whites
1 cup unsweetened almond or coconut milk
1 tsp vanilla extract

Coconut Cream:
1 cup full fat canned coconut milk
1/4 cup powdered Swerve Sweetener
1 large egg
1 large egg yolk
1/4 tsp coconut extract
1/4 tsp vanilla extract
1/8 tsp xanthan gum

Topping:
1 cup whipping cream
3 tbsp powdered Swerve Sweetener
1/2 tsp vanilla extract
1/3 cup unsweetened shredded coconut, lightly toasted
Instructions

Cake: You will need a 9 inch square baking pan (if using an 8x8 inch pan, the cake will need to bake a bit longer). In a large bowl, whisk together the almond flour, shredded coconut, sweetener, protein powder, baking powder and salt. Stir in coconut oil, egg whites, nut milk and vanilla extract until well combined. Spread batter in the prepared baking pan and bake 30 to 35 minutes, or until set and a tester inserted in the center comes out clean. Remove from oven and let cool 15 minutes, then poke holes with a skewer at 1/2 inch intervals all over the cake.

Coconut Cream: In a medium saucepan over medium heat, combine coconut milk and sweetener and bring just to a simmer. In a medium bowl, whisk together the egg and egg yolk. Slowly stir in about 1/3 of the hot coconut milk, whisking continuously, then slowly whisk the egg mixture back into the hot coconut milk. Cook another 4 to 5 minutes, whisking continuously, until the mixture begins to thicken. Remove from heat and whisk in coconut extract and vanilla extract. Sprinkle the surface with xanthan gum and whisk briskly to combine. Pour the mixture evenly over the cooled cake, shaking gently from side to side to get as much of it into the poke holes as possible. Refrigerate 2 hours, until completely cooled.

Topping: Whip cream with powdered sweetener and vanilla extract until stiff peaks form. Spread over the chilled cake and sprinkle with toasted coconut.

Notes: For a dairy-free version, you can make coconut whipped cream by refrigerating a can of coconut milk overnight and scooping out the thick cream from the top the next day. Whip with sweetener and vanilla extract until firm.
Nutrition: Cholesterol: 32mg
Recipe from All Day I Dream About Food
CROSS-REFERENCE TO RELATED APPLICATIONS
The disclosure claims the benefits of priority to U.S. Provisional Application No. 63/005,305, filed on Apr. 4, 2020, and U.S. Provisional Application No. 63/002,594, filed on Mar. 31, 2020, both of which are incorporated herein by reference in their entireties.
TECHNICAL FIELD
The present disclosure generally relates to video processing, and more particularly, to the use of palette mode in video encoding and decoding.
BACKGROUND
A video is a set of static pictures (or “frames”) capturing the visual information. To reduce the storage memory and the transmission bandwidth, a video can be compressed before storage or transmission and decompressed before display. The compression process is usually referred to as encoding and the decompression process is usually referred to as decoding. There are various video coding formats which use standardized video coding technologies, most commonly based on prediction, transform, quantization, entropy coding and in-loop filtering. The video coding standards, such as the High Efficiency Video Coding (HEVC/H.265) standard, the Versatile Video Coding (VVC/H.266) standard, and AVS standards, specifying the specific video coding formats, are developed by standardization organizations. With more and more advanced video coding technologies being adopted in the video standards, the coding efficiency of the new video coding standards gets higher and higher.
SUMMARY OF THE DISCLOSURE
Embodiments of the present disclosure provide a computer-implemented method for palette predictor. In some embodiments, the method includes: receiving a video frame for processing; generating one or more coding units of the video frame; and processing one or more coding units using one or more palette predictors having palette entries, wherein each palette entry of the one or more palette predictors has a corresponding reuse flag, and wherein a number of reuse flags for each palette predictor is set to a fixed number for a corresponding coding unit.
Embodiments of the present disclosure provide an apparatus. In some embodiments, the apparatus includes a memory configured to store instructions; and a processor coupled to the memory and configured to execute the instructions to cause the apparatus to perform: receiving a video frame for processing; generating one or more coding units of the video frame; and processing one or more coding units using one or more palette predictors having palette entries, wherein each palette entry of the one or more palette predictors has a corresponding reuse flag, and wherein a number of reuse flags for each palette predictor is set to a fixed number for a corresponding coding unit.
Embodiments of the present disclosure provide a non-transitory computer-readable storage medium that stores a set of instructions that is executable by one or more processors of an apparatus to cause the apparatus to initiate a method for performing video data processing. In some embodiments, the method includes: receiving a video frame for processing; generating one or more coding units of the video frame; and processing one or more coding units using one or more palette predictors having palette entries, wherein each palette entry of the one or more palette predictors has a corresponding reuse flag, and wherein a number of reuse flags for each palette predictor is set to a fixed number for a corresponding coding unit.
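The reuse-flag mechanism claimed above can be illustrated with a short sketch. This is a hypothetical Python model, not the specification text: the predictor capacity, the palette-size cap, and the function names are all assumptions made for illustration. The point it shows is that each coding unit consumes a fixed-length list of reuse flags, from which both the current palette and the updated predictor are derived.

```python
# Hypothetical sketch of the claimed palette-predictor reuse mechanism.
# Every coding unit consumes a FIXED number of reuse flags
# (MAX_PREDICTOR_SIZE), regardless of how many entries the predictor
# currently holds. Sizes and names below are illustrative assumptions.

MAX_PREDICTOR_SIZE = 63   # assumed fixed predictor capacity
MAX_PALETTE_SIZE = 31     # assumed cap on palette entries per coding unit


def build_current_palette(predictor, reuse_flags, new_entries):
    """Form a coding unit's palette from reused predictor entries
    plus newly signalled entries."""
    assert len(reuse_flags) == MAX_PREDICTOR_SIZE  # fixed-length flag list
    reused = [entry for entry, flag in zip(predictor, reuse_flags) if flag]
    return (reused + new_entries)[:MAX_PALETTE_SIZE]


def update_predictor(predictor, reuse_flags, palette):
    """After decoding a CU, the new predictor starts with the CU's
    palette, then keeps the unused old entries until capacity is hit."""
    carried = [e for e, f in zip(predictor, reuse_flags) if not f]
    return (palette + carried)[:MAX_PREDICTOR_SIZE]
```

For example, with a four-entry predictor and flags marking entries 0 and 2 as reused, the palette is those two entries plus any new ones, and the updated predictor is the palette followed by the two unused old entries.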
Embodiments of the present disclosure provide a computer-implemented method for deblocking filter of palette mode. In some embodiments, the method includes: receiving a video frame for processing; generating one or more coding units for the video frame, wherein each coding unit of the one or more coding units has one or more coding blocks; and setting a boundary filter strength to 1 in response to at least a first coding block of two neighboring coding blocks being coded in palette mode and a second coding block of the two neighboring coding blocks having a coding mode different from the palette mode.
Embodiments of the present disclosure provide an apparatus. In some embodiments, the apparatus includes a memory configured to store instructions; and a processor coupled to the memory and configured to execute the instructions to cause the apparatus to perform: receiving a video frame for processing; generating one or more coding units for the video frame, wherein each coding unit of the one or more coding units has one or more coding blocks; and setting a boundary filter strength to 1 in response to at least a first coding block of two neighboring coding blocks being coded in palette mode and a second coding block of the two neighboring coding blocks having a coding mode different from the palette mode.
Embodiments of the present disclosure provide a non-transitory computer-readable storage medium that stores a set of instructions that is executable by one or more processors of an apparatus to cause the apparatus to initiate a method for performing video data processing. In some embodiments, the method includes: receiving a video frame for processing; generating one or more coding units for the video frame, wherein each coding unit of the one or more coding units has one or more coding blocks; and setting a boundary filter strength to 1 in response to at least a first coding block of two neighboring coding blocks being coded in palette mode and a second coding block of the two neighboring coding blocks having a coding mode different from the palette mode.
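As a rough illustration of the claimed rule, the toy decision function below returns a boundary filter strength of 1 when exactly one of two neighboring blocks is palette-coded. The handling of the remaining cases (both blocks palette-coded, intra, inter) is a simplifying assumption for the sketch and is not taken from the claims.

```python
# Illustrative sketch of the claimed deblocking rule (all names assumed):
# when exactly one side of an edge is palette-coded, the boundary
# filter strength is 1 rather than the stronger intra value.

INTRA, INTER, PALETTE = "intra", "inter", "palette"


def boundary_strength(mode_p, mode_q):
    """Toy boundary-strength decision for the two blocks at an edge."""
    palette_count = (mode_p == PALETTE) + (mode_q == PALETTE)
    if palette_count == 1:
        return 1   # claimed behaviour: mixed palette / non-palette edge
    if palette_count == 2:
        return 0   # both palette-coded: assume no filtering needed
    if INTRA in (mode_p, mode_q):
        return 2   # conventional strong filtering for intra edges
    return 0       # simplification: other inter cases not modelled
```

A real decoder derives the strength from many more conditions (motion vectors, coded residuals, reference pictures); this sketch isolates only the palette-mode branch described above.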
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments and various aspects of the present disclosure are illustrated in the following detailed description and the accompanying figures. Various features shown in the figures are not drawn to scale.
FIG. 1 is a schematic diagram illustrating structures of an example video sequence, according to some embodiments of the present disclosure.

FIG. 2A is a schematic diagram illustrating an exemplary encoding process of a hybrid video coding system, consistent with embodiments of the disclosure.

FIG. 2B is a schematic diagram illustrating another exemplary encoding process of a hybrid video coding system, consistent with embodiments of the disclosure.

FIG. 3A is a schematic diagram illustrating an exemplary decoding process of a hybrid video coding system, consistent with embodiments of the disclosure.

FIG. 3B is a schematic diagram illustrating another exemplary decoding process of a hybrid video coding system, consistent with embodiments of the disclosure.

FIG. 4 is a block diagram of an exemplary apparatus for encoding or decoding a video, according to some embodiments of the present disclosure.

FIG. 5 shows an illustration of a block coded in palette mode, according to some embodiments of the present disclosure.

FIG. 6 shows an example of the palette predictor updating process.

FIG. 7 shows an example palette coding syntax.

FIG. 8 shows a flow chart of a palette predictor updating process, according to some embodiments of the present disclosure.

FIG. 9 shows an example of a palette predictor updating process, according to some embodiments of the present disclosure.

FIG. 10 shows an example decoding process for palette mode.

FIG. 11 shows an example decoding process for palette mode, according to some embodiments of the present disclosure.

FIG. 12 shows a flow chart of another palette predictor updating process, according to some embodiments of the present disclosure.

FIG. 13 shows an example of another palette predictor updating process, according to some embodiments of the present disclosure.

FIG. 14 shows an example palette coding syntax, according to some embodiments of the present disclosure.

FIG. 15 shows an example decoding process for palette mode, according to some embodiments of the present disclosure.

FIG. 16 shows an illustration of a decoder hardware design for palette mode implementing a portion of a palette predictor updating process, according to some embodiments of the present disclosure.

FIG. 17 shows an example decoding process for palette mode, according to some embodiments of the present disclosure.

FIG. 18 shows an example initializing process for palette mode.

FIG. 19 shows an example initializing process for palette mode, according to some embodiments of the present disclosure.

FIG. 20 shows an example of a palette predictor update and corresponding run length coding of the reuse flags.

FIG. 21 shows an example of a palette predictor update and corresponding run length coding of the reuse flags, according to some embodiments of the present disclosure.

FIG. 22 shows an example of a palette predictor update and corresponding run length coding of reuse flags, according to some embodiments of the present disclosure.

FIG. 23 shows an example palette coding syntax, according to some embodiments of the present disclosure.

FIG. 24 shows an example palette coding semantics.

FIG. 25 shows an example palette coding semantics, according to some embodiments of the present disclosure.
DETAILED DESCRIPTION
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise represented. The implementations set forth in the following description of exemplary embodiments do not represent all implementations consistent with the invention. Instead, they are merely examples of apparatuses and methods consistent with aspects related to the invention as recited in the appended claims. Particular aspects of the present disclosure are described in greater detail below. The terms and definitions provided herein control, if in conflict with terms and/or definitions incorporated by reference.
The Joint Video Experts Team (JVET) of the ITU-T Video Coding Expert Group (ITU-T VCEG) and the ISO/IEC Moving Picture Expert Group (ISO/IEC MPEG) is currently developing the Versatile Video Coding (VVC/H.266) standard. The VVC standard is aimed at doubling the compression efficiency of its predecessor, the High Efficiency Video Coding (HEVC/H.265) standard. In other words, VVC's goal is to achieve the same subjective quality as HEVC/H.265 using half the bandwidth.
To achieve the same subjective quality as HEVC/H.265 using half the bandwidth, the JVET has been developing technologies beyond HEVC using the joint exploration model (JEM) reference software. As coding technologies were incorporated into the JEM, the JEM achieved substantially higher coding performance than HEVC.
The VVC standard was developed recently, and continues to include more coding technologies that provide better compression performance. VVC is based on the same hybrid video coding system that has been used in modern video compression standards such as HEVC, H.264/AVC, MPEG2, H.263, etc.
A video is a set of static pictures (or “frames”) arranged in a temporal sequence to store visual information. A video capture device (e.g., a camera) can be used to capture and store those pictures in a temporal sequence, and a video playback device (e.g., a television, a computer, a smartphone, a tablet computer, a video player, or any end-user terminal with a function of display) can be used to display such pictures in the temporal sequence. Also, in some applications, a video capturing device can transmit the captured video to the video playback device (e.g., a computer with a monitor) in real-time, such as for surveillance, conferencing, or live broadcasting.
For reducing the storage space and the transmission bandwidth needed by such applications, the video can be compressed before storage and transmission and decompressed before the display. The compression and decompression can be implemented by software executed by a processor (e.g., a processor of a generic computer) or specialized hardware. The module for compression is generally referred to as an “encoder,” and the module for decompression is generally referred to as a “decoder.” The encoder and decoder can be collectively referred to as a “codec.” The encoder and decoder can be implemented as any of a variety of suitable hardware, software, or a combination thereof. For example, the hardware implementation of the encoder and decoder can include circuitry, such as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, or any combinations thereof. The software implementation of the encoder and decoder can include program codes, computer-executable instructions, firmware, or any suitable computer-implemented algorithm or process fixed in a computer-readable medium. Video compression and decompression can be implemented by various algorithms or standards, such as MPEG-1, MPEG-2, MPEG-4, H.26x series, or the like. In some applications, the codec can decompress the video from a first coding standard and re-compress the decompressed video using a second coding standard, in which case the codec can be referred to as a “transcoder.”
The video encoding process can identify and keep useful information that can be used to reconstruct a picture and disregard unimportant information for the reconstruction. If the disregarded, unimportant information cannot be fully reconstructed, such an encoding process can be referred to as “lossy.” Otherwise, it can be referred to as “lossless.” Most encoding processes are lossy, which is a tradeoff to reduce the needed storage space and the transmission bandwidth.
The useful information of a picture being encoded (referred to as a “current picture”) includes changes with respect to a reference picture (e.g., a picture previously encoded and reconstructed). Such changes can include position changes, luminosity changes, or color changes of the pixels, among which the position changes are mostly concerned. Position changes of a group of pixels that represent an object can reflect the motion of the object between the reference picture and the current picture.
A picture coded without referencing another picture (i.e., it is its own reference picture) is referred to as an “I-picture.” A picture is referred to as a “P-picture” if some or all blocks (e.g., blocks that generally refer to portions of the video picture) in the picture are predicted using intra prediction or inter prediction with one reference picture (e.g., uni-prediction). A picture is referred to as a “B-picture” if at least one block in it is predicted with two reference pictures (e.g., bi-prediction).
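The I/P/B classification above can be summarized in a small sketch. A real codec signals the picture type in the bitstream rather than deriving it, so this function is purely illustrative, and its input representation (a per-block count of reference pictures) is an assumption.

```python
# Toy classifier for the picture types described above. Input is a list
# of per-block reference-picture counts:
#   0 = intra block, 1 = uni-prediction, 2 = bi-prediction.

def picture_type(blocks):
    if any(refs == 2 for refs in blocks):
        return "B"   # at least one block predicted with two reference pictures
    if any(refs == 1 for refs in blocks):
        return "P"   # blocks use intra prediction or one reference picture
    return "I"       # no block references another picture
```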
FIG. 1 illustrates structures of an example video sequence 100, according to some embodiments of the present disclosure. Video sequence 100 can be a live video or a video having been captured and archived. Video 100 can be a real-life video, a computer-generated video (e.g., computer game video), or a combination thereof (e.g., a real-life video with augmented-reality effects). Video sequence 100 can be inputted from a video capture device (e.g., a camera), a video archive (e.g., a video file stored in a storage device) containing previously captured video, or a video feed interface (e.g., a video broadcast transceiver) to receive video from a video content provider.
As shown in FIG. 1, video sequence 100 can include a series of pictures arranged temporally along a timeline, including pictures 102, 104, 106, and 108. Pictures 102-106 are continuous, and there are more pictures between pictures 106 and 108. In FIG. 1, picture 102 is an I-picture, the reference picture of which is picture 102 itself. Picture 104 is a P-picture, the reference picture of which is picture 102, as indicated by the arrow. Picture 106 is a B-picture, the reference pictures of which are pictures 104 and 108, as indicated by the arrows. In some embodiments, the reference picture of a picture (e.g., picture 104) can be not immediately preceding or following the picture. For example, the reference picture of picture 104 can be a picture preceding picture 102. It should be noted that the reference pictures of pictures 102-106 are only examples, and the present disclosure does not limit embodiments of the reference pictures as the examples shown in FIG. 1.
Typically, video codecs do not encode or decode an entire picture at one time due to the computing complexity of such tasks. Rather, they can split the picture into basic segments, and encode or decode the picture segment by segment. Such basic segments are referred to as basic processing units (“BPUs”) in the present disclosure. For example, structure 110 in FIG. 1 shows an example structure of a picture of video sequence 100 (e.g., any of pictures 102-108). In structure 110, a picture is divided into 4×4 basic processing units, the boundaries of which are shown as dash lines. In some embodiments, the basic processing units can be referred to as “macroblocks” in some video coding standards (e.g., MPEG family, H.261, H.263, or H.264/AVC), or as “coding tree units” (“CTUs”) in some other video coding standards (e.g., H.265/HEVC or H.266/VVC). The basic processing units can have variable sizes in a picture, such as 128×128, 64×64, 32×32, 16×16, 8×8, 16×32, or any arbitrary shape and size of pixels. The sizes and shapes of the basic processing units can be selected for a picture based on the balance of coding efficiency and levels of details to be kept in the basic processing unit.
The basic processing units can be logical units, which can include a group of different types of video data stored in a computer memory (e.g., in a video frame buffer). For example, a basic processing unit of a color picture can include a luma component (Y) representing achromatic brightness information, one or more chroma components (e.g., Cb and Cr) representing color information, and associated syntax elements, in which the luma and chroma components can have the same size of the basic processing unit. The luma and chroma components can be referred to as “coding tree blocks” (“CTBs”) in some video coding standards (e.g., H.265/HEVC or H.266/VVC). Any operation performed to a basic processing unit can be repeatedly performed to each of its luma and chroma components.
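A minimal sketch of the picture-splitting step described above, assuming a 128×128 basic processing unit (CTU) size. The edge clipping with min() mirrors how partial units at the right and bottom picture borders are handled, but the function name and interface are illustrative, not from any codec API.

```python
# Divide a picture into fixed-size basic processing units (CTUs),
# clipping partial units at the right/bottom borders.

def split_into_ctus(width, height, ctu_size=128):
    """Yield (x, y, w, h) for each basic processing unit, in raster order."""
    for y in range(0, height, ctu_size):
        for x in range(0, width, ctu_size):
            yield (x, y, min(ctu_size, width - x), min(ctu_size, height - y))

ctus = list(split_into_ctus(1920, 1080))
# 1920/128 = 15 columns; ceil(1080/128) = 9 rows -> 135 units, with the
# bottom row clipped to height 1080 - 8*128 = 56.
```

The same operation, repeated for each of the luma and chroma planes, gives the coding tree blocks mentioned above.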
Video coding has multiple stages of operations, examples of which are shown in FIGS. 2A-2B and FIGS. 3A-3B. For each stage, the size of the basic processing units can still be too large for processing, and thus can be further divided into segments referred to as “basic processing sub-units” in the present disclosure. In some embodiments, the basic processing sub-units can be referred to as “blocks” in some video coding standards (e.g., MPEG family, H.261, H.263, or H.264/AVC), or as “coding units” (“CUs”) in some other video coding standards (e.g., H.265/HEVC or H.266/VVC). A basic processing sub-unit can have the same or smaller size than the basic processing unit. Similar to the basic processing units, basic processing sub-units are also logical units, which can include a group of different types of video data (e.g., Y, Cb, Cr, and associated syntax elements) stored in a computer memory (e.g., in a video frame buffer). Any operation performed to a basic processing sub-unit can be repeatedly performed to each of its luma and chroma components. It should be noted that such division can be performed to further levels depending on processing needs. It should also be noted that different stages can divide the basic processing units using different schemes.
For example, at a mode decision stage (an example of which is shown in FIG. 2B), the encoder can decide what prediction mode (e.g., intra-picture prediction or inter-picture prediction) to use for a basic processing unit, which can be too large to make such a decision. The encoder can split the basic processing unit into multiple basic processing sub-units (e.g., CUs as in H.265/HEVC or H.266/VVC), and decide a prediction type for each individual basic processing sub-unit.
For another example, at a prediction stage (an example of which is shown in FIGS. 2A-2B), the encoder can perform prediction operation at the level of basic processing sub-units (e.g., CUs). However, in some cases, a basic processing sub-unit can still be too large to process. The encoder can further split the basic processing sub-unit into smaller segments (e.g., referred to as “prediction blocks” or “PBs” in H.265/HEVC or H.266/VVC), at the level of which the prediction operation can be performed.
For another example, at a transform stage (an example of which is shown in FIGS. 2A-2B), the encoder can perform a transform operation for residual basic processing sub-units (e.g., CUs). However, in some cases, a basic processing sub-unit can still be too large to process. The encoder can further split the basic processing sub-unit into smaller segments (e.g., referred to as “transform blocks” or “TBs” in H.265/HEVC or H.266/VVC), at the level of which the transform operation can be performed. It should be noted that the division schemes of the same basic processing sub-unit can be different at the prediction stage and the transform stage. For example, in H.265/HEVC or H.266/VVC, the prediction blocks and transform blocks of the same CU can have different sizes and numbers.
In structure 110 of FIG. 1, basic processing unit 112 is further divided into 3×3 basic processing sub-units, the boundaries of which are shown as dotted lines. Different basic processing units of the same picture can be divided into basic processing sub-units in different schemes.
In some implementations, to provide the capability of parallel processing and error resilience to video encoding and decoding, a picture can be divided into regions for processing, such that, for a region of the picture, the encoding or decoding process can depend on no information from any other region of the picture. In other words, each region of the picture can be processed independently. By doing so, the codec can process different regions of a picture in parallel, thus increasing the coding efficiency. Also, when data of a region is corrupted in the processing or lost in network transmission, the codec can correctly encode or decode other regions of the same picture without reliance on the corrupted or lost data, thus providing the capability of error resilience. In some video coding standards, a picture can be divided into different types of regions. For example, H.265/HEVC and H.266/VVC provide two types of regions: “slices” and “tiles.” It should also be noted that different pictures of video sequence 100 can have different partition schemes for dividing a picture into regions.
For example, in FIG. 1, structure 110 is divided into three regions 114, 116, and 118, the boundaries of which are shown as solid lines inside structure 110. Region 114 includes four basic processing units. Each of regions 116 and 118 includes six basic processing units. It should be noted that the basic processing units, basic processing sub-units, and regions of structure 110 in FIG. 1 are only examples, and the present disclosure does not limit embodiments thereof.
FIG. 2A illustrates a schematic diagram of an example encoding process 200A, consistent with embodiments of the disclosure. For example, the encoding process 200A can be performed by an encoder. As shown in FIG. 2A, the encoder can encode video sequence 202 into video bitstream 228 according to process 200A. Similar to video sequence 100 in FIG. 1, video sequence 202 can include a set of pictures (referred to as “original pictures”) arranged in a temporal order. Similar to structure 110 in FIG. 1, each original picture of video sequence 202 can be divided by the encoder into basic processing units, basic processing sub-units, or regions for processing. In some embodiments, the encoder can perform process 200A at the level of basic processing units for each original picture of video sequence 202. For example, the encoder can perform process 200A in an iterative manner, in which the encoder can encode a basic processing unit in one iteration of process 200A. In some embodiments, the encoder can perform process 200A in parallel for regions (e.g., regions 114-118) of each original picture of video sequence 202.
In FIG. 2A, the encoder can feed a basic processing unit (referred to as an “original BPU”) of an original picture of video sequence 202 to prediction stage 204 to generate prediction data 206 and predicted BPU 208. The encoder can subtract predicted BPU 208 from the original BPU to generate residual BPU 210. The encoder can feed residual BPU 210 to transform stage 212 and quantization stage 214 to generate quantized transform coefficients 216. The encoder can feed prediction data 206 and quantized transform coefficients 216 to binary coding stage 226 to generate video bitstream 228. Components 202, 204, 206, 208, 210, 212, 214, 216, 226, and 228 can be referred to as a “forward path.” During process 200A, after quantization stage 214, the encoder can feed quantized transform coefficients 216 to inverse quantization stage 218 and inverse transform stage 220 to generate reconstructed residual BPU 222. The encoder can add reconstructed residual BPU 222 to predicted BPU 208 to generate prediction reference 224, which is used in prediction stage 204 for the next iteration of process 200A. Components 218, 220, 222, and 224 of process 200A can be referred to as a “reconstruction path.” The reconstruction path can be used to ensure that both the encoder and the decoder use the same reference data for prediction.
The encoder can perform process 200A iteratively to encode each original BPU of the original picture (in the forward path) and generate predicted reference 224 for encoding the next original BPU of the original picture (in the reconstruction path). After encoding all original BPUs of the original picture, the encoder can proceed to encode the next picture in video sequence 202.
Referring to process 200A, the encoder can receive video sequence 202 generated by a video capturing device (e.g., a camera). The term “receive” used herein can refer to receiving, inputting, acquiring, retrieving, obtaining, reading, accessing, or any action in any manner for inputting data.
At prediction stage 204, at a current iteration, the encoder can receive an original BPU and prediction reference 224, and perform a prediction operation to generate prediction data 206 and predicted BPU 208. Prediction reference 224 can be generated from the reconstruction path of the previous iteration of process 200A. The purpose of prediction stage 204 is to reduce information redundancy by extracting prediction data 206 that can be used to reconstruct the original BPU as predicted BPU 208 from prediction data 206 and prediction reference 224.
Ideally, predicted BPU 208 can be identical to the original BPU. However, due to non-ideal prediction and reconstruction operations, predicted BPU 208 is generally slightly different from the original BPU. For recording such differences, after generating predicted BPU 208, the encoder can subtract it from the original BPU to generate residual BPU 210. For example, the encoder can subtract values (e.g., greyscale values or RGB values) of pixels of predicted BPU 208 from values of corresponding pixels of the original BPU. Each pixel of residual BPU 210 can have a residual value as a result of such subtraction between the corresponding pixels of the original BPU and predicted BPU 208. Compared with the original BPU, prediction data 206 and residual BPU 210 can have fewer bits, but they can be used to reconstruct the original BPU without significant quality deterioration. Thus, the original BPU is compressed.
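The per-pixel subtraction described above can be sketched as follows (the 4×4 greyscale values and the flat prediction are illustrative only):

```python
import numpy as np

# Illustrative 4x4 greyscale values; any BPU size works the same way.
original_bpu = np.array([[52, 55, 61, 66],
                         [70, 61, 64, 73],
                         [63, 59, 55, 90],
                         [67, 61, 68, 104]], dtype=np.int16)
predicted_bpu = np.full((4, 4), 64, dtype=np.int16)  # e.g., a flat prediction

# Residual BPU: per-pixel difference between original and predicted values.
residual_bpu = original_bpu - predicted_bpu

# The original BPU can be recovered exactly from prediction plus residual.
reconstructed = predicted_bpu + residual_bpu
```

The residual values cluster near zero when the prediction is good, which is what makes the subsequent transform and quantization stages effective.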
To further compress residual BPU 210, at transform stage 212, the encoder can reduce spatial redundancy of residual BPU 210 by decomposing it into a set of two-dimensional “base patterns,” each base pattern being associated with a “transform coefficient.” The base patterns can have the same size (e.g., the size of residual BPU 210). Each base pattern can represent a variation frequency (e.g., frequency of brightness variation) component of residual BPU 210. None of the base patterns can be reproduced from any combinations (e.g., linear combinations) of any other base patterns. In other words, the decomposition can decompose variations of residual BPU 210 into a frequency domain. Such a decomposition is analogous to a discrete Fourier transform of a function, in which the base patterns are analogous to the base functions (e.g., trigonometry functions) of the discrete Fourier transform, and the transform coefficients are analogous to the coefficients associated with the base functions.
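A minimal sketch of such a decomposition, using a hand-rolled orthonormal DCT-II (a common choice of base patterns; the 4×4 residual values are illustrative only):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis: row k is the k-th 1-D cosine base pattern."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

n = 4
C = dct_matrix(n)
residual = np.array([[-5.,  3.,  0.,  1.],
                     [ 2., -1.,  4.,  0.],
                     [ 0.,  2., -3.,  1.],
                     [ 1.,  0.,  2., -2.]])

# Forward transform: decompose the residual into n*n 2-D base patterns;
# coeffs[u, v] is the transform coefficient of base pattern (u, v).
coeffs = C @ residual @ C.T

# Inverse transform: the weighted sum of base patterns restores the residual.
restored = C.T @ coeffs @ C
```

Because the basis is orthonormal, no base pattern is a combination of the others, and the inverse transform recovers the residual exactly, matching the invertibility property described in the text.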
Different transform algorithms can use different base patterns. Various transform algorithms can be used at transform stage 212, such as, for example, a discrete cosine transform, a discrete sine transform, or the like. The transform at transform stage 212 is invertible. That is, the encoder can restore residual BPU 210 by an inverse operation of the transform (referred to as an “inverse transform”). For example, to restore a pixel of residual BPU 210, the inverse transform can be multiplying values of corresponding pixels of the base patterns by respective associated coefficients and adding the products to produce a weighted sum. For a video coding standard, both the encoder and decoder can use the same transform algorithm (thus the same base patterns). Thus, the encoder can record only the transform coefficients, from which the decoder can reconstruct residual BPU 210 without receiving the base patterns from the encoder. Compared with residual BPU 210, the transform coefficients can have fewer bits, but they can be used to reconstruct residual BPU 210 without significant quality deterioration. Thus, residual BPU 210 is further compressed.
The encoder can further compress the transform coefficients at quantization stage 214. In the transform process, different base patterns can represent different variation frequencies (e.g., brightness variation frequencies). Because human eyes are generally better at recognizing low-frequency variation, the encoder can disregard information of high-frequency variation without causing significant quality deterioration in decoding. For example, at quantization stage 214, the encoder can generate quantized transform coefficients 216 by dividing each transform coefficient by an integer value (referred to as a “quantization scale factor”) and rounding the quotient to its nearest integer. After such an operation, some transform coefficients of the high-frequency base patterns can be converted to zero, and the transform coefficients of the low-frequency base patterns can be converted to smaller integers. The encoder can disregard the zero-value quantized transform coefficients 216, by which the transform coefficients are further compressed. The quantization process is also invertible, in which quantized transform coefficients 216 can be reconstructed to the transform coefficients in an inverse operation of the quantization (referred to as “inverse quantization”).
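The divide-and-round operation can be sketched as below (the 4×4 coefficient values and the scale factor of 10 are illustrative only, not taken from any codec):

```python
import numpy as np

def quantize(coeffs, q_step):
    """Divide each transform coefficient by the quantization scale factor
    and round the quotient to the nearest integer."""
    return np.rint(coeffs / q_step).astype(np.int32)

def dequantize(qcoeffs, q_step):
    """Inverse quantization: approximately restore the coefficients."""
    return qcoeffs * q_step

# Typical layout: a large low-frequency coefficient at [0, 0], small
# high-frequency coefficients toward the bottom-right corner.
coeffs = np.array([[620.0, 31.0, -18.0,  4.0],
                   [ 25.0, 12.0,   3.0, -2.0],
                   [ -9.0,  4.0,   1.0,  1.0],
                   [  3.0, -2.0,   1.0,  0.0]])

q = quantize(coeffs, q_step=10)
# High-frequency (small) coefficients collapse to zero, while
# low-frequency (large) coefficients survive as small integers.
```

Note that `dequantize(q, 10)` only approximates `coeffs`: the remainders discarded by the rounding are lost, which is why the quantization stage is lossy.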
Because the encoder disregards the remainders of such divisions in the rounding operation, quantization stage 214 can be lossy. Typically, quantization stage 214 can contribute the most information loss in process 200A. The larger the information loss is, the fewer bits the quantized transform coefficients 216 can need. For obtaining different levels of information loss, the encoder can use different values of the quantization parameter or any other parameter of the quantization process.
At binary coding stage 226, the encoder can encode prediction data 206 and quantized transform coefficients 216 using a binary coding technique, such as, for example, entropy coding, variable length coding, arithmetic coding, Huffman coding, context-adaptive binary arithmetic coding, or any other lossless or lossy compression algorithm. In some embodiments, besides prediction data 206 and quantized transform coefficients 216, the encoder can encode other information at binary coding stage 226, such as, for example, a prediction mode used at prediction stage 204, parameters of the prediction operation, a transform type at transform stage 212, parameters of the quantization process (e.g., quantization parameters), an encoder control parameter (e.g., a bitrate control parameter), or the like. The encoder can use the output data of binary coding stage 226 to generate video bitstream 228. In some embodiments, video bitstream 228 can be further packetized for network transmission.
Referring to the reconstruction path of process 200A, at inverse quantization stage 218, the encoder can perform inverse quantization on quantized transform coefficients 216 to generate reconstructed transform coefficients. At inverse transform stage 220, the encoder can generate reconstructed residual BPU 222 based on the reconstructed transform coefficients. The encoder can add reconstructed residual BPU 222 to predicted BPU 208 to generate prediction reference 224 that is to be used in the next iteration of process 200A.
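Chaining the reconstruction-path stages together can be sketched as follows. The helper names, the orthonormal DCT, and the single quantization step are all illustrative assumptions, not part of any standard:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are 1-D base patterns)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

n, q_step = 4, 8
C = dct_matrix(n)
rng = np.random.default_rng(0)
predicted_bpu = rng.integers(0, 256, (n, n)).astype(float)
residual_bpu = rng.integers(-20, 21, (n, n)).astype(float)

# Forward path: transform and quantize the residual.
qcoeffs = np.rint((C @ residual_bpu @ C.T) / q_step)

# Reconstruction path: inverse quantization, inverse transform,
# then add the reconstructed residual to the predicted BPU.
recon_residual = C.T @ (qcoeffs * q_step) @ C
prediction_reference = predicted_bpu + recon_residual

# Quantization is lossy, so the reference only approximates the original
# samples, but encoder and decoder can both derive exactly this reference.
max_err = np.abs(prediction_reference - (predicted_bpu + residual_bpu)).max()
```

Because both sides repeat the same inverse quantization and inverse transform, they agree bit-for-bit on `prediction_reference`, which is the point of running a reconstruction path inside the encoder.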
It should be noted that other variations of the process 200A can be used to encode video sequence 202. In some embodiments, stages of process 200A can be performed by the encoder in different orders. In some embodiments, one or more stages of process 200A can be combined into a single stage. In some embodiments, a single stage of process 200A can be divided into multiple stages. For example, transform stage 212 and quantization stage 214 can be combined into a single stage. In some embodiments, process 200A can include additional stages. In some embodiments, process 200A can omit one or more stages in FIG. 2A.
FIG. 2B illustrates a schematic diagram of another example encoding process 200B, consistent with embodiments of the disclosure. Process 200B can be modified from process 200A. For example, process 200B can be used by an encoder conforming to a hybrid video coding standard (e.g., H.26x series). Compared with process 200A, the forward path of process 200B additionally includes mode decision stage 230 and divides prediction stage 204 into spatial prediction stage 2042 and temporal prediction stage 2044. The reconstruction path of process 200B additionally includes loop filter stage 232 and buffer 234.
Generally, prediction techniques can be categorized into two types: spatial prediction and temporal prediction. Spatial prediction (e.g., an intra-picture prediction or “intra prediction”) can use pixels from one or more already coded neighboring BPUs in the same picture to predict the current BPU. That is, prediction reference 224 in the spatial prediction can include the neighboring BPUs. The spatial prediction can reduce the inherent spatial redundancy of the picture. Temporal prediction (e.g., an inter-picture prediction or “inter prediction”) can use regions from one or more already coded pictures to predict the current BPU. That is, prediction reference 224 in the temporal prediction can include the coded pictures. The temporal prediction can reduce the inherent temporal redundancy of the pictures.
Referring to process 200B, in the forward path, the encoder performs the prediction operation at spatial prediction stage 2042 and temporal prediction stage 2044. For example, at spatial prediction stage 2042, the encoder can perform the intra prediction. For an original BPU of a picture being encoded, prediction reference 224 can include one or more neighboring BPUs that have been encoded (in the forward path) and reconstructed (in the reconstruction path) in the same picture. The encoder can generate predicted BPU 208 by extrapolating the neighboring BPUs. The extrapolation technique can include, for example, a linear extrapolation or interpolation, a polynomial extrapolation or interpolation, or the like. In some embodiments, the encoder can perform the extrapolation at the pixel level, such as by extrapolating values of corresponding pixels for each pixel of predicted BPU 208. The neighboring BPUs used for extrapolation can be located with respect to the original BPU from various directions, such as in a vertical direction (e.g., on top of the original BPU), a horizontal direction (e.g., to the left of the original BPU), a diagonal direction (e.g., to the down-left, down-right, up-left, or up-right of the original BPU), or any direction defined in the used video coding standard. For the intra prediction, prediction data 206 can include, for example, locations (e.g., coordinates) of the used neighboring BPUs, sizes of the used neighboring BPUs, parameters of the extrapolation, a direction of the used neighboring BPUs with respect to the original BPU, or the like.
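Two of the simplest directional extrapolations can be sketched as below. Real codecs define many more angular modes; the function names here are hypothetical:

```python
import numpy as np

def intra_predict_vertical(top_row):
    """Vertical intra prediction: extend the reconstructed row of pixels
    above the current BPU downward through the whole block."""
    n = len(top_row)
    return np.tile(np.asarray(top_row), (n, 1))

def intra_predict_horizontal(left_col):
    """Horizontal intra prediction: extend the reconstructed column of
    pixels to the left of the current BPU across the whole block."""
    n = len(left_col)
    return np.tile(np.asarray(left_col)[:, None], (1, n))

# Neighboring reconstructed pixels (illustrative values).
pred_v = intra_predict_vertical([10, 20, 30, 40])
pred_h = intra_predict_horizontal([5, 6, 7, 8])
```

In either case the prediction data only needs to signal the direction (and the neighbors' locations), since the decoder already holds the reconstructed neighboring pixels.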
For another example, at temporal prediction stage 2044, the encoder can perform the inter prediction. For an original BPU of a current picture, prediction reference 224 can include one or more pictures (referred to as “reference pictures”) that have been encoded (in the forward path) and reconstructed (in the reconstruction path). In some embodiments, a reference picture can be encoded and reconstructed BPU by BPU. For example, the encoder can add reconstructed residual BPU 222 to predicted BPU 208 to generate a reconstructed BPU. When all reconstructed BPUs of the same picture are generated, the encoder can generate a reconstructed picture as a reference picture. The encoder can perform an operation of “motion estimation” to search for a matching region in a scope (referred to as a “search window”) of the reference picture. The location of the search window in the reference picture can be determined based on the location of the original BPU in the current picture. For example, the search window can be centered at a location having the same coordinates in the reference picture as the original BPU in the current picture and can be extended out for a predetermined distance. When the encoder identifies (e.g., by using a pel-recursive algorithm, a block-matching algorithm, or the like) a region similar to the original BPU in the search window, the encoder can determine such a region as the matching region. The matching region can have different dimensions (e.g., being smaller than, equal to, larger than, or in a different shape) from the original BPU. Because the reference picture and the current picture are temporally separated in the timeline (e.g., as shown in FIG. 1), it can be deemed that the matching region “moves” to the location of the original BPU as time goes by. The encoder can record the direction and distance of such a motion as a “motion vector.” When multiple reference pictures are used (e.g., as picture 106 in FIG. 1), the encoder can search for a matching region and determine its associated motion vector for each reference picture. In some embodiments, the encoder can assign weights to pixel values of the matching regions of respective matching reference pictures.
The motion estimation can be used to identify various types of motions, such as, for example, translations, rotations, zooming, or the like. For inter prediction, prediction data 206 can include, for example, locations (e.g., coordinates) of the matching region, the motion vectors associated with the matching region, the number of reference pictures, weights associated with the reference pictures, or the like.
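A full-search block-matching motion estimation, as mentioned above, can be sketched as follows. Production encoders use much faster search patterns; the exhaustive scan and the SAD criterion here are illustrative choices:

```python
import numpy as np

def motion_estimate(original_bpu, reference, bpu_xy, search_range):
    """Full block-matching search: scan a search window in the reference
    picture centered at the BPU's own coordinates, pick the candidate
    region with the smallest sum of absolute differences (SAD), and
    return the motion vector (dx, dy) together with that SAD."""
    h, w = original_bpu.shape
    x0, y0 = bpu_xy
    best = (0, 0, float("inf"))
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            x, y = x0 + dx, y0 + dy
            if x < 0 or y < 0 or y + h > reference.shape[0] or x + w > reference.shape[1]:
                continue  # candidate falls outside the reference picture
            cand = reference[y:y + h, x:x + w].astype(int)
            sad = np.abs(cand - original_bpu.astype(int)).sum()
            if sad < best[2]:
                best = (dx, dy, sad)
    return best

# Toy example: the reference holds the current 4x4 block shifted by (2, 1).
reference = np.zeros((16, 16), dtype=np.uint8)
patch = np.arange(16, dtype=np.uint8).reshape(4, 4) + 100
reference[5:9, 6:10] = patch   # matching region at (x=6, y=5)
mv = motion_estimate(patch, reference, bpu_xy=(4, 4), search_range=3)
```

The returned `(dx, dy)` is exactly the motion vector the encoder would record in prediction data; the decoder only needs this vector (not the search) to locate the matching region.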
For generating predicted BPU 208, the encoder can perform an operation of “motion compensation.” The motion compensation can be used to reconstruct predicted BPU 208 based on prediction data 206 (e.g., the motion vector) and prediction reference 224. For example, the encoder can move the matching region of the reference picture according to the motion vector, in which the encoder can predict the original BPU of the current picture. When multiple reference pictures are used (e.g., as picture 106 in FIG. 1), the encoder can move the matching regions of the reference pictures according to the respective motion vectors and average pixel values of the matching regions. In some embodiments, if the encoder has assigned weights to pixel values of the matching regions of respective matching reference pictures, the encoder can add a weighted sum of the pixel values of the moved matching regions.
In some embodiments, the inter prediction can be unidirectional or bidirectional. Unidirectional inter predictions can use one or more reference pictures in the same temporal direction with respect to the current picture. For example, picture 104 in FIG. 1 is a unidirectional inter-predicted picture, in which the reference picture (e.g., picture 102) precedes picture 104. Bidirectional inter predictions can use one or more reference pictures at both temporal directions with respect to the current picture. For example, picture 106 in FIG. 1 is a bidirectional inter-predicted picture, in which the reference pictures (e.g., pictures 104 and 108) are at both temporal directions with respect to picture 106.
Still referring to the forward path of process 200B, after spatial prediction stage 2042 and temporal prediction stage 2044, at mode decision stage 230, the encoder can select a prediction mode (e.g., one of the intra prediction or the inter prediction) for the current iteration of process 200B. For example, the encoder can perform a rate-distortion optimization technique, in which the encoder can select a prediction mode to minimize a value of a cost function depending on a bit rate of a candidate prediction mode and distortion of the reconstructed reference picture under the candidate prediction mode. Depending on the selected prediction mode, the encoder can generate the corresponding predicted BPU 208 and predicted data 206.
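The rate-distortion cost function mentioned above is commonly of the form cost = distortion + λ · rate. A minimal sketch, with hypothetical distortion and bit measurements for one BPU:

```python
def rd_mode_decision(candidates, lam):
    """Rate-distortion optimization: pick the candidate prediction mode
    minimizing cost = distortion + lambda * rate (in bits)."""
    return min(candidates, key=lambda c: c["distortion"] + lam * c["bits"])

# Hypothetical measurements (illustrative numbers only): inter prediction
# fits the block better but costs more bits to signal than intra.
candidates = [
    {"mode": "intra", "distortion": 1200.0, "bits": 96},
    {"mode": "inter", "distortion": 400.0,  "bits": 160},
]

# With a small lambda, distortion dominates and inter prediction wins;
# with a large lambda, the cheaper-to-signal intra mode wins.
low_lambda_choice = rd_mode_decision(candidates, lam=1.0)["mode"]
high_lambda_choice = rd_mode_decision(candidates, lam=20.0)["mode"]
```

The Lagrange multiplier λ thus acts as the knob trading reconstruction quality against bitstream size, which is how the same mode decision machinery serves different bitrate targets.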
In the reconstruction path of process 200B, if intra prediction mode has been selected in the forward path, after generating prediction reference 224 (e.g., the current BPU that has been encoded and reconstructed in the current picture), the encoder can directly feed prediction reference 224 to spatial prediction stage 2042 for later usage (e.g., for extrapolation of a next BPU of the current picture). The encoder can feed prediction reference 224 to loop filter stage 232, at which the encoder can apply a loop filter to prediction reference 224 to reduce or eliminate distortion (e.g., blocking artifacts) introduced during coding of the prediction reference 224. The encoder can apply various loop filter techniques at loop filter stage 232, such as, for example, deblocking, sample adaptive offsets, adaptive loop filters, or the like. The loop-filtered reference picture can be stored in buffer 234 (or “decoded picture buffer”) for later use (e.g., to be used as an inter-prediction reference picture for a future picture of video sequence 202). The encoder can store one or more reference pictures in buffer 234 to be used at temporal prediction stage 2044. In some embodiments, the encoder can encode parameters of the loop filter (e.g., a loop filter strength) at binary coding stage 226, along with quantized transform coefficients 216, prediction data 206, and other information.
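The deblocking idea, smoothing small discontinuities at block boundaries while preserving true edges, can be sketched as follows. Actual deblocking filters (e.g., in H.265/HEVC) use more taps and more elaborate strength decisions; this toy version is illustrative only:

```python
import numpy as np

def deblock_vertical_edge(picture, edge_x, threshold):
    """Minimal deblocking sketch: for each row, if the jump across the
    vertical block edge at column edge_x looks like a coding artifact
    (smaller than threshold), replace the two pixels adjacent to the
    edge with their average; larger jumps are kept as true edges."""
    out = picture.astype(float).copy()
    p = picture[:, edge_x - 1].astype(float)  # last column of left block
    q = picture[:, edge_x].astype(float)      # first column of right block
    artifact = np.abs(p - q) < threshold
    avg = (p + q) / 2.0
    out[artifact, edge_x - 1] = avg[artifact]
    out[artifact, edge_x] = avg[artifact]
    return out

# A flat area coded as two blocks with a small level jump at x = 4,
# i.e., a typical blocking artifact.
pic = np.hstack([np.full((4, 4), 80.0), np.full((4, 4), 84.0)])
filtered = deblock_vertical_edge(pic, edge_x=4, threshold=10)
```

Because the filtered picture is what enters the decoded picture buffer, the same filtering must run in both encoder and decoder so that their inter-prediction references stay identical.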
FIG. 3A illustrates a schematic diagram of an example decoding process 300A, consistent with embodiments of the disclosure. Process 300A can be a decompression process corresponding to the compression process 200A in FIG. 2A. In some embodiments, process 300A can be similar to the reconstruction path of process 200A. A decoder can decode video bitstream 228 into video stream 304 according to process 300A. Video stream 304 can be very similar to video sequence 202. However, due to the information loss in the compression and decompression process (e.g., quantization stage 214 in FIGS. 2A-2B), generally, video stream 304 is not identical to video sequence 202. Similar to processes 200A and 200B in FIGS. 2A-2B, the decoder can perform process 300A at the level of basic processing units (BPUs) for each picture encoded in video bitstream 228. For example, the decoder can perform process 300A in an iterative manner, in which the decoder can decode a basic processing unit in one iteration of process 300A. In some embodiments, the decoder can perform process 300A in parallel for regions (e.g., regions 114-118) of each picture encoded in video bitstream 228.
In FIG. 3A, the decoder can feed a portion of video bitstream 228 associated with a basic processing unit (referred to as an “encoded BPU”) of an encoded picture to binary decoding stage 302. At binary decoding stage 302, the decoder can decode the portion into prediction data 206 and quantized transform coefficients 216. The decoder can feed quantized transform coefficients 216 to inverse quantization stage 218 and inverse transform stage 220 to generate reconstructed residual BPU 222. The decoder can feed prediction data 206 to prediction stage 204 to generate predicted BPU 208. The decoder can add reconstructed residual BPU 222 to predicted BPU 208 to generate predicted reference 224. In some embodiments, predicted reference 224 can be stored in a buffer (e.g., a decoded picture buffer in a computer memory). The decoder can feed predicted reference 224 to prediction stage 204 for performing a prediction operation in the next iteration of process 300A.
The decoder can perform process 300A iteratively to decode each encoded BPU of the encoded picture and generate predicted reference 224 for decoding the next encoded BPU of the encoded picture. After decoding all encoded BPUs of the encoded picture, the decoder can output the picture to video stream 304 for display and proceed to decode the next encoded picture in video bitstream 228.
At binary decoding stage 302, the decoder can perform an inverse operation of the binary coding technique used by the encoder (e.g., entropy coding, variable length coding, arithmetic coding, Huffman coding, context-adaptive binary arithmetic coding, or any other lossless compression algorithm). In some embodiments, besides prediction data 206 and quantized transform coefficients 216, the decoder can decode other information at binary decoding stage 302, such as, for example, a prediction mode, parameters of the prediction operation, a transform type, parameters of the quantization process (e.g., quantization parameters), an encoder control parameter (e.g., a bitrate control parameter), or the like. In some embodiments, if video bitstream 228 is transmitted over a network in packets, the decoder can depacketize video bitstream 228 before feeding it to binary decoding stage 302.
FIG. 3B illustrates a schematic diagram of another example decoding process 300B, consistent with embodiments of the disclosure. Process 300B can be modified from process 300A. For example, process 300B can be used by a decoder conforming to a hybrid video coding standard (e.g., H.26x series). Compared with process 300A, process 300B additionally divides prediction stage 204 into spatial prediction stage 2042 and temporal prediction stage 2044, and additionally includes loop filter stage 232 and buffer 234.
In process 300B, for an encoded basic processing unit (referred to as a “current BPU”) of an encoded picture (referred to as a “current picture”) that is being decoded, prediction data 206 decoded from binary decoding stage 302 by the decoder can include various types of data, depending on what prediction mode was used to encode the current BPU by the encoder. For example, if intra prediction was used by the encoder to encode the current BPU, prediction data 206 can include a prediction mode indicator (e.g., a flag value) indicative of the intra prediction, parameters of the intra prediction operation, or the like. The parameters of the intra prediction operation can include, for example, locations (e.g., coordinates) of one or more neighboring BPUs used as a reference, sizes of the neighboring BPUs, parameters of extrapolation, a direction of the neighboring BPUs with respect to the original BPU, or the like. For another example, if inter prediction was used by the encoder to encode the current BPU, prediction data 206 can include a prediction mode indicator (e.g., a flag value) indicative of the inter prediction, parameters of the inter prediction operation, or the like. The parameters of the inter prediction operation can include, for example, the number of reference pictures associated with the current BPU, weights respectively associated with the reference pictures, locations (e.g., coordinates) of one or more matching regions in the respective reference pictures, one or more motion vectors respectively associated with the matching regions, or the like.
Based on the prediction mode indicator, the decoder can decide whether to perform a spatial prediction (e.g., the intra prediction) at spatial prediction stage 2042 or a temporal prediction (e.g., the inter prediction) at temporal prediction stage 2044. The details of performing such spatial prediction or temporal prediction are described in FIG. 2B and will not be repeated hereinafter. After performing such spatial prediction or temporal prediction, the decoder can generate predicted BPU 208. The decoder can add predicted BPU 208 and reconstructed residual BPU 222 to generate prediction reference 224, as described in FIG. 3A.
In process 300B, the decoder can feed predicted reference 224 to spatial prediction stage 2042 or temporal prediction stage 2044 for performing a prediction operation in the next iteration of process 300B. For example, if the current BPU is decoded using the intra prediction at spatial prediction stage 2042, after generating prediction reference 224 (e.g., the decoded current BPU), the decoder can directly feed prediction reference 224 to spatial prediction stage 2042 for later usage (e.g., for extrapolation of a next BPU of the current picture). If the current BPU is decoded using the inter prediction at temporal prediction stage 2044, after generating prediction reference 224 (e.g., a reference picture in which all BPUs have been decoded), the decoder can feed prediction reference 224 to loop filter stage 232 to reduce or eliminate distortion (e.g., blocking artifacts). The decoder can apply a loop filter to prediction reference 224, in a way as described in FIG. 2B. The loop-filtered reference picture can be stored in buffer 234 (e.g., a decoded picture buffer in a computer memory) for later use (e.g., to be used as an inter-prediction reference picture for a future encoded picture of video bitstream 228). The decoder can store one or more reference pictures in buffer 234 to be used at temporal prediction stage 2044. In some embodiments, prediction data 206 can further include parameters of the loop filter (e.g., a loop filter strength). In some embodiments, prediction data 206 includes parameters of the loop filter when the prediction mode indicator of prediction data 206 indicates that inter prediction was used to encode the current BPU.
FIG. 4 is a block diagram of an example apparatus 400 for encoding or decoding a video, consistent with embodiments of the disclosure. As shown in FIG. 4, apparatus 400 can include processor 402. When processor 402 executes instructions described herein, apparatus 400 can become a specialized machine for video encoding or decoding. Processor 402 can be any type of circuitry capable of manipulating or processing information. For example, processor 402 can include any combination of any number of a central processing unit (or “CPU”), a graphics processing unit (or “GPU”), a neural processing unit (“NPU”), a microcontroller unit (“MCU”), an optical processor, a programmable logic controller, a microcontroller, a microprocessor, a digital signal processor, an intellectual property (IP) core, a Programmable Logic Array (PLA), a Programmable Array Logic (PAL), a Generic Array Logic (GAL), a Complex Programmable Logic Device (CPLD), a Field-Programmable Gate Array (FPGA), a System On Chip (SoC), an Application-Specific Integrated Circuit (ASIC), or the like. In some embodiments, processor 402 can also be a set of processors grouped as a single logical component. For example, as shown in FIG. 4, processor 402 can include multiple processors, including processor 402a, processor 402b, and processor 402n.
Apparatus 400 can also include memory 404 configured to store data (e.g., a set of instructions, computer codes, intermediate data, or the like). For example, as shown in FIG. 4, the stored data can include program instructions (e.g., program instructions for implementing the stages in processes 200A, 200B, 300A, or 300B) and data for processing (e.g., video sequence 202, video bitstream 228, or video stream 304). Processor 402 can access the program instructions and data for processing (e.g., via bus 410), and execute the program instructions to perform an operation or manipulation on the data for processing. Memory 404 can include a high-speed random-access storage device or a non-volatile storage device. In some embodiments, memory 404 can include any combination of any number of a random-access memory (RAM), a read-only memory (ROM), an optical disc, a magnetic disk, a hard drive, a solid-state drive, a flash drive, a security digital (SD) card, a memory stick, a compact flash (CF) card, or the like. Memory 404 can also be a group of memories (not shown in FIG. 4) grouped as a single logical component.
Bus 410 can be a communication device that transfers data between components inside apparatus 400, such as an internal bus (e.g., a CPU-memory bus), an external bus (e.g., a universal serial bus port, a peripheral component interconnect express port), or the like.
For ease of explanation without causing ambiguity, processor 402 and other data processing circuits are collectively referred to as a "data processing circuit" in this disclosure. The data processing circuit can be implemented entirely as hardware, or as a combination of software, hardware, or firmware. In addition, the data processing circuit can be a single independent module or can be combined entirely or partially into any other component of apparatus 400.
Apparatus 400 can further include network interface 406 to provide wired or wireless communication with a network (e.g., the Internet, an intranet, a local area network, a mobile communications network, or the like). In some embodiments, network interface 406 can include any combination of any number of a network interface controller (NIC), a radio frequency (RF) module, a transponder, a transceiver, a modem, a router, a gateway, a wired network adapter, a wireless network adapter, a Bluetooth adapter, an infrared adapter, a near-field communication ("NFC") adapter, a cellular network chip, or the like.
In some embodiments, optionally, apparatus 400 can further include peripheral interface 408 to provide a connection to one or more peripheral devices. As shown in FIG. 4, the peripheral device can include, but is not limited to, a cursor control device (e.g., a mouse, a touchpad, or a touchscreen), a keyboard, a display (e.g., a cathode-ray tube display, a liquid crystal display, or a light-emitting diode display), a video input device (e.g., a camera or an input interface coupled to a video archive), or the like.
It should be noted that video codecs (e.g., a codec performing process 200A, 200B, 300A, or 300B) can be implemented as any combination of any software or hardware modules in apparatus 400. For example, some or all stages of process 200A, 200B, 300A, or 300B can be implemented as one or more software modules of apparatus 400, such as program instructions that can be loaded into memory 404. For another example, some or all stages of process 200A, 200B, 300A, or 300B can be implemented as one or more hardware modules of apparatus 400, such as a specialized data processing circuit (e.g., an FPGA, an ASIC, an NPU, or the like).
FIG. 5 shows an illustration of a CU coded in palette mode. In VVC draft 8, the palette mode can be used in monochrome, 4:2:0, 4:2:2, and 4:4:4 color formats. When palette mode is enabled, a flag indicating whether palette mode is used is transmitted at the CU level if the CU size is smaller than or equal to 64×64 and larger than 16 samples. If palette mode is utilized to code a (current) CU 500, the sample values in each position in the CU are represented by a small set of representative color values. The set is referred to as the palette 510.
For sample positions with values close to the palette colors 501, 502, 503, the corresponding palette indices are signaled. It is also possible to specify a color value that is outside the palette by signaling an escape index 504. Then, for all positions in the CU that use the escape color index, the (quantized) color component values are signaled for each of these positions.
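The mapping of samples to palette indices described above can be sketched as follows. This is an illustrative sketch only: the palette contents, the closeness threshold, and the function name are assumptions for illustration, not values taken from the VVC specification.

```python
def map_to_palette_indices(samples, palette, escape_threshold=2):
    """Map each sample to the index of the closest palette color, or to the
    escape index (len(palette)) if no palette color is close enough."""
    escape_index = len(palette)
    indices = []
    escape_values = []
    for value in samples:
        # Find the palette entry closest to this sample value.
        best = min(range(len(palette)), key=lambda i: abs(palette[i] - value))
        if abs(palette[best] - value) <= escape_threshold:
            indices.append(best)
        else:
            # Signal the escape index; the (quantized) value is sent separately.
            indices.append(escape_index)
            escape_values.append(value)
    return indices, escape_values
```

For a palette [10, 50, 200], the samples [10, 49, 120, 201] would map to indices [0, 1, 3, 2], with 120 signaled as an escape value.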
For coding of the palette, a palette predictor is maintained. FIG. 6 shows an exemplary process 600 for updating the palette predictor after encoding each coding unit. The predictor is initialized to 0 (i.e., empty) at the beginning of each slice for non-wavefront cases and at the beginning of each CTU row for wavefront cases. For each entry in the palette predictor, a reuse flag is signaled to indicate whether it will be included in the current palette of the current CU. The reuse flags are sent using run-length coding of zeros. After this, the number of new palette entries and the component values for the new palette entries are signaled. After encoding the palette coded CU, the palette predictor is updated using the current palette, and entries from the previous palette predictor that are not reused in the current palette are added at the end of the new palette predictor until the maximum size allowed is reached.
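The conventional update described above can be sketched as follows. The function name, the flag representation, and the maximum predictor size of 63 are assumptions for illustration; the key point is that every reuse flag must be inspected.

```python
MAX_PREDICTOR_SIZE = 63  # assumed maximum palette predictor size

def update_predictor_conventional(current_palette, prev_predictor, reuse_flags):
    """reuse_flags[i] is True if prev_predictor[i] was reused in current_palette."""
    # The current palette forms the front of the new predictor.
    new_predictor = list(current_palette)
    # Non-reused previous entries are appended until the maximum size is reached.
    for entry, reused in zip(prev_predictor, reuse_flags):
        if len(new_predictor) >= MAX_PREDICTOR_SIZE:
            break
        if not reused:  # requires checking each entry's reuse flag
            new_predictor.append(entry)
    return new_predictor
```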
An escape flag is signaled for each CU to indicate whether escape symbols are present in the current CU. If escape symbols are present, the palette table is augmented by one and the last index is assigned to the escape symbol. The palette indices of the samples in a CU form a palette index map, as shown in the example in FIG. 5. The index map is coded using horizontal or vertical traverse scans. The scan order is explicitly signaled in the bitstream using the palette_transpose_flag. The palette index map is coded using the index-run mode or the index-copy mode.
In VVC draft 8, the deblocking filter process includes defining a block boundary, deriving the boundary filtering strength based on the coding modes of the two neighboring blocks along the defined boundary, deriving the number of samples to be filtered, and applying the deblocking filter to those samples. When an edge is a coding unit, coding subblock, or transform unit boundary, the edge is defined as a block boundary. Then, the boundary filtering strength is calculated based on the coding modes of the two neighboring blocks according to the following six rules. (1) If both coding blocks are coded in BDPCM mode, the boundary filter strength is set to 0. (2) Otherwise, if one of the coding blocks is coded in intra mode, the boundary filter strength is set to 2. (3) Otherwise, if one of the coding blocks is coded in CIIP mode, the boundary filter strength is set to 2. (4) Otherwise, if one of the coding blocks contains one or more non-zero coefficient levels, the boundary filter strength is set to 1. (5) Otherwise, if one of the blocks is coded in IBC mode and the other block is coded in inter mode, the boundary filter strength is set to 1. (6) Otherwise (both blocks are coded in IBC mode or both in inter mode), the reference pictures and motion vectors of the two blocks are used to derive the boundary filter strength.
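The six rules above can be condensed into the following sketch. The block descriptors and their field names are hypothetical simplifications, and rule (6) is abbreviated to a single reference/motion-vector comparison rather than the full derivation of the specification.

```python
def boundary_strength(p, q):
    """p and q are dicts describing the two neighboring blocks."""
    if p["mode"] == "BDPCM" and q["mode"] == "BDPCM":
        return 0                                  # rule (1)
    if "intra" in (p["mode"], q["mode"]):
        return 2                                  # rule (2)
    if p.get("ciip") or q.get("ciip"):
        return 2                                  # rule (3)
    if p.get("nonzero_coeffs") or q.get("nonzero_coeffs"):
        return 1                                  # rule (4)
    if {p["mode"], q["mode"]} == {"IBC", "inter"}:
        return 1                                  # rule (5)
    # Rule (6): both IBC or both inter -- compare references and motion vectors
    # (abbreviated; the threshold of 8 is in units of 1/16 luma samples).
    same_ref = p.get("ref") == q.get("ref")
    mv_diff = any(abs(a - b) >= 8 for a, b in zip(p.get("mv", ()), q.get("mv", ())))
    return 1 if (not same_ref or mv_diff) else 0
```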
VVC draft 8 gives a more detailed description of the process of calculating the boundary filtering strength. Specifically, VVC draft 8 gives eight sequential, exhaustive scenarios. In scenario 1, if cIdx is equal to 0 and both samples p0 and q0 are in a coding block with intra_bdpcm_luma_flag equal to 1, bS[xDi][yDj] is set equal to 0. Otherwise, in scenario 2, if cIdx is greater than 0 and both samples p0 and q0 are in a coding block with intra_bdpcm_chroma_flag equal to 1, bS[xDi][yDj] is set equal to 0. Otherwise, in scenario 3, if the sample p0 or q0 is in the coding block of a coding unit coded with intra prediction mode, bS[xDi][yDj] is set equal to 2. Otherwise, in scenario 4, if the block edge is also a coding block edge and the sample p0 or q0 is in a coding block with ciip_flag equal to 1, bS[xDi][yDj] is set equal to 2. Otherwise, in scenario 5, if the block edge is also a transform block edge and the sample p0 or q0 is in a transform block which contains one or more non-zero transform coefficient levels, bS[xDi][yDj] is set equal to 1. Otherwise, in scenario 6, if the prediction mode of the coding subblock containing the sample p0 is different from the prediction mode of the coding subblock containing the sample q0 (i.e., one of the coding subblocks is coded in IBC prediction mode and the other is coded in inter prediction mode), bS[xDi][yDj] is set equal to 1.
Otherwise, in scenario 7, if cIdx is equal to 0, edgeFlags[xDi][yDj] is equal to 2, and one or more of the following conditions are true, bS[xDi][yDj] is set equal to 1.
Condition (1): the coding subblock containing the sample p0 and the coding subblock containing the sample q0 are both coded in IBC prediction mode, and the absolute difference between the horizontal or vertical components of the block vectors used in the prediction of the two coding subblocks is greater than or equal to 8 in units of 1/16 luma samples.
Condition (2): for the prediction of the coding subblock containing the sample p0, different reference pictures or a different number of motion vectors are used than for the prediction of the coding subblock containing the sample q0. For condition (2), note that the determination of whether the reference pictures used for the two coding subblocks are the same or different is based only on which pictures are referenced, without regard to whether a prediction is formed using an index into reference picture list 0 or an index into reference picture list 1, and also without regard to whether the index position within a reference picture list is different. Also, for condition (2), note that the number of motion vectors that are used for the prediction of a coding subblock with top-left sample covering (xSb, ySb) is equal to PredFlagL0[xSb][ySb]+PredFlagL1[xSb][ySb].
Condition (3): one motion vector is used to predict the coding subblock containing the sample p0 and one motion vector is used to predict the coding subblock containing the sample q0, and the absolute difference between the horizontal or vertical component of the motion vectors used is greater than or equal to 8 in units of 1/16 luma samples.
Condition (4): two motion vectors and two different reference pictures are used to predict the coding subblock containing the sample p0, two motion vectors for the same two reference pictures are used to predict the coding subblock containing the sample q0, and the absolute difference between the horizontal or vertical component of the two motion vectors used in the prediction of the two coding subblocks for the same reference picture is greater than or equal to 8 in units of 1/16 luma samples.
Condition (5): two motion vectors for the same reference picture are used to predict the coding subblock containing the sample p0, two motion vectors for the same reference picture are used to predict the coding subblock containing the sample q0, and both of the following conditions are true. Condition (5.1): the absolute difference between the horizontal or vertical component of the list 0 motion vectors used in the prediction of the two coding subblocks is greater than or equal to 8 in units of 1/16 luma samples, or the absolute difference between the horizontal or vertical component of the list 1 motion vectors used in the prediction of the two coding subblocks is greater than or equal to 8 in units of 1/16 luma samples. Condition (5.2): the absolute difference between the horizontal or vertical component of the list 0 motion vector used in the prediction of the coding subblock containing the sample p0 and the list 1 motion vector used in the prediction of the coding subblock containing the sample q0 is greater than or equal to 8 in units of 1/16 luma samples, or the absolute difference between the horizontal or vertical component of the list 1 motion vector used in the prediction of the coding subblock containing the sample p0 and the list 0 motion vector used in the prediction of the coding subblock containing the sample q0 is greater than or equal to 8 in units of 1/16 luma samples.
Otherwise, if none of the previous seven scenarios are satisfied, in scenario 8, the variable bS[xDi][yDj] is set equal to 0. After deriving the boundary filter strength, the number of samples to be filtered is derived, and the deblocking filter is applied to the samples. Note that when a block is coded in palette mode, the number of samples is set equal to 0. This means the deblocking filter is not applied to a block coded in palette mode.
As mentioned earlier, the construction of a current palette for a block consists of two parts. First, an entry in the current palette may be predicted from a palette predictor. For each entry in the palette predictor, a reuse flag is signaled to indicate whether this entry is included in the current palette or not. Second, the component values of a current palette entry may be directly signaled. After the current palette is obtained, the palette predictor is updated using the current palette. FIG. 7 shows a portion of section 7.3.10.6 ("Palette coding syntax") of VVC draft 8. When parsing the syntax of a palette coded block, the reuse flags (i.e., "palette_predictor_run" 701 in FIG. 7) are first decoded, followed by the component values of the new palette entries (i.e., "num_signalled_palette_entries" 702 and "new_palette_entries" 703 in FIG. 7). In VVC draft 8, the number of palette predictor entries (i.e., "PredictorPaletteSize[startComp]" 704 in FIG. 7) needs to be known before parsing the reuse flags. That means that when two neighboring blocks are both coded in palette mode, the syntax of the second block cannot be parsed until the palette predictor update process of the first block has finished.
In the conventional palette predictor updating processes, however, the reuse flag of each entry must be checked. In the worst case, up to 63 checks are needed. These conventional designs may not be hardware friendly for at least two reasons. First, the Context-based Adaptive Binary Arithmetic Coding (CABAC) parsing needs to wait until the palette predictor has been completely updated. The CABAC parser is normally the slowest module in the hardware, so this waiting may reduce CABAC throughput. Second, it may be burdensome on the hardware when the palette predictor updating process is implemented in the CABAC parsing stage.
Additionally, another issue in these conventional designs is that the boundary filter strength is not defined for palette mode: when one of the neighboring blocks is coded in palette mode and the other is coded in IBC or inter mode, the boundary filter strength is undefined.
Embodiments of the present disclosure provide implementations to address one or more of the issues described above. These embodiments may improve the palette predictor updating process, increasing efficiency and speed and reducing resource consumption for systems implementing the palette prediction process, or similar processes, described above.
In some embodiments, the palette predictor updating process is simplified to reduce its complexity, thereby freeing up hardware resources. FIG. 8 shows a flow chart of a palette predictor updating process 800, according to embodiments of the present disclosure. Method 800 can be performed by an encoder (e.g., by process 200A of FIG. 2A or 200B of FIG. 2B), by a decoder (e.g., by process 300A of FIG. 3A or 300B of FIG. 3B), or by one or more software or hardware components of an apparatus (e.g., apparatus 400 of FIG. 4). For example, a processor (e.g., processor 402 of FIG. 4) can perform method 800. In some embodiments, method 800 can be implemented by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, executed by computers (e.g., apparatus 400 of FIG. 4). Referring to FIG. 8, method 800 may include the following steps 802-804.
In step 802, when updating a palette predictor, all palette entries of the current palette are added at the front of a new palette predictor as a first set of entries. In step 804, all palette entries from the previous palette predictor are added to the end of the new palette predictor as a second set of entries, regardless of whether the entries are reused in the current palette, so that the second set of entries comes after the first set of entries.
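The two steps above reduce to a single concatenation with no reuse-flag checks. The following sketch uses an illustrative function name; duplicate entries may appear, as discussed below.

```python
def update_predictor_simplified(current_palette, prev_predictor):
    # Step: current palette entries form the front of the new predictor.
    # Step: all previous predictor entries are appended, reused or not.
    return list(current_palette) + list(prev_predictor)
```

Note that the size of the new predictor is simply the sum of the two input sizes, computed without inspecting any reuse flag.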
For example, FIG. 9 provides a simplified palette predictor updating process consistent with the process described in FIG. 8. A current palette is generated by process 901 and process 902. The current palette includes the entries reused from the previous palette predictor and the signaled new palette entries. In process 903, all the palette entries of the current palette are added as a first set of entries of a new palette predictor (corresponding to step 802). In process 904, the palette entries from the previous palette predictor are added as a second set of entries of the new palette predictor, regardless of whether the entries are reused in the current palette (corresponding to step 804).
The benefit is that, without checking the values of the reuse flags, the size of the new palette predictor can be calculated by adding the size of the current palette and the size of the previous palette predictor. Since this updating process is much simpler than the conventional design in VVC draft 8 (such as shown in FIG. 10), the CABAC throughput may be increased.
For example, FIG. 11 shows an example decoding process for palette mode, consistent with embodiments of the present disclosure. Changes between the conventional design of FIG. 10 and the disclosed design of FIG. 11 include removed portions 1101 highlighted with struck text.
In some embodiments, although the palette predictor updating process is simplified, the palette predictor may have two or more identical entries (i.e., redundancy in the palette predictor) because the reuse flags are not checked. While the process is still an improvement, this redundancy may reduce the prediction effectiveness of the palette predictor.
FIG. 12 shows a flow chart of another palette predictor updating process 1200, according to embodiments of the present disclosure. Method 1200 can be performed by an encoder (e.g., by process 200A of FIG. 2A or 200B of FIG. 2B), by a decoder (e.g., by process 300A of FIG. 3A or 300B of FIG. 3B), or by one or more software or hardware components of an apparatus (e.g., apparatus 400 of FIG. 4). For example, a processor (e.g., processor 402 of FIG. 4) can perform method 1200. In some embodiments, method 1200 can be implemented by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, executed by computers (e.g., apparatus 400 of FIG. 4). Referring to FIG. 12, method 1200 may include the following steps 1202-1206.
In this exemplary embodiment, in order to remove the redundancy in the palette predictor while keeping the palette predictor updating process simple, the previous palette predictor entries between the first reuse entry and the last reuse entry are directly discarded. Only the entries from the first entry to the first reuse entry, and from the last reuse entry to the last entry, of the previous palette predictor are added to the new palette predictor. In summary, the palette predictor updating process is modified as follows. In step 1202, each entry of the current palette is added to the new palette predictor as a first set of entries. In step 1204, each entry from the first entry to the first reuse entry of the previous palette predictor is added to the new palette predictor as a second set of entries after the first set of entries. In step 1206, each entry from the last reuse entry to the last entry of the previous palette predictor is added to the new palette predictor as a third set of entries after the second set of entries.
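The steps above can be sketched as follows. The function name is illustrative, the reuse positions are assumed to be known from parsing the reuse flags, the inclusion of the boundary entries themselves is an assumption, and the no-reuse fallback of keeping the whole previous predictor is likewise an assumption.

```python
def update_predictor_trimmed(current_palette, prev_predictor, reuse_positions):
    if not reuse_positions:
        # Assumed fallback: no entry reused, keep the whole previous predictor.
        return list(current_palette) + list(prev_predictor)
    first, last = min(reuse_positions), max(reuse_positions)
    # Step: current palette entries form the front of the new predictor.
    new_predictor = list(current_palette)
    # Step: entries from the start up to (and including) the first reuse entry.
    new_predictor += prev_predictor[:first + 1]
    # Step: entries from the last reuse entry (inclusive) to the end.
    # Entries strictly between the first and last reuse entries are discarded.
    new_predictor += prev_predictor[last:]
    return new_predictor
```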
For example, FIG. 13 provides a simplified palette predictor updating process consistent with the process described in FIG. 12. In process 1301, all palette entries of the current palette are added to a palette predictor as a first set of entries of the new palette predictor (corresponding to step 1202). In process 1302, each entry from the first entry to the first reuse entry of the previous palette predictor is added to the new palette predictor as a second set of entries after the first set of entries (corresponding to step 1204). In process 1303, each entry from the last reuse entry to the last entry of the previous palette predictor is added to the new palette predictor as a third set of entries after the second set of entries (corresponding to step 1206). In some embodiments, the third set of entries is after the first set of entries, and the second set of entries is after the third set of entries.
It is noted that the first reuse entry and the last reuse entry can be derived when the reuse flags are parsed; there is no need to check the values of the reuse flags. The size of the new palette predictor is calculated by adding the size of the current palette, the size from entry 0 to the first reuse entry, and the size from the last reuse entry to the last entry. FIG. 14 shows an example palette coding syntax, consistent with embodiments of the present disclosure. Changes between the conventional design of FIG. 7 and the disclosed design of FIG. 14 include added portions 1401 marked. FIG. 15 shows an example decoding process for palette mode, consistent with embodiments of the present disclosure. Changes between the conventional design of FIG. 10 and the disclosed design of FIG. 15 include removed portions 1501 highlighted with struck text and an added portion 1502 marked.
In the previously disclosed embodiments, although the palette predictor updating process is simplified, it still needs to be implemented in the CABAC parsing stage of the hardware design. In some embodiments, the number of reuse flags is set to a fixed value. Therefore, the CABAC can keep parsing without waiting for the palette predictor updating process. Furthermore, the palette predictor updating process can be implemented in a different pipeline stage outside of the CABAC parsing stage, which allows more flexibility in the hardware design.
FIG. 16 shows an illustration of a decoder hardware design for the palette mode. A data structure 1601 and a data structure 1602 are exemplarily illustrated for CABAC and for decoding palette pixels. A decoder hardware design 1603 includes a predictor updating module 1631 and a CABAC parsing module 1632, where the predictor updating process and the CABAC parsing process run in parallel. Therefore, the CABAC can keep parsing without waiting for the palette updating process.
To set the number of reuse flags to a fixed value for each coding block, in some embodiments, the size of the palette predictor is initialized to a pre-defined value at the beginning of each slice for non-wavefront cases and at the beginning of each CTU row for wavefront cases. The pre-defined value is set to the maximum size of the palette predictor. In one example, the number of reuse flags is 31 or 63, depending on the slice type and the dual tree mode setting. When the slice type is I-slice and the dual tree mode is on (referred to as case 1), the number of reuse flags is set to 31. Otherwise (the slice type is B-/P-slice, or the slice type is I-slice and the dual tree mode is off, referred to as case 2), the number of reuse flags is set to 63. Other numbers of reuse flags may be used for the two cases, and it may be beneficial to maintain a factor-of-2 relationship between the number of reuse flags for case 1 and the number of reuse flags for case 2. In addition, when initializing the palette predictor, the value of each entry and each component is set to 0 or (1<<(sequence bit depth−1)).
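The fixed reuse-flag count and predictor initialization described above can be sketched as follows. The function names are illustrative; the constants 31 and 63 follow the example in the text, and the fill value 1 << (bit_depth − 1) is one of the two initialization options mentioned.

```python
def num_reuse_flags(slice_type, dual_tree):
    # Case 1: I-slice with dual tree mode on.
    if slice_type == "I" and dual_tree:
        return 31
    # Case 2: B-/P-slice, or I-slice with dual tree mode off.
    return 63

def init_predictor(slice_type, dual_tree, bit_depth):
    # Initialize the predictor to its maximum size with a mid-range value,
    # so the reuse-flag count is fixed for every coding block.
    size = num_reuse_flags(slice_type, dual_tree)
    return [1 << (bit_depth - 1)] * size
```

Keeping a factor-of-2 relationship between the two cases (31 vs. 63, i.e., one less than 32 and 64) mirrors the single-tree/dual-tree split of the predictor.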
For example, FIG. 17 shows an example decoding process for palette mode, consistent with embodiments of the present disclosure. Changes between the conventional design of FIG. 10 and the disclosed design of FIG. 17 include a removed portion 1701 highlighted with struck text and an added portion 1702 marked. Similarly, FIG. 19 shows an example initialization process, consistent with embodiments of the present disclosure. Changes between the conventional design of FIG. 18 and the disclosed design of FIG. 19 include a removed portion 1901 highlighted with struck text and an added portion 1902 marked.
In some embodiments, where the reuse flag is set to a fixed value or where the embodiments of FIG. 9 are implemented, some redundancy may be introduced into the palette predictor. In some embodiments, this may not be a problem, since those redundant entries are never selected for prediction at the encoder side. Examples of the palette predictor update are shown in FIGS. 20, 21, and 22. FIG. 20 shows an example according to the procedure specified in VVC draft 8, FIG. 21 shows an example according to some implementations of the method proposed in FIG. 9 (referred to as a first embodiment), and FIG. 22 shows another example according to some implementations of the embodiments where the reuse flag is set to a fixed value (referred to as a third embodiment). As a comparison of FIGS. 20, 21, and 22 shows, assuming that there is no new signaled palette entry, the method proposed in the first embodiment (FIG. 21) may increase the number of bits for signaling reuse flags 2101, whereas the method proposed in the third embodiment (FIG. 22) keeps the same number of bits for signaling reuse flags 2201 as the number of bits for signaling reuse flags 2001 of the method in the VVC draft 8 design (FIG. 20).
In some embodiments, the palette predictor is first initialized to a fixed value, which means that there may be redundant entries in the palette predictor. Although redundant entries may not be problematic, as previously discussed, the design in this embodiment does not prevent an encoder from using those redundant entries. If an encoder does select one of these redundant entries, the coding performance of the palette predictor, and therefore of palette mode, may be decreased. To prevent this case, in some embodiments, a bitstream conformance constraint is added when signaling the reuse flags. In some embodiments, the constraint requires that the size of the palette predictor be equal to the maximum size of the palette predictor when the reuse flags are signaled. More specifically, a range constraint is added to the binarization value of the reuse flags.
For example, FIG. 23 shows an example palette coding syntax, consistent with embodiments of the present disclosure. Changes between the conventional design of FIG. 7 and the disclosed design of FIG. 23 include a removed portion 2301 highlighted with struck text and an added portion 2302 marked. Similarly, FIG. 25 shows example palette coding semantics, consistent with embodiments of the present disclosure. Changes between the conventional design of FIG. 24 and the disclosed design of FIG. 25 include a removed portion 2501 highlighted with struck text and an added portion 2502 marked.
Moreover, the present disclosure provides the following methods to address the issue of the deblocking filter for palette mode.
In some embodiments, palette mode is treated as a subset of intra prediction mode. Therefore, if one of the neighboring blocks is coded in palette mode, the boundary filter strength is set equal to 2. More specifically, the portion of the VVC draft 8 specification detailing the process of calculating the boundary filtering strength is altered so that scenario 3 now states: "if the sample p0 or q0 is in the coding block of a coding unit coded with intra prediction mode or palette mode, bS[xDi][yDj] is set equal to 2." The added language is underlined.
In some embodiments, palette mode is treated as an independent coding mode. When the coding modes of two neighboring blocks are different, the boundary filter strength is set equal to 1. More specifically, the portion of the VVC draft 8 specification detailing the process of calculating the boundary filtering strength is altered so that scenario 6 now states: "the prediction mode of the coding subblock containing the sample p0 is different from the prediction mode of the coding subblock containing the sample q0, bS[xDi][yDj] is set equal to 1." The removed language is struck.
In some embodiments, palette mode is treated as an independent coding mode. Similar to the BDPCM mode setting, when one of the coding blocks is coded in palette mode, the boundary filter strength is set equal to 0. More specifically, the portion of the VVC draft 8 specification detailing the process of calculating the boundary filtering strength is altered so that a new scenario is inserted between scenario 3 and scenario 4 which states: "otherwise, if the block edge is also a coding block edge and the sample p0 or q0 is in a coding block with pred_mode_plt_flag equal to 1, bS[xDi][yDj] is set equal to 0." Using this modified process, there would be 9 scenarios, with scenarios 4-8 being renumbered to be scenarios 5-9. Of note is that a block coded in palette mode is equivalent to the pred_mode_plt_flag being equal to 1.
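The three palette-mode deblocking options described above can be contrasted in the following sketch. The dispatch structure and function names are illustrative simplifications; the full scenario cascade of the specification is omitted, and a return value of None stands for falling through to the remaining scenarios.

```python
def bs_palette_as_intra(p_mode, q_mode):
    # Option 1: palette treated as a subset of intra -> bS = 2.
    if "palette" in (p_mode, q_mode) or "intra" in (p_mode, q_mode):
        return 2
    return None  # fall through to the remaining scenarios

def bs_palette_as_independent(p_mode, q_mode):
    # Option 2: palette is an independent mode; differing modes -> bS = 1.
    if p_mode != q_mode:
        return 1
    return None

def bs_palette_like_bdpcm(p_mode, q_mode):
    # Option 3: like BDPCM, a palette block disables the filter -> bS = 0.
    if "palette" in (p_mode, q_mode):
        return 0
    return None
```

For a palette/inter boundary, the three options yield boundary strengths of 2, 1, and 0 respectively, which is exactly the design trade-off among the three embodiments.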
In some embodiments, a non-transitory computer-readable storage medium including instructions is also provided, and the instructions may be executed by a device (such as the disclosed encoder and decoder) for performing the above-described methods. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM or any other flash memory, NVRAM, a cache, a register, any other memory chip or cartridge, and networked versions of the same. The device may include one or more processors (CPUs), an input/output interface, a network interface, and/or a memory.
It should be noted that, the relational terms herein such as “first” and “second” are used only to differentiate an entity or operation from another entity or operation, and do not require or imply any actual relationship or sequence between these entities or operations. Moreover, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
As used herein, unless specifically stated otherwise, the term “or” encompasses all possible combinations, except where infeasible. For example, if it is stated that a database may include A or B, then, unless specifically stated otherwise or infeasible, the database may include A, or B, or A and B. As a second example, if it is stated that a database may include A, B, or C, then, unless specifically stated otherwise or infeasible, the database may include A, or B, or C, or A and B, or A and C, or B and C, or A and B and C.
It is appreciated that the above-described embodiments can be implemented by hardware, or software (program codes), or a combination of hardware and software. If implemented by software, it may be stored in the above-described computer-readable media. The software, when executed by the processor can perform the disclosed methods. The computing units and other functional units described in this disclosure can be implemented by hardware, or software, or a combination of hardware and software. One of ordinary skill in the art will also understand that multiple ones of the above-described modules/units may be combined as one module/unit, and each of the above-described modules/units may be further divided into a plurality of sub-modules/sub-units.
The embodiments may further be described using the following clauses:
adding all palette entries of a current palette as a first set of entries of the palette predictor; and
adding palette entries from a previous palette predictor as a second set of entries of the palette predictor regardless of a value of reuse flag of a palette entry in the previous palette predictor, wherein the second set of entries are after the first set of entries.
processing one or more coding units using one or more palette predictors, wherein a palette predictor of the one or more palette predictors is updated by:
1. A video data processing method, comprising:
2. The method of clause 1, wherein each palette entry of the first set of entries and the second set of entries of the palette predictor includes a reuse flag.
3. The method of clause 1, further comprising:
receiving a video frame for processing, and
generating the one or more coding units for the video frame.
4. The method of any one of clauses 1 to 3, wherein each palette entry of the palette predictor has a corresponding reuse flag, and wherein a number of reuse flags for the palette predictor is set to a fixed number for a corresponding coding unit.
5. A video data processing method, comprising:
processing one or more coding units using one or more palette predictors, wherein a palette predictor of the one or more palette predictors is updated by:
adding all palette entries of a current palette as a first set of entries of the palette predictor;
adding one or more palette entries of a previous palette predictor that are within a first range to the palette predictor as a second set of one or more entries of the palette predictor, wherein the first range starts at a first palette entry of the previous palette predictor and ends at a first palette entry of the previous palette predictor that has a reuse flag set; and
adding one or more palette entries of the previous palette predictor that are within a second range to the palette predictor as a third set of one or more entries of the palette predictor, wherein the second range starts at a last palette entry of the previous palette predictor that has a reuse flag set and ends at a last palette entry of the previous palette predictor, and the second set of entries and the third set of entries are after the first set of entries.
6. The method of clause 5, wherein each palette entry of the first set of entries, the second set of entries and the third set of entries of the palette predictor includes a reuse flag.
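The range-based update of clauses 5 and 6 can be sketched as follows. This is an assumed reading, not reference code: in particular, it treats the reused endpoint entries as excluded from the copied ranges (on the assumption that reused entries already appear in the current palette); the clause text leaves the endpoint handling open.

```python
def update_predictor_ranges(current_palette, prev_entries, reuse_flags,
                            max_size=63):
    # Indices of previous-predictor entries whose reuse flag is set.
    reused = [i for i, flag in enumerate(reuse_flags) if flag]
    updated = list(current_palette)              # first set of entries
    if reused:
        first, last = reused[0], reused[-1]
        updated.extend(prev_entries[:first])     # second set: entries before
                                                 # the first reused entry
        updated.extend(prev_entries[last + 1:])  # third set: entries after
                                                 # the last reused entry
    else:
        updated.extend(prev_entries)             # nothing reused: keep all
    return updated[:max_size]

# Entries 20 and 30 are flagged as reused, so only the entries outside
# the [first-reused, last-reused] span are appended after the palette.
print(update_predictor_ranges([1], [10, 20, 30, 40],
                              [False, True, True, False]))  # [1, 10, 40]
```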
7. The method of clause 5, further comprising:
receiving a video frame for processing, and
generating the one or more coding units for the video frame.
8. The method of any one of clauses 5 to 7, wherein each palette entry of the one or more palette predictors has a corresponding reuse flag, and
wherein a number of reuse flags for each palette predictor is set to a fixed number for a corresponding coding unit.
9. A video data processing method, comprising:
receiving a video frame for processing;
generating one or more coding units of the video frame; and
processing one or more coding units using one or more palette predictors having palette entries,
wherein each palette entry of the one or more palette predictors has a corresponding reuse flag, and
wherein a number of reuse flags for each palette predictor is set to a fixed number for a corresponding coding unit.
10. The method of clause 9, wherein the palette predictor is updated by:
adding all palette entries of a current palette as a first set of entries of the palette predictor; and
adding entries from the previous palette predictor that are not reused in the current palette as a second set of entries of the palette predictor, wherein the second set of entries are after the first set of entries.
11. The method of clause 9, wherein the fixed number is set based on a slice type and a dual tree mode setting.
12. The method of clause 9, wherein a size of a palette predictor of the one or more palette predictors is initialized to a pre-defined value at a beginning of a slice for non-wavefront case.
13. The method of clause 9, wherein a size of a palette predictor of the one or more palette predictors is initialized to a pre-defined value at a beginning of a coding unit row for a wavefront case.
14. The method of clause 12 or 13, further comprising adding a bitstream conformance requirement that a value of a size of the palette predictor be equal to a maximum size of the palette predictor when signaling the reuse flags.
15. The method of any one of clauses 12 to 14, wherein when initializing the one or more palette predictors, a value of each entry and each component is set to 0 or (1<<(sequence bit depth-1)).
16. The method of any one of clauses 9 to 15, further comprises adding a range constraint to a binarization value of the reuse flags.
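The initialization described in clauses 12 to 15 can be sketched in a few lines. This is a hypothetical illustration; the function name, the three-component entry layout, and the default sizes are assumptions, while the two initial values (0 or `1 << (bit_depth - 1)`) come from clause 15.

```python
def init_palette_predictor(size, bit_depth, num_components=3, mid_gray=True):
    # Clause 15: each component of each entry is initialized either to 0
    # or to the mid-gray value 1 << (bit_depth - 1). Per clauses 12/13,
    # this would run at the start of a slice (non-wavefront case) or at
    # the start of a coding unit row (wavefront case).
    value = (1 << (bit_depth - 1)) if mid_gray else 0
    return [[value] * num_components for _ in range(size)]

predictor = init_palette_predictor(size=4, bit_depth=10)
print(predictor[0])  # [512, 512, 512] -- mid-gray for 10-bit content
```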
17. An apparatus for performing video data processing, the apparatus comprising:
a memory configured to store instructions; and
a processor coupled to the memory and configured to execute the instructions to cause the apparatus to perform:
processing one or more coding units using one or more palette predictors, wherein a palette predictor of the one or more palette predictors is updated by:
adding all palette entries of a current palette as a first set of entries of the palette predictor; and
adding palette entries from a previous palette predictor as a second set of entries of the palette predictor regardless of a value of a reuse flag of a palette entry in the previous palette predictor, wherein the second set of entries are after the first set of entries.
18. The apparatus of clause 17, wherein each palette entry of the first set of entries and the second set of entries of the palette predictor includes a reuse flag.
19. The apparatus of clause 17, wherein the processor is further configured to execute the instructions to cause the apparatus to perform:
receiving a video frame for processing, and
generating the one or more coding units for the video frame.
20. The apparatus of any one of clauses 17 to 19, wherein each palette entry of the palette predictor has a corresponding reuse flag, and
wherein a number of reuse flags for the palette predictor is set to a fixed number for a corresponding coding unit.
21. An apparatus for performing video data processing, the apparatus comprising:
a memory configured to store instructions; and
a processor coupled to the memory and configured to execute the instructions to cause the apparatus to perform:
processing one or more coding units using one or more palette predictors, wherein a palette predictor of the one or more palette predictors is updated by:
adding all palette entries of a current palette as a first set of entries of the palette predictor;
adding one or more palette entries of a previous palette predictor that are within a first range to the palette predictor as a second set of one or more entries of the palette predictor, wherein the first range starts at a first palette entry of the previous palette predictor and ends at a first palette entry of the previous palette predictor that has a reuse flag set; and
adding one or more palette entries of the previous palette predictor that are within a second range to the palette predictor as a third set of one or more entries of the palette predictor, wherein the second range starts at a last palette entry of the previous palette predictor that has a reuse flag set and ends at a last palette entry of the previous palette predictor, and the second set of entries and the third set of entries are after the first set of entries.
22. The apparatus of clause 21, wherein each palette entry of the first set of entries, the second set of entries and the third set of entries of the palette predictor includes a reuse flag.
23. The apparatus of clause 21, wherein the processor is further configured to execute the instructions to cause the apparatus to perform:
receiving a video frame for processing, and
generating the one or more coding units for the video frame.
24. The apparatus of any one of clauses 21 to 23, wherein each palette entry of the palette predictor has a corresponding reuse flag, and
wherein a number of reuse flags for the palette predictor is set to a fixed number for a corresponding coding unit.
25. An apparatus for performing video data processing, the apparatus comprising:
a memory configured to store instructions; and
a processor coupled to the memory and configured to execute the instructions to cause the apparatus to perform:
receiving a video frame for processing;
generating one or more coding units of the video frame; and
processing one or more coding units using one or more palette predictors having palette entries,
wherein each palette entry of the one or more palette predictors has a corresponding reuse flag, and
wherein a number of reuse flags for each palette predictor is set to a fixed number for a corresponding coding unit.
26. The apparatus of clause 25, wherein the processor is further configured to execute the instructions to cause the apparatus to perform:
updating the palette predictor by:
adding all palette entries of a current palette as a first set of entries of the palette predictor; and
adding entries from the previous palette predictor that are not reused in the current palette as a second set of entries of the palette predictor, wherein the second set of entries are after the first set of entries.
27. The apparatus of clause 25, wherein the fixed number is set based on a slice type and a dual tree mode setting.
28. The apparatus of clause 25, wherein a size of a palette predictor of the one or more palette predictors is initialized to a pre-defined value at a beginning of a slice for non-wavefront case.
29. The apparatus of clause 25, wherein a size of a palette predictor of the one or more palette predictors is initialized to a pre-defined value at a beginning of a coding unit row for a wavefront case.
30. The apparatus of clause 28 or 29, wherein the processor is further configured to execute the instructions to cause the apparatus to perform adding a bitstream conformance requirement that a value of a size of the palette predictor be equal to a maximum size of the palette predictor when signaling the reuse flags.
31. The apparatus of any one of clauses 28 to 30, wherein when initializing the one or more palette predictors, a value of each entry and each component is set to 0 or (1<<(sequence bit depth-1)).
32. The apparatus of any one of clauses 25 to 31, wherein the processor is further configured to execute the instructions to cause the apparatus to perform adding a range constraint to a binarization value of the reuse flags.
33. A non-transitory computer readable medium that stores a set of instructions that is executable by one or more processors of an apparatus to cause the apparatus to initiate a method for performing video data processing, the method comprising:
processing one or more coding units using one or more palette predictors, wherein a palette predictor of the one or more palette predictors is updated by:
adding all palette entries of a current palette as a first set of entries of the palette predictor; and
adding palette entries from a previous palette predictor as a second set of entries of the palette predictor regardless of a value of a reuse flag of a palette entry in the previous palette predictor, wherein the second set of entries are after the first set of entries.
34. The non-transitory computer readable medium of clause 33, wherein each palette entry of the first set of entries and the second set of entries of the palette predictor includes a reuse flag.
35. The non-transitory computer readable medium of clause 33, wherein the method further comprises:
receiving a video frame for processing, and
generating the one or more coding units for the video frame.
36. The non-transitory computer readable medium of any one of clauses 33 to 35, wherein each palette entry of the palette predictor has a corresponding reuse flag, and wherein a number of reuse flags for the palette predictor is set to a fixed number for a corresponding coding unit.
37. A non-transitory computer readable medium that stores a set of instructions that is executable by one or more processors of an apparatus to cause the apparatus to initiate a method for performing video data processing, the method comprising:
processing one or more coding units using one or more palette predictors, wherein a palette predictor of the one or more palette predictors is updated by:
adding all palette entries of a current palette as a first set of entries of the palette predictor;
adding one or more palette entries of a previous palette predictor that are within a first range to the palette predictor as a second set of one or more entries of the palette predictor, wherein the first range starts at a first palette entry of the previous palette predictor and ends at a first palette entry of the previous palette predictor that has a reuse flag set; and
adding one or more palette entries of the previous palette predictor that are within a second range to the palette predictor as a third set of one or more entries of the palette predictor, wherein the second range starts at a last palette entry of the previous palette predictor that has a reuse flag set and ends at a last palette entry of the previous palette predictor, and the second set of entries and the third set of entries are after the first set of entries.
38. The non-transitory computer readable medium of clause 37, wherein each palette entry of the first set of entries, the second set of entries and the third set of entries of the palette predictor includes a reuse flag.
39. The non-transitory computer readable medium of clause 37, wherein the method further comprises:
receiving a video frame for processing, and
generating the one or more coding units for the video frame.
40. The non-transitory computer readable medium of any one of clauses 37 to 39, wherein each palette entry of the palette predictor has a corresponding reuse flag, and
wherein a number of reuse flags for the palette predictor is set to a fixed number for a corresponding coding unit.
41. A non-transitory computer readable medium that stores a set of instructions that is executable by one or more processors of an apparatus to cause the apparatus to initiate a method for performing video data processing, the method comprising:
receiving a video frame for processing;
generating one or more coding units of the video frame; and
processing one or more coding units using one or more palette predictors having palette entries,
wherein each palette entry of the one or more palette predictors has a corresponding reuse flag, and
wherein a number of reuse flags for each palette predictor is set to a fixed number for a corresponding coding unit.
42. The non-transitory computer readable medium of clause 41, wherein the palette predictor is updated by:
adding all palette entries of a current palette as a first set of entries of the palette predictor; and
adding entries from the previous palette predictor that are not reused in the current palette as a second set of entries of the palette predictor, wherein the second set of entries are after the first set of entries.
43. The non-transitory computer readable medium of clause 41, wherein the fixed number is set based on a slice type and a dual tree mode setting.
44. The non-transitory computer readable medium of clause 41, wherein a size of a palette predictor of the one or more palette predictors is initialized to a pre-defined value at a beginning of a slice for non-wavefront case.
45. The non-transitory computer readable medium of clause 41, wherein a size of a palette predictor of the one or more palette predictors is initialized to a pre-defined value at a beginning of a coding unit row for a wavefront case.
46. The non-transitory computer readable medium of clause 44 or 45, wherein the method further comprises adding a bitstream conformance requirement that a value of a size of the palette predictor be equal to a maximum size of the palette predictor when signaling the reuse flags.
47. The non-transitory computer readable medium of any one of clauses 44 to 46, wherein when initializing the one or more palette predictors, a value of each entry and each component is set to 0 or (1<<(sequence bit depth-1)).
48. The non-transitory computer readable medium of any one of clauses 41 to 47, wherein the method further comprises adding a range constraint to a binarization value of the reuse flags.
49. A method for deblocking filter of palette mode, comprising:
receiving a video frame for processing;
generating the one or more coding units for the video frame, wherein each coding unit of the one or more coding units has one or more coding blocks; and
setting a boundary filter strength to 2 in response to at least a first coding block of two neighboring coding blocks being coded in palette mode.
50. A method for deblocking filter of palette mode, comprising:
receiving a video frame for processing;
generating the one or more coding units for the video frame, wherein each coding unit of the one or more coding units has one or more coding blocks; and
setting a boundary filter strength to 1 in response to at least a first coding block of two neighboring coding blocks being coded in palette mode and a second coding block of the two neighboring coding blocks having a coding mode different from the palette mode.
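Clauses 49 and 50 describe alternative boundary-strength assignments for the deblocking filter around palette-coded blocks. The sketch below is one hypothetical combined reading, not reference decoder code: it returns 2 when both neighbors are palette-coded and 1 when exactly one is, and leaves the non-palette case to other (unspecified here) criteria.

```python
def boundary_filter_strength(mode_p, mode_q):
    # Hypothetical combined reading of clauses 49 and 50: clause 49 sets
    # bS = 2 when at least a first of the two neighboring blocks is
    # palette-coded; clause 50 instead sets bS = 1 when one block is
    # palette-coded and the other uses a different mode.
    p_pal = (mode_p == "PALETTE")
    q_pal = (mode_q == "PALETTE")
    if p_pal and q_pal:
        return 2
    if p_pal or q_pal:
        return 1
    return 0  # neither neighbor palette-coded: decided by other criteria

print(boundary_filter_strength("PALETTE", "INTER"))  # 1
```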
51. An apparatus for performing video data processing, the apparatus comprising:
a memory configured to store instructions; and
a processor coupled to the memory and configured to execute the instructions to cause the apparatus to perform:
receiving a video frame for processing;
generating the one or more coding units for the video frame, wherein each coding unit of the one or more coding units has one or more coding blocks; and
setting a boundary filter strength to 1 in response to at least a first coding block of two neighboring coding blocks being coded in palette mode and a second coding block of the two neighboring coding blocks having a coding mode different from the palette mode.
52. A non-transitory computer readable medium that stores a set of instructions that is executable by one or more processors of an apparatus to cause the apparatus to initiate a method for performing video data processing, the method comprising:
receiving a video frame for processing;
generating the one or more coding units for the video frame, wherein each coding unit of the one or more coding units has one or more coding blocks; and
setting a boundary filter strength to 1 in response to at least a first coding block of two neighboring coding blocks being coded in palette mode and a second coding block of the two neighboring coding blocks having a coding mode different from the palette mode.
In the foregoing specification, embodiments have been described with reference to numerous specific details that can vary from implementation to implementation. Certain adaptations and modifications of the described embodiments can be made. Other embodiments can be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims. It is also intended that the sequence of steps shown in figures are only for illustrative purposes and are not intended to be limited to any particular sequence of steps. As such, those skilled in the art can appreciate that these steps can be performed in a different order while implementing the same method.
In the drawings and specification, there have been disclosed exemplary embodiments. However, many variations and modifications can be made to these embodiments. Accordingly, although specific terms are employed, they are used in a generic and descriptive sense only and not for purposes of limitation.
BACKGROUND OF THE INVENTION

1. Field of the Invention
The present invention relates to a vehicle interior illumination device in a vehicle.
2. Description of the Related Art

Recently, a personal lamp that can be used, for example, by a passenger sitting on a rear seat is set on a roof, a pillar or the like in a vehicle. The personal lamp is an illumination lamp that illuminates the rear seat. The personal lamp is lit not only when a lighting switch operated by a passenger at the time of reading is turned on but also when a door switch detecting that a rear seat side door is open is turned on or a seat belt switch detecting that a seat belt is not fastened is turned on. This illumination by the personal lamp is called functional illumination.
Moreover, a related art discloses that when the entire vehicle interior is illuminated by a monitor attached to a ceiling of a vehicle interior, light is emitted with a variable light amount and color shade and the vehicle interior is garnished with an illumination matching the mood of the passengers or the like and the atmosphere (see JP-A-2006-44567). This illumination is called atmospheric illumination.
However, when an illumination lamp for the atmospheric illumination is set in the vehicle interior, since it is set in addition to an illumination lamp for the functional illumination set on the rear seat side, the number of illumination lamps increases.
Moreover, when an illumination lamp for the atmospheric illumination is provided in addition to an illumination lamp for the functional illumination, it is unclear how to use the functional illumination and the atmospheric illumination each in its proper way. That is, the atmospheric illumination is low in illuminance since it dimly illuminates the vehicle interior in order that the driver confirms the safety of the rear seat while driving or to provide the vehicle interior with a unique atmosphere. On the other hand, the functional illumination is high in illuminance since it is used when a passenger is informed that a door is open or that the seat belt is not fastened or when a passenger performs reading. As described above, the functional illumination and the atmospheric illumination are different in light amount and it is necessary to use them each in its proper way.
SUMMARY OF THE INVENTION

The present invention is made in view of the above-described circumstances, and an object thereof is to provide a vehicle interior illumination device in which the functional illumination and the atmospheric illumination are used each in its proper way by using an illumination lamp capable of being turned on at a side of a seat without any increase in the number of illumination lamps.
[1] A vehicle interior illumination device mounted on a vehicle includes:
an illumination lamp that is provided in a vehicle interior and is configured to be turned on to illuminate the vehicle interior;
a detector that detects a condition to have the illumination lamp to be turned on; and
a controller that controls a lighting of the illumination lamp,
wherein when the condition is detected by the detector, the controller lights the illumination lamp with a first light amount, and when the condition is not detected by the detector, the controller lights the illumination lamp with a second light amount smaller than the first light amount.
According to the above configuration, the functional illumination with a high illuminance and the atmospheric illumination with a low illuminance can be used each in its proper way by using the illumination lamp capable of being turned on without any increase in the number of illumination lamps. Moreover, by placing priority on the functional illumination, the atmospheric illumination can be easily introduced without any significant changes from the usage of the related illumination lamp.
[2] In the vehicle interior illumination device according to the above [1], the illumination lamp is provided in the vehicle interior on a side of a seat in a rear of a driver seat and is configured to be turned on to illuminate the side of the seat in the rear of the driver seat; and the vehicle interior on a side of the driver seat is a space not light-intercepted from the vehicle interior on the side of the seat in the rear of the driver seat.
According to the above configuration, the vehicle interior including the side of the seat in the rear can be uniformly illuminated by using the illumination lamp capable of being turned on at the side of the seat in the rear, so that the atmospheric illumination can be made more effective.
[3] In the vehicle interior illumination device according to the above [1] or [2], the detector detects an operation for turning on the illumination lamp or an operation interlocking with a vehicle as the condition to have the illumination lamp to be turned on.
According to this vehicle interior illumination device, since operations interlocking with the vehicle such as door opening in addition to the turning on by a passenger brings the condition where the illumination lamp is lit, the role as the functional illumination can be fulfilled as before.
[4] The vehicle interior illumination device according to any of the above [1] to [3] further includes: an instruction portion that is configured to instruct a low illuminance mode for lighting the illumination lamp with the second light amount, wherein when the low illuminance mode is instructed by the instruction portion and the condition is not detected by the detector, the controller lights the illumination lamp with the second light amount.
According to this vehicle interior illumination device, since the illumination lamp is lit with the second light amount by instructing the low illuminance mode, the driver can freely set the atmospheric illumination.
[5] In the vehicle interior illumination device according to the above [4], when the condition is detected by the detector while the illumination lamp is lit with the second light amount, the controller cancels the low illuminance mode and lights the illumination lamp with the first light amount, and thereafter, when the condition becomes undetected, the controller sets the low illuminance mode again and lights the illumination lamp with the second light amount.
According to this vehicle interior illumination device, when the low illuminance mode is set and at times other than at the time of the functional illumination, the illumination is always the atmospheric illumination.
[6] In the vehicle interior illumination device according to the above [4] or [5], the controller concurrently lights a plurality of illumination lamps with the second light amount.
According to this vehicle interior illumination device, since a plurality of personal lamps are concurrently lit with the second illuminance, the effect as the atmospheric illumination can be enhanced.
To attain the above-mentioned object, a vehicle interior illumination device according to the present invention is characterized by the above [1] to [6].
According to the present invention, the functional illumination and the atmospheric illumination can be used each in its proper way by using the illumination lamp capable of being turned on at the side of a seat without any increase in the number of illumination lamps.
The present invention has been briefly described above. Further, details of the present invention will be further clarified by reading through the mode for carrying out the invention (hereinafter, referred to as "embodiment") described below with reference to the attached drawings.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a view, viewed from above, of a vehicle interior of a vehicle where a vehicle interior illumination device of an embodiment is mounted.
FIG. 2 is a view illustrating the vehicle interior viewed from a side of the vehicle.
FIG. 3 is a block diagram illustrating the structure of the vehicle interior illumination device.
FIG. 4 is a flowchart illustrating a lighting request determination processing procedure.
FIG. 5 is a flowchart illustrating a lighting control procedure.
DETAILED DESCRIPTION OF THE EXEMPLARY EMBODIMENTS

Hereinafter, a vehicle interior illumination device according to the present embodiment will be described by using the drawings. FIG. 1 is a view, viewed from above, of a vehicle interior 4 of a vehicle 1 where the vehicle interior illumination device 10 of the present embodiment is mounted. The vehicle 1 is a minivan for seven people where seats are provided in three rows. FIG. 2 is a view illustrating the vehicle interior 4 viewed from a side of the vehicle 1. The vehicle interior 4 is broadly divided into a front vehicle interior 2 on the side of a driver seat 3 and a rear vehicle interior 6 on the side of passengers (occupants). The front vehicle interior 2 is a space not separated from the rear vehicle interior 6 and not light-intercepted from the rear vehicle interior 6.
On the trim covers above the sides of middle seats 7 and 9 of the rear vehicle interior 6, personal lamps 25 and 27 and personal switches 36 and 37 are disposed. Likewise, on the trim covers above both sides of a rear seat 11 of the rear vehicle interior 6, personal lamps 29 and 31 and personal switches 38 and 39 are disposed, respectively.
The personal switches 36, 37, 38 and 39 are switches that can be operated by the passengers (occupants) sitting on the seats, and when the personal switches 36, 37, 38 and 39 are turned on respectively, the personal lamps 25, 27, 29 and 31 are lit with a high illuminance (light amount 100%) as the functional illumination correspondingly. The personal lamps 25, 27, 29 and 31 are lit not only when the personal switches 36, 37, 38 and 39 are turned on by the passengers sitting on the seats but also at times such as when the doors beside the seats are open and when the passengers do not fasten the seat belts as operations interlocking with the vehicle 1.
On the other hand, a lounge mode setting switch 40 that can be operated by the driver is disposed on an instrument panel 18 disposed in front of the driver seat 3 and a front passenger seat 5 of the front vehicle interior 2. The lounge mode setting switch 40 (instruction portion) is a switch for dimly illuminating the vehicle interior to provide a unique atmosphere as described later, and when the lounge mode setting switch 40 is turned on, a mode (low illuminance mode) is set in which the personal lamps 25, 27, 29 and 31 are always on with a low illuminance (light amount 20%) as the atmospheric illumination. Here, the light amount of 20% set as the low illuminance is one example of a light amount at which the illumination of the personal lamp is assumed not to be reflected on the windshield on the side of the driver seat 3. In both the functional illumination and the atmospheric illumination, as illustrated in FIG. 2, the personal lamps 27 and 31 emit illumination lights L1 and L2 toward the rear of the vehicle interior 4 to the middle seat 9 and the rear seat 11, respectively. The same applies to the personal lamps 25 and 29.
Moreover, on both right and left sides of the ceiling in the vehicle interior 4, ceiling illuminations 33 and 35 that uniformly illuminate the rear vehicle interior 6 are disposed.
FIG. 3 is a block diagram illustrating the structure of the vehicle interior illumination device 10. The vehicle interior illumination device 10 mainly includes a controller 15 formed of an ECU (electronic control unit). The controller 15 incorporates a known CPU, ROM and the like, and the CPU executes an operation program stored in the ROM to thereby perform centralized control of the elements of the vehicle interior illumination device 10. Moreover, the controller 15 has a flag memory 15z that stores the states of a flag A, a flag B and a flag C that are set or reset with the execution of the operation program described later.
To the controller 15, the following are connected: the lounge mode setting switch 40; an ignition (IG) switch 51; a door switch 52; a dome switch 53; a seat belt switch 54; and personal switches 36 to 39.
The IG switch 51 is operated by the driver. The vehicle is brought into driving state by turning on the IG switch 51. The door switch 52 (detector) detects the opening and closing operation of each door and also detects a door locking operation or a door unlocking operation of each door. For example, the door switch 52 detects the opening and closing operation of each door, a door locking operation, or a door unlocking operation of each door based on signals generated by these operations respectively.
The dome switch 53 is a switch capable of turning on a dome illumination mounted on the ceiling of the vehicle interior 4. The seat belt switch 54 is provided for each seat, and detects the presence or absence of the fastening of the seat belt. The personal switches 36 to 39 are switches capable of turning on the personal lamps 25, 27, 29 and 31, respectively, by the passengers sitting on the seats, and output the on/off states thereof to the controller 15. When it is not specifically necessary to distinguish the personal switches 36 to 39 from one another, they will be collectively called personal switch 30.
The lounge mode setting switch 40 is disposed on the instrument panel 18 as mentioned above. When the lounge mode setting switch 40 is turned on by the driver, the vehicle interior illumination device 10 shifts to a lounge mode. When the engine of the vehicle is started, the lounge mode setting switch 40 is set to off.
Moreover, the ceiling illuminations 33 and 35, the personal lamps 25, 27, 29 and 31 and a wireless communication portion 55 are connected to the controller 15.
The ceiling illuminations 33 and 35 are disposed on both right and left sides of the ceiling of the vehicle interior 4 as mentioned above, and uniformly illuminate the vehicle interior 4. The personal lamps 25, 27, 29 and 31 illuminate the middle seat 7 on the left side, the middle seat 9 on the right side, the rear seat 11 on the left side and the rear seat 11 on the right side, respectively. In particular, when it is unnecessary to distinguish the personal lamps from one another, they will be collectively called personal lamp 20. The personal lamp 20 is driven by the controller 15 performing PWM control, and is variably lit within a duty ratio range of 0% to 100%, that is, within a light amount range of 0% to 100%.
Moreover, the personal lamp 20 is lit with the high illuminance (light amount 100%) as the functional illumination when the personal switch 30 is turned on by a passenger sitting on the seat. The personal lamp 20 is also lit with the high illuminance (light amount 100%) as an operation interlocking with the vehicle 1, for example when a passenger sitting on a seat opens a door, when a door unlocking operation is performed, when a passenger has not fastened the seat belt, or when the dome illumination is lit. The personal lamp 20 is kept on with the low illuminance (light amount 20%) as the atmospheric illumination when the lounge mode setting switch 40 is turned on by the driver and the vehicle interior illumination device 10 shifts to the lounge mode. The personal lamp 20 may be an LED, an LCD, an organic EL, an electric bulb or the like, and is not limited to a specific kind.
The wireless communication portion 55 performs near field communication with a smart key 60 possessed by the driver. When the smart key 60 approaches within a predetermined distance, the wireless communication portion 55 detects the approach and outputs an approach detection signal to the controller 15, and when a security cancellation operation is performed by the driver on the smart key 60, the wireless communication portion 55 receives the cancellation signal from the smart key 60 and outputs the cancellation signal to the controller 15. When the approach detection signal or the cancellation signal is received, the controller 15 lights the personal lamp 20 with the high illuminance.
The IG switch 51, the door switch 52, the dome switch 53, the seat belt switch 54, the wireless communication portion 55 and the like are examples of vehicle interlocking switches that detect operations interlocking with the vehicle.
FIG. 4
is a flowchart illustrating a lighting request determination processing procedure, which describes an operation of the vehicle interior illumination device 10 having the above-described structure. This operation program is stored in the ROM in the controller 15, and executed by the CPU in the controller 15.
The controller 15 determines the state of the door switch 52 as an example of a vehicle interlocking switch that performs an operation interlocking with the vehicle 1 (S1). When the door switch 52 is on, that is, when a door is open or an unlocking operation has been performed, the controller 15 sets the flag A stored in the flag memory 15z to value 1 (S2). At this time, the controller 15 also stores into the flag memory 15z the identification information of the door switch as a factor that causes the flag A to be set or the personal lamp disposed close thereto. On the other hand, when the door switch 52 is off, that is, when the door is closed and a door locking operation has been performed, the controller 15 clears the flag A stored in the flag memory 15z to value 0 to reset it (S3).
Then, the controller 15 determines whether the lounge mode setting switch 40 for performing the atmospheric illumination is on or not (S4). When the lounge mode setting switch 40 is on, the controller 15 sets the flag B stored in the flag memory 15z to value 1 (S5). On the other hand, when the lounge mode setting switch 40 is off, the controller 15 clears the flag B to value 0 to reset it (S6).
Then, the controller 15 determines whether any one of the personal switches 36, 37, 38 and 39 is on or not (S7). When any one of the personal switches 36, 37, 38 and 39 is on, the controller 15 sets the flag C stored in the flag memory 15z to value 1 (S8). At this time, the controller 15 also stores into the flag memory 15z the identification information of the personal switch as a factor that causes the flag C to be set or the personal lamp disposed close thereto. On the other hand, when all of the personal switches 36, 37, 38 and 39 are off, the controller 15 clears the flag C to value 0 to reset it (S9). Thereafter, the controller 15 ends the present operation.
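The flag determination in steps S1 through S9 can be sketched as follows (the function and variable names are illustrative, not from the specification):

```python
def determine_flags(door_switch_on, lounge_switch_on, personal_switches):
    """Sketch of steps S1-S9: set or reset flags A, B and C.

    door_switch_on:    True if a door is open or an unlocking operation occurred (S1).
    lounge_switch_on:  True if the lounge mode setting switch 40 is on (S4).
    personal_switches: iterable of on/off states of personal switches 36-39 (S7).
    """
    flag_a = 1 if door_switch_on else 0           # S2 / S3
    flag_b = 1 if lounge_switch_on else 0         # S5 / S6
    flag_c = 1 if any(personal_switches) else 0   # S8 / S9
    return flag_a, flag_b, flag_c
```

In the specification, setting flag A or flag C also records which switch (and hence which personal lamp) caused the flag to be set; that bookkeeping is omitted here for brevity.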
While the case of the door switch capable of detecting a door opening operation and a door unlocking operation is illustrated here as an example of the operation interlocking with the vehicle, the case of the seat belt switch capable of detecting that the seat belt is not fastened, the dome switch capable of turning on the dome illumination, the ignition switch capable of detecting the turning off of the ignition (IG) switch or the wireless communication portion capable of detecting the approach of the smart key 60 or the security cancellation operation by the smart key 60 may be illustrated, or the case of a switch group formed of a combination thereof may be illustrated.
FIG. 5
is a flowchart illustrating a lighting control procedure. This operation program is stored in the ROM in the controller 15, and executed by the CPU in the controller 15.
First, the controller 15 determines whether the flag A or the flag C stored in the flag memory 15z is set to value 1 or not (S11). When the flag A or the flag C is set to value 1, the controller 15 selects the corresponding personal lamp 20 based on the information stored in the flag memory 15z, and lights the personal lamp 20 with the high illuminance (light amount 100%) (S12). For example, when the personal switch 39 close to the rear seat 11 on the right side is turned on by a passenger or when the door is unlocked, the personal lamp 31 is lit with the high illuminance. In the case of the light amount 100%, the controller 15 drives the personal lamp 20 at a duty ratio of 100%. Thereafter, the controller 15 ends the present operation.
On the other hand, when both the flag A and the flag C are reset to value 0 at step S11, the controller 15 determines whether the flag B is set to value 1 or not (S13). When the flag B is set to value 1, the controller 15 selects the corresponding personal lamp 20 based on the information stored in the flag memory 15z, and lights the personal lamp 20 with the low illuminance (light amount 20%) (S14). In the case of the light amount 20%, the controller 15 drives the personal lamp 20 at a duty ratio of 20%. Thereafter, the controller 15 ends the present operation.
On the other hand, when the flag B is reset to value 0 at step S13, the controller 15 turns off all the personal lamps 20 (S15). That is, the controller 15 drives the personal lamps 20 at a duty ratio of 0%. Thereafter, the controller 15 ends the present operation.
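The priority among the flags in steps S11 through S15 can be sketched as follows (the names are illustrative, not from the specification):

```python
HIGH_DUTY = 100  # functional illumination: light amount 100%
LOW_DUTY = 20    # atmospheric illumination: light amount 20%

def lamp_duty(flag_a, flag_b, flag_c):
    """Sketch of steps S11-S15: return the PWM duty ratio (%) for the
    selected personal lamp given the states of flags A, B and C."""
    if flag_a or flag_c:   # S11 -> S12: functional illumination has priority
        return HIGH_DUTY
    if flag_b:             # S13 -> S14: lounge mode (atmospheric illumination)
        return LOW_DUTY
    return 0               # S15: all personal lamps off
```

Note that the functional illumination (flags A and C) always takes priority over the lounge mode (flag B), which matches the behavior described for a half-shut door later in the text.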
The light amounts 100% and 20% set as the high illuminance and the low illuminance, respectively, are an example, and a different combination such as light amounts 90% and 10% may be adopted. Moreover, the light amount of the personal lamp may be adjustable in several steps by a manual operation by a passenger, or may be adjusted to a value other than the light amount 100%. Conversely, when the lounge mode is not set, adjustment to the light amount 20% by a manual operation may be disabled, because there is hardly any demand for such a low-illuminance setting.
In this vehicle interior illumination device 10, the personal lamps 29 and 31 and the personal switches 38 and 39 are disposed on the trim covers above the sides of the middle seats 7 and 9 and the rear seat 11 in the rear vehicle interior 6, respectively. When the lounge mode setting switch 40 disposed on the instrument panel 18 is turned on by the driver, the vehicle interior illumination device 10 shifts to the lounge mode. In the lounge mode, for example, when a passenger sitting on a middle seat or the rear seat performs reading, the personal lamp 20 illuminating this seat as the functional illumination is lit by turning on the personal switch 30. On the other hand, when the personal switch 30 is off and the door switch 52 and the like detecting operations interlocking with the vehicle are also off, the personal lamps 20 perform the atmospheric illumination in the lounge mode.
Moreover, since the personal lamp operates as an illumination lamp used for both the functional illumination (light amount 100%) and the atmospheric illumination (light amount 20%), it is unnecessary to increase the number of illumination lamps compared with when a lamp for the functional illumination and a lamp for the atmospheric illumination are separately provided. That is, the illumination lamp necessary for the functional illumination can be used for the atmospheric illumination.
Thereby, the functional illumination with the high illuminance and the atmospheric illumination with the low illuminance can be used each in its proper way by using a personal lamp capable of being turned on at the side of the middle seats and the rear seat in the rear without any increase in the number of illumination lamps. Moreover, by placing priority on the functional illumination, the atmospheric illumination can be easily introduced without any significant changes from the usage of the conventional illumination lamp.
Moreover, the entire vehicle interior from the driver's seat side to the middle seat and the rear seat side can be uniformly illuminated, so that the atmospheric illumination can be made more effective.
Moreover, only when the door switch and the like interlocking with the vehicle are off and the personal switch is off, the controller lights the personal lamp in the lounge mode (light amount 20%). For example, when a door is half-shut on the rear seat side, shift to the lounge mode is not made, and the corresponding personal lamp is lit with the high illuminance as the functional illumination. As described above, operations interlocking with the vehicle such as door opening in addition to operations of the personal switches by passengers bring a condition where the personal lamp is lit and the role as the functional illumination can be fulfilled as before.
Moreover, since the personal lamp is lit with the low illuminance by instructing the low illuminance mode with the lounge mode setting switch, the driver can freely set the atmospheric illumination. Moreover, while the low illuminance mode is set, including when it is canceled and then set again, the illumination is always the atmospheric illumination except at times of the functional illumination. Moreover, since a plurality of personal lamps are concurrently lit with the low illuminance, the effect of the atmospheric illumination can be enhanced.
The technical scope of the present invention is not limited to the above-described embodiment. The above-described embodiment may be accompanied by various modifications, improvements and the like within the technical scope of the present invention.
For example, while in the above-described embodiment, when the switches detecting operations interlocking with the vehicle and all the personal switches are off, the controller 15 shifts to the lounge mode and the personal lamps are lit with the low illuminance as the atmospheric illumination, the present invention is not limited to this case; for example, even if one or two personal switches are on, when the remaining personal switches are off, the corresponding personal lamps may be lit as the atmospheric illumination, or except for the personal lamps corresponding to the personal switches that are on, the remaining personal lamps may be lit as the atmospheric illumination.
Now, features of the embodiment of the vehicle interior illumination device according to the above-described present invention are briefly summarized and listed in the following [1] to [6].
[1] A vehicle interior illumination device (10) mounted on a vehicle, includes:
an illumination lamp (personal lamps 20, 25, 27, 29, 31) that is provided in a vehicle interior and is configured to be turned on to illuminate the vehicle interior;
a detector (door switch 52) that detects a condition to have the illumination lamp to be turned on; and
a controller (15) that controls a lighting of the illumination lamp,
wherein when the condition is detected by the detector, the controller lights the illumination lamp with a first light amount and when the condition is not detected by the detector, the controller lights the illumination lamp with a second light amount smaller than the first light amount.
[2] The vehicle interior illumination device according to the above [1], wherein the illumination lamp is provided in the vehicle interior on a side of a seat in a rear of a driver seat and is configured to be turned on to illuminate the side of the seat in the rear of the driver seat; and
wherein the vehicle interior on a side of the driver seat is a space not light-intercepted from the vehicle interior on the side of the seat in the rear of the driver seat.
[3] The vehicle interior illumination device according to the above [1] or [2],
wherein the detector detects an operation for turning on the illumination lamp or an operation interlocking with a vehicle as the condition to have the illumination lamp to be turned on.
[4] The vehicle interior illumination device according to any of the above [1] to [3], further includes an instruction portion (lounge mode setting switch 40) that is configured to instruct a low illuminance mode for lighting the illumination lamp with the second light amount,
wherein when the low illuminance mode is instructed by the instruction portion and the condition is not detected by the detector, the controller lights the illumination lamp with the second light amount.
[5] The vehicle interior illumination device according to the above [4], wherein when the condition is detected by the detector while the illumination lamp is lit with the second light amount, the controller cancels the low illuminance mode and lights the illumination lamp with the first light amount, and thereafter, when the condition becomes undetected, the controller sets the low illuminance mode again and lights the illumination lamp with the second light amount.
[6] The vehicle interior illumination device according to the above [4] or [5], wherein the controller concurrently lights a plurality of illumination lamps with the second light amount.
Bus runs between Marshfield and Braintree eliminated
About two dozen riders who take the bus to the Braintree MBTA station each morning will have to revamp their commutes.
The Plymouth & Brockton Street Railway Co. is eliminating a commuter route because of a loss of state subsidization. The route starts at the Roche Bros. store and includes a stop at the Hanover Mall. There are three runs in each direction each day.
The service will end on Nov. 18.
Chris Anzuoni, vice president of Plymouth & Brockton Street Railway Co., said the route received about $8,000 a month in state subsidization. That money was cut by Gov. Deval Patrick.
Other bus companies were also affected by state cuts.
“I think, out of the six, only two companies managed to maintain any of the subsidy,” Anzuoni said.
He said the daily ridership on the route between Marshfield and Braintree is about 45 people.
“The monies out of the fare box don’t come close to the cost,” he said. “We couldn’t afford to do it without the subsidy. If we (could) get even close to breaking even, we would do it. There’s a lot of loyalty.”
Anzuoni said people can instead take a Plymouth & Brockton bus that travels from Duxbury to Boston by way of Marshfield and Rockland. That route has more riders and has maintained its state subsidy.
But that trip costs $5.50 per ride with a 10-ride ticket, instead of $2 for the Braintree bus. An individual trip from Marshfield to Boston via Rockland is $14.
Anzuoni said he informed Marshfield-Braintree riders about the service cancellation last week. Many of the riders have been commuting on the bus since the mid-1980s, he said.
“The people on the bus were kind of understanding,” he said. “We gave them some passes to try out our Boston service.”
For more information, visit www.p-b.com
Reach Sydney Schwartz at [email protected]. | https://www.hollandsentinel.com/story/news/2008/11/06/bus-runs-between-marshfield-braintree/48562348007/ |
When is web scraping OK and when is it not?

Questions:
- Is web scraping legal? Can I get into trouble?
- How can I make sure I’m doing the right thing?
- What can I do with the data that I’ve scraped?

Objectives:
- Wrap things up
- Discuss the legal implications of web scraping
- Establish a code of conduct
Now that we have seen several different ways to scrape data from websites and are ready to start working on potentially larger projects, we may ask ourselves whether there are any legal implications of writing a piece of computer code that downloads information from the Internet.
In this section, we will be discussing some of the issues to be aware of when scraping websites, and we will establish a code of conduct (below) to guide our web scraping projects.
This section does not constitute legal advice
Please note that the information provided on this page is for information purposes only and does not constitute professional legal advice on the practice of web scraping.
If you are concerned about the legal implications of using web scraping on a project you are working on, it is probably a good idea to seek advice from a professional, preferably someone who has knowledge of the intellectual property (copyright) legislation in effect in your country.
The first and most important thing to be careful about when writing a web scraper is that it typically involves querying a website repeatedly and accessing a potentially large number of pages. For each of these pages, a request will be sent to the web server that is hosting the site, and the server will have to process the request and send a response back to the computer that is running our code. Each of these requests will consume resources on the server, during which it will not be able to do anything else, like for example responding to someone else trying to access the same site.
If we send too many such requests over a short span of time, we can prevent other “normal” users from accessing the site during that time, or even cause the server to run out of resources and crash.
In fact, this is such an efficient way to disrupt a web site that hackers often do it on purpose. This is called a Denial of Service (DoS) attack.
Since DoS attacks are unfortunately a common occurrence on the Internet, modern web servers include measures to ward off such illegitimate use of their resources. They are watchful for large amounts of requests appearing to come from a single computer or IP address, and their first line of defense often involves refusing any further requests coming from this IP address.
A web scraper, even one with legitimate purposes and no intent to bring a website down, can exhibit similar behaviour and, if we are not careful, result in our computer being banned from accessing a website.
The good news is that a good web scraper, such as Scrapy, recognizes that this is a risk and includes measures to prevent our code from appearing to launch a DoS attack on a website. This is mostly done by inserting a random delay between individual requests, which gives the target server enough time to handle requests from other users between ours.
This is Scrapy’s default behaviour, and it should prevent most scraping projects from ever causing problems. To be on the safe side, however, it is good practice to limit the number of pages we are scraping while we are still writing and debugging our code. This is why in the previous section, we imposed a limit of five pages to be scraped, which we only removed when we were reasonably certain the scraper was working as it should.
Limiting requests to a particular domain, by using Scrapy’s allowed_domains property, is another way to make sure our code is not going to start scraping the entire Internet by mistake.
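In practice, the throttling and domain-limiting behaviour described above can be sketched in plain Python. The constant names and helper functions below are illustrative, not Scrapy's actual internals:

```python
import random
from urllib.parse import urlparse

# Hypothetical configuration for a polite scraper.
ALLOWED_DOMAINS = {"example.org"}  # only follow links on this site
BASE_DELAY = 1.0                   # base delay between requests, in seconds

def is_allowed(url, allowed=ALLOWED_DOMAINS):
    """Mimic the effect of Scrapy's allowed_domains: accept a URL only
    if its host is an allowed domain or a subdomain of one."""
    host = urlparse(url).netloc
    return any(host == d or host.endswith("." + d) for d in allowed)

def polite_delay(base=BASE_DELAY):
    """Mimic a randomized download delay: wait between 0.5x and 1.5x the
    base delay, so requests do not hit the server at a fixed rhythm."""
    return base * random.uniform(0.5, 1.5)
```

A real spider would sleep for polite_delay() seconds between requests and drop any link for which is_allowed returns False; Scrapy provides the equivalent behaviour through its DOWNLOAD_DELAY setting and the allowed_domains spider attribute.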
Thanks to the defenses web servers use to protect themselves against DoS attacks and Scrapy’s measures to avoid inadvertently launching such an attack, the risk of causing trouble is limited.
It is important to recognize that in certain circumstances web scraping can be illegal. If the terms and conditions of the web site we are scraping specifically prohibit downloading and copying its content, then we could be in trouble for scraping it.
In practice, however, web scraping is a tolerated practice, provided reasonable care is taken not to disrupt the “regular” use of a web site, as we have seen above.
In a sense, web scraping is no different than using a web browser to visit a web page, in that it amounts to using computer software (a browser vs a scraper) to access data that is publicly available on the web.
In general, if data is publicly available (the content that is being scraped is not behind a password-protected authentication system), then it is OK to scrape it, provided we don’t break the web site doing so. What is potentially problematic is if the scraped data will be shared further. For example, downloading content off one website and posting it on another website (as our own), unless explicitly permitted, would constitute copyright violation and be illegal.
However, most copyright legislation recognizes cases in which reusing some, possibly copyrighted, information in an aggregate or derivative format is considered “fair use”. In general, unless the intent is to pass off data as our own, copy it word for word, or make money out of it, reusing publicly available content scraped off the internet is OK.
Be aware that copyright and data privacy legislation typically differs from country to country. Be sure to check the laws that apply in your context. For example, in Australia, it can be illegal to scrape and store personal information such as names, phone numbers and email addresses, even if they are publicly available.
If you are looking to scrape data for your own personal use, then the above guidelines should probably be all that you need to worry about. However, if you plan to start harvesting a large amount of data for research or commercial purposes, you should probably seek legal advice first.
If you work in a university, chances are it has a copyright office that will help you sort out the legal aspects of your project. The university library is often the best place to start looking for help on copyright.
Depending on the scope of your project, it might be worthwhile to consider asking the owners or curators of the data you are planning to scrape if they already have it available in a structured format that could suit your project. If your aim is to use their data for research, or to use it in a way that could potentially interest them, not only could it save you the trouble of writing a web scraper, but it could also help clarify straight away what you can and cannot do with the data.
On the other hand, when you are publishing your own data, as part of a research project, documentation or a public website, you might want to think about whether someone might be interested in getting your data for their own project. If you can, try to provide others with a way to download your raw data in a structured format, and thus save them the trouble of trying to scrape your own pages!
This all being said, if you adhere to the following simple rules, you will probably be fine.
This lesson only provides an introduction to the practice of web scraping and highlights some of the tools available. Scrapy has many more features than those mentioned in the previous section, be sure to refer to its full documentation for details.
Happy scraping!
Key Points
- Web scraping is, in general, legal and won’t get you into trouble.
- There are a few things to be careful about: notably, don’t overwhelm a web server and don’t steal content.
- Be nice. If in doubt, ask.
What is a Parallelogram?
Properties of a Parallelogram
- It is a 2D plane shape.
- It has straight lines.
- It is a quadrilateral, so it has 4 sides and is a closed figure.
- Its opposite sides are parallel.
- Its opposite sides are congruent (equal).
- Its opposite angles are congruent.
- Its consecutive angles are supplementary, meaning they add to 180 degrees.
- Its diagonals bisect one another.
- Each diagonal divides the parallelogram into two congruent triangles.
A trapezoid is not a parallelogram because by definition it only has one pair of parallel sides.
A circle is not a parallelogram because it has curved lines.
How about a triangle? Nope, it has three sides, not four.
A rectangle which is a parallelogram has two pairs of parallel sides and four congruent angles. A rhombus has four congruent sides but only the opposite angles are equal. If you combine the rectangle and the rhombus you get a square with four congruent sides and angles.
The area is the base times the height:
Area = b × h
The height is measured at right angles to the base.
The perimeter is the distance around the edges.
You can add all 4 sides, or use:
Perimeter = 2 × (base + side length)
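The two formulas above can be written as a short sketch (the function names are just illustrative):

```python
def parallelogram_area(base, height):
    """Area = base x height; the height is perpendicular to the base."""
    return base * height

def parallelogram_perimeter(base, side):
    """Perimeter = 2 x (base + side length), since opposite sides are congruent."""
    return 2 * (base + side)

# Example: base 6, perpendicular height 4, slanted side length 5
print(parallelogram_area(6, 4))       # 24
print(parallelogram_perimeter(6, 5))  # 22
```

Note that the perimeter uses the slanted side length, while the area uses the perpendicular height; in a non-rectangular parallelogram these are different numbers.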
The project is a natural response to the strong demand for accommodation facilities in the area, and aims to take advantage of tourism built on its significant natural, environmental, scenic, archaeological and historical resources, as well as on local crafts and agriculture.
This will make it possible to extend the season beyond short-term visitors and beach-goers, attracting other tourist segments.
The area is accessible from State Road 115, via the exit for Eraclea Minoa, followed by about three kilometers of the Cattolica Eraclea - Eraclea Minoa provincial road.
The area is currently served by the municipal aqueduct and by the electricity and telephone networks, but not by the public sewer system.
- Basement: 772 sqm, with a useful height of approximately 3.40 linear meters.
The area used as veranda: approximately 136.74 sqm.
- First floor: 425 sqm, with a useful height of approximately 2.90 linear meters.
The area used as veranda: approximately 139.42 sqm.
The basement will house a conference room, a reading room, an internet point, kitchens with adjoining utility rooms, a laundry with ironing facilities, a hydro-massage area with pool, two saunas and two massage rooms, a gym, locker rooms, toilets and showers, and a utility room.
The ground floor will house the lobby and TV room, a bar, a restaurant, a reception area with attached offices, a luggage room, two storage rooms and 5 suites.
The first floor will house a total of 10 suites and 2 storage rooms.
Outside, an outdoor swimming pool and a customer parking area will be provided.
The upper area will house water tanks, located completely underground so as to avoid any environmental impact.
The first tank, with a useful capacity of 5.00 x 20.00 x 2.00 m = 200.00 cubic meters, will serve the drinking-water supply; the other two, of 10.00 cubic meters each, will serve the fire-fighting system and the collection of treated water.
Two existing buildings will also be modernized into 2 suites serving the main structure. The combined volume of these two buildings is 276.89 cubic meters.
The total gross surface area occupied by each of these buildings is 47.74 sqm, with a useful height of 2.70 linear meters.
Surface occupied by the buildings: 47.74 sqm x 2 = 95.48 sqm.
Surface occupied by the covered verandas: 59.95 sqm x 2 = 119.90 sqm.
The works will be carried out at the seller's expense; the asking price is 2,200,000.00 euros. The owner is open to other solutions or proposals.
---
bibliography:
- '../../bibliography.bib'
title: On Some Properties of Space Inverses of Stochastic Flows
---
The University of Edinburgh, E-mail: [[email protected]]([email protected])\
[**Remigijus Mikulevičius**]{}\
The University of Southern California, E-mail: [[email protected]]([email protected] )\
[**Abstract**]{}\
We derive moment estimates and a strong limit theorem for space inverses of stochastic flows generated by jump SDEs with adapted coefficients in weighted H[ö]{}lder norms using the Sobolev embedding theorem and the change of variable formula. As an application of some basic properties of flows of continuous SDEs, we derive the existence and uniqueness of classical solutions of linear parabolic second order SPDEs by partitioning the time interval and passing to the limit. The methods we use allow us to improve on previously known results in the continuous case and to derive new ones in the jump case.
Introduction
============
Let $\left( \Omega ,\mathcal{F}, \mathbf{F}=({\mathcal{F}}_t)_{t\ge 0},{\mathbf{P}}\right) $ be a complete filtered probability space satisfying the usual conditions of right-continuity and completeness. Let $(w_{t}^{\varrho})_{\varrho \ge 1}$, $t\ge 0$, $\varrho\in{\mathbf{N}}$, be a sequence of independent one-dimensional $\mathbf{F}$-adapted Wiener processes. Letting $(Z,\mathcal{Z},\pi)$ be a sigma-finite measure space, we let $p(dt,dz)$ be an $\mathbf{F}$-adapted Poisson random measure on $({\mathbf{R}}_+
\times Z,\mathcal{B}({\mathbf{R}}_+)\otimes \mathcal{Z})$ with intensity measure $\pi(dz)dt$ and denote by $
q(dt,dz )=p(dt,dz)-\pi(dz)dt
$ the compensated Poisson random measure. For each real number $T>0$, we let $\mathcal{R}_T$ and $\mathcal{P}_T $ be the $\mathbf{F}$-progressive and $\mathbf{F}$-predictable sigma-algebra on $\Omega\times [0,T] $, respectively.
Fix a real number $T>0$ and an integer $d\geq 1$. For each stopping time $\tau\le T$, consider the stochastic flow $X_t=X_{t}(\tau,x)$, $(t,x)\in [0,T]\times \mathbf{R}^d$, generated by the stochastic differential equation (SDE) $$\begin{aligned}
\label{eq:SDEIntro}
dX_{t}&=b_t(X_{t})dt+\sigma^{\varrho}_t(X_{t})dw^{\varrho}_{t}+\int_{Z}H_t(X_{t-},z )q(dt,dz), \;\tau <t\le T,\notag\\
X_{t} &=x,\; t \leq \tau,\end{aligned}$$ where $b_t(x)=(b^{i}_t(\omega,x))_{1\leq i\leq d}$ and $\sigma_t(x)=(\sigma_t^{i\varrho}(\omega,x))_{1\leq i\leq d,\varrho\ge 1}$ are $\mathcal{R}_T\otimes \mathcal{B}(\mathbf{R}^{d})$-measurable random fields defined on $\Omega \times \lbrack
0,T]\times \mathbf{R}^{d}$ and $H_t(x,z)=(H^{i}_t(\omega,x,z))_{1\leq i\leq d}$ is a $\mathcal{P}_{T}\otimes \mathcal{B}(\mathbf{R}^{d})\otimes \mathcal{Z}$-measurable random field defined on $\Omega \times [ 0,T]\times \mathbf{R}^{d}\times Z.$ The summation convention with respect to the repeated index $\varrho\in {\mathbf{N}}$ is used here and below. In this paper, under natural regularity assumptions on the coefficients $b$, $\sigma$, and $H$, we provide a simple and direct derivation of moment estimates of the space inverse of the flow, denoted $X_t^{-1}(\tau,x)$, in weighted H[ö]{}lder norms by applying the Sobolev embedding theorem and the change of variable formula. Using a similar method, we establish a strong limit theorem in weighted H[ö]{}lder norms for a sequence of flows $X_t^{(n)}(\tau,x)$ and their inverses $X_t^{(n);-1}(\tau,x)$ corresponding to a sequence of coefficients $(b^{(n)},\sigma^{(n)},H^{(n)})$ converging in an appropriate sense. Furthermore, as an application of the diffeomorphism property of the flow, we give a direct derivation of the linear second order degenerate stochastic partial differential equation (SPDE) governing the inverse flow $X_t^{-1}(\tau,x)$ when $H\equiv 0$. Specifically, for each $\tau\le T$, consider the stochastic flow $Y_t=Y_t(\tau,x)$, $(t,x) \in [0,T]\times \mathbf{R}^d$, generated by the SDE $$\begin{aligned}
dY_{t} &=b_t(Y_t)dt+\sigma^{\varrho}_t (Y_{t}) dw^{\varrho}_{t}, \;\;\tau <t\le T,\\
Y_{t} &=x,\;\;t\leq \tau.\notag
\end{aligned}$$ Assume that $b$ and $\sigma$ have linear growth, bounded first and second derivatives, and that the second derivatives of $b$ and $\sigma$ are $\alpha$-Hölder for some $\alpha>0$. By partitioning the time interval and using Taylor’s theorem, the Sobolev embedding theorem, and some basic properties of the flow and its inverse, we show that $u_t(x)=u_t(\tau,x):=Y_t^{-1}(\tau,x)$, $(t,x)\in [0,T]\times \mathbf{R}^d$, is the unique classical solution of the SPDE given by $$\begin{aligned}
\label{eq:SPDEIntro}
du_t (x)&=\left(\frac{1}{2}\sigma
^{i\varrho}_t(x) \sigma^{j\varrho}_t(x)\partial_{ij}u_t(x)-\hat b^i_t(x)\partial_iu_t(x)\right)d t -\sigma^{i\varrho}_t(x) \partial_iu_{t}(x)dw^{\varrho}_t,\;\;\tau <t\le T,\notag \\
u_t(x) &=x,\;\;t\leq \tau,\end{aligned}$$ where $$\hat b^i_t(x)=b^i_t(x)-\sigma^{j\varrho}_t(x) \partial_j\sigma^{i\varrho}_t(x).$$ In [@LeMi14], we use all of the properties of the flow $X_t(\tau,x)$ that are established in this work in order to derive the existence and uniqueness of classical solutions of linear parabolic stochastic integro-differential equations (SIDEs).\
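As a formal consistency check of the form of $\hat b$ (assuming $u$ is smooth enough that the Itô–Wentzell formula applies), one may substitute \eqref{eq:SPDEIntro} into the identity $u_t(Y_t(\tau,x))=x$: writing $a^{ij}_t=\sigma^{i\varrho}_t\sigma^{j\varrho}_t$, the Itô–Wentzell formula yields $$\begin{aligned}
d[u_t(Y_t)]&=\left(\frac{1}{2}a^{ij}_t\partial_{ij}u_t-\hat b^i_t\partial_iu_t\right)(Y_t)dt-(\sigma^{i\varrho}_t\partial_iu_t)(Y_t)dw^{\varrho}_t+\partial_iu_t(Y_t)b^i_t(Y_t)dt\\
&\quad+\partial_iu_t(Y_t)\sigma^{i\varrho}_t(Y_t)dw^{\varrho}_t+\frac{1}{2}a^{ij}_t(Y_t)\partial_{ij}u_t(Y_t)dt-\partial_i\left(\sigma^{j\varrho}_t\partial_ju_t\right)(Y_t)\sigma^{i\varrho}_t(Y_t)dt.\end{aligned}$$ The stochastic integrals cancel, the second-order terms cancel since $\frac{1}{2}a^{ij}+\frac{1}{2}a^{ij}-\sigma^{i\varrho}\sigma^{j\varrho}=0$, and the first-order terms cancel precisely when $\hat b^i=b^i-\sigma^{j\varrho}\partial_j\sigma^{i\varrho}$, so that $d[u_t(Y_t)]=0$, consistent with $u_t(Y_t(\tau,x))=x$.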
One of the earliest works to investigate the homeomorphism property of flows of SDEs with jumps is by P. Meyer in [@Me81a]. In [@Mi83], R. Mikulevičius extended the properties found in [@Me81a] to SDEs driven by arbitrary continuous martingales and random measures. Many other authors have since expanded upon the work in [@Me81a], see for example [@FuKu85; @Ku04; @Me07; @QiZh08; @Zh13b; @Pr14] and references therein. In [@Ku04; @Ku86a], H. Kunita studied the diffeomorphism property of the flow $X_t(s,x),$ $(s,t,x)\in [0,T]^2\times\mathbf{R}^d$, and in the setting of deterministic coefficients, he showed that for each fixed $t$, the inverse flow $X_t^{-1}(s,x),$ $(s,x)\in [t,T]\times\mathbf{R}^d$, solves a backward SDE. By estimating the associated backward SDE, one can obtain moment estimates and a strong limit theorem for the inverse flow in essentially the same way that moment estimates are obtained for the direct flow (see, e.g. [@Ku86a]). However, this method of deriving moment estimates and a strong limit theorem for the inverse flow uses a time reversal, and thus requires that the coefficients be deterministic. In the case $H\equiv 0$, numerous authors have investigated properties of the inverse flow with random coefficients. In Chapter 2 of [@Bi81], Lemmas 2.1 and 2.2 of [@OcPa89], and Sections 6.1 and 6.2 of [@Ku96], the authors derive properties of $Y^{-1}_t(\tau,x)$ (e.g. moment estimates, a strong limit theorem, and the fact that it solves \eqref{eq:SPDEIntro}) by first showing that it solves the Stratonovich form SDE for $Z_t=Z_t(\tau,x)$, $(t,x)\in [0,T]\times \mathbf{R}^d$, given by $$\begin{aligned}
\label{eq:invflowsde}
dZ_t(x)&=- U_t(Z_t(x)) b_t(x)dt- U_t(Z_t(x))\sigma ^{\varrho}_t(x)\circ dw^{\varrho}_t,\;\;\tau <t\le T,\\
Z_t(x)&=x, \;\;t\leq \tau,\notag\end{aligned}$$ where $U_t(x)=U_t(\tau,x)=[\nabla Y_t(\tau,x)]^{-1}$. In order to obtain a strong solution to \eqref{eq:invflowsde}, the authors impose conditions on the coefficients that guarantee that $\nabla U_t(x)$ is locally-Lipschitz in $x$. In the degenerate setting, the third derivatives of $b_t$ and $\sigma_t$ need to be $\alpha$-Hölder for some $\alpha>0$ to ensure that $\nabla U_t(x)$ is locally-Lipschitz in $x$. However, the authors assume even more regularity than this. In this paper, we derive properties of the inverse flow under assumptions that guarantee that $Y_t(\tau,x)$ is a ${\mathcal{C}}^{\beta}_{loc}$-diffeomorphism for some $\beta>1$.
Classical solutions of \eqref{eq:SPDEIntro} have been constructed in [@Bi81; @Ku96] by directly showing that $Y^{-1}_t(\tau,x)$ solves \eqref{eq:SPDEIntro}. As we have mentioned above, this approach requires the third derivatives of $b_t$ and $\sigma_t$ to be $\alpha$-Hölder for some $\alpha>0$. Yet another approach to deriving the existence of classical solutions of \eqref{eq:SPDEIntro} is the method of time reversal (see, e.g. [@Ku96; @DaTu98]). While this method only requires that the second derivatives of $b_t$ and $\sigma_t$ are $\alpha$-Hölder for some $\alpha>0$, it does require that the coefficients be deterministic. In [@KrRo82a], N.V. Krylov and B.L. Rozovskii derived the existence and uniqueness of generalized solutions of degenerate second order linear parabolic SPDEs in Sobolev spaces using the variational approach to SPDEs and the method of vanishing viscosity (see, also, [@GeGyKr14] and Ch. 4, Sec. 2, Theorem 1 in [@Ro90]). Thus, by appealing to the Sobolev embedding theorem, this theory can be used to obtain classical solutions of degenerate linear SPDEs. Proposition 1 of Ch. 5, Sec. 2, in [@Ro90] shows that if $\sigma$ is uniformly bounded and four-times continuously differentiable in $x$ with uniformly bounded derivatives and $b$ is uniformly bounded and three-times continuously differentiable with uniformly bounded derivatives, then there exists a classical solution of \eqref{eq:SPDEIntro} and $u_t(x)=Y_t^{-1}(x)$. This is more regularity than we require.
This paper is organized as follows. In Section 2, we state our notation and the main results. Section 3 is devoted to the proof of the properties of the stochastic flow $X_t(\tau,x)$ and Section 4 to the proof that $Y^{-1}_t(\tau,x)$ is the unique classical solution of \eqref{eq:SPDEIntro}. Section 5, the appendix, collects auxiliary facts that are used throughout the paper.
Outline of main results
=======================
For each integer $n\ge 1$, let $\mathbf{R}^{n}$ be the $n$-dimensional Euclidean space and for each $x\in\mathbf{R}^{n}$, denote by $|x|$ the Euclidean norm of $x$. Let ${\mathbf{R}}_+$ denote the set of non-negative real numbers. Let ${\mathbf{N}}$ be the set of natural numbers. Elements of $\mathbf{R}^d$ are understood as column vectors and elements of $\mathbf{R}^{2d}$ are understood as matrices of dimension $d\times d$. We denote the transpose of an element $x\in\mathbf{R}^d$ by $x^*$. The norm of an element $x$ of $\ell_2(\mathbf{R}^d)$ (resp. $\ell_2(\mathbf{R}^{2d})$), the space of square-summable $\mathbf{R}^d$-valued (resp. $\mathbf{R}^{2d}$-valued) sequences, is also denoted by $|x|$. For a topological space $(X,{\mathcal{X}})$ we denote the Borel sigma-field on $X$ by ${\mathcal{B}}(X)$.
For each $i\in \{1,\ldots,d_1\}$, let $\partial_i=\frac{\partial}{\partial x_i}$ be the spatial derivative operator with respect to $x_i$ and write $\partial_{ij}=\partial_i\partial_j$ for each $i,j\in \{1,\ldots,d_1\}$. For a once differentiable function $f=(f^1,\ldots,f^{d_1}):{\mathbf{R}}^{d_1}\rightarrow{\mathbf{R}}^{d_1}$, we denote the gradient of $f$ by $\nabla
f=(\partial_jf^i)_{1\le i,j\le d_1}$. Similarly, for a once differentiable function $f=(f^{1\varrho},\ldots,f^{d_1\varrho})_{\varrho\ge 1} : {\mathbf{R}}^{d_1}\rightarrow \ell_2({\mathbf{R}}^{d_1})$, we denote the gradient of $f$ by $\nabla f=(\partial_jf^{i\varrho})_{1\le i,j\le d_1,\varrho\ge 1} $ and understand it as a function from ${\mathbf{R}}^{d_1}$ to $\ell_2(\mathbf{R}^{2d_1})$. For a multi-index $\gamma=(\gamma_1,\ldots,\gamma_{d_1})\in\{0,1,2,\ldots\}^{d_1}$ of length $|\gamma|:=\gamma_1+\cdots+\gamma_{d_1}$, denote by $\partial^{\gamma}$ the operator $\partial^\gamma=\partial_1^{\gamma_1}\cdots \partial_{d_1}^{\gamma_{d_1}}$, where $\partial_i^0$ is the identity operator for all $i\in\{1,\ldots,d_1\}$. For each integer $d\ge 1$, we denote by $C_c^{\infty}({\mathbf{R}}^{d_1}; {\mathbf{R}}^{d})$ the space of infinitely differentiable functions from ${\mathbf{R}}^{d_1}$ to ${\mathbf{R}}^{d}$ with compact support in ${\mathbf{R}}^{d_1}$.
For a Banach space $V$ with norm $|\cdot |_{V}$, domain $Q$ of $\mathbf{R}^{d}$, and continuous function $f:Q\rightarrow V$, we define $$|f|_{0;Q;V}=\sup_{x\in Q}|f(x)|_{V}$$ and $$[f]_{\beta;Q;V}=\sup_{x,y\in Q,x\neq y}\frac{|
f(x)-f(y)|_{V}}{|x-y|^{\beta }},\;\;\beta \in (0,1].$$ For each real number $\beta\in \mathbf{R}$, we write $\beta =[\beta]^-+\{\beta\}^+$, where $[\beta]^-$ is an integer and $\{\beta\}^+\in (0,1]$. For a Banach space $V$ with norm $|\cdot |_{V}$, real number $\beta>0$, and domain $Q$ of $\mathbf{R}^{d}$, we denote by ${\mathcal{C}}^{\beta }(Q;V)$ the Banach space of all bounded continuous functions $f:Q\rightarrow V$ having finite norm $$|f|_{\beta ;Q;V}:=\sum_{| \gamma |\leq [\beta ]^-
}|\partial^{\gamma }f|_{0;Q;V}+\sum_{|\gamma|=[\beta]^-}[\partial^{\gamma}f]_{\{\beta\}^+ ;Q;V}.$$When $Q=\mathbf{R}^{d}$ and $V=\mathbf{R}^n$ or $V=\ell_2({\mathbf{R}}^n)$ for any integer $n\ge 1$, we drop the subscripts $Q$ and $V$ from the norm $| \cdot |_{\beta;Q;V}$ and write $|\cdot |_{\beta}
$. For a Banach space $V$ and for each $\beta>0$, denote by ${\mathcal{C}}_{loc}^{\beta}({\mathbf{R}}^d;V)$ the Fréchet space of continuous functions $f:\mathbf{R}^d\rightarrow V$ satisfying $f\in {\mathcal{C}}^{\beta}(Q;V)$ for all bounded domains $Q\subset \mathbf{R}^{d}$. We call a function $f:\mathbf{R}^{d}\rightarrow \mathbf{R}^{d} $ a ${\mathcal{C}}_{loc}^{\beta}({\mathbf{R}}^d;{\mathbf{R}}^d)$-diffeomorphism if $f$ is a homeomorphism and both $f$ and its inverse $f^{-1}$ are in ${\mathcal{C}}_{loc}^{\beta}({\mathbf{R}}^d;{\mathbf{R}}^d)$.\
For a Fréchet space $\chi$, we denote by $D([0,T];\chi)$ the space of $\chi$-valued càdlàg functions on $[0,T]$ and by $C([0,T]^{2};\chi)$ the space of $\chi$-valued continuous functions on $[0,T]\times \lbrack 0,T]$. The spaces $D([0,T];\chi)$ and $C([0,T]^{2};\chi )$ are endowed with the supremum semi-norms.\
The notation $N=N(\cdot ,\cdots,\cdot )$ is used to denote a positive constant depending only on the quantities appearing in the parentheses. In a given context, the same letter is often used to denote different constants depending on the same parameter. If we do not specify to which space the parameters $\omega ,t,x,y,z$ and $n$ belong, then we mean $\omega \in \Omega $, $t\in [ 0,T]$, $x,y\in \mathbf{R}^{d}$, $z\in Z$, and $n\in\mathbf{N}$.\
Let $r_1(x)=\sqrt{1+|x|^2},$ $x\in\mathbf{R}^d$. For each real number $\beta > 1$, we introduce the following regularity condition on the coefficients $b,\sigma, $ and $H$.
\[$\beta$\]\[asm:regularitypropflow\]
There is a constant $N_{0}>0$ such that for all $(\omega,t,z)\in \Omega\times [0,T]\times Z$, $$|r_1^{-1}b_t|_0+|\nabla b_t|_{\beta -1}+|r_1^{-1}\sigma_t|_0+|\nabla \sigma_t|_{\beta -1}\leq
N_{0}\quad \textit{and}\quad| r_1^{-1}H_t(z )|_{0}+|\nabla H_t(z )|_{\beta -1}\leq K_{t}(z
),$$ where $K :\Omega \times[ 0,T]\times Z\rightarrow \mathbf{R}_+$ is a $\mathcal{P}_{T}\otimes \mathcal{Z}$-measurable function satisfying $$K_{t}(z)+\int_{Z}K_t(z)^{2}\pi (dz)\leq
N_{0},$$ for all $(\omega,t,z)\in \Omega\times [0,T]\times Z$.
There are constants $\eta\in (0,1)$ and $N_{\kappa}>0$ such that for all $(\omega ,t,x, z)\in \{(\omega ,t,x,z)\in \Omega \times
[ 0,T]\times \mathbf{R}^d\times Z:|\nabla H_{t}(\omega ,x,z)|>\eta \},$ $$|\left( I_{d}+\nabla
H_t(x,z)\right) ^{-1}|\leq N_{\kappa}.$$
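To illustrate the second condition, consider (as a purely illustrative example, not taken from the results below) the one-dimensional case $d=1$ with $H_t(x,z)=c_t(z)x$ for a bounded function $c$, so that $\nabla H_t(x,z)=c_t(z)$. The condition then requires $$|1+c_t(z)|^{-1}\le N_{\kappa}\quad \textrm{whenever}\quad |c_t(z)|>\eta,$$ i.e. $c_t(z)$ must stay uniformly away from $-1$; this is exactly what makes the jump map $x\mapsto x+H_t(x,z)=(1+c_t(z))x$ invertible with an inverse of linear growth.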
The following theorem shows that if Assumption \[asm:regularitypropflow\] $(\beta)$ holds for some $\beta>1$, then for any $\beta'\in [1,\beta)$, the solution $X_t(\tau,x)$ of \eqref{eq:SDEIntro} has a modification that is a ${\mathcal{C}}_{loc}^{\beta'}({\mathbf{R}}^d;{\mathbf{R}}^d)$-diffeomorphism and the $p$-th moments of the weighted $\beta '$-Hölder norms of the inverse flow are bounded. This theorem will be proved in the next section.
\[thm:diffeoandmomest\] Let Assumption \[asm:regularitypropflow\]$(\beta)$ hold for some $\beta>1$.
For each stopping time $\tau\le T$ and $\beta'\in [1,\beta)$, there exists a modification of the strong solution $X_t(\tau,x)$ of \eqref{eq:SDEIntro}, also denoted by $X_t(\tau,x)$, such that ${\mathbf{P}}$-a.s. the mapping $X_{t}(\tau,\cdot )\allowbreak:\mathbf{R}^{d}\rightarrow \mathbf{R}^{d}$ is a ${\mathcal{C}}_{loc}^{\beta'}({\mathbf{R}}^d;{\mathbf{R}}^d)$-diffeomorphism, $X_{\cdot}(\tau,\cdot),X^{-1}_{\cdot}(\tau,\cdot)\in D([0,T];{\mathcal{C}}_{loc}^{\beta'}({\mathbf{R}}^d;{\mathbf{R}}^d))$, and $X_{t-}^{-1}(\tau,\cdot )$ coincides with the inverse of $X_{t-}(\tau,\cdot)$. Moreover, for each $\epsilon >0$ and $p\ge 2$, there is a constant $N=N(d,p,N_{0},T,\beta',\epsilon)$ such that $$\label{eq:MomEstDirect}
{\mathbf{E}}\left[\sup_{t\leq T}|r_{1}^{-(1+\epsilon )}X_{t}(\tau )|_{0}^{p}\right]+{\mathbf{E}}\left[\sup_{t\leq T}|r_{1}^{-\epsilon }\nabla X_{t}(\tau )|_{\beta'-1}^{p}\right]\leq N$$ and a constant $N=N(d,p,N_{0},T,\beta',\eta ,N_{\kappa},\epsilon)$ such that $$\label{ineq:MomEstInverse}
{\mathbf{E}}\left[\sup_{t\leq T}|r_{1}^{-(1+\epsilon )}X^{-1}_{t}(\tau )|_{0}^{p}\right]+{\mathbf{E}}\left[\sup_{t\leq T}|r_{1}^{-\epsilon }\nabla X^{-1}_{t}(\tau )|_{\beta'-1}^{p}\right]\leq N.$$
If $H\equiv 0$, then for each $\beta'\in (1,\beta)$, ${\mathbf{P}}$-a.s. $X_{\cdot}(\cdot ,\cdot),X^{-1}_{\cdot}(\cdot,\cdot)\in C([0,T]^2;{\mathcal{C}}_{loc}^{\beta'}({\mathbf{R}}^d;{\mathbf{R}}^d))$ and for each $\epsilon >0$ and $p\ge 2$, there is a constant $N=N(d,p,N_{0},T,\beta',\epsilon)$ such that $$\label{ineq:MomEstDirectcts}
{\mathbf{E}}\left[\sup_{s,t\leq T}|r_{1}^{-(1+\epsilon )}X_{t}(s )|_{0}^{p}\right]+{\mathbf{E}}\left[\sup_{s,t\leq T}|r_{1}^{-\epsilon }\nabla X_{t}(s )|_{\beta'-1}^{p}\right]\leq N$$ and $$\label{ineq:MomEstInversects}
{\mathbf{E}}\left[\sup_{s,t\leq T}|r_{1}^{-(1+\epsilon )}X^{-1}_{t}(s )|_{0}^{p}\right]+{\mathbf{E}}\left[\sup_{s,t\leq T}|r_{1}^{-\epsilon }\nabla X^{-1}_{t}(s )|_{\beta'-1}^{p}\right]\leq N.$$
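Let us briefly indicate (as an informal remark) why the weights $r_1^{-(1+\epsilon)}$ and $r_1^{-\epsilon}$ appear in these estimates: since the coefficients have linear growth, the flow itself is of linear growth in $x$, so the unweighted norm $|X_t(\tau)|_0$ is infinite; however, $$\sup_{x\in \mathbf{R}^d}\frac{|x|}{r_1(x)^{1+\epsilon }}\leq 1,$$ so functions of linear growth always have finite $r_1^{-(1+\epsilon)}$-weighted supremum norm, which is precisely the form of the left-hand sides of \eqref{eq:MomEstDirect} and \eqref{ineq:MomEstInverse}.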
The estimate \eqref{ineq:MomEstInverse} is used in [@LeMi14] to take the optional projection of a linear transformation of the inverse flow of a jump SDE driven by two independent Wiener processes and two independent Poisson random measures relative to the filtration generated by one of the Wiener processes and Poisson random measures.
Now, let us state our strong limit theorem for a sequence of flows, which will also be proved in the next section. We will use this strong limit theorem in [@LeMi14] to show that the inverse flow of a jump SDE solves a parabolic stochastic integro-differential equation. For each $n$, consider the stochastic flow $X^{(n)}_t=X^{(n)}_t(\tau, x)$, $(t,x)\in[0,T]\times \mathbf{R}^d$, generated by the SDE $$\begin{aligned}
dX^{(n)}_t&=b^{(n)}_t(X^{(n)}_t)dt+\sigma^{(n)\varrho}_t
(X ^{(n)}_t)dw^{\varrho}_{t}+\int_{Z}H^{(n)}_t(X^{(n)}_{t-},z )q(dt,dz ),\;\;\tau <
t\leq T, \\
X^{(n)}_t &=x,\;\;t\leq \tau.\end{aligned}$$ Here we assume that for each $n$, $b^{(n)}$, $\sigma^{(n)}$, and $H^{(n)}$ satisfy the same measurability conditions as $b,\sigma,$ and $H$, respectively.
\[thm:stronglimit\]Let Assumption \[asm:regularitypropflow\]$(\beta)$ hold for some $\beta>1$ and assume that $b^{(n)}, \sigma^{(n)}$, and $H^{(n)}$ satisfy Assumption \[asm:regularitypropflow\] $(\beta)$ uniformly in $n\in {\mathbf{N}}$. Moreover, assume that $$d{\mathbf{P}}dt-\lim_{n\rightarrow\infty}\left (|r_1^{-1}b^{(n)}_t- r_1^{-1}b_t|_{0}+|\nabla b^{(n)}_t-\nabla b_t|_{\beta-1}\right)=0,$$ $$d{\mathbf{P}}dt-\lim_{n\rightarrow\infty}\left (|r_1^{-1}\sigma^{(n)}_t-r_1^{-1}\sigma_t|_{0}+|\nabla \sigma^{(n)}_t-\nabla \sigma _t|_{\beta-1}\right)=0,$$ and for all $(\omega,t,z)\in \Omega\times [0,T]\times Z$ and $n\in {\mathbf{N}}$, $$|r_1^{-1}H^{(n)}_t(z)-r_1^{-1}H_t(z)|_{0}+|\nabla H^{(n)}_t(z)-\nabla H_t(z)|_{\beta-1}\le K^{(n)}_t(z),$$ where $(K^{(n)}_t(z))_{n\in\mathbf{N}}$ is a sequence of $\mathbf{R}_{+}$-valued $\mathcal{P}_{T}\otimes \mathcal{Z}$-measurable functions defined on $\Omega \times [ 0,T]\times Z$ satisfying for all $(\omega,t,z)\in \Omega\times [0,T]\times Z$ and $n\in {\mathbf{N}}$, $$K^{(n)}_t(z)+\int_Z K_t^{(n)}(z)^2\pi(dz)\le N_0$$ and $$d{\mathbf{P}}dt-\lim_{n\rightarrow\infty} \int_{Z} K_t^{(n)}(z)^2\pi(dz)=0.$$ Then for each stopping time $\tau\le T$, $\beta'\in [1,\beta)$, $\epsilon>0,$ and $p\ge2,$ we have $$\begin{gathered}
\lim_{n\rightarrow \infty }\left({\mathbf{E}}\left[\sup_{t\leq T} | r_1^{-(1+\epsilon)}X^{(n)}_t(\tau )-r_1^{-(1+\epsilon)}X_t(\tau)|_{0}^{p}\right]+{\mathbf{E}}\left[\sup_{t\leq T} |
r_1^{-\epsilon}\nabla X^{(n)}_t (\tau) -r_1^{-\epsilon}\nabla X_{t}(\tau ) |
_{\beta'-1}^{p}\right]\right)=0,\\
\lim_{n\rightarrow \infty }{\mathbf{E}}\left[\sup_{t\leq T} | r_1^{-(1+\epsilon)}X^{(n);-1}_t(\tau )-r_1^{-(1+\epsilon)}X^{-1}_t(\tau)|_{0}^{p}\right]=0,\end{gathered}$$ and $$\lim_{n\rightarrow \infty }{\mathbf{E}}\left[\sup_{t\leq T} |
r_1^{-\epsilon}\nabla X^{(n);-1}_t (\tau) -r_1^{-\epsilon}\nabla X^{-1}_{t}(\tau ) |
_{\beta'-1}^{p}\right]=0.$$
Let us introduce our class of solutions for equation \eqref{eq:SPDEIntro}. For each real number $\beta' >2$, let $\mathfrak{C}_{cts}^{\beta'}({\mathbf{R}}^d;{\mathbf{R}}^d)$ be the linear space of all random fields $v:\Omega\times [0,T]\times \mathbf{R}^d\rightarrow \mathbf{R}^d$ such that $v$ is ${\mathcal{O}}_T\otimes{\mathcal{B}}(\mathbf{R}^d)$-measurable and ${\mathbf{P}}$-a.s. $r_{1}^{-\lambda}(\cdot)v_{\cdot}(\cdot)$ belongs to $C([0,T];{\mathcal{C}}^{\beta'}({\mathbf{R}}^d;{\mathbf{R}}^d))$ for some real number $\lambda>0$.
We introduce the following assumption for a real number $\beta >2$.
\[asm:propflowregwcorrec\] There is a constant $N_0$ such that for all $(\omega,t)\in \Omega\times [0,T]$, $$|r_1^{-1}b_t|_0+|r_1^{-1}\sigma_t|_0+
|\nabla b_t|_{\beta -1}+| \nabla \sigma_t|_{\beta -1}\leq N_{0}.$$
\[thm:SPDEEx\]Let Assumption \[asm:propflowregwcorrec\]$(\beta)$ hold for some $\beta >2$. Then for each stopping time $\tau\le T$ and $\beta'\in [1,\beta)$, there exists a unique process $u(\tau)$ in $\mathfrak{C}_{cts}^{\beta'}({\mathbf{R}}^d;{\mathbf{R}}^d)$ that solves \eqref{eq:SPDEIntro}. Moreover, ${\mathbf{P}}$-a.s. $u_t(\tau,x)=Y^{-1}_t(\tau,x)$ for all $(t,x)\in [0,T]\times\mathbf{R}^d$ and for each $\epsilon>0$ and $p\geq 2,$ there is a constant $N=N(d,p,N_{0},T,\beta',\epsilon)$ such that $${\mathbf{E}}\left[\sup_{s,t\leq T}|r_1^{-(1+\epsilon)}u_t(s)| _{0}^{p}\right]+{\mathbf{E}}\left[\sup_{s,t\leq T}|r_1^{-\epsilon}\nabla u_t(s)| _{\beta'-1}^{p}\right]\leq N.$$
\[rem:sigmazero\] It is clear by the proof of this theorem that if $\sigma\equiv 0$, then we only need to assume that Assumption \[asm:propflowregwcorrec\] $(\beta)$ holds for some $\beta >1$.
Now, consider the SPDE given by $$\begin{aligned}
\label{eq:SPDE}
d\bar u_t (x)&=\left(\frac{1}{2}\sigma
^{i\varrho}_t(x) \sigma^{j\varrho}_t(x)\partial_{ij}\bar u_t(x)+ b^i_t(x)\partial_i\bar u_t(x)\right)d t +\sigma^{i\varrho}_t(x) \partial_i\bar u_{t}(x)dw^{\varrho}_t,\;\;\tau <t\le T,\notag \\
\bar u_t(x) &=x,\;\;t\leq \tau.\end{aligned}$$
This SPDE differs from the one given in \eqref{eq:SPDEIntro} by its first-order terms. In order to obtain an existence and uniqueness theorem for this equation, we have to impose additional assumptions on $\sigma$.
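To motivate the form of the flow used below (this is an informal coefficient comparison), observe that \eqref{eq:SPDE} is \eqref{eq:SPDEIntro} with $(b,\sigma)$ replaced by $(\tilde b,\tilde \sigma):=(-\hat b,-\sigma)$. Indeed, with this choice, $$\frac{1}{2}\tilde\sigma^{i\varrho}_t\tilde\sigma^{j\varrho}_t=\frac{1}{2}\sigma^{i\varrho}_t \sigma^{j\varrho}_t,\qquad -\tilde\sigma^{i\varrho}_t\partial_i\bar u_t\,dw^{\varrho}_t=\sigma^{i\varrho}_t\partial_i\bar u_t\,dw^{\varrho}_t,$$ and the first-order coefficient in the drift becomes $$-\left(\tilde b^i_t-\tilde\sigma^{j\varrho}_t\partial_j\tilde\sigma^{i\varrho}_t\right)=\hat b^i_t+\sigma^{j\varrho}_t\partial_j\sigma^{i\varrho}_t=b^i_t,$$ as in \eqref{eq:SPDE}. This suggests solving \eqref{eq:SPDE} via the inverse of the flow generated by the drift $-\hat b$ and diffusion $-\sigma$.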
We introduce the following assumption for a real number $\beta >2$.
\[asm:propflowregwocorrec\] There is a constant $N_0>0$ such that for all $(\omega,t)\in \Omega\times [0,T]$, $$|r_1^{-1}b_t|_0+
|\nabla b_t|_{\beta -1}+| \sigma_t|_{\beta+1 }\leq N_{0}.$$
For each $\tau\le T$, consider the stochastic flow $\bar Y_t=\bar Y_t(\tau,x)$, $(t,x) \in [0,T]\times \mathbf{R}^d$, generated by the SDE $$\begin{aligned}
d\bar Y_{t} &=-\hat b_t(\bar Y_t)dt-\sigma^{\varrho}_t (\bar Y_{t}) dw^{\varrho}_{t}, \;\;\tau <t\le T,\\
\bar Y_{t} &=x,\;\;t\leq \tau.\notag
\end{aligned}$$ If Assumption \[asm:propflowregwocorrec\]($\beta$) holds for some $\beta>2$, then for all $(\omega,t,x)\in \Omega\times [0,T]\times {\mathbf{R}}^d$, $$|\hat b_t(x)|\le |b_t(x)|+|\sigma_t(x)||\nabla \sigma_t(x)|\le N_0(N_0+1)+N_0|x|$$ and $$|\nabla \hat b_t|_{\beta-1}\le |\nabla b_t|_{\beta-1}+| \sigma_t|_{\beta-1} |\nabla^2 \sigma_t|_{\beta-1}+|\nabla \sigma_t|_{\beta-1}^2\le N_0+2N_0^2,$$ which immediately implies the following corollary of Theorem \[thm:SPDEEx\].
\[cor:SPDEExredform\]Let Assumption \[asm:propflowregwocorrec\]$(\beta)$ hold for some $\beta >2$. Then for each stopping time $\tau\le T$ and $\beta'\in [1,\beta)$, there exists a unique process $\bar u(\tau)$ in $\mathfrak{C}_{cts}^{\beta'}({\mathbf{R}}^d;{\mathbf{R}}^d)$ that solves \eqref{eq:SPDE}. Moreover, ${\mathbf{P}}$-a.s. $\bar u_t(\tau,x)=\bar Y^{-1}_t(\tau,x)$ for all $(t,x)\in [0,T]\times\mathbf{R}^d$ and for each $\epsilon>0$ and $p\geq 2,$ there is a constant $N=N(d,p,N_{0},T,\beta',\epsilon)$ such that $${\mathbf{E}}\left[\sup_{s,t\leq T}|r_1^{-(1+\epsilon)}\bar u_t(s)| _{0}^{p}\right]+{\mathbf{E}}\left[\sup_{s,t\leq T}|r_1^{-\epsilon}\nabla \bar u_t(s)| _{\beta'-1}^{p}\right]\leq N.$$
Properties of stochastic flows
==============================
Homeomorphism property of flows
-------------------------------
In this subsection, we collect some results about flows of jump SDEs that we will need. In particular, we present sufficient conditions that guarantee the homeomorphism property of flows of jump SDEs. First, let us introduce the following assumption, which is the usual linear growth and Lipschitz condition on the coefficients $b,\sigma$, and $H$ of the SDE .
\[asm:lineargrowthlipschitz\]There is a constant $N_{0}>0$ such that for all $(\omega ,t,x,y)\in \Omega\times[0,T]\times {\mathbf{R}}^{2d}$, $$\begin{aligned}
|b_t(x)|+|\sigma_t (x)| &\leq N_{0}(1+|x|), \\
|b_t(x)-b_t(y)|+|\sigma_t (x)-\sigma_t(y)| &\leq
N_{0}|x-y|.\end{aligned}$$Moreover, for all $(\omega ,t,x,y,z)\in \Omega\times[0,T]\times {\mathbf{R}}^{2d}\times Z,$ $$\begin{aligned}
|H_t(x,z)|&\leq
K_{1}(t,z)(1+|x|),\\
|H_t(x,z)-H_t(y,z)| &\leq K_{2}(t,z)|
x-y|, \end{aligned}$$where $K_1,K_2: \Omega \times[ 0,T]\times Z\rightarrow \mathbf{R}_+$ are $\mathcal{P}_{T}\otimes \mathcal{Z}$-measurable functions satisfying $$K_{1}(t,z)+K_2(t,z)+\int_{Z}\left(K_{1}(t,z)^{2}+K_2(t,z)^2\right)\pi (dz)\leq
N_{0},$$ for all $(\omega,t,z)\in \Omega\times [0,T]\times Z$.
It is well known that under this assumption there exists a unique strong solution $X_t(s,x)$ of \eqref{eq:SDEIntro} (see, e.g., Theorem 3.1 in [@Ku04]). We will also make use of the following assumption.
\[asm:Hdiffeoasm\] For all $(\omega,t,x,z)\in \Omega\times [0,T]\times {\mathbf{R}}^d\times Z$, $H_t(x,z) $ is differentiable in $x$, and there are constants $\eta\in (0,1)$ and $N_{\kappa}>0$ such that for all $(\omega ,t,x, z)\in$ $ \{(\omega ,t,x,z)\in \Omega \times
[ 0,T]\times \mathbf{R}^d\times Z$ $:|\nabla H_{t}$ $(\omega ,x,z)|>\eta \},$ $$\left\vert\left( I_{d}+\nabla
H_t(x,z)\right) ^{-1}\right\vert \leq N_{\kappa}.$$
The following lemma shows that under Assumptions \[asm:lineargrowthlipschitz\] and \[asm:Hdiffeoasm\], the mapping $x\mapsto x+H_t(x,z)$ from $\mathbf{R}^d$ to $\mathbf{R}^d$ is a diffeomorphism and the gradient of the inverse map is bounded.
\[lem:Hprop\] Let Assumptions \[asm:lineargrowthlipschitz\] and \[asm:Hdiffeoasm\] hold. For each $(\omega,t,z)\in \Omega\times[0,T]\times Z$, the mapping $\tilde{H}_t(\cdot,z):\mathbf{R}^d\rightarrow\mathbf{R}^d$ defined by $\tilde{H}_t(x,z):=x+H_t(x,z)$ is a diffeomorphism and $$| \tilde{H}_t^{-1}(x,z)|\leq \bar N N_0+\bar N|x|\quad \textrm{and} \quad |\nabla \tilde{H}_t^{-1}(x,z)| \leq \bar N,$$ where $\bar N:=(1-\eta)^{-1}\vee N_{\kappa}.$
\(1) On the set $\{(\omega ,t,x,z)\in \Omega \times
[ 0,T]\times\mathbf{R}^d\times Z:|\nabla H_t(\omega,x,z)|\le \eta \}$, the Neumann series gives $$\left|\left( I_{d}+\nabla H_t(\omega,x,z)\right)^{-1}\right| =\left| \sum_{n=0}^{\infty }(-1)^{n}[\nabla H_t(\omega,x,z)]^{n}\right|\le \sum_{n=0}^{\infty}\eta^{n}= \frac{1}{1-\eta}.$$It follows from Assumption \[asm:Hdiffeoasm\] that for all $\omega,t,x,$ and $z$, the mapping $\nabla \tilde{H}_t(x,z)=I_d+\nabla H_t(x,z)$ has an inverse bounded by $\bar N$. Therefore, by Theorem 0.2 in [@DeHoIm13] the mapping $\tilde{H}_t(\cdot,z):\mathbf{R}^{d}\rightarrow \mathbf{R}^{d}$ is a global diffeomorphism. Moreover, for all $\omega,t,x,$ and $z$, $$| \tilde{H}_t^{-1}(x,z)-\tilde{H}_t^{-1}(y,z)|\le \bar N|x-y|,$$ or equivalently, $$|\tilde{H}_t(x,z)-\tilde{H}_t(y,z)|\ge \bar N^{-1}|x-y|.$$ Taking $y=0$ and noting that $|\tilde{H}_t(0,z)|=|H_t(0,z)|\le K_1(t,z)$, we obtain $$|\tilde{H}_t(x,z)|+K_1(t,z)\ge \bar N^{-1}|x|,$$ and hence $$|\tilde{H}_t^{-1}(x,z)|\le \bar NK_1(t,z)+\bar N|x|\le \bar NN_0+\bar N|x|.$$
The following estimates are essential in the proof of the homeomorphic property of the flow and the derivation of moment estimates of the inverse flow. We refer the reader to Theorem 3.2 and Lemmas 3.7 and 3.9 in [@Ku04] and Lemma 4.5.6 in [@Ku97] ($H\equiv 0$ case) for the proof of the following lemma.
\[lem:Direct Flow Estimates\]Let Assumption \[asm:lineargrowthlipschitz\] hold.
For each $p\geq 2,$ there is a constant $N=N(p,N_{0},T)$ such that for all $s,\bar s\in [0,T]$ and $x,y\in \mathbf{R}^d,$ $$\label{ineq:growthdirectposp}
{\mathbf{E}}\left[\sup_{t\leq T}r_{1}(X_{t}(s,x))^{p}\right]\leq Nr_{1}(x)^{p},$$ $$\label{ineq:estdirectdifftposp}
{\mathbf{E}}\left[\sup_{t\leq T}|X_{t}(s,x) -X_{t}(s,y)
|^{p} \right]\leq N|x-y|^{p}.$$
If Assumption \[asm:Hdiffeoasm\] holds, then for each $p\in\mathbf{R}$, there is a constant $N=N(p,N_{0},T,\eta,N_{\kappa})$ such that for all $s\in[0,T]$ and $x,y\in\mathbf{R}^d$, $$\label{ineq:estdirectgrowthnegp}
{\mathbf{E}}\left[\sup_{t\leq T}r_{1}(X_{t}(s,x))^{p}\right]\leq Nr_{1}(x)^{p},$$ and $$\label{ineq:estdirectdiffnegp}
{\mathbf{E}}\left[\sup_{t\leq T}|X_{t}(s,x) -X_{t}(s,y)
|^{p}\right]\leq N|x-y|^{p}.$$
In the next proposition, we collect some facts about the homeomorphic property of the flow. Actually, the homeomorphism property has been shown in [@QiZh08] to hold under the log-Lipschitz condition (i.e. one uses Bihari’s inequality instead of Gronwall’s inequality), but we do not pursue this here.
\[prop:homeomorphism\]Let Assumptions \[asm:lineargrowthlipschitz\] and \[asm:Hdiffeoasm\] hold.
There exists a modification of the strong solution $X_t(s,x),$ $(s,t,x)\in [0,T]^2\times\mathbf{R}^d$, of \eqref{eq:SDEIntro}, also denoted by $X_t(s,x)$, that is càdlàg in $s$ and $t$ and continuous in $x$. Moreover, for each stopping time $\tau\le T$, ${\mathbf{P}}$-a.s. for all $t\in
[0, T]$, the mappings $X_{t}(\tau ,\cdot),X_{t-}(\tau,\cdot):\mathbf{R}^d\rightarrow\mathbf{R}^d$ are homeomorphisms and the inverse of $X_{t}(\tau,\cdot ),$ denoted by $X_t^{-1}(\tau,\cdot )$, is càdlàg in $t$ and continuous in $x$, and $X_{t-}^{-1}(\tau,\cdot )$ coincides with the inverse of $X_{t-}(\tau,\cdot ).$ In particular, if $(x_{n})_{n\ge 1}$ is a sequence in $\mathbf{R}^d$ such that $\lim_{n\rightarrow \infty }x_{n}=x$ for some $x\in\mathbf{R}^d$, then ${\mathbf{P}}$-a.s. $$\lim_{n\rightarrow \infty }\sup_{t\leq T}|X_{t}^{-1}(\tau
,x_{n})-X_{t}^{-1}(\tau,x)|=0.$$ Furthermore, for each $\beta'\in [0,1)$, ${\mathbf{P}}$-a.s. $X(\tau,\cdot) \in D([0,T];{\mathcal{C}}_{loc}^{\beta'}({\mathbf{R}}^d;{\mathbf{R}}^d))$ and for all $\epsilon>0$ and $p\ge 2$, there is a constant $N=N(d,p,N_0,T,\beta',\epsilon)$ such that $$\label{ineq:estimateofdirect}
{\mathbf{E}}\left[\sup_{t\le T} |r_1^{-(1+\epsilon)}X_t(\tau)|_{\beta'}^p\right]\le N.$$
If $H\equiv 0$, then ${\mathbf{P}}$-a.s. for all $s,t\in[0,T]$, the mappings $X_{t}(s,x)$ and $X_{t}^{-1}(s,x)$ are continuous in $s,t,$ and $x.$ Moreover, for each $\beta'\in [0,1]$, ${\mathbf{P}}$-a.s. $X(\cdot,\cdot) \in C([0,T]^2;{\mathcal{C}}_{loc}^{\beta'}({\mathbf{R}}^d;{\mathbf{R}}^d))$ and for each $\epsilon>0$ and $p\ge 2$, there is a constant $N=N(d,p,N_0,T,\beta',\epsilon)$ such that $$\label{ineq:estimateofdirectcts}
{\mathbf{E}}\left[\sup_{s,t\le T} |r_1^{-(1+\epsilon)}X_t(s)|_{\beta'}^p\right]\le N.$$
\(1) Owing to Assumptions \[asm:lineargrowthlipschitz\] and \[asm:Hdiffeoasm\], by Lemma \[lem:Hprop\], for all $\omega,t$ and $z$, the mapping $\tilde{H}_t(x,z):=x+H_t(x,z)$ is a homeomorphism (in fact, it is a diffeomorphism) in $x$ and $\tilde{H}^{-1}_t(x,z)$ has linear growth and is Lipschitz. This implies that the assumptions of Theorem 3.5 in [@Ku04] hold and hence there is a modification of $X_t(s,x)$, also denoted $X_t(s,x)$, such that for all $s\in [0,T]$, ${\mathbf{P}}$-a.s. for all $t\in [0,T]$, $X_t(s,\cdot )$ is a homeomorphism. Following [@Ku04], for each $(s,t,x)\in [0,T]^2\times\mathbf{R}^d$, we set $$\label{def:flowdef}
\bar X_t(s,x) =\left\{ \begin{array}{cc}
x & t\le s\\
X_t(0,X_s^{-1}(0,x)) & t\ge s,
\end{array}\right.$$ and remark that ${\mathbf{P}}$-a.s. $\bar X_t(s,x)$ is càdlàg in $s$ and $t$ and continuous in $x$, and ${\mathbf{P}}$-a.s. for all $(s,t)\in [0,T]^2$, $\bar X_t(s,\cdot )$ is a homeomorphism, and $\bar X_t(s,x)$ is a version of $X_t(s,x)$ (the equation started at $s$). Fix a stopping time $\tau\le T$. We will now show that $\bar X_t(\tau,x)=\bar X_t(s,x)|_{s=\tau}$ (i.e. $\bar X_t(s,x)$ evaluated at $s=\tau$) is a version of $X_t(\tau,x)$. Define the sequence of stopping times $(\tau _{n})_{n\ge 1}$ by $$\tau _{n}=\sum_{k=1}^{n-1}\frac{kT}{n}\mathbf{1}_{\left\{ \frac{(k-1)T}{n}\le \tau <
\frac{kT}{n}\right\} }+T\mathbf{1}_{\left\{\tau\ge \frac{(n-1)T}{n}\right\}}.$$ For each $n$ and $x$, let $X_t^{(n)}=X_{t}^{(n)}(x)=\bar X_t(\tau_n,x)$, $t\in [0,T]$. It follows that for each $n$, $t$, and $x$, ${\mathbf{P}}$-a.s. for all $k\in \{1,\ldots,n\}$, $$X_t^{(n)}(x)\mathbf{1}_{\{\tau_n=\frac{kT}{n}\}}= X_t\left(\frac{kT}{n},x\right)\mathbf{1}_{\{\tau_n=\frac{kT}{n}\}} ,$$ and hence $$\begin{aligned}
X^{(n)}_t(x)\mathbf{1}_{\{\tau_n=\frac{kT}{n}\}}&=\mathbf{1}_{\{\tau_n=\frac{kT}{n}\}}x+\mathbf{1}_{\{\tau_n=\frac{kT}{n}\}}\int_{]\frac{kT}{n},\frac{kT}{n}\vee t]}b_r(X_{r}^{(n)}(x))dr\\
&\quad +\mathbf{1}_{\{\tau_n=\frac{kT}{n}\}}\int_{]\frac{kT}{n},\frac{kT}{n}\vee t]}\sigma^{\varrho}
_r(X_{r}^{(n)}(x))dw^{\varrho}_{r}\\
&+\mathbf{1}_{\{\tau_n=\frac{kT}{n}\}}\int_{]\frac{kT}{n},\frac{kT}{n}\vee t]}\int_Z H
_r(X_{r}^{(n)}(x),z)q(dr,dz).\end{aligned}$$ Since $\Omega$ is the disjoint union of the sets $\left\{ \tau _{n}=\frac{kT}{n}\right\}$, $k\in\{1,\ldots,n\}$, it follows that $X_{t}^{(n)}(x)$ solves $$\begin{aligned}
X_{t}^{(n)}(x)&=x+\int_{]\tau _{n},\tau _{n}\vee t]}b_r
(X_{r}^{(n)}(x))dr+\int_{]\tau _{n},\tau _{n}\vee t]}\sigma^{\varrho}_r
(X_{r}^{(n)}(x))dw ^{\varrho}_{r}\\
&\quad +\int_{]\tau _{n},\tau _{n}\vee t]}\int_ZH_r
(X_{r}^{(n)}(x),z)q(dr,dz).\end{aligned}$$Thus, by uniqueness, we have that for each $t$ and $x$, ${\mathbf{P}}$-a.s. $\bar X_{t}(\tau_n,x)=X^{(n)}_t(x)=X_{t}(\tau
_{n},x)$. Since $\tau\le \tau_n\le \tau+T/n$ by construction, $\tau_n$ converges to $\tau$ from above, and it is easy to check that for each $t$ and $x$, ${\mathbf{P}}$-a.s. $X_t(\tau_n,x)$ converges to $X_t(\tau,x)$ as $n$ tends to infinity. Since $\bar X_t(s,x)$ is càdlàg in $s$, we also have that $\bar X_t(\tau_n,x)$ converges to $\bar X_t(\tau,x)$. Therefore, $\bar X_t(\tau,x)$ is a version of $X_{t}(\tau,x)$ for all $t$ and $x$. We identify $X_t(s,x)$ and $\bar X_t(s,x)$ for all $(s,t,x)\in [0,T]^2\times \mathbf{R}^d$. Using Lemma \[lem:Direct Flow Estimates\](1) and Corollary \[cor:Kolmogorov Embedding\], we obtain that ${\mathbf{P}}$-a.s. $X_{\cdot}(\tau,\cdot) \in D([0,T];{\mathcal{C}}_{loc}^{\beta'}({\mathbf{R}}^d;{\mathbf{R}}^d))$ and that the estimate \eqref{ineq:estimateofdirect} holds. Note here that for each $\beta \ge 0$, the Fréchet spaces $D([0,T];{\mathcal{C}}_{loc}^{\beta}({\mathbf{R}}^d;{\mathbf{R}}^d))$ and ${\mathcal{C}}_{loc}^{\beta}({\mathbf{R}}^d;\allowbreak D([0,T];\mathbf{R}^{d}))$ are equivalent. It follows from the proof of Theorem 3.5 in [@Ku04] that for every stopping time $\bar \tau \le T$, ${\mathbf{P}}$-a.s. $$\label{eq:surjective}
\lim_{|x|\rightarrow\infty }\inf_{t\le T}|X_t(\bar \tau,x)|=\infty.$$ Let $(t_{n})\subseteq[0,T]$ and $(x_n)\subseteq\mathbf{R}^d$ be convergent sequences with limits $t$ and $x$, respectively. First, assume $t_{n}<t$ for all $n$. By , for every stopping time $\bar \tau\le T$, ${\mathbf{P}}$-a.s. the sequence $\left(X_{t_{n}}^{-1}(\bar \tau
,x_{n})\right) $ is uniformly bounded. Since ${\mathbf{P}}$-a.s. $X_{\cdot}(\bar \tau,\cdot)\in D([0,T];{\mathcal{C}}_{loc}^{\beta'}({\mathbf{R}}^d;{\mathbf{R}}^d))$ for $\beta'\in (0,1)$, we have $$\begin{gathered}
\lim_{n\rightarrow\infty} \left(X_{t-}(\bar \tau,X_{t_{n}}^{-1}(\bar \tau
,x_{n}))-X_{t-}(\bar \tau ,X_{t-}^{-1}(\bar \tau ,x) \right)=\lim_{n\rightarrow\infty} \left(X_{t-}(\bar \tau,X_{t_{n}}^{-1}(\bar \tau
,x_{n}))-x \right)\\
=\lim_{n\rightarrow\infty} \left(X_{t_n}(\bar \tau,X_{t_{n}}^{-1}(\bar \tau
,x_{n}))-x \right)=\lim_{n\rightarrow\infty} (x_n-x )=0,\end{gathered}$$ which implies $$\lim_{n\rightarrow\infty }X_{t_{n}}^{-1}(\bar \tau ,x_{n})= X_{t-}^{-1}(\bar \tau,x).$$ A similar argument is used for $t_{n}>t$.

\(2) It follows from the definition that $\bar X_t(s,x)$ and $\bar X^{-1}_t(s,x)$ are continuous in $s$, $t$, and $x$. Moreover, applying Lemma \[lem:Direct Flow Estimates\](1) and Corollary \[cor:Kolmogorov Embedding\], we get that ${\mathbf{P}}$-a.s. $X_{\cdot}(\cdot,\cdot) \in C([0,T]^2;{\mathcal{C}}_{loc}^{\beta'}({\mathbf{R}}^d;{\mathbf{R}}^d))$ and that the estimate \eqref{ineq:estimateofdirectcts} holds. The continuity of $X_s(\tau,x)$ with respect to $s$ plays an important role in the proof of Theorem \[thm:SPDEEx\].
Moment estimates of inverse flows: Proof of Theorem \[thm:diffeoandmomest\]
---------------------------------------------------------------------------
In this subsection, under Assumption \[asm:regularitypropflow\] ($\beta$), $\beta\ge 1$, we derive moment estimates for the flow $X_t(\tau,x)$ and its inverse $X_t^{-1}(\tau,x)$ in weighted Hölder norms and complete the proof of Theorem \[thm:diffeoandmomest\]. In particular, we will apply Corollaries \[cor:SobolevFull\] and \[cor:Kolmogorov Embedding\] with the Banach spaces $V=D([0,T];\mathbf{R}^{d})$ and $V=C([0,T]^{2};\mathbf{R}^{d})$.
\[p:Regularity of direct flow\]Let Assumption \[asm:regularitypropflow\]$(\beta)$ hold for some $\beta>1$.
For each stopping time $\tau\le T$ and $\beta '\in [1,\beta)$, ${\mathbf{P}}$-a.s. $\nabla X_{\cdot}(\tau,\cdot )\in D([0,T];{\mathcal{C}}_{loc}^{\beta'-1}({\mathbf{R}}^d;{\mathbf{R}}^d))$ and for each $\epsilon>0$ and $p\ge 2$, there is a constant $N=N(d,p,N_{0},T,\beta',\epsilon)$ such that $$\label{ineq:momentestdirectjmp}
{\mathbf{E}}\left[\sup_{t\leq T}| r_1^{-\epsilon}\nabla X_{t}(\tau)| _{\beta'-1}^{p}\right]\le N.$$Moreover, for each $p\ge 2$, there is a constant $N=N(d,p,N_{0},\beta,T)$ such that for all multi-indices $\gamma$ with $1\le |\gamma| \le \left[ \beta \right] $ and all $x\in
\mathbf{R}^{d}$, $$\label{ineq:GradientMomentbd}
{\mathbf{E}}\left[\sup_{t\leq T}|\partial ^{\gamma }X_{t}(\tau
,x)|^{p}\right] \leq N$$ and for all multi-indices $\gamma$ with $|\gamma|=[ \beta]^- $ and all $x,y\in \mathbf{R}^{d}$, $$\label{ineq:GradientMomentdiff}
{\mathbf{E}}\left[\sup_{t\leq T}|\partial ^{\gamma }X_{t}(\tau ,x)
-\partial ^{\gamma }X_{t}( \tau ,y) |^{p}\right] \leq
N|x-y|^{\{\beta\}^+p}.$$
If $H\equiv 0$, then for each $\beta '\in [1,\beta)$, ${\mathbf{P}}$-a.s. $\nabla X_{\cdot}(\cdot,\cdot )\in C([0,T]^2;{\mathcal{C}}_{loc}^{\beta'-1}({\mathbf{R}}^d;{\mathbf{R}}^d))$ and for each $\epsilon>0$ and $p\ge 2$, there is a constant $N=N(d,p,N_{0},T,\beta',\epsilon)$ such that $$\label{ineq:momentestdirectcts}
{\mathbf{E}}\left[\sup_{s,t\leq T}| r_1^{-(1+\epsilon)}\nabla X_{t}(s)| _{\beta'-1}^{p}\right]\le N.$$ Moreover, for each $p\ge 2$, there is a constant $N=N(d,p,N_0,T,\beta) $ such that for all multi-indices $\gamma$ with $|\gamma|=[\beta]^{-}$ and all $s,\bar s\in[0,T]$ and $x\in \mathbf{R}^d$, $$\label{ineq:derivsmsbar}
{\mathbf{E}}\left[\sup_{t\le T}|\partial^{\gamma}X_t(s,x)-\partial^{\gamma}X_t(\overline{s},x)|^p\right]\le N|s-\overline{s}|^{p/2}.$$
\(1) Fix a stopping time $\tau\le T$ and write $X_t(\tau,x)=X_t(x)$. First, let us assume that $[\beta]^{-}=1$. It follows from Theorem 3.4 in [@Ku04] that ${\mathbf{P}}$-a.s. for all $t$, $X_t(\tau,\cdot)$ is continuously differentiable and $U_t=\nabla X_{t}(\tau,x) $ satisfies $$\begin{aligned}
\label{eq:GradientofFlow}
dU_t &=\nabla
b_t(X_{t})U_{t}dt+\nabla \sigma^{\varrho}_t (X_{t-})U_{t}dw^{\varrho}_{t}+
\int_{Z}\nabla H_t(X_{t-},z )U_{t-}q(dt,dz),\;\;\tau <t\le T,\notag\\
\nabla X_t&=I_d,\;\;t\le \tau,\end{aligned}$$ where $I_d$ is the $d\times d$-dimensional identity matrix. Taking $\lambda =0$ in the estimates (3.10) and (3.11) in Theorem 3.3 in [@Ku04], we obtain and . Then applying Corollary \[cor:Kolmogorov Embedding\] with $V=D([0,T];\mathbf{R}^d)$, we have that $\nabla X_{\cdot}(\cdot)\in D([0,T];{\mathcal{C}}^{\beta'-1}_{loc}({\mathbf{R}}^d;{\mathbf{R}}^d))$ and that the estimate holds. The proof for $[\beta]^{-}>1$ follows by induction (see, e.g., the proof of Theorem 6.4 in [@Ku97]).
\(2) The estimate is given in equation (19) of Theorem 4.6.4 in [@Ku97]. The remaining items of part (2) then follow in exactly the same way as part (1), with the only exception being that we apply Corollary \[cor:Kolmogorov Embedding\] with $V=C([0,T]^2;\mathbf{R}^d)$.
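For orientation, the linear equation for $U$ can be read off, at least formally, by differentiating the flow equation with respect to the initial point; the following is a heuristic sketch, assuming the coefficients are smooth enough to differentiate under the integral signs (the rigorous statement is the cited Theorem 3.4 in [@Ku04]), and the displayed SDE for $X$ is reconstructed from the coefficients $b$, $\sigma^{\varrho}$, and $H$ appearing in the proof: writing $$X_t(x)=x+\int_{]\tau\wedge t,t]}b_s(X_s(x))ds+\int_{]\tau\wedge t,t]}\sigma^{\varrho}_s(X_{s-}(x))dw^{\varrho}_s+\int_{]\tau\wedge t,t]}\int_Z H_s(X_{s-}(x),z)q(ds,dz)$$ and setting $U_t=\nabla X_t(x)$, differentiation in $x$ and the chain rule yield $$U_t=I_d+\int_{]\tau\wedge t,t]}\nabla b_s(X_s)U_sds+\int_{]\tau\wedge t,t]}\nabla \sigma^{\varrho}_s(X_{s-})U_{s-}dw^{\varrho}_s+\int_{]\tau\wedge t,t]}\int_Z\nabla H_s(X_{s-},z)U_{s-}q(ds,dz),$$ which is the integral form of the linear equation displayed above.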
\[lem:gradientinverseest\] Let Assumption \[asm:regularitypropflow\]$ (\beta)$ hold for some $\beta>1$.
For each stopping time $\tau\le T$ and $\beta '\in [1,\beta)$, ${\mathbf{P}}$-a.s. $\nabla X_{\cdot }(\tau ,\cdot)^{-1}\in
D([0,T];\allowbreak{\mathcal{C}}_{loc}^{\beta'-1}\allowbreak({\mathbf{R}}^d;{\mathbf{R}}^d))$ and for each $p\geq 2$, there is a constant $N=N(d,p,N_0,T,\eta,N_{\kappa}) $ such that for all $x,y\in \mathbf{R}^{d}$ $$\label{ineq:momentestgradinvbd}
{\mathbf{E}}\left[\sup_{t\leq T}|\nabla X_{t}(\tau ,x)^{-1}|^{p}\right]\leq N$$and $$\label{ineq:momentestgradinvdiff}
{\mathbf{E}}\left[\sup_{t\leq T}|\nabla X_{t}(\tau ,x)^{-1}-\nabla X_{t}(\tau
,y)^{-1}|^{p}\right]\leq N|x-y|^{((\beta-1)\wedge 1 )p}.$$
If $H\equiv0$, then for each $\beta '\in [1,\beta)$, ${\mathbf{P}}$-a.s. $\nabla X_{\cdot}(\cdot ,\cdot)^{-1}\in C([0,T]^{2};{\mathcal{C}}_{loc}^{\beta'-1}({\mathbf{R}}^d;{\mathbf{R}}^d))$ and for each $p\geq 2$, there is a constant $N=N(d,p,N_0,T) $ such that for all $s,\bar s\in [0,T]$ and $x\in \mathbf{R}^d$, $$\label{ineq:momentestgraddiffctsinv}
{\mathbf{E}}\left[\sup_{t\le T}|\nabla X_t(s,x)^{-1}-\nabla X_t(\bar{s},x)^{-1}|^{p}\right]\leq N|s-\bar{s}|^{p/2}.$$
\(1) Let $\tau\le T$ be a fixed stopping time and write $X_t(\tau,x)=X_t(x)$. Using Itô’s formula (see also Lemma 3.12 in [@Ku04]), we deduce that $\bar U_t=[\nabla X_t(x)]^{-1}$ satisfies $$\begin{aligned}
\label{eq:Inversegradient}
d\bar U_t&=
\bar U_t\left(\nabla \sigma^{\varrho}_t(X_{t}) \nabla
\sigma^{\varrho}_t(X_{t})-\nabla b_t(X_{t})\right)dt -\bar{U}_{t}\nabla \sigma^{\varrho}_t(X_{t})dw^{\varrho}_{t} \notag\\
&\quad -\int_{Z}\bar{U}_{t-}\nabla H_t(X_{t-},z )(I_d+\nabla H_t(X_{t-},z
))^{-1}q(dt,dz) \notag \\
&\quad +\int_Z\bar{U}_t\nabla H_t(X_{t-},z )^{2}(I_d+\nabla
H_t(X_{t-},z ))^{-1}\pi (dz) dt,\;\;\tau <t\le T,\notag\\
\bar{U}_t&=I_d, \;\; t\le \tau.\end{aligned}$$ Since matrix inversion is a smooth mapping, the coefficients of the linear equation satisfy the same assumptions as the coefficients of the linear equation , and hence the derivation of the estimates and proceeds in the same way as the derivation of the analogous estimates for . To see that ${\mathbf{P}}$-a.s. $[\nabla X_{\cdot}(\cdot)]^{-1}\in D([0,T];{\mathcal{C}}_{loc}^{\beta'-1}({\mathbf{R}}^d;{\mathbf{R}}^d))$, we only need to note that ${\mathbf{P}}$-a.s. $\nabla X_{\cdot}(\cdot )\in D([0,T];{\mathcal{C}}_{loc}^{\beta'-1}({\mathbf{R}}^d;{\mathbf{R}}^d))$ and that matrix inversion is a smooth mapping. Part (2) follows with the obvious changes.
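As a consistency check, one can verify formally that the solution $\bar U$ of the linear equation above is the pointwise inverse of $U=\nabla X$; the following is a sketch in the continuous case $H\equiv 0$, using the Itô product rule and suppressing the spatial argument (all coefficients are evaluated at $X_t$): $$d(\bar U_tU_t)=(d\bar U_t)U_t+\bar U_t\,dU_t+d[\bar U,U]_t,$$ where $$(d\bar U_t)U_t+\bar U_t\,dU_t=\bar U_t\left(\nabla \sigma^{\varrho}_t\nabla \sigma^{\varrho}_t-\nabla b_t\right)U_tdt-\bar U_t\nabla \sigma^{\varrho}_tU_tdw^{\varrho}_t+\bar U_t\nabla b_tU_tdt+\bar U_t\nabla \sigma^{\varrho}_tU_tdw^{\varrho}_t$$ and $$d[\bar U,U]_t=-\bar U_t\nabla \sigma^{\varrho}_t\nabla \sigma^{\varrho}_tU_tdt,$$ so all terms cancel, $d(\bar U_tU_t)=0$, and hence $\bar U_tU_t=I_d$ for $\tau\le t\le T$.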
As an immediate corollary, we obtain the diffeomorphism property of the flow $X_t(\tau,x)$ under Assumption \[asm:regularitypropflow\]$(\beta)$, $\beta>1$.
\[c:DiffeomorphismProperty\] Let Assumption \[asm:regularitypropflow\]$(\beta)$ hold for some $\beta>1$.
For each stopping time $\tau\le T$ and $\beta'\in [1,\beta)$ the mapping $X_{t}(\tau,\cdot):\mathbf{R}^{d}\rightarrow \mathbf{R}^{d}$ is a ${\mathcal{C}}_{loc}^{\beta'}({\mathbf{R}}^d;{\mathbf{R}}^d)$-diffeomorphism, ${\mathbf{P}}$-a.s. $X_{\cdot}(\tau,\cdot),
X_{\cdot}^{-1}(\tau,\cdot )\in D([0,T];{\mathcal{C}}_{loc}^{\beta'}({\mathbf{R}}^d;{\mathbf{R}}^d))$ and for each $t\in[0,T]$, $X_{t-}^{-1}(\tau)$ coincides with the inverse of $X_{t-}(\tau)$.
If $H\equiv 0$, then for each $\beta'\in [1,\beta)$, ${\mathbf{P}}$-a.s. $X_{\cdot}(\cdot,\cdot),X^{-1}_{\cdot}(\cdot,\cdot) \in C([0,T]^{2},{\mathcal{C}}_{loc}^{\beta'}({\mathbf{R}}^d;{\mathbf{R}}^d))$.
\(1) Fix a stopping time $\tau\le T$ and write $X_t(\tau,x)=X_t(x)$. It follows from Propositions \[prop:homeomorphism\] and \[p:Regularity of direct flow\] that ${\mathbf{P}}$-a.s. for all $t$, the mappings $X_{t}(\cdot),X_{t-}(\cdot):\mathbf{R}^d\rightarrow \mathbf{R}^d$ are homeomorphisms and $X_{\cdot }(\cdot ) \in D([0,T];{\mathcal{C}}^{\beta'}_{loc}({\mathbf{R}}^d;{\mathbf{R}}^d))$. Moreover, by Lemma \[lem:gradientinverseest\], ${\mathbf{P}}$-a.s. for all $t$ and $x$, the matrix $\nabla X_{t}\left( \tau ,x\right) $ has an inverse. Therefore, by Hadamard’s Theorem (see, e.g., Theorem 0.2 in [@DeHoIm13]), ${\mathbf{P}}$-a.s. for all $t$, $X_{t}(\cdot)\ $ is a diffeomorphism. Using the chain rule, ${\mathbf{P}}$-a.s. for all $t$ and $x$, $$\label{eq:inversematrixidentity}
\nabla X_{t}^{-1}(x)=\nabla X_{t}(X_{t}^{-1}(x))^{-1}.$$ Since, by Lemma \[lem:gradientinverseest\], ${\mathbf{P}}$-a.s. $[\nabla X_{\cdot }(\cdot )]^{-1}\in D([0,T];{\mathcal{C}}_{loc}^{\beta'-1}({\mathbf{R}}^d;{\mathbf{R}}^d))$ and we know that ${\mathbf{P}}$-a.s. for all $t$, $X^{-1}_{t}(\cdot)$ is differentiable, it follows from that ${\mathbf{P}}$-a.s. $$\nabla X_{\cdot }(X_{\cdot}^{-1}(\cdot))^{-1}\in D([0,T];{\mathcal{C}}_{loc}^{(\beta'-1)\wedge 1}({\mathbf{R}}^d;{\mathbf{R}}^d)).$$ One then proceeds inductively to complete the proof. Making the obvious changes in the proof of part (1), we obtain part (2).
We conclude with a derivation of Hölder moment estimates of the inverse flow $X_t^{-1}(\tau,x)$, which will complete the proof of Theorem \[thm:diffeoandmomest\].
\(1) Fix a stopping time $\tau\le T$ and write $X_t(\tau,x)=X_t(x)$. Fix $\epsilon>0$. First, let us assume that $\left[ \beta \right] ^{-}=1$. Set $J_{t}(x)=|\det \nabla
X_{t}(x)|$. It is clear from that for each $p\ge 2$ and $x$, there is a constant $N=N(d,p,N_{0},T)$ such that $$\label{ineq:estofdeterm}
{\mathbf{E}}[\sup_{t\le T}|J_t(x)|^p]\le N.$$ Using the change of variable $(\bar{x},\bar y)=(X^{-1}_{t}(x),X^{-1}_t(y))$, Fatou’s lemma, Fubini’s theorem, Hölder’s inequality, and the inequalities , , , , and , for any $\delta\in (0,1]$ and $p>\frac{d}{\epsilon}$, we obtain that there is a constant $N=N(d,p,N_{0},T,\delta,\eta,N_{\kappa},\epsilon)$ such that $$\begin{aligned}
{\mathbf{E}}\sup_{t\leq T}\int_{\mathbf{R}^{d}}|r_{1}(x)^{-(1+\epsilon)}X_{t}^{-1}(x)|^pdx &\leq \int_{\mathbf{R}
^{d}}|\bar{x}| ^{p}{\mathbf{E}}\sup_{t\leq
T}[r_{1}(X_{t}( \bar{x}) )^{-p(1+\epsilon)}J_{t}(\bar{x})]d\bar{x}\\
&\leq N{\mathbf{E}}\int_{\mathbf{R}^{d}}r_1(\bar{x})
^{-p\epsilon}d\bar{x}\le N\end{aligned}$$ and $$\begin{gathered}
{\mathbf{E}}\sup_{t\leq T} \int_{|x-y|<1}\frac{|r_1^{-(1+\epsilon)}(x) X_{t}^{-1}(x)-r_1^{-(1+\epsilon)}(y) X_{t}^{-1}(y)|^{p}
}{|x-y|^{2d+\delta p}}dxdy\\
\le \int_{|\bar x-\bar y|<1}{\mathbf{E}}\sup_{t\leq T}
\left[\frac{r_1^{-p(1+\epsilon)}(X_{t}(\bar{x}))|\bar x-\bar y|^pJ_{t}(\bar{x})J_t(\bar{y})}{|X_{t}(\bar{x})-X_{t}( \bar{y})|^{2d+\delta p}}\right] d\bar{x}d\bar{y}\\
+ \int_{|\bar x-\bar y|<1}{\mathbf{E}}\sup_{t\leq T}
\left[\frac{|\bar y|^p|r_1^{-(1+\epsilon)}(X_{t}(\bar{x}))-r_1^{-(1+\epsilon)}(X_{t}(\bar{y}))|^pJ_{t}(\bar{x})J_t(\bar{y})}{|X_{t}(\bar{x})-X_{t}( \bar{y})|^{2d+\delta p}}\right] d\bar{x}d\bar{y}\\
\le N\int_{|\bar{x}-\bar{y}|<1}\frac{r_{1}(\bar{x})^{-p(1+\epsilon) }}{|\bar{x}-\bar{y}|^{2d-(1-\delta
)p}}d\bar{x}d\bar{y}+N\int_{|\bar{x}-\bar{y}|<1}\frac{r_1(\bar x)^{-p(1+\epsilon)}+r_1(\bar y)^{-p(1+\epsilon)}}{|\bar{x}-\bar{y}|^{2d-(1-\delta
)p}}d\bar{x}d\bar{y}\le N.\end{gathered}$$ Similarly, making use of the inequalities , , , , , , and , for any $p>\frac{d}{\epsilon}\vee\frac{d}{\beta -\beta '}\vee \frac{d}{2 -\beta '}$, we get $$\begin{aligned}
{\mathbf{E}}\sup_{t\leq T}\int_{\mathbf{R}^{d}}|r_1^{-\epsilon}(x)\nabla
X_{t}^{-1}(x) |^{p}dx&\le \int_{\mathbf{R}^{d}}{\mathbf{E}}\sup_{t\leq T}[r_{1}(X_{t}(
\bar{x}) )^{-p\epsilon}| [\nabla
X_{t}(\bar{x})]^{-1}|^{p}J_{t}(\bar{x})]d\bar{x}\\
&\leq N{\mathbf{E}}\int_{\mathbf{R}^{d}}r_{1}(\bar{x})^{-p\epsilon}d\bar{x}\le N\end{aligned}$$ and $$\begin{gathered}
{\mathbf{E}}\sup_{t\leq T} \int_{|x-y|<1}\frac{|r_1^{-\epsilon}(x)\nabla X_{t}^{-1}(x)-r_1^{-\epsilon}(y)\nabla X_{t}^{-1}(y)|^{p}
}{|x-y|^{2d+(\beta'-1)p}}dxdy\\
\leq \int_{|\bar x-\bar y|<1}{\mathbf{E}}\sup_{t\leq T}
\left[\frac{|r_1^{-\epsilon}(X_{t}(\bar{x}))[\nabla X_{t}(\bar{x})]^{-1}-r_1^{-\epsilon}(X_{t}(\bar{y}))[\nabla X_{t}(
\bar{y})]^{-1}|^{p}J_{t}(\bar{x})J_t(\bar{y})}{|X_{t}(\bar{x})-X_{t}( \bar{y})|^{2d+(\beta'-1)p}}\right] d\bar{x}d\bar{y}\end{gathered}$$ $$\le N\int_{|\bar{x}-\bar{y}|<1}\frac{r_{1}(\bar{x})^{-p\epsilon }}{|\bar{x}-\bar{y}|^{2d-(\beta-\beta'
)p}}d\bar{x}d\bar{y}+N\int_{|\bar{x}-\bar{y}|<1}\frac{r_1(\bar x)^{-p\epsilon}+r_1(\bar y)^{-p \epsilon}}{|\bar{x}-\bar{y}|^{2d-(2-\beta')
)p}}d\bar{x}d\bar{y}\le N,$$ where $N=N(d,p,N_{0},T,\beta',\eta,N_{\kappa},\epsilon)$ is a positive constant. Therefore, combining the above estimates and applying Corollary \[cor:SobolevFull\], we have that for all $p\ge 2$, there is a constant $N=N(d,p,N_{0},T,\beta',\eta,N_{\kappa},\epsilon)$ such that $${\mathbf{E}}\left[\sup_{t\leq T}|r_{1}^{-(1+\epsilon )}X^{-1}_{t}(\tau )|_{0}^{p}\right]+{\mathbf{E}}\left[\sup_{t\leq T}|r_{1}^{-\epsilon }\nabla X^{-1}_{t}(\tau )|_{\beta'-1}^{p}\right]\leq N.$$ It is well-known that the inverse map $\mathfrak{I}$ on the set of invertible $d\times d$-dimensional matrices is infinitely differentiable and for each $n$, there is a constant $N=N(n,d)$ such that for all invertible matrices $M$, the $n$th derivative of $\mathfrak{I}$ evaluated at $M$, denoted $\mathfrak{I} ^{(n)}(M)$, satisfies $$\label{ineq:inversemapderivative}
\left\vert \mathfrak{I} ^{(n)}(M)\right\vert \leq N|M^{-n-1}|\le N\left\vert
M^{-1}\right\vert ^{n+1}.$$ We claim that for each $n$ and every multi-index $\gamma$ with $|\gamma|= n$, the components of $\partial^{\gamma} X_t^{-1}(x)$ are polynomials in the entries of $[\nabla X_t(X_t^{-1}(x))]^{-1}$ and $\partial^{\gamma'} \nabla X_t(X_t^{-1}(x))$ for all multi-indices $\gamma'$ with $1\le |\gamma'|\le n-1$. Assume the statement holds for some $n$. By the chain rule, for each $\omega,t,$ and $x$, we have $$\nabla (\nabla X_t(X_t^{-1}(x))^{-1})
=\mathfrak{I}^{(1)}(\nabla X _t(X_t^{-1}(x)))\nabla^2 X_t(X_t^{-1}(x))\nabla X_t (X_t^{-1}(x))^{-1}$$ and for all multi-indices $\gamma'$ with $1\le |\gamma'|\le n-1$, we have $$\nabla (\partial^{\gamma'} \nabla X_t(X_t^{-1}(x)))= \partial^{\gamma'} \nabla^2 X_t(X_t^{-1}(x))\nabla X_t(X_t^{-1}(x))^{-1},$$ where $\nabla ^2 X_t(X_t^{-1}(x))$ is the tensor of second-order derivatives of $X_t(\cdot)$ evaluated at $X_t^{-1}(x)$. This implies that for every multi-index $\gamma$ with $|\gamma|= n+1$, the components of $\partial^{\gamma} X_t^{-1}(x)$ are polynomials in the entries of $\nabla X_t(X_t^{-1}(x))^{-1}$ and $\partial^{\gamma'} \nabla X_t(X_t^{-1}(x))$ for all multi-indices $\gamma'$ with $1\le |\gamma'|\le n$. By induction, the claim is true. Therefore, for $[\beta]^-\ge 2$, using and , we obtain the moment estimates for the inverse flow in almost exactly the same way we did for $[\beta]^-=1$. Making the obvious changes in the proof of part (1), we obtain part (2). This completes the proof of Theorem \[thm:diffeoandmomest\].
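For instance, the case $n=1$ of the induction above can be written out explicitly; the following is a sketch in index notation with summation over repeated indices. Since $\mathfrak{I}^{(1)}(M)H=-M^{-1}HM^{-1}$, so that $|\mathfrak{I}^{(1)}(M)|\le N|M^{-1}|^{2}$ in agreement with the bound on $\mathfrak{I}^{(n)}$ above, the chain rule gives $$\partial_{j}(X_t^{-1})^{i}(x)=\left[(\nabla X_t)^{-1}\right]^{i}_{j}(X_t^{-1}(x)),\qquad \partial_{k}\partial_{j}(X_t^{-1})^{i}(x)=-\left[(\nabla X_t)^{-1}\right]^{i}_{a}\,\partial_{b}\partial_{c}X^{a}_t\,\left[(\nabla X_t)^{-1}\right]^{b}_{j}\left[(\nabla X_t)^{-1}\right]^{c}_{k},$$ where on the right-hand-side all derivatives of $X_t$ and all matrix inverses are evaluated at $X_t^{-1}(x)$; the second derivative is indeed a polynomial in the entries of $[\nabla X_t(X_t^{-1}(x))]^{-1}$ and $\nabla^{2}X_t(X_t^{-1}(x))$, as claimed.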
Strong limit of a sequence of flows: Proof of Theorem \[thm:stronglimit\]
-------------------------------------------------------------------------
Let $\tau\le T$ be a fixed stopping time and write $X_t(\tau,x)=X_t(x)$. For each $n$, let $$Z^{(n)}_t(x) =X^{(n)}_t(x)-X_{t}(x), \;(t,x)\in [0,T]\times\mathbf{R}^d.$$ Throughout the proof we denote by $(\delta _{n})_{n\ge 1}$ a deterministic sequence with $\delta _{n}\rightarrow 0$ as $n\rightarrow \infty $ that may change from line to line. Let $N=N(p,N_{0},T)$ be a positive constant, which may change from line to line. By virtue of Theorem 2.1 in [@Ku04] and , for all $p\geq 2$ and $t,x$ and $n,$ we have $${\mathbf{E}}\left[\sup_{s\leq t}|Z^{(n)}_s(x)|^{p}\right]\leq N{\mathbf{E}}\int_{]0,t]}|Z^{(n)}_s(x)|^{p}ds+ N\delta _{n}r_{1}(x)^{p}.$$ Since the right-hand-side is finite by , applying Gronwall’s lemma we get that for all $x$ and $n$, $$\label{ineq:estZn}
{\mathbf{E}}[\sup_{t\leq T}|Z^{(n)}_t(x)|^{p}]\leq N\delta
_{n}r_{1}(x)^{p}.$$ Similarly, by , for all $x$ and $n,$ we have $$\label{ineq:gradboundZn}
{\mathbf{E}}\left[\sup_{t\leq T}|\nabla Z^{(n)}_t(x)|^{p}\right]\leq N\delta _{n}.$$ Using , for all $x,y,$ and $n$, we obtain $${\mathbf{E}}\left[\sup_{t\leq T}|Z^{(n)}_t(x)-Z^{(n)}_t(y)|^{p}\right]\le|x-y|^p {\mathbf{E}}\sup_{t\leq T}\int_{0}^{1}|\nabla
Z^{(n)}_t(y+\theta (x-y))|^pd\theta \leq N|x-y|^{p}.$$It follows immediately from that for all $x,y,$ and $n$, $${\mathbf{E}}[\sup_{t\le T}|\nabla Z^{(n)}_t(x)-\nabla Z^{(n)}_t(y)|^{p}]\leq N|x-y|^{((\beta-1)\wedge 1)p}.$$ Thus, by Corollary \[cor:SobEmbLimit\], we have $$\label{eq:directsequence}
\lim_{n\rightarrow \infty }\left({\mathbf{E}}\left[\sup_{t\leq T}|r_{1}^{-(1+\epsilon
)}X_{t}^{(n)}-r_{1}^{-(1+\epsilon )}X_{t}|_{0}^{p}\right]+{\mathbf{E}}\left[\sup_{t\leq T}|r_{1}^{-\epsilon }\nabla X_{t}^{(n)}-r_{1}^{-\epsilon
}\nabla X_{t}|_{0}^{p}\right]\right)=0.$$ Owing to a standard interpolation inequality for Hölder spaces (see, e.g., Lemma 6.32 in [@GiTr01]), for each $\delta \in (0,1)$ and $\bar \beta \in (\beta ',\beta)$, there is a constant $C_{\delta }$ such that$$\begin{aligned}
{\mathbf{E}}\left[\sup_{t\leq T}|r_{1}^{-\epsilon }\nabla X_{t}^{(n)}-r_{1}^{-\epsilon }\nabla X_{t}|_{\beta '-1}^{p}\right] &\le \delta {\mathbf{E}}\left[\sup_{t\leq T}|r_{1}^{-\epsilon }\nabla
X_{t}^{(n)}-r_{1}^{-\epsilon }\nabla X_{t}|_{\bar{\beta}-1}^{p}\right] \\
&\quad +C_{\delta }{\mathbf{E}}\left[\sup_{t\leq T}|r_{1}^{-\epsilon }\nabla
X_{t}^{(n)}-r_{1}^{-\epsilon }\nabla X_{t}|_{0}^{p}\right],\end{aligned}$$ and hence since $$\sup_{n}{\mathbf{E}}\left[\sup_{t\leq T}| r_{1}^{-\epsilon }\nabla
X^{(n)}|_{\bar{\beta}-1}^{p}\right]+{\mathbf{E}}\left[\sup_{t\leq
T}|r_{1}^{-\epsilon }\nabla X_t |_{\bar{\beta}-1}^{p}\right]<\infty,$$ we have $$\lim_{n\rightarrow\infty}{\mathbf{E}}[\sup_{t\leq T}|r_{1}^{-\epsilon }\nabla X_{t}^{(n)}-r_{1}^{-\epsilon }\nabla X_{t}|_{\beta '-1}^{p}]=0.$$By Theorem \[thm:diffeoandmomest\], Corollary \[cor:SobEmbLimit\], and the interpolation inequality for Hölder spaces used above, in order to show $$\lim_{n\rightarrow \infty }{\mathbf{E}}\left[\sup_{t\leq T} | r_1^{-(1+\epsilon)}X^{(n);-1}_t(\tau )-r_1^{-(1+\epsilon)}X^{-1}_t(\tau)|_{0}^{p}\right]=0$$ and $$\lim_{n\rightarrow \infty }{\mathbf{E}}\left[\sup_{t\leq T} |
r_1^{-\epsilon}\nabla X^{(n);-1}_t (\tau) -r_1^{-\epsilon}\nabla X^{-1}_{t}(\tau ) |
_{\beta'-1}^{p}\right]=0,$$ it suffices to show that for each $x$, $$\label{eq:pointwiseinverselimit}
d{\mathbf{P}}-\lim_{n\rightarrow\infty}\sup_{t\leq T}|X_{t}^{(n);-1}(x)-X_{t}^{-1}(x)|=0$$and$$\label{eq:pointwisegradientlimit}
d{\mathbf{P}}-\lim_{n\rightarrow\infty}\sup_{t\leq T}|\nabla X_{t}^{(n);-1}(x)-\nabla
X_{t}^{-1}(x)|= 0.$$ For each $n$, define $$\Theta _{t}^{(n)}(x)=r_{1}(X_{t}^{(n)}(x))^{-1}-r_{1}(X_{t}(x))^{-1}, \; (t,x)\in [ 0,T]\times \mathbf{R}^{d}.$$For all $\omega ,t,x,$ and $n,$ we have $$|\Theta _{t}^{(n)}(x)| \leq r_{1}(X_{t}^{(n)}(x))^{-1}r_{1}(X_{t}(x))^{-1}|Z_{t}^{(n)}(x)|,$$ and hence using Hölder’s inequality, , and , we obtain that for all $p\ge 2$ and $x$, there is a constant $N=N(p,N_{0},T,\eta,N_{\kappa})$ such that for all $n$, $${\mathbf{E}}[\sup_{t\leq T}|\Theta _{t}^{(n)}(x)|^{p}] \leq Nr_1(x)^{-p}\delta _{n}.$$ Furthermore, since $$|\nabla \Theta _{t}^{(n)}(x)| \leq r_{1}(X_{t}^{(n)}(x))^{-2}|\nabla X_{t}^{(n)}(x)|+ r_{1}(X_{t}(x))^{-2}|\nabla X_{t}(x)|,$$ for all $\omega,t,x,$ and $n$, applying and , for all $p\ge 2$, $x$, and $n$, we get $${\mathbf{E}}\left[\sup_{t\leq T}| r_1(x)\Theta _{t}^{(n)}(x)-r_1(y)\Theta _{t}^{(n)}(y)|^{p}\right] \leq N|x-y|^p.$$ Then owing to Corollary \[cor:SobEmbLimit\], for each $p\geq 2,$ $$\label{eq:Thetaconvtozero}
\lim_{n\rightarrow \infty }{\mathbf{E}}\left[\sup_{t\leq T}|\Theta _{t}^{(n)}|^{p}_0\right]=0.$$We claim that for each $R>0$, $$\label{nf10}
d{\mathbf{P}}-\lim_{n\rightarrow\infty}E (n,R):=d{\mathbf{P}}-\lim_{n\rightarrow\infty}\sup_{t\leq T}|
X_{t}^{(n);-1}-X_{t}^{-1}|_{0;\left\{ \left\vert x\right\vert \leq R\right\} }=0.$$Fix $R>0$. It is enough to show that every subsequence of $E
(n)=E (n,R)$ has a sub-subsequence converging to $0$, ${\mathbf{P}}$-a.s. Owing to and , for a given subsequence $(E(n_{k}))$, we can always find a sub-subsequence (still denoted $(E(n_{k}))$ to avoid double indices) such that ${\mathbf{P}}$-a.s., $$\label{eq:limitdirectRbar}
\lim_{k\rightarrow \infty }\sup_{t\leq T}|X_{t}^{(n_{k})}-X_{t}|_{\beta';\left\{ \left\vert x\right\vert \leq \bar R\right\} } =0, \;\;\forall \;\bar R>0,$$ and $$\label{eq:oneoverlimit}
\lim_{k\rightarrow \infty }\sup_{t\leq T}|
r_{1}(X_{t}^{(n_{k})}(x))^{-1}-r_{1}(X_{t}(x))^{-1}|_0=0.$$ Fix an $\omega $ for which both limits are zero. We will prove that $$\label{eq:inversesubseqconvergefixedomega}
\lim_{k\rightarrow \infty }\sup_{t\leq T}|X_{t}^{(n_{k});-1}(\omega )-X_{t}^{-1}(\omega )|_{0;\left\{ \left\vert x\right\vert \leq R\right\} }
=0.$$Suppose, by contradiction, that (\[eq:inversesubseqconvergefixedomega\]) is not true. Then there exist an $\varepsilon >0$, a subsequence of $(n_{k})
$ (still denoted $(n_{k})$), and sequences $t_{n_{k}}\rightarrow t-$ (or $t_{n_{k}}\rightarrow t+$) and $x_{n_{k}}\rightarrow x$ as $k\rightarrow \infty $ with $\left\vert x_{n_{k}}\right\vert \leq R$ such that (dropping $\omega )$,$$\label{nf12}
|
X_{t_{n_{k}}}^{(n_{k});-1}(x_{n_{k}})-X_{t_{n_{k}}}^{-1}(x_{n_{k}})| \geq \varepsilon .$$Arguing by contradiction and using (\[eq:oneoverlimit\]), we have $$\label{ineq:boundsequence}
\sup_{k}|X_{t_{n_{k}}}^{(n_{k});-1}(x_{n_{k}})|<\infty .$$ Applying , , and the fact that $X_{\cdot}(\cdot),X^{-1}_{\cdot}(\cdot)\in D([0,T];{\mathcal{C}}_{loc}^{\beta'}({\mathbf{R}}^d;{\mathbf{R}}^d))$, we obtain $$\begin{gathered}
\lim_{k\rightarrow \infty
}\left(X_{t-}(X_{t_{n_{k}}}^{(n_{k});-1}(x_{n_{k}}))-X_{t-}(X_{t_{n_{k}}}^{-1}(x_{n_{k}}))\right)=\lim_{k\rightarrow \infty
}\left(X_{t-}(X_{t_{n_{k}}}^{(n_{k});-1}(x_{n_{k}}))-x_{n_{k}}\right)\\
=\lim_{k\rightarrow \infty
}\left(X_{t-}(X_{t_{n_{k}}}^{(n_{k});-1}(x_{n_{k}}))-X_{t_{n_{k}}}^{(n_{k})}(X_{t_{n_{k}}}^{(n_{k});-1}(x_{n_{k}}))\right)\end{gathered}$$ $$\begin{gathered}
=\lim_{k\rightarrow \infty
}\left(X_{t-}(X_{t_{n_{k}}}^{(n_{k});-1}(x_{n_{k}}))-X_{t_{n_{k}}}(X_{t_{n_{k}}}^{(n_{k});-1}(x_{n_{k}}))\right)\\
+\lim_{k\rightarrow \infty
}\left(X_{t_{n_{k}}}(X_{t_{n_{k}}}^{(n_{k});-1}(x_{n_{k}}))-X_{t_{n_{k}}}^{(n_{k})}(X_{t_{n_{k}}}^{(n_{k});-1}(x_{n_{k}}))\right)
=0,\end{gathered}$$ which contradicts , and hence proves , , and . For each $n$, define $$\bar{U}^{(n)}_t=\bar{U}^{(n)}(t,x)=\nabla
X_{t}^{(n)}(x) ^{-1}\quad \textrm{and}\quad \bar{U}(t)=\bar{U}(t,x)
=\nabla X_{t}(x)^{-1},\;\;(t,x)\in [0,T]\times \mathbf{R}^{d}.$$ Using and and repeating the arguments given above, for each $p\geq 2,$ we get $$\label{f3}
\lim_{n}{\mathbf{E}}[\sup_{t\leq T}|r_{1}^{-\epsilon }\bar{U}^{(n)}_t-r_{1}^{-\epsilon }\bar{U}_t|_{\beta'-1}^{p}]=0.$$ Then (\[f3\]) and (\[nf10\]) imply that for each $R>0$, $$\begin{gathered}
d{\mathbf{P}}-\lim_{n\rightarrow\infty}\sup_{t\leq T}|\nabla
X_{t}^{(n);-1}(x)-\nabla X_{t}^{-1}(x)|_{0;\left\{ \left\vert x\right\vert \leq R\right\} }\\
=d{\mathbf{P}}-\lim_{n\rightarrow\infty}\sup_{t\leq T}|\nabla
X_{t}^{(n)}(X_{t}^{(n);-1}(x))^{-1}-\nabla X_{t}(X_{t}^{-1}(x))^{-1}|_{0;\left\{ \left\vert x\right\vert \leq R\right\} }=0,\end{gathered}$$ which yields and completes the proof.
Classical solution of an SPDE: Proof of Theorem \[thm:SPDEEx\] {#sec:classicalsolutionctsexist}
==============================================================
Fix a stopping time $\tau\le T$. By virtue of Theorem \[thm:diffeoandmomest\], we only need to show that $Y^{-1}(\tau)=Y^{-1}_t(\tau,x)$ solves and that this is the unique solution. Suppose we have shown that $Y^{-1}(s,x)$, $s\in [0,T]$, solves (i.e., with $\tau$ deterministic). It is then straightforward to conclude that $Y^{-1}(\tau')$ solves for finite-valued stopping times $\tau'$. We can then use an approximation argument (see the proof of Proposition \[prop:homeomorphism\]) to show that $Y^{-1}(\tau)=Y^{-1}_t(\tau,x)$ solves . Thus, it suffices to take $\tau$ deterministic. Let $
u_t(x)=u_t(s,x)=Y_t^{-1}(s,x)$, $(s,t,x)\in [0,T]^2\times\mathbf{R}^d$. Fix $(s,t,x)\in [0,T]^2\times\mathbf{R}^d$ with $s<t$ and write $Y_t(x)=Y_t(s,x)$. Let $((t^M_n)_{0\le n\le M})_{M\ge 1}$ be a sequence of partitions of the interval $[s,t]$ such that for each $M\ge 1$, $(t^M_n)_{0\le n\le M}$ has mesh size $(t-s)/M$. Fix $M$ and set $(t_n)_{0\le n\le
M}=(t^M_n)_{0\le n\le M}$. Immediately, we obtain $$\label{eq:Telescoping Sum}
u_t(x)-x=\sum_{n=0}^{M-1}(u_{t_{n+1}}(x)-u_{t_{n}}(x)).$$We will use Taylor’s theorem to expand each term in the sum on the right-hand-side of . By Taylor’s theorem, for each $n$ and $y$, we have $$\begin{gathered}
u_{t_{n+1}}(Y_{t_{n+1}}(y))-u_{t_n}(Y_{t_{n+1}}(y))=y-u_{t_n}(Y_{t_{n+1}}(y))=u_{t_n}(Y_{t_n}(y))-u_{t_n}(Y_{t_{n+1}}(y))\notag \\ \label{eq:Taylor for u at X}
=\nabla u_{t_n}(Y_{t_n}(y))(Y_{t_n}(y)-Y_{t_{n+1}}(y))-(Y_{t_n}(y)-Y_{t_{n+1}}(y))^*\Theta_n(Y_{t_n}(y))(
Y_{t_n}(y)-Y_{t_{n+1}}(y)),\end{gathered}$$where $$\Theta_n^{ij}(z)=\int_{0}^{1}(1-\theta )\partial_{ij}u_{t_n}\left(z+\theta
(Y_{t_{n+1}}(Y_{t_n}^{-1}(z))-z)\right)d\theta .$$ Since for each $n$, $
Y_{t_{n+1}}(s,x)=Y_{t_{n+1}}(t_n,Y_{t_n}(s,x)),
$ we have $$Y_{t_{n+1}}(Y_{t_{n}}^{-1}( x))=Y_{t_{n+1}}(t_{n},x)$$ and hence substituting $y=Y_{t_{n}}^{-1}(x)$ into , for each $n$, we get $$\label{eq:firstexpansion}
u_{t_{n+1}}(x)-u_{t_{n}}(x)=A_{n}+B_{n},$$where $$A_{n}:=\nabla u_{t_{n}}(x)(x-Y_{t_{n+1}}(t_{n},x))-(
x-Y_{t_{n+1}}(t_{n},x))^*\Theta_n(x)(
x-Y_{t_{n+1}}(t_{n},x))$$ and $$B_{n}:= (u_{t_{n+1}}(x)-u_{t_{n}}(x)) -(
u_{t_{n+1}}(Y_{t_{n+1}}(t_{n},x))-u_{t_{n}}(Y_{t_{n+1}}(t_{n},x))).$$ Applying Taylor’s theorem once more, for each $n$, we obtain $$\label{eq:secondexpansion}
B_{n}=C_{n}+D_{n},$$where $$C_{n}:=(\nabla u_{t_{n+1}}(x)-\nabla u_{t_{n}}(x))(x-Y_{t_{n+1}}(t_{n},x)),$$ $$D_{n}:=-( x-Y_{t_{n+1}}(t_{n},x))^*\tilde{\Theta}_{n}(x)(
x-Y_{t_{n+1}}(t_{n},x)),$$ and $$\tilde{\Theta}_{n}(x)^{ij}:=\int_{0}^{1}(1-\theta )\partial_{ij}(u_{t_{n+1}}-u_{t_{n}})(x+\theta
(Y_{t_{n+1}}(t_{n},x)-x))d\theta.$$ Thus, combining , , and , ${\mathbf{P}}$-a.s. we have $$\label{eq:expansion of u}
u_t(x)-x=
\sum_{n=0}^{M-1}(A_n+C_n+D_n).$$Now, we will derive the limit of the right-hand-side of .
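Before stating the limits precisely, it may help to record the heuristic bookkeeping behind them; the following is a sketch, suppressing remainder terms and writing $\Delta t=t_{n+1}-t_{n}$ and $\Delta w^{\varrho}=w^{\varrho}_{t_{n+1}}-w^{\varrho}_{t_{n}}$. Over a small interval, $$\left(x-Y_{t_{n+1}}(t_{n},x)\right)^{i}\approx -b^{i}_{t_{n}}(x)\Delta t-\sigma^{i\varrho}_{t_{n}}(x)\Delta w^{\varrho},\qquad \Theta_{n}(x)\approx \tfrac{1}{2}\nabla^{2}u_{t_{n}}(x),$$ so that, using $\Delta w^{\varrho}\Delta w^{\varrho'}\approx \delta^{\varrho\varrho'}\Delta t$, $$A_{n}\approx -\left[\tfrac{1}{2}\sigma^{i\varrho}_{t_{n}}(x)\sigma^{j\varrho}_{t_{n}}(x)\partial_{ij}u_{t_{n}}(x)+b^{i}_{t_{n}}(x)\partial_{i}u_{t_{n}}(x)\right]\Delta t-\sigma^{i\varrho}_{t_{n}}(x)\partial_{i}u_{t_{n}}(x)\Delta w^{\varrho},$$ and summing over $n$ produces the integrals in part (1) of the claim below; the sums of $C_{n}$ and $D_{n}$ account for the remaining correction terms in parts (2) and (3).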
\[claim:Continuous SDE Limiting Procedure\]
$$\begin{aligned}
d{\mathbf{P}}-\lim_{M\rightarrow\infty} \sum_{n=0}^{M-1}A_{n}&
=-\int_{]s,t]}[\frac{1}{2}\sigma ^{i\varrho}_r(x)
\sigma_r ^{j\varrho}(x)\partial_{ij}u_r(x)+b^{i}_r(x)\partial_iu_r(x)]dr \\
&\quad -\int_{]s,t]}\sigma_r^{i\varrho}
(x)\partial_iu_r(x)dw^{\varrho}_{r};\end{aligned}$$
$d{\mathbf{P}}-\lim_{M\rightarrow\infty}\sum_{n=0}^{M-1}D_{n}=0;$
$
d{\mathbf{P}}-\lim_{M\rightarrow\infty}\sum_{n=0}^{M-1}C_{n}=\int_{]s,t]}\sigma
^{j\varrho}_r(x)\partial_j \sigma^{i\varrho}_r(x)\partial_iu_r(x)dr+\int_{]s,t]}\sigma
^{i\varrho}_r(x) \sigma ^{j\varrho}_r(x)\partial_{ij}u_r(x)dr.
$
\(1) For each $n$, we have$$\begin{aligned}
\nabla u_{t_{n}}(x)\left( x-Y_{t_{n+1}}(t_{n},x)\right)& =-\int_{]t_{n},t_{n+1}]}b^i_r(x)
\partial_i u_{t_{n}}(x)dr-\int_{]t_{n},t_{n+1}]}\sigma^{i\varrho}_r (x)\partial_i u_{t_{n}}(x)dw^{\varrho}_{r}\\&\quad +R^{(1)}_n+R^{(2)}_n,\end{aligned}$$where $$R^{(1)}_n:=\int_{]t_{n},t_{n+1}]}\left(b^i_r(x)-b^i_r(Y_{r}(t_{n},x))\right)\partial_i u_{t_{n}}(x)dr$$ and $$R^{(2)}_n:=\int_{]t_{n},t_{n+1}]}[\sigma_r^{i\varrho}
(x)-\sigma^{i\varrho}_r (Y_{r}(t_{n},x))]\partial_i u_{t_{n}}(x)dw^{\varrho}_{r}.$$ Since $b$ and $\sigma$ are Lipschitz, there is a constant $N=N(N_{0},T)$ such that $$\sum_{n=0}^{M-1}\left \vert R^{(1)}_n\right\vert \leq N\sup_{s \le
r\leq t}|\nabla u_r(x)|\sup_{|r_1-r_2|\leq \frac{t}{M}}|x-Y_{r_1}(r_2,x)|$$ and $$\int_{]s,t]}\left\vert\sum_{n=0}^{M-1}\mathbf{1}
_{]t_{n},t_{n+1}]}(r)\left(\sigma_r^{i\cdot}
(x)-\sigma^{i\cdot}_r (Y_{r}(t_{n},x))\right)\partial_i u_{t_{n}}(x)\right\vert ^{2}dr$$ $$\leq N\sup_{s \le r\leq t}|\nabla u_r(x)|^2\sup_{|r_1-r_2|\leq \frac{t}{ M}}|x-Y_{r_1}(r_2,x)|^{2}.$$ Owing to the joint continuity of $Y_t(s,x)$ in $s$ and $t$ and the dominated convergence theorem for stochastic integrals, we obtain $$\label{eq:limitofremainder1}
d{\mathbf{P}}-\lim_{M\rightarrow\infty}\sum_{n=0}^{M-1}(R^{(1)}_n+R^{(2)}_n)=0.$$ In a similar way, this time using the continuity of $\nabla u_t(x)$ in $t$ and the linear growth of $b$ and $\sigma$, we get $$d{\mathbf{P}}-\lim_{M\rightarrow\infty}\sum_{n=0}^{M-1}\left(-\int_{]t_{n},t_{n+1}]}b^i_r(x)
\partial_i u_{t_{n}}(x)dr-\int_{]t_{n},t_{n+1}]}\sigma^{i\varrho}_r (x)\partial_i u_{t_{n}}(x)dw^{\varrho}_{r}\right)$$ $$=-\int_{]s,t]}b^{i}_r(x)\partial_iu_r(x)dr-\int_{]s,t]}\sigma^{i\varrho}
_r(x)\partial_iu_r(x)dw_{r}^{\varrho}.$$ For each $n$, we have $$-(x-Y_{t_{n+1}}(t_{n},x))^* \Theta_n(x)(
x-Y_{t_{n+1}}(t_{n},x))=S^{(1)}_n+S^{(2)}_n,$$where $S^{(1)}_n(t,x)$ has only $drdr$ and $drdw_r^{\varrho}$ terms and where $$\begin{aligned}
S^{(2)}_n:&=-\frac{1}{2}\left( \int_{]t_{n},t_{n+1}]}\sigma^{i\varrho}_r
(Y_{r}(t_{n},x))dw_{r}^{\varrho}\right)\partial_{ij}u_{t_{n}}(x)\left(\int_{]t_{n},t_{n+1}]}\sigma^{j\varrho}_r
(Y_{r}(t_{n},x))dw_{r}^{\varrho}\right)\\
&\quad -\left(\int_{]t_{n},t_{n+1}]}\sigma^{i\varrho}_r (Y_{r}(t_{n},x))dw^{\varrho}_{r}\right)
\left(\Theta^{ij}_n(x)-\frac{1}{2}\partial_{ij}u_{t_{n}}(x)\right)
\left(\int_{]t_{n},t_{n+1}]}\sigma^{j\varrho}_r (Y_{r}(t_{n},x))dw^{\varrho}_{r}\right).\end{aligned}$$ Since $$\begin{aligned}
\left\vert \Theta^{ij}_n(x)-\frac{1}{2}
\partial_{ij}u_{t_{n}}(x)\right\vert
&=\left\vert \int_{0}^{1}(1-\theta )(\partial_{ij}u_{t_{n}}(x+\theta
(Y_{t_{n+1}}(t_{n},x)-x))-\partial_{ij}u_{t_{n}}(x))d\theta \right\vert\\
&\leq N\sup_{|r_1-r_2|\leq \frac{t}{M},\theta \in
(0,1)}|\partial_{ij}u_{r_{1}}(x+\theta
(Y_{r_2}(r_1,x)-x))-\partial_{ij}u_{r_{1}}(x)|,\end{aligned}$$ proceeding as in the derivation of and using the joint continuity of $\partial_{ij}u_t(x)$ in $t$ and $x$, the continuity of $Y_t(s,x)$ in $s$ and $t$, and standard properties of the stochastic integral (i.e., Thm. 2 (5) in [@LiSh89] and the stochastic dominated convergence theorem), we obtain $$d{\mathbf{P}}-\lim_{M\rightarrow\infty}\sum_{n=0}^{M-1} S^{(2)}_n =-\frac{1}{2}\int_{]s,t]}\sigma ^{i\varrho}_r(x) \sigma ^{j\varrho}_r(x)\partial_{ij}u_r(x)dr.$$ Similarly, by appealing to standard properties of the stochastic integral and the properties stated in Theorem \[thm:diffeoandmomest\](2), we have $
d{\mathbf{P}}-\lim_{M\rightarrow\infty}\sum_{n=0}^{M-1}S^{(1)}_n=0,
$ which completes the proof of part (1). The proof of part (2) is similar to the proof of part (1), so we proceed to the proof of part (3). We know that for each $n$, $
Y_{t_{n+1}}(x)=Y_{t_{n+1}}(t_n,Y_{t_n}(x)).
$ Thus, for each $n$, we have $
u_{t_{n+1}}(x)=u_{t_n}(Y^{-1}_{t_{n+1}}(t_n,x)),
$ and hence by the chain rule, $$\label{eq:Inverse identity for Du}
\nabla u_{t_{n+1}}(x)=\nabla u_{t_n}(Y_{t_{n+1}}^{-1}(t_n,x))\nabla
Y_{t_{n+1}}^{-1}(t_n,x).$$By and Taylor’s theorem, for each $n$, we get $$\begin{gathered}
C_{n}=(\nabla u_{t_{n+1}}(x)-\nabla u_{t_{n}}(x))(x-Y_{t_{n+1}}(t_{n},x))\\
=\nabla u_{t_n}(Y_{t_{n+1}}^{-1}(t_n,x))(\nabla
Y_{t_{n+1}}^{-1}(t_n,x)-I_d)(x-Y_{t_{n+1}}(t_n,x))\\
+(Y_{t_{n+1}}^{-1}(t_n,x)-x)^*\tilde{\Theta}_n(x)(x-Y_{t_{n+1}}(t_n,x))=:E_{n}+F_{n},\end{gathered}$$ where $$\tilde{\Theta}^{ij}_n(x):=\int_0^1 \partial_{ij}u_{t_n}(x+\theta(Y_{t_{n+1}}^{-1}(t_n,x)-x))d\theta.$$ By Itô’s formula, for each $n$, we have (see also Lemma 3.12 in [@Ku04]) $$\begin{aligned}
\nabla Y_{t_{n+1}}(t_n,x)^{-1}&=I_d-\int_{]t_n,t_{n+1}]} \nabla Y_{r}(t_n,x)^{-1}\nabla \sigma_r^{\varrho}
(Y_{r}(t_n,x))dw^{\varrho}_{r}\\
&\quad +\int_{]t_n,t_{n+1}]}\nabla Y_{r}(t_n,x)^{-1}\left(\nabla
\sigma^{\varrho}_r(Y_{r}(t_n,x)) \nabla \sigma^{\varrho}_r (Y_{r}(t_n,x))-\nabla b_r(Y_{r}(t_n,x))\right)dr,\end{aligned}$$ and hence $$\nabla Y_{t_{n+1}}^{-1}(t_n)-I_d=\nabla Y_{t_{n+1}}^{-1}(t_n,Y^{-1}_{t_{n+1}}(t_n,x))-I_{d}=:G_{t_n,t_{n+1}}^{(1)}(Y^{-1}_{t_{n+1}}(t_n,x))+G_{t_n,t_{n+1}}^{(2)}(Y^{-1}_{t_{n+1}}(t_n,x)),$$ where for $y\in\mathbf{R}^d$, $$G_{t_n,t_{n+1}}^{(1)}(y):=\int_{]t_n,t_{n+1}]} \nabla Y_{r}(t_n,y)^{-1}\left(\nabla
\sigma_r^{\varrho}(Y_{r}(t_n,y)) \nabla \sigma_r ^{\varrho}(Y_{r}(t_n,y))-\nabla b_r(Y_{r}(t_n,y))\right)dr$$ and $$G_{t_n,t_{n+1}}^{(2)}(y):=-\int_{]t_n,t_{n+1}]} \nabla Y_{r}(t_n,y)^{-1}\nabla \sigma_r^{\varrho}
(Y_{r}(t_n,y))dw^{\varrho}_{r}.$$ By the Burkholder-Davis-Gundy inequality, Hölder’s inequality, and the inequalities , , and , for each $p\geq 2$, there is a constant $N=N(p,d,N_{0},T)$ such that for all $x_1$ and $x_2$, $${\mathbf{E}}\left[|G_{t_n,t_{n+1}}^{(2)}(x_1)|^p\right]\le NM^{-p/2+1}\int_{]t_n,t_{n+1}]}{\mathbf{E}}\left[| \nabla Y_{r}(t_n,x_1)^{-1}|^p|\nabla \sigma_r
(Y_{r}(t_n,x_1))|^p\right]dr\le N M^{-p/2}$$ and $$\begin{gathered}
{\mathbf{E}}\left[|G_{t_n,t_{n+1}}^{(2)}(x_1)-G_{t_n,t_{n+1}}^{(2)}(x_2)|^{p}\right]
\leq N M^{-p/2+1}\int_{]t_n,t_{n+1}]}{\mathbf{E}}\left[|\nabla Y_{r}(t_n,x_1)^{-1}-\nabla Y_{r}(t_n,x_2)^{-1}|^{p}\right]dr\\
+ NM^{-p/2+1}\int_{]t_n,t_{n+1}]}\left( {\mathbf{E}}\left[|\nabla Y_{r}(t_n,x_1)^{-1}|^{2p}\right]\right)
^{1/2}\left( {\mathbf{E}}\left[|Y_{r}(t_n,x_1)-Y_{r}(t_n,x_2)|^{2p}\right]\right) ^{1/2}dr\\
\leq NM^{-p/2}|x_1-x_2|^{p}.\end{gathered}$$ Thus, by Corollary \[cor:Kolmogorov Embedding\], we obtain that for all $p\ge 2$, $\epsilon>0$, and $\delta <1$, there is a constant $N=N(p,d,\delta, N_{0},T)$ such that $$\label{ineq:Kolmogorov Theorem for SI Terms}
{\mathbf{E}}\left[|r_1^{-\epsilon}G_{t_n,t_{n+1}}^{(2)}|_{\delta }^{p}\right]\leq NM^{-p/2}.$$ For each $n$, we have $$\begin{aligned}
E_{n}&=\nabla u_{t_n}(Y_{t_{n+1}}^{-1}(t_n,x))G_{t_n,t_{n+1}}^{(1)}(Y_{t_{n+1}}^{-1}(t_n,x))(x-Y_{t_{n+1}}(t_n,x))\\
&\quad + \nabla u_{t_n}(x)
G_{t_n,t_{n+1}}^{(2)}(x)
(x-Y_{t_{n+1}}(t_n,x))\\
&\quad +\nabla u_{t_n}(Y_{t_{n+1}}^{-1}(t_n,x))(
G_{t_n,t_{n+1}}^{(2)}(Y_{t_{n+1}}^{-1}(t_n,x))-G_{t_n,t_{n+1}}^{(2)}(x) ) (x-Y_{t_{n+1}}(t_n,x))\\
&\quad + (\nabla u_{t_n}(Y_{t_{n+1}}^{-1}(t_n,x))-\nabla u_{t_n}(x))
G_{t_n,t_{n+1}}^{(2)}(x) (x-Y_{t_{n+1}}(t_n,x)).
\end{aligned}$$ One can easily check that $$\label{eq:part1K}
d{\mathbf{P}}-\lim_{M\rightarrow\infty}\sum_{n=0}^{M-1}\nabla u_{t_n}(Y_{t_{n+1}}^{-1}(t_n,x))G_{t_n,t_{n+1}}^{(1)}(Y_{t_{n+1}}^{-1}(t_n,x))(x-Y_{t_{n+1}}(t_n,x))=0.$$ Since $\nabla u_t(x)$ is jointly continuous in $t$ and $x$ and $Y^{-1}_t(s,x)$ is jointly continuous in $s$ and $t$, we have $$d{\mathbf{P}}-\lim_{M\rightarrow\infty}\sup_n|\nabla u_{t_n^M}(Y_{t_{n+1}^M}^{-1}(t_n^M,x))-\nabla u_{t_n^M}(x)|=0.$$ Moreover, using H[ö]{}lder’s inequality, , and , we get $$\sup_M {\mathbf{E}}\sum_{n=0}^{M-1}|G_{t_n,t_{n+1}}^{(2)}(x)||x-Y_{t_{n+1}}(t_n,x)|<\infty,$$ and hence $$\label{eq:part2K}
d{\mathbf{P}}-\lim_{M\rightarrow\infty}\sum_{n=0}^{M-1}(\nabla u_{t_n}(Y_{t_{n+1}}^{-1}(t_n,x))-\nabla u_{t_n}(x))
G_{t_n,t_{n+1}}^{(2)}(x) (x-Y_{t_{n+1}}(t_n,x))=0.$$ We claim that $$\label{eq:part3K}
d{\mathbf{P}}-\lim_{M\rightarrow\infty}\sum_{n=0}^{M-1}\nabla u_{t_n}(Y_{t_{n+1}}^{-1}(t_n,x))\left(
G_{t_n,t_{n+1}}^{(2)}(Y_{t_{n+1}}^{-1}(t_n,x))-G_{t_n,t_{n+1}}^{(2)}(x) \right) (x-Y_{t_{n+1}}(t_n,x))=0.$$ Set $$J^M=\sum_{n=0}^{M-1}|
G_{t_n,t_{n+1}}^{(2)}(Y_{t_{n+1}}^{-1}(t_n,x))-G_{t_n,t_{n+1}}^{(2)}(x)||x-Y_{t_{n+1}}(t_n,x)|.$$ For each $\bar\delta ,\epsilon \in (0,1)$, we have $${\mathbf{P}}(J^M>\bar \delta )\leq {\mathbf{P}}\left(J^{M}>\bar \delta
,\;\max_{n }|Y_{t_{n+1}}^{-1}(t_n,x)-x|\leq \epsilon \right)+{\mathbf{P}}\left(\max_{n}|Y_{t_{n+1}}^{-1}(t_n,x)-x|>\epsilon \right).$$ By virtue of , there is a deterministic constant $N=N(x)$ independent of $M$ such that for all $\omega\in
V^{M}:=\{\max_{n}|Y_{t_{n+1}}^{-1}(t_n,x)-x|\leq \epsilon \}$, $$J^{M}\leq N \epsilon ^{\delta }\sum_{n=0}^{M-1}[r_1^{-\epsilon}G_{t_n,t_{n+1}}^{(2)}]_{\delta}|x-Y_{t_{n+1}}(t_n,x)|,$$which implies that $${\mathbf{E}}\mathbf{1} _{V^M}J^{M}\leq N\epsilon ^{\delta
}{\mathbf{E}}\sum_{n=0}^{M-1}\left([r_1^{-\epsilon}G_{t_n,t_{n+1}}^{(2)}]_{\delta}^{2}+
|x-Y_{t_{n+1}}(t_n,x)|^{2}\right)
\leq N\epsilon ^{\delta }\sum_{n=0}^{M-1}M^{-1}\leq N\epsilon ^{\delta }.$$Applying Markov’s inequality, we get $${\mathbf{P}}(J^{M}>\bar \delta ,\;\max_{n}|Y_{t_{n+1}}^{-1}(t_n,x)-x|\leq
\epsilon )\leq N\frac{\epsilon ^{\delta }}{\bar\delta },$$and hence for all $\bar \delta>0$, $$\label{eq:Jmconverges}
\lim_{M\rightarrow \infty }{\mathbf{P}}(J^{M}>\bar \delta )=0,$$which yields . Owing to , , and , we have $$d{\mathbf{P}}-\lim_{M\rightarrow\infty}\sum_{n=0}^{M-1}E_{n}=\lim_{M\rightarrow
\infty }\sum_{n=0}^{M-1}\nabla
u_{t_n}(x)G_{t_n,t_{n+1}}^{(2)}(x)(x-Y_{t_{n+1}}(t_n,x)).$$Proceeding as in the proof of part (1) of the claim, we obtain $$\begin{gathered}
\lim_{M\rightarrow \infty }\sum_{n=0}^{M-1}K_{n}\\
=\lim_{M\rightarrow \infty }\sum_{n=0}^{M-1}\nabla
u_{t_n}(x)\int_{]t_n,t_{n+1}]}(\nabla Y_{r}(t_n,x)^{-1}-I_{d})\nabla \sigma^{\varrho}_r
(x)dw^{\varrho}_{r}\int_{]t_n,t_{n+1}]}\sigma^{\varrho}_r(x)dw^{\varrho}_{r}\\
+\lim_{M\rightarrow \infty }\sum_{n=0}^{M-1}\nabla
u_{t_n}(x)\int_{]t_n,t_{n+1}]}\nabla \sigma^{\varrho}_r(x)dw_{r}^{\varrho}\int_{]t_n,t_{n+1}]}\sigma^{\varrho}_r(x)dw^{\varrho}_{r}\\
\label{eq:limitofKn}
=\int_{]s,t]}\sigma
^{j\varrho}_r(x)\partial_j\sigma^{i\varrho}_r(x)\partial_iu_r(x)dr\end{gathered}$$ It is easy to check that for each $n$, $$\begin{aligned}
F_{n}&=(Y_{t_{n+1}}^{-1}(t_n,x)-x)^*\tilde{\Theta}_n(x)(x-Y_{t_{n+1}}(t_n,x))\\
&=:(G_{t_n,t_{n+1}}^{(3)}(Y^{-1}_{t_{n+1}}(t_n,x))+G_{t_n,t_{n+1}}^{(4)}(Y^{-1}_{t_{n+1}}(t_n,x)))^*\tilde{\Theta}_n(x)(x-Y_{t_{n+1}}(t_n,x)),\end{aligned}$$ where for $y\in\mathbf{R}^d$, $$G_{t_n,t_{n+1}}^{(3)}(y):=-\int_{]t_n,t_{n+1}]}b_r (Y_{r}(t_n,y))dr,\quad G_{t_n,t_{n+1}}^{(4)}(y):=-\int_{]t_n,t_{n+1}]}\sigma^{\varrho}_r (Y_{r}(t_n,y))dw^{\varrho}_{r}.$$ Arguing as in the proof of , we get $$d{\mathbf{P}}-\lim_{M\rightarrow\infty}\sum_{n=0}^{M-1}F_{n}=\int_{]s,t]}\sigma ^{i\varrho}_r(x) \sigma ^{j\varrho}_r(x)\partial_{ij}u_r(x)dr,$$which completes the proof of the claim.
By virtue of and Claim \[claim:Continuous SDE Limiting Procedure\], for all $s$ and $t$ with $s\le t$ and $x$, ${\mathbf{P}}$-a.s. $$\label{eq:proofctspde}
u_t(x)=x+\int_{]s,t]} \left( \frac{1}{2}\sigma ^{i\varrho}_r(x) \sigma
^{j\varrho}_r(x)\partial_{ij}u_r(x)-\hat b^i_r(x)\partial_iu_r(x)\right)dr-\int_{]s,t]}\sigma ^{i\varrho}_r(x)\partial_iu_r(x)dw^{\varrho}_{r}.$$ Owing to Theorem \[thm:diffeoandmomest\], $u=u_t(x)$ has a modification that is jointly continuous in $s$ and $t$ and twice continuously differentiable in $x$. It is easy to check that the Lebesgue integral on the right-hand side of has a modification that is continuous in $s$, $t$, and $x$. Thus, the stochastic integral on the right-hand side of has a modification that is continuous in $s,t,$ and $x$, and hence the equality in holds ${\mathbf{P}}$-a.s. for all $s$ and $t$ with $s\le t$ and $x$. This proves that $Y^{-1}(\tau)=Y^{-1}_t(\tau,x)$ solves . However, if $u^1(\tau), u^2(\tau)\in \mathfrak{C}^{\beta'}_{cts}({\mathbf{R}}^d;{\mathbf{R}}^d) $ are solutions of , then applying the Itô-Wentzell formula (see, e.g., Theorem 9 in Chapter 1, Section 4.8 in [@Ro90]), we get that ${\mathbf{P}}$-a.s. for all $t$ and $x$, $$u^1_t(\tau,Y_t(\tau,x))=x=u_t^2(\tau,Y_t(\tau,x)),$$ which implies that ${\mathbf{P}}$-a.s. for all $t$ and $x$, $u^1(\tau)=Y_t^{-1}(\tau,x)=u^2(\tau)$. Thus, $Y^{-1}(\tau)=Y^{-1}_t(\tau,x)$ is the unique solution of in $\mathfrak{C}^{\beta'}_{cts}({\mathbf{R}}^d;{\mathbf{R}}^d).$
Appendix
========
Let $V$ be an arbitrary Banach space. The following lemma and its corollaries are indispensable in this paper.
\[lem:Kembedding\] Let $Q\subseteq \mathbf{R}^{d}$ be an open bounded cube, $p\geq 1$, $\delta \in (0,1]$, and $f$ be a $V$-valued integrable function on $Q$ such that $$\left[ f\right] _{\delta;p;Q;V}:=\left( \int_{Q}\int_{Q}\frac{|f(x)-f(y)|_{V}^{p}}{|x-y|^{2d+\delta p}}dxdy\right) ^{1/p}<\infty .$$Then $f$ has a ${\mathcal{C}}^{\delta }(Q;V)$-modification and there is a constant $N=N(d,\delta,p )$ independent of $f$ and $Q$ such that$$[f]_{\delta ;Q;V}\leq N\left[ f\right] _{\delta ,p;Q;V}$$and $$\sup_{x\in Q}|f(x)| _{V}\leq N| Q|
^{\delta /d}[f] _{\delta;p;Q;V}+|Q|
^{-1/p}\left( \int_{Q}|f(x)| _{V}^{p}dx\right) ^{1/p},$$where $|Q|$ is the volume of the cube.
If $V=\mathbf{R}$, then the existence of a continuous modification of $f$ and the estimate of $\left[ f\right] _{\delta ;Q}$ follows from Lemma 2 and Exercise 5 in Section 10.1 in [@Kr08]. The proof for a general Banach space is the same. For all $x\in Q$, we have $$\begin{aligned}
|f(x)|_{V} &\leq \frac{1}{|Q|}\int_{Q}|
f(x)-f(y)|_{V}dy+\frac{1}{|Q|}\int_{Q}| f(y)|_{V}dy \\
&\leq N\frac{1}{|Q|}\left[ f\right] _{\delta
,p;Q}\int_{Q}|x-y|^{\delta }dy+\frac{1}{|
Q|}\int_{Q}| f(y)|_V dy \\
&\leq N|Q|^{\delta /d}[f] _{\delta
,p;Q}+| Q|^{-1/p}\left( \int_{Q}|
f(y)| _{V}^{p}dy\right) ^{1/p},\end{aligned}$$which proves the second estimate.
The following is a direct consequence of Lemma \[lem:Kembedding\].
\[cor:SobolevFull\]Let $p\geq 1$, $\delta \in (0,1]$, and $f$ be a $V$-valued function on $\mathbf{R}^d$ such that $$|f| _{\delta ;p;V}:=\left(\int_{\mathbf{R}^d}|f(x)|^p_Vdx+ \int_{|x-y|<1}\frac{|f(x)-f(y)|_{V}^{p}}{|x-y|^{2d+\delta p}}dxdy\right) ^{1/p}<\infty .$$Then $f$ has a ${\mathcal{C}}^{\delta }(\mathbf{R}^d;V)$-modification and there is a constant $N=N(d,\delta, p)$ independent of $f$ such that$$| f| _{\delta;V}\leq N| f| _{\delta ;p;V}.$$
\[cor:Kolmogorov Embedding\] Let $X$ be a $V$-valued random field defined on $\mathbf{R}^{d}$. Assume that for some $p\geq 1$, $l\ge 0,$ and $\beta \in (0,1]$ with $\beta p>d$ there is a constant $\bar{N}>0$ such that for all $x,y\in \mathbf{R}^{d}$, $${\mathbf{E}}\left[|X(x)|_{V}^{p}\right]\leq \bar{N}r_{1}(x)^{lp} \label{ineq:growthsob}$$and $${\mathbf{E}}\left[|X(x)-X(y)|_{V}^{p}\right]\leq \bar{N}[r_{1}(x)^{lp}+r_1(y)^{lp}]|x-y|^{\beta p}.
\label{ineq:diffsob}$$Then for any $\delta \in (0,\beta -\frac{d}{p})$ and $\epsilon >\frac{d}{p}$, there exists a ${\mathcal{C}}^{\delta }(\mathbf{R}^{d};V)$-modification of $r_{1}^{-(l+\epsilon )}X$ and a constant $N=N(d,p,\delta ,\epsilon )$ such that $${\mathbf{E}}\left[|r_{1}^{-(l+\epsilon )}X|_{\delta }^{p}\right]\leq N\bar{N}.$$
Fix $\delta \in (0,\beta -\frac{d}{p})$ and $\epsilon >\frac{d}{p}$. Owing to , there is a constant $N=N(d,p,\delta ,\epsilon )$ such that $$\int_{\mathbf{R}^{d}}{\mathbf{E}}\left[|r_{1}(x)^{-(l+\epsilon )}X(x)|_{V}^{p}\right]dx\leq
\bar{N}\int_{\mathbf{R}^{d}}r_{1}(x)^{-p\epsilon }dx\leq N\bar N.$$By the mean value theorem, for each $x$ and $y$ and $\bar{p}\in \mathbf{R}$, we have $$|r_{1}(x)^{\bar{p}}-r_{1}(y)^{\bar{p}}|\leq |\bar{p}|(r_{1}(x)^{\bar{p}-1}+r_{1}(y)^{\bar{p}-1})|x-y|. \label{ineq:weightest}$$ Appealing to and , we obtain that there is a constant $N=N(d,p,\delta ,\epsilon )$ such that $$\begin{gathered}
\int_{|x-y|<1}\frac{{\mathbf{E}}\left[|r_{1}(x)^{-(l+\epsilon
)}X(x)-r_{1}(y)^{-(l+\epsilon )}X(y)|_{V}^{p}\right]}{|x-y|^{2d+\delta p}}dxdy\\
\leq \bar{N}\int_{|x-y|<1}\frac{r_{1}(x)^{-p\epsilon }+r_{1}(y)^{-p\epsilon }}{|x-y|^{2d-(\beta-\delta)p}}dxdy+\bar{N}\int_{|x-y|<1}\frac{r_{1}(y)^{pl}|r_{1}(x)^{-(l+\epsilon )}-r_{1}(y)^{-(l+\epsilon )}|^{p}}{|x-y|^{2d+\delta p}}dxdy\\
\leq N\bar{N}+N\bar{N}\int_{|x-y|<1}\frac{r_{1}(x)^{-p(1+\epsilon
)}+r_{1}(y)^{-p(1+\epsilon )}}{|x-y|^{2d-(1-\delta)p}}dxdy\leq N\bar{N}.\end{gathered}$$Therefore, ${\mathbf{E}}[r_{1}^{-(l+\epsilon )}X]_{\delta ,p}^{p}\leq N\bar N$, and hence, by Corollary \[cor:SobolevFull\], $r_{1}^{-(l+\epsilon )}X$ has a ${\mathcal{C}}^{\delta }(\mathbf{R}^{d};V)$-modification and the estimate follows immediately.
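As an aside (not part of the paper), the scalar mean value theorem bound behind the weight inequality, namely $|a^{\bar{p}}-b^{\bar{p}}|\leq |\bar{p}|(a^{\bar{p}-1}+b^{\bar{p}-1})|a-b|$ for $a,b\geq 1$, can be sanity-checked numerically:

```python
import random

# Numerical sanity check (illustration only) of the scalar bound
# |a^p - b^p| <= |p| (a^{p-1} + b^{p-1}) |a - b|  for a, b >= 1,
# which is what the weight inequality reduces to when r_1 >= 1.
random.seed(1)
violations = 0
for _ in range(100_000):
    a = 1.0 + 9.0 * random.random()
    b = 1.0 + 9.0 * random.random()
    p = random.uniform(-3.0, 3.0)
    lhs = abs(a ** p - b ** p)
    rhs = abs(p) * (a ** (p - 1) + b ** (p - 1)) * abs(a - b)
    if lhs > rhs + 1e-12:
        violations += 1
print(violations)  # 0
```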
\[cor:SobEmbLimit\]Let $(X^{(n)})_{n\in{\mathbf{N}}}$ be a sequence of $V$-valued random fields defined on $\mathbf{R}^{d}$. Assume that for some $p\geq 1$, $l\geq 0$, and $\beta \in
(0,1],$ with $\beta p>d$ there is a constant $\bar N>0$ such that for all $x,y\in \mathbf{R}^{d}$ and $n\in\mathbf{N}$, $${\mathbf{E}}\left[|X^{(n)}(x)|_{V}^{p}\right]\leq \bar Nr_{1}(x)^{lp}$$ and $${\mathbf{E}}\left[|X^{(n)}(x)-X^{(n)}(y)|_{V}^{p}\right]\leq \bar N(r_{1}(x)^{lp}+r_1(y)^{lp})|x-y|^{\beta p}.$$ Moreover, assume that for each $x\in \mathbf{R}^{d},$ $
\lim_{n\rightarrow\infty}{\mathbf{E}}\left[|X^{(n)}(x)|^{p}\right]= 0.
$ Then for any $\delta \in (0,\beta -\frac{d}{p})$ and $\epsilon >\frac{d}{p}$, $$\lim_{n\rightarrow\infty}{\mathbf{E}}\left[|r_{1}^{-(l+\epsilon )}X^{(n)}|_{\delta }^{p}\right]=0.$$
Fix $\delta \in (0,\beta -\frac{d}{p})$ and $\epsilon >\frac{d}{p}$. Using the Lebesgue dominated convergence theorem, we get $$\lim_{n\rightarrow\infty }\int_{\mathbf{R}^{d}}{\mathbf{E}}\left[|r_{1}(x)^{-(l+\epsilon
)X^{(n)}(x)|_{V}^{p}\right]dx=0,$$and therefore for each $\zeta \in (0,1)$, $$\lim_{n\rightarrow\infty}\int_{\zeta<|x-y|<1 }\frac{{\mathbf{E}}\left[|r_{1}(x)^{-(l+\epsilon
)}X^{(n)}(x)-r_{1}(y)^{-(l+\epsilon )}X^{(n)}(y)|_{V}^{p}\right]}{|x-y|^{2d+\delta p}}dxdy=0.$$Repeating the proof of Corollary \[cor:Kolmogorov Embedding\], we obtain that there is a constant $N$ such that $$\begin{gathered}
\int_{|x-y|\leq \zeta }\frac{{\mathbf{E}}\left[|r_{1}(x)^{-(l+\epsilon
)}X^{(n)}(x)-r_{1}(y)^{-(l+\epsilon )}X^{(n)}(y)|_{V}^{p}\right]}{|x-y|^{2d+\delta p}}dxdy\\
\le \bar N\int_{|x-y|\leq \zeta }\frac{r_{1}(x)^{-p\epsilon }+r_{1}(y)^{-p\epsilon }}{|x-y|^{2d+(\delta -\beta )p}}dxdy+\bar N\int_{|x-y|\leq \zeta }\frac{r_{1}(x)^{-p(1+\epsilon )}+r_{1}(y)^{-p(1+\epsilon )}}{|x-y|^{2d+(\delta
-1)p}}dxdy\\
\le \bar N\zeta ^{\beta p-\delta p-d}.\end{gathered}$$ Therefore, $\lim_{n\rightarrow\infty}{\mathbf{E}}\left[[r_{1}^{-(l+\epsilon )}X^{(n)}]_{\delta ,p}^{p}\right]=0$, and the statement follows.
Spinach is a healthy plant rich in minerals and vitamins. Many summer residents are happy to grow it on their plots.
This unpretentious annual plant does not require complex maintenance and grows well both outdoors and in greenhouses.
It can be grown in separate beds or planted between the rows of other vegetable crops, with which it gets along well. This not only makes rational use of the land but also increases yields.
Choosing neighbors for a plant
Today the joint cultivation of various vegetable crops is gaining momentum.
Spinach releases nutrients into the soil that strengthen the root systems of other plants, which has a beneficial effect on their growth and yield.
In mixed cultivation, it acts as a natural barrier between plants of the same species, thereby reducing the spread of pests. Also, a compact planting reduces the growth of weeds and prevents the soil from drying out. This plant provides moisture and porosity to the soil.
Whether spinach is grown in separate beds or mixed with other vegetables, the following planting parameters must be observed:
- The groove into which the seeds are sown should be up to two centimeters deep.
- The distance between plants in a row should be 6-10 cm, and the distance between two rows should be 20-30 cm.
The plant grows quickly, so after cutting it, there is enough room for other vegetables to grow and ripen. Next, we'll talk about what spinach grows best next to on the same bed and what parameters should be taken into account when growing together.
- Potatoes.
It is recommended to make a bed 90-100 cm wide, on which two rows of potatoes are planted, keeping a distance of half a meter between them. Spinach is planted between rows and along the edges of the garden bed 15 cm from the potatoes.
- Beet.
Spinach ripens much faster than beets and can be sown again after being cut. On a bed 90-100 cm wide, three rows of beets are planted in the middle, and greens are planted at the edges of the bed at a distance of 15 cm.
- Radish.
Radish loves moist soil, and the proximity to spinach provides this condition. Spinach ripens faster than radish. Therefore, it will protect the soil under the young radish from drying out. It is recommended to plant two or three rows of radish at a distance of 10-15 cm from each other, and plant spinach at the edges at a distance of 20 cm.
- Strawberry.
Spinach is unpretentious to the soil and does not impoverish it, and also does not have common pests with strawberries. It provides strawberries with the shade they need in hot regions.
The scheme for joint planting of these plants is as follows: the distance between the rows of strawberries is maintained 50-70 cm, spinach is planted in the aisle in the middle.
- Onion.
Onions go well with spinach. You can plant the onion rows 30 cm apart, alternating aisles of spinach with every two rows of onions. An interesting option is to add carrots, with the rows in the bed alternating as follows: onions-greens-carrots-greens-onions.
- Turnip.
Turnip and spinach get along well side by side. The turnips are planted keeping the distance between rows 25-30 cm. Spinach is sown in the row spacing. After sprouting, it is harvested after 25-30 days, and turnips need up to ninety days to ripen. Therefore, after harvesting the spinach, the turnip gets enough room to grow.
- Cabbage.
Often, spinach is planted next to cabbage, which has a longer ripening period. The distance between the cabbage rows is 80 cm, spinach is planted in the middle of the row.
What crops are undesirable to plant nearby?
Now that you know what to plant spinach with, note that there are a number of vegetable crops that should not be placed alongside it, or should at least be kept at a greater distance. What is it better not to plant spinach next to?
- Pumpkin.
Pumpkin grows very quickly and, moreover, trails along the ground. Spinach is a light-loving plant, so pumpkin will shade it and interfere with good growth. Therefore, if you still decide to plant spinach next to pumpkin, it is better to do so along the edge of the pumpkin patch at a distance of at least 50 cm.
- Beans.
Beans grow quickly and can shade other crops, especially climbing varieties that require staking. In principle, these plants get along well with each other; you just need to observe a few planting conditions:
- Use bush beans for mixed planting.
- The distance between the rows of beans must be at least 50 cm.
- Fennel.
But the proximity to fennel for spinach is completely undesirable. Fennel inhibits the growth of many plants. Therefore, you will not get a good harvest of the latter in such a neighborhood. These plants are best planted as far apart as possible.
What is the best to grow before and why?
Spinach loves fertilized soil, but this cannot be done using organic matter in the year of planting due to the high content of pathogenic microbes in organic fertilizers. Therefore, it is good to plant it in the beds on which they were grown last year:
- cucumbers;
- tomatoes;
- potatoes;
- cabbage.
The soil after these crops becomes loose, rich in organic matter, and well suited to spinach.
What is the best to plant after and why?
Spinach ripens quickly and saturates the soil with substances that contribute to the development of root systems, including tubers.
Therefore, after harvesting spinach, the beds can be used for heat-loving vegetables that are planted in summer:
- pepper;
- tomatoes;
- zucchini;
- cucumbers.
Also, after spinach, radish, Jerusalem artichoke, radish will grow well.
Growing spinach is a simple matter, so feel free to plant it in your area in combination with other vegetables. Integrated planting increases the efficiency of land use and produces a good harvest.
Q:
Time Complexity for f1(n) / f2(n)
What are some algorithms we can use that have fractional time complexities (like 100n / (2n+1), for example)?
A:
Complexity isn't some sort of precise algebraic calculation. A function f(x) belongs to a class if you can multiply one of the class' functions g(x) by a constant C and show it is always greater than your function -- simply said, you have to show that there exists a finite value C such that the inequality holds for sufficiently big x:
f(x) <= C * g(x)
So e.g. 3n + 5 belongs to O(n), because 3n + 5 <= 1000n.
In a similar vein, you can show that 100n / (2n + 1) <= 50n, and thus its complexity belongs to O(n).
Edit: It also belongs to O(1), because 100n / (2n + 1) <= 50*1.
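To make the bound concrete, here is a quick numeric check (constants chosen purely for illustration) that 100n / (2n + 1) stays below 50 for every n >= 1, which witnesses both the O(1) and the O(n) memberships:

```python
def f(n):
    # the fractional running-time expression from the question
    return 100 * n / (2 * n + 1)

# 100n / (2n + 1) < 100n / (2n) = 50 for every n >= 1,
# so C = 50 witnesses membership in O(1).
assert all(f(n) <= 50 for n in range(1, 10_001))
print(f(1), f(10), f(1000))  # 33.33..., 47.61..., 49.97... approaching 50
```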
A seemingly bizarre problem has been solved by physicists at the University of Cambridge – how many different ways can you arrange 128 tennis balls? The answer – 10^250 (a one followed by 250 zeros, or ten unquadragintilliard). To put that into perspective, it exceeds the total number of particles in the universe.
Tennis balls aside, the method used to calculate the conundrum and the implications of it are what is key. The method the team came up with allowed them to calculate configurational entropy, which describes how structurally disordered particles in a system are. For example, the water in a lake or the water molecules in an ice cube.
When a system changes (say, because of the temperature), the arrangement of the particles does too. We can predict this on a molecular level, but when it comes to bigger things – like a sand dune – it becomes more difficult and we cannot predict how such systems will behave. To do so you would need to measure changes in the structural disorder of all the particles, and to do this you would need to know how many different ways a system can be structured.
In the study, published in the journal Physical Review E, researchers carried out a calculation in which the particles were 128 soft spheres, like tennis balls. They took a small sample of all the possible configurations and worked out the probability of them occurring. Based on these samples, they were able to work out how many ways the entire system could be arranged – and its configurational entropy.
They said their method should be applicable to a wide range of problems, including string theory, cosmology and how to make artificial intelligence more efficient. It could also allow them to predict the movement of avalanches, or work out how sand dunes in a desert will change over time.
Study author Stefano Martiniani said: "Obviously being able to predict how avalanches move or deserts may change is a long, long way off, but one day we would like to be able to solve such problems. This research performs the sort of calculation we would need in order to be able to do that.
"Because our indirect approach relies on the observation of a small sample of all possible configurations, the answers it finds are only ever approximate, but the estimate is a very good one. By answering the problem we are opening up uncharted territory. This methodology could be used anywhere that people are trying to work out how many possible solutions to a problem you can find."
Working at a grocery store is chaotic
Working at a grocery store is something that as a young adult, who is also going to school, might seem ideal, but not so much during certain situations.
As the COVID-19 outbreak started, it has been quite an interesting and very difficult experience at work for the past three weeks. I have been working at a grocery store for the past year and I can confidently say it has never been this chaotic, especially during the first week of the pandemic.
In week one, my department received the news that we needed to maintain our distance and make sure we constantly washed our hands. This is when we all realized that things were going to get bad quick!
I think that within three hours of opening that day, the whole store got wiped out. The first two things that were sold out in less than an hour were toilet paper and water. After that, it was all the dry products that don’t go bad, like beans, rice and canned food.
Naturally, as someone that is witnessing all of this happening, you start to think to yourself that you need to also go out there and grab stuff for you and your family.
Thankfully, because employees were there an hour before the store opened to the public, my coworkers and I were able to buy everything we thought we needed to basically survive.
At the time no one really knew what was going to happen with the stores.
Everyone just thought that every single store was just going to close until further notice. There were no talks of “essential businesses” or anything until the government said that grocery stores would be classified as essential.
It needed to stay open to provide for the public.
I do not know if it has been obvious or not, but whatever the case may be, we grocery workers are just as worried and panicked as the customers, if not more so.
People were buying in bulk, which was unfair to customers who were also trying to buy or who had waited because they were busy or unavailable.
But we still tried our hardest to give out what we could! This was before our store implemented a limit on items per customer.
It is very difficult for us to deal with this situation, especially when panicking customers are going off on us over something we cannot control.
Overall, it is great to help everyone and remain employed but I am also not a robot. I can’t just make things appear that you as a customer need, so bear with me. I hope this makes anyone that does not work in grocery realize that there is a bit more to this whole pandemic than simply just having a fully stocked store.
Julio Camacho is the sports editor and a staff reporter for The Chaparral newspaper. He is a film/television major and plans on being a cinematographer... | https://thechaparral.net/1599/opinion/working-at-a-grocery-store/ |
The Nevada football team doesn't use quite as many helmets as Oregon, but the Wolf Pack has gone through its share of lids. Since 1975, the Wolf Pack has used 15 helmets, undergoing massive changes some seasons and minor changes in other years. Here, we compiled all of the helmets thanks to The Helmet Project, which allowed us to use the helmets it has compiled. It's worth checking out the website for its treasure trove of helmets. Here's a look at 14 of the 15 helmets; the only one missing is the one used last weekend, the Pack script on a white helmet, which is displayed in the attached photo of Tyler Stewart.
Q:
Application of FTC and change of variable
Let $f:[0,1]\to \mathbb{R}$ be continuous such that $$\int_{0}^{1} f(xt)dt=0$$ for all $x \in [0,1]$.
Show that $f(x)=0$ for all $x \in [0,1]$.
Using the FTC and substitution:
$$F(t)=\frac{1}{x}\int_{0}^{t} f(u)du$$
$$F'(t)=\frac{1}{x}[f(t)]$$
I'm not sure if I am going in the right direction. As of right now, I can't see how to go from my last step to showing that $f(x)=0$ for all $x$ in $[0,1]$.
A:
You put $u = xt$ so that $du = xdt$. Note
$$
0 = \int_0^x f(u)\frac{1}{x}\; du
$$
for all $x\in (0,1]$. So
$$
0 = \int_0^xf(u)\; du
$$
for all $x\in (0,1]$. Taking the derivative on both sides you get
$$
0 = f(x)
$$
for all $x\in (0,1)$. Now the fact that $f$ is continuous gives you that also $f(0) = f(1) = 0$.
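If it helps, the substitution identity used above, $\int_0^1 f(xt)\,dt = \frac{1}{x}\int_0^x f(u)\,du$, can be checked numerically for a sample integrand (here $f=\sin$ and $x=0.7$; this is just an illustration, not part of the proof):

```python
from math import sin

def integrate(g, a, b, n=100_000):
    # simple midpoint-rule quadrature
    h = (b - a) / n
    return h * sum(g(a + (k + 0.5) * h) for k in range(n))

x = 0.7
lhs = integrate(lambda t: sin(x * t), 0.0, 1.0)   # integral of f(xt) over [0, 1]
rhs = integrate(sin, 0.0, x) / x                  # (1/x) times integral of f over [0, x]
print(abs(lhs - rhs))  # tiny (around 1e-11): the two sides agree
```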
| |
Uttarakhand government pins hope on Centre to fell pine trees in high altitude zone
The Bharatiya Janata Party (BJP) government in Uttarakhand has taken up a similar proposal that the previous Congress regime had with the Centre, seeking permission to fell pine trees in the upper reaches of the state
Pine needles during the summer cause forest fires and also retard the undergrowth, diminishing food for herbivores, leading to human-animal conflicts. (HT Photo)
On Friday, chief minister TS Rawat during a meeting with Union forest minister Dr Harsh Vardhan, requested him for permission to fell trees above the altitude of 1,000m.
The issue was raised due to the sensitivity of the upper reaches where sprawling pine tree forests pose threat to the forest cover during the summer as pine needles cause forest fires and also retard the undergrowth, diminishing food for herbivores, leading to human-animal conflicts.
In 2015, a similar proposal was sent to the Centre by the then Congress government for its approval to fell pine trees above the altitude of 1000 m, the proposal, however, was not approved by the Centre.
Dinesh Agarwal, former state forest minister in the Congress government, said the present BJP government “only talks and does nothing”.
“We sent a proposal on the same lines earlier as our government was concerned about the (human-animal) conflicts and forest fires due to pine trees,” he told Hindustan Times.
“All the present government needs is to get approval of the project we had submitted.”
For felling trees in an area of more than a km, permission is needed from the Union ministry of environment, forest and climate change under the Forest Conservation Act, 1980.
A recent Forest Survey of India report states that Uttarakhand has a recorded forest cover of 71%. But the area has contracted by 268 sq km between 2013 and 2015, leaving forest cover over only 45.32% of the total geographical area of the state.
More than 1,300 hectares of forest have been gutted in forest fires between February and June this year, incurring a revenue loss of more than ₹18 lakh.
But the actual damage was caused last year, when more than 4,400 hectares were gutted in forest fires, incurring a revenue loss of ₹46 lakh.
The state forest department does not have a strategy to tackle the natural calamity. The crisis management plan prepared by the department to combat forest fires does not have a long-term methodology.
The methodologies are limited to pre-fire season preparations, crew stations for disseminating fire alerts and engaging daily wage workers to combat forest fire.
This apart, the state government has no sustainable plans to clear combustible pine needles or resin from the reserve forest, a senior forest department official, who did not want to be named, said.
“Forest officials have been examining options for recycling (pine) needles but it has remained an issue of discussion for years now,” the official said.
RK Mahajan, head of forest force and principal chief conservator of forest admitted that there was a problem, saying: “Van panchayats are asked to clear pine needles but it’s a tedious job.”
“There are many factors like permissions, high transportation cost among others due to which recycling can’t be done. We, however, are working on some ways to mitigate the problem and utilize the pine needles.”
Background
==========
The inference of evolutionary trees with computational methods has many important applications in medical and biological research, such as drug discovery and conservation biology. A rich variety of tree reconstruction methods based on sequences have been developed, which fall into three categories: (a) maximum parsimony methods, (b) distance based methods and (c) approaches applying the maximum likelihood principle. The latter two are the most popular. Distance based methods calculate pairwise distances between the sequences, and support the tree that best fits these observed distances. The most prominent distance based method is neighbor joining (NJ) \[[@B1]\], in which partial trees are iteratively combined to form a larger tree in a bottom-up manner. Due to low computational time complexity and demonstrated topological accuracy for small data sets, NJ and its variants have been widely used.
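As a concrete illustration of the distance-based idea, the sketch below (an illustration for this overview, not code from any tool cited here) performs the pair-selection step of a single neighbor-joining iteration, using the standard Q-criterion Q(i,j) = (n-2)d(i,j) - Σ~k~d(i,k) - Σ~k~d(j,k):

```python
import itertools

def nj_pick_pair(d):
    """Return the pair of taxa joined first by neighbor joining,
    i.e. the pair minimizing Q(i,j) = (n-2)*d(i,j) - r_i - r_j,
    where r_i is the row sum of the distance matrix."""
    n = len(d)
    row = [sum(d[i]) for i in range(n)]
    return min(itertools.combinations(range(n), 2),
               key=lambda p: (n - 2) * d[p[0]][p[1]] - row[p[0]] - row[p[1]])

# additive distances generated by the tree ((A:2,B:3):1,(C:4,D:5))
d = [[0, 5, 7, 8],
     [5, 0, 8, 9],
     [7, 8, 0, 9],
     [8, 9, 9, 0]]
print(nj_pick_pair(d))  # (0, 1): A and B are correctly joined first
```

A full NJ implementation would then replace the chosen pair by a new internal node, recompute distances, and repeat until the tree is resolved.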
Maximum likelihood methods aim to find the tree that gains the maximum likelihood value to have produced the underlying data. A number of studies \[[@B2],[@B3]\] have shown that maximum likelihood programs can recover the correct tree from simulated datasets more frequently than other methods, which supports numerous observations from real data and explains their popularity.
However, the main disadvantage of maximum likelihood methods is that they require much computational effort. Maximum likelihood reconstruction consists of two tasks. The first task involves edge length estimation: Given the topology of a tree, find edge lengths to maximize the likelihood function. This task is accomplished by iterative methods such as expectation maximization or using Newton-Raphson optimization. Each iteration of these methods requires computations that take on the order of the number of sequences times the number of sequence positions. The second, more challenging, task is to find a tree topology that maximizes the likelihood. The number of potential topologies grows exponentially with the number of sequences n, e.g. for n = 50 sequences there already exist 2.84\*10^76^alternative topologies; a number almost as large as the number of atoms in the universe (≈10^80^). In fact, it has already been demonstrated that finding the optimal tree under the maximum likelihood criterion is NP-hard \[[@B4]\]. Consequently, the introduction of heuristics to reduce the search space in terms of potential topologies evaluated becomes inevitable, such as, the hill climbing based reconstruction algorithms \[[@B5]-[@B7]\]; the genetic algorithm based ones \[[@B8],[@B9]\], etc.
Although they use different search strategies, these heuristics all try to improve a starting tree (or trees) by a series of elementary topological rearrangements until a local optimum is found. Obviously, the performance of a heuristic depends to some extent on the exhaustiveness of its topological rearrangements. The three most often used topological rearrangements are Nearest Neighbor Interchange (NNI), Subtree Prune and Regraft (SPR) and Tree Bisection and Reconnection (TBR).
The NNI move swaps one rooted subtree or leaf on one side of an internal edge e with another on the other side. For every internal edge, an NNI move can produce two different topologies, as shown in Figure [1](#F1){ref-type="fig"}. For a tree containing n sequences, the size of the neighborhood induced by NNI is *O*(*n*). Here, the neighborhood of an evolutionary tree T under a topological rearrangement move is defined as the set of all trees that can be obtained from T by one move.
{#F1}
A SPR move on a tree T is defined as cutting any edge and thereby pruning a subtree t, and then regrafting the subtree by the same cut edge to a new vertex obtained by subdividing a pre-existing edge in T-t. For a tree containing n sequences, the size of the neighborhood induced by SPR is O(n^2^).
In a TBR move an edge is removed from T, creating subtrees t and T-t, and then a new edge is added between the midpoints of any two edges in t and T-t, creating a new tree. For a tree containing n sequences, the size of the neighborhood induced by TBR is O(n^3^). See Figure [2](#F2){ref-type="fig"} for a schematic representation of SPR and TBR.
{#F2}
As shown above, the neighborhood sizes produced by NNI, SPR and TBR acting on an evolutionary tree T comprising n sequences are O(n), O(n^2^) and O(n^3^) respectively. Thus, TBR is the most exhaustive. Even TBR searches, however, can often get trapped in local optima, since not many trees are accessible in one step from any given tree. This motivates the introduction of p-Edge Contraction and Refinement (p-ECR) \[[@B10]\]. A p-ECR move contracts p edges all at once, creating unresolved nodes in the process, and then refines these unresolved nodes to give back a binary tree. A contraction collapses an edge in the tree and identifies its two end points, while a refinement expands an unresolved node into two nodes connected by an edge. For example, the trees *T*~1~and *T*~5~in Figure [3](#F3){ref-type="fig"} are separated by one 2-ECR move. From this definition, NNI is the special case of p-ECR where p equals 1.
{#F3}
Let *S*~*u*~be the number of unresolved nodes produced by a p-ECR move and *d*~*i*~the degree of unresolved node *i*. Since the number of unrooted binary trees on n sequences is (2n-5)!!, refining the unresolved nodes can produce $\prod_{i = 1}^{S_{u}}\left( {\left( {2d_{i} - 5} \right)!!} \right)$ different trees. When p(\>1) edges are deleted from a tree, the adjacency relationships among the deleted edges determine the number of unresolved nodes produced and their degrees. Two extreme special cases are analyzed below.
The first extreme case is when all p edges are adjacent. In this case, only one unresolved node, of degree 2p, is produced, and the number of trees produced by one p-ECR move is (4p-5)!!. The other extreme case is when the *p* edges are pairwise non-adjacent. In this case, p unresolved nodes of degree 4 are produced, and the number of trees produced by one p-ECR move is ((2 × 4--5)!!)^p^, that is 3^p^.
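The two extreme counts can be checked with a few lines of Python. This is a hypothetical illustration implementing the formulas exactly as stated above:

```python
# An unresolved node of degree d admits (2d - 5)!! binary refinements.
def double_factorial(m):
    result = 1
    for k in range(m, 1, -2):
        result *= k
    return result

def refinements_adjacent(p):
    # one unresolved node of degree 2p  ->  (2*(2p) - 5)!! = (4p - 5)!!
    return double_factorial(4 * p - 5)

def refinements_disjoint(p):
    # p unresolved nodes of degree 4  ->  ((2*4 - 5)!!)**p = 3**p
    return double_factorial(2 * 4 - 5) ** p
```

For p = 3, for instance, the adjacent case admits 7!! = 105 refinements while the disjoint case admits 3^3 = 27; for large p the adjacent case grows much faster.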
In other cases, the number of possible trees produced lies between these two extremes. Observing that there are $C_{n}^{p}$ ways of selecting p edges to contract, p-ECR produces Ω(*n*^*p*^3^*p*^) trees. Thus, although every sequence of p NNI moves on a tree is a p-ECR move on that tree, there are p-ECR moves that cannot be performed by any sequence of p NNI moves (the neighborhood size produced by p NNI moves is O(n^p^)). With such a wide search space, getting trapped in bad local optima can often be avoided, resulting in a more exhaustive local search. Moreover, the exhaustiveness of a p-ECR move depends on the value of p: a larger p means a larger search space for the correct tree, which could be potentially useful in selecting a suitable range of p.
However, how to quickly select the best among so many possible evolutionary trees is a hard problem facing the p-ECR move, since there are many potential topologies to evaluate and computing the likelihood of a given topology is very time-consuming, as mentioned above. The straightforward answer is to simply evaluate every potential tree and select the best, but even for medium values of p this is clearly infeasible. Until now, there has been no efficient and general implementation of p-ECR. Consequently, people often give up the exhaustive p-ECR and turn to simpler moves such as NNI. In order to make p-ECR efficient, a method called p-ECRNJ, motivated by NJ, is presented in this paper. The main idea of p-ECRNJ is to use NJ to refine the unresolved nodes produced in p-ECR.
NJ
--
NJ is a greedy algorithm that attempts to minimize the sum of all branch lengths on the constructed tree. Conceptually, it starts out with a star-shaped tree where each leaf corresponds to a sequence, and iteratively picks two nodes adjacent to the root and joins them by inserting a new node between the root and the two selected nodes. When joining nodes, the method selects the pair of nodes i, j that minimizes

$$Q_{ij} = d_{ij} - \frac{R_{i} + R_{j}}{r - 2}$$
where *d*~*ij*~is the distance between nodes i and j (assumed symmetric, i.e., *d*~*ij*~= *d*~*ji*~), *R*~*k*~is the sum over row *k* of the distance matrix: *R*~*k*~= ∑~*x*~*d*~*kx*~(where x ranges over all nodes adjacent to the root node), and r is the number of nodes remaining adjacent to the root. Once the pair i, j to agglomerate is selected, a new node *C*, which represents the root of the new cluster, is created. Then the lengths of branches (*C*, *i*) and (*C*, *j*) are estimated by the following Eq. (2)
$$d_{Ci} = \frac{1}{2}\left( {d_{ij} + \frac{R_{i} - R_{j}}{r - 2}} \right),d_{Cj} = \frac{1}{2}\left( {d_{ij} + \frac{R_{j} - R_{i}}{r - 2}} \right)$$
Finally the distance matrix is reduced by replacing the distances relative to sequence *i*and sequence *j*by those between the new node C and any other node k using
$$d_{Ck} = \frac{1}{2}\left( {d_{ik} + d_{jk} - d_{ij}} \right)$$
This formulation of NJ gives rise to a canonical algorithm that searches for min~*i*,\ *j*~*Q*~*ij*~in time *O*(*r*^2^) and joins i and j, using time O(r) to update d. The search and joining are continued until only three nodes are adjacent to the root. The total time complexity is *O*(*n*^3^), and the space complexity is *O*(*n*^2^) (for representing the distance matrix d). An example of NJ is illustrated in Figure [4](#F4){ref-type="fig"}.
{#F4}
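The canonical algorithm just described can be sketched in a few dozen lines of Python. The following is a hypothetical minimal implementation for illustration (node names such as `C0` are invented here), not the code used in the paper:

```python
# Minimal neighbor-joining sketch following the canonical O(n^3) algorithm:
# repeatedly pick the pair (i, j) minimizing Q_ij = d_ij - (R_i + R_j)/(r - 2),
# join them under a new node with branch lengths from Eq. (2), and reduce the
# distance matrix with Eq. (3).
def neighbor_joining(labels, d):
    """labels: list of taxon names; d: dict of dicts with d[a][b] = distance."""
    nodes = list(labels)
    edges = []                      # (parent, child, branch length)
    next_id = 0
    while len(nodes) > 3:
        r = len(nodes)
        R = {a: sum(d[a][b] for b in nodes if b != a) for a in nodes}
        # search for the pair minimizing Q_ij
        i, j = min(((a, b) for a in nodes for b in nodes if a < b),
                   key=lambda p: d[p[0]][p[1]] - (R[p[0]] + R[p[1]]) / (r - 2))
        c = f"C{next_id}"
        next_id += 1
        # branch lengths from Eq. (2); the two lengths sum to d_ij
        li = 0.5 * (d[i][j] + (R[i] - R[j]) / (r - 2))
        edges += [(c, i, li), (c, j, d[i][j] - li)]
        # matrix reduction from Eq. (3)
        d[c] = {}
        for k in nodes:
            if k in (i, j):
                continue
            d[c][k] = d[k][c] = 0.5 * (d[i][k] + d[j][k] - d[i][j])
        nodes = [k for k in nodes if k not in (i, j)] + [c]
    # connect the last three nodes to a final internal node (lengths omitted)
    c = f"C{next_id}"
    edges += [(c, k, None) for k in nodes]
    return edges
```

On an additive distance matrix this sketch recovers the true tree; for example, on four taxa whose cherry (a, b) has branch lengths 1 and 2, the first join produces exactly those two edges.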
With a running time of *O*(*n*^3^) on n sequences, NJ is fast and widely used. Moreover, empirical work shows it to be quite accurate, at least for small data sets. St. John et al. \[[@B11]\] even suggest it as a standard against which new phylogeny reconstruction methods should be evaluated. In this paper, we use NJ to refine the unresolved nodes to improve p-ECR.
Results
=======
In order to test p-ECRNJ, we conducted experiments on real datasets to compare the heuristics ECRML and ECRML+PHYML with four of the most popular reconstruction methods: BioNJ \[[@B12]\] (a variant of NJ), PHYML version 2.0.1 \[[@B6]\] (a maximum likelihood algorithm combining hill climbing and NNI moves), RAxML-III \[[@B7]\] (a maximum likelihood algorithm combining hill climbing and SPR moves) and fastDNAml version 1.2.2 \[[@B5]\] (a maximum likelihood algorithm combining the stepwise addition algorithm and SPR moves). ECRML is the heuristic based on p-ECRNJ and hill climbing, as shown in Methods. ECRML+PHYML is the heuristic based on the combination of the p-ECRNJ move with NNI, where rounds of NNI and p-ECRNJ are alternated as follows. ECRML is called once each time PHYML is stuck on a local optimum. If ECRML is able to improve the tree and escape the local optimum, PHYML is applied again until it is trapped in another local optimum, and so on. When a predefined number of iterations is reached or ECRML cannot find any further improvement, the program terminates. The real datasets include the ones used in \[[@B7],[@B15]\], in particular MouseLemurs, 4DAT, 3DAT, 42, Rbcl55, 101_SC, 132, 150_SC, 150_ARB, 218_RDPII, 250_ARB and 500_ZILLA. Table [1](#T1){ref-type="table"} shows the number of sequences and the number of sites for each dataset.
######
Real datasets
*dataset* *number of sequences* *number of sites*
---- ------------- ----------------------- -------------------
1 MouseLemurs 35 115
2 4DAT 35 452
3 3DAT 39 1116
4 42 42 1167
5 Rbcl55 55 1315
6 101_SC 101 1858
7 132 132 1881
8 150_SC 150 1269
9 150_ARB 150 3188
10 218_RDPII 218 4182
11 250_ARB 250 3638
12 500_ZILLA 500 759
All the programs are run with default options. In addition, the parameter p in ECRML and ECRML+PHYML is set to 4 and the number of iterations to 20. Computing time is measured on a Pentium IV 2.99 GHz PC running Windows XP.
Since BioNJ cannot compute the likelihood values of final trees and the maximum likelihood algorithms differ in how they compute likelihoods, all final trees found by BioNJ, PHYML, RAxML and fastDNAml are re-evaluated using ECRML to enable a direct comparison. The main results are shown in Table [2](#T2){ref-type="table"}, Table [3](#T3){ref-type="table"} and Table [4](#T4){ref-type="table"}. Stars in Table [2](#T2){ref-type="table"} and Table [4](#T4){ref-type="table"} indicate entries where the algorithm was deemed too slow to run on that test.
######
Likelihood values of BioNJ, PHYML, RaxML, fastDNAml and ECRML on different real datasets
  *dataset* *BioNJ* *diff.* *PHYML* *diff.* *RAxML* *diff.* *fastDNAml* *diff.* *ECRML*
---- --------- --------- --------- ------------- --------- ------- ---------- ---------- ---------
1 -10753 -6902 -5119 -1268 -4959 -1108 -4019 -168 -3851
2 -1082 -1 -1089 -8 -1093 -12 -1082 -1 -1081
3 -2861 -26 -2843 -8 -2842 -7 -2942 -107 -2835
4 -7866 -783 -7250 -167 -7281 -198 -7310 -227 -7083
5 -22552 -299 -22561 -308 -22382 -129 -22603 -350 -22253
6 -67480 -1311 -66695 -526 -66576 -407 -66481 -312 -66169
7 -46930 -3293 -43924 -287 -43641 -4 -43773 -136 -43637
8 -41090 -623 -40520 -53 -40660 -193 -40495 -28 -40467
9 -72423 -1329 -71100 -6 -71159 -65 -71178 -84 -71094
10 -138942 -2035 -137074 -167 -136921 -161 -136998 -91 -136907
11 -120315 -2627 -117869 -181 -118035 -347 \*\*\*\* \*\*\*\* -117688
12 -21917 -588 -22380 -1051 -21879 -550 \*\*\*\* \*\*\*\* -21329
######
Likelihood values of various tree building algorithms on different real datasets
  *dataset* *PHYML* *diff.* *ECRML* *diff.* *ECRML + PHYML*
---- --------- --------- ----------------- ------- ---------
1 -5119 -1275 -3851 -7 -3844
2 -1089 -8 -1081 0 -1081
3 -2843 -13 -2835 -5 -2830
4 -7250 -222 -7083 -55 -7028
5 -22561 -677 -22253 -369 -21884
6 -66695 -511 -66169 15 -66184
7 -43924 -292 -43637 -5 -43632
8 -40520 -80 -40467 -27 -40440
9 -71100 -76 -71094 -18 -71076
  10   -137074   -207   -136907   -40     -136867
11 -117869 -260 -117688 -79 -117609
12 -22380 -2059 -21329 -1008 -20321
######
Computing time(seconds) of various tree building algorithms on different real datasets
*dataset* *BioNJ* *PHYML* *RAxML* *fastDNAml* *ECRML* *ECRML + HYML*
------------- --------- --------- --------- ------------- --------- ----------------
MouseLemurs 3 14 7 187 142 276
4DAT 1 2 2 362 35 55
3DAT 1 7 5 1582 135 205
42 2 31 16 666 449 833
Rbcl55 4 40 89 1586 1340 1733
101_SC 10 155 622 26287 4421 5926
132 8 205 1255 20012 10623 13171
150_sc 24 163 399 26408 7206 9163
150_ARB 24 319 187 54788 25217 28857
218_RDPII 42 429 6779 102388 14236 19897
250_ARB 74 799 1103 \*\*\*\* 20788 29804
500_ZILLA 92 2456 29975 \*\*\*\* 24528 30016
Table [2](#T2){ref-type="table"} shows the maximum likelihood values of the evolutionary trees reconstructed by BioNJ, PHYML, RAxML, fastDNAml and ECRML on different datasets. Each of the former four algorithms has two columns: the first lists the maximum likelihood values of the evolutionary trees reconstructed by the algorithm on each dataset; the second lists the difference in likelihood value between the algorithm and ECRML on the corresponding dataset. A difference smaller than 0 means that ECRML found an evolutionary tree with a higher likelihood value than the algorithm on the corresponding dataset, and vice versa. ECRML has only one column, which lists its likelihood values on the different datasets. From Table [2](#T2){ref-type="table"}, we can see that on every dataset the values in the second column of BioNJ, PHYML, RAxML and fastDNAml are all smaller than 0. This means that ECRML can find better trees than these four algorithms on all datasets, and further indicates that p-ECRNJ has a wider search space.
Table [3](#T3){ref-type="table"} shows the likelihood values of the evolutionary trees reconstructed by PHYML, ECRML and ECRML+PHYML on different datasets. Similar to Table [2](#T2){ref-type="table"}, each of PHYML and ECRML has two columns: the first lists the likelihood values of the evolutionary trees reconstructed by the algorithm on each dataset; the second lists the difference in likelihood value between the algorithm and ECRML+PHYML on the corresponding dataset. A difference smaller than 0 means that ECRML+PHYML found an evolutionary tree with a higher likelihood value than the algorithm on the corresponding dataset, and vice versa. ECRML+PHYML has only one column, which lists its likelihood values on the different datasets. From Table [3](#T3){ref-type="table"}, we can see that on every dataset the values in the second column of PHYML are all smaller than 0. This supports that p-ECRNJ can find better trees than other local rearrangements such as NNI and can further efficiently improve them. At the same time, we can also see from Table [3](#T3){ref-type="table"} that in the second column of ECRML there are 10 values smaller than 0, one equal to 0 and one larger than 0. This means that ECRML+PHYML can often find better trees than ECRML, although both include a p-ECRNJ search. This is mainly because the p edges in p-ECRNJ are deleted randomly; this randomness can be reduced by many iterations, but when there are not enough iterations, the resulting trees may retain some of the defects of the starting trees. ECRML starts from a tree reconstructed by NJ, while ECRML+PHYML starts from a tree reconstructed by PHYML in each iteration. PHYML often finds better trees than NJ, as shown in Table [2](#T2){ref-type="table"}, which explains why ECRML+PHYML often finds better trees than ECRML in Table [3](#T3){ref-type="table"}.
Table [4](#T4){ref-type="table"} shows the computing time of the various tree building algorithms on different real datasets. From Table [4](#T4){ref-type="table"}, we can see that BioNJ is the fastest on every dataset. This is in accordance with the conclusion that distance based reconstruction methods are usually faster than maximum likelihood ones. Among the maximum likelihood methods, fastDNAml, ECRML+PHYML and ECRML are, as a whole, slower than PHYML and RAxML. Currently, PHYML is recognized as the fastest maximum likelihood program; its efficiency is obtained by simultaneously optimizing tree topology and edge lengths. The efficiency of RAxML comes to a large extent from a very efficient implementation for storing trees and calculating likelihoods. There are no such optimizations in fastDNAml, ECRML and ECRML+PHYML. Moreover, the computing time of ECRML and ECRML+PHYML is the total over 20 iterations; after each iteration, the branch lengths and likelihood of the current tree are updated, which occupies the majority of the computation time. In terms of coding efficiency, BioNJ, PHYML, RAxML and fastDNAml have been highly tuned, while the current versions of ECRML and ECRML+PHYML are still experimental programs. The computing time for ECRML/ECRML+PHYML is actually the sum of the computing times of the ECRML/ECRML+PHYML subprograms.
At the same time, we can also see from Table [4](#T4){ref-type="table"} that although slower than the two fastest maximum likelihood methods, PHYML and RAxML, ECRML and ECRML+PHYML are faster than fastDNAml, especially on large datasets.
Conclusion
==========
We have proposed the p-ECRNJ move, which can be used as a topological transformation in heuristic evolutionary tree reconstruction algorithms by itself, or to improve local topological transforms. The p-ECRNJ move first randomly selects p edges of the current tree to contract, and then refines the contracted tree back into a binary tree using the fast NJ algorithm. Experiments on real datasets show that, within limited iterations, p-ECRNJ can find better trees than the best-known maximum likelihood methods and can efficiently improve local topological transforms without much time cost. Therefore, p-ECRNJ is an efficient implementation of p-ECR.
Methods
=======
In order to make p-ECR efficient, a method called p-ECRNJ, combining the exhaustiveness of the p-ECR move and the efficiency of NJ, is presented in this paper and detailed here. First, several concepts are introduced. An evolutionary tree is an unrooted or rooted tree whose leaves have degree one and all of whose internal nodes have degree at least three. An internal node with degree more than three is called unresolved. A supernode *α* in a tree T is a degree-1 non-leaf vertex, denoting some collapsed subtree.
The main idea of p-ECRNJ is to randomly contract *p* edges from an evolutionary tree T and to accomplish the consequent refinement of the unresolved nodes with NJ. As shown in Figure [4](#F4){ref-type="fig"}, NJ successively resolves only a single unresolved node Y. In general, the contracted tree T\* in p-ECRNJ contains c (1 ≤ c ≤ p) unresolved nodes, so a collapsing procedure is needed before the refinement using NJ: select an unresolved node to refine, root the tree at that node, and collapse each subtree rooted at a node adjacent to the root into a supernode. This collapsing procedure produces a tree containing only one unresolved node, which is then refined by NJ. The refinement process continues until there is no unresolved node left in T\*.
A p-ECRNJ move on an evolutionary tree T is described in detail as follows.
\(1\) Contraction stage: randomly select p edges to contract all at once, resulting in an unresolved tree T\*;
\(2\) Refinement stage:
The refinement stage includes the following two steps:
Step 1: Collapsing step:
Select an unresolved node to refine and root T\* at it; collapse each subtree rooted at an internal node adjacent to the root into a supernode;
Step 2: NJ step:
1 Build the distance matrix M of the nodes and supernodes in the collapsed tree T\*. The distance between two nodes is the distance between the two corresponding sequences. The distance between two supernodes is more sophisticated and is computed as follows. Let *α* and *β* denote two supernodes representing, respectively, a subtree containing x leaves *α*~*i*~(*i*= 0,\..., *x*-1) and a subtree containing y leaves *β*~*i*~(*i*= 0,\..., *y*-1). Then the distance between *α* and *β* is estimated using Eq. (4). The distance between a single node and a supernode is the special case where x = 1 or y = 1.
$$d_{\alpha\beta} = \frac{1}{xy}\left( {{\sum_{i = 0}^{x - 1}{\sum_{j = 0}^{y - 1}d_{\alpha_{i},\beta_{j}}}} - {\sum_{i = 0}^{x - 1}{d_{\alpha_{i}\alpha_{i + 1}} - {\sum_{i = 0}^{y - 1}d_{\beta_{i}\beta_{i + 1}}}}}} \right)$$
2 According to M, compute matrix Q according to Eq. (1);
3 Select the pair i, j that minimizes *Q*~*ij*~to agglomerate;
4 Create a new node C which represents the root of the new cluster. Then estimate the length of branches (*C*, *i*) and (*C*, *j*) using Eq. (2);
5 Reduce the distance matrix by replacing the distances relative to *i*and *j*by those between the new node C and any other node k using Eq. (3).
Steps 2, 3, 4 and 5 are repeated until r = 2. The collapsing-NJ process is repeated until there is no unresolved node left in the tree, that is, until the number of unresolved nodes *S*~*u*~is 0. The whole p-ECRNJ process is illustrated in Figure [5](#F5){ref-type="fig"}.
{#F5}
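The contraction stage can be illustrated on an adjacency-map tree representation. The sketch below is hypothetical helper code (not from the paper): contracting an edge merges its two endpoints, so contracting internal edges creates the unresolved, higher-degree nodes that the NJ step then refines.

```python
import random

def contract_edges(adj, p, rng=random):
    """Contract p randomly chosen internal edges of an unrooted tree.

    adj: dict mapping node -> set of neighbours (modified in place).
    An internal edge is one whose two endpoints both have degree > 1.
    """
    for _ in range(p):
        internal = [(u, v) for u in adj for v in adj[u]
                    if u < v and len(adj[u]) > 1 and len(adj[v]) > 1]
        if not internal:
            break
        u, v = rng.choice(internal)
        # merge v into u: u inherits v's other neighbours,
        # raising u's degree and leaving it unresolved
        adj[u].discard(v)
        for w in adj.pop(v):
            if w != u:
                adj[w].discard(v)
                adj[w].add(u)
                adj[u].add(w)
    return adj
```

On the quartet tree with leaves a, b, c, d and internal nodes x, y, contracting the single internal edge (x, y) yields one unresolved degree-4 node, exactly the situation NJ is then asked to resolve.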
Because the p edges are randomly selected, in actual applications p-ECRNJ is performed repeatedly a predefined number of times k instead of considering all $C_{n}^{p}$ ways of selecting p edges to contract. The value of *k* depends on the time allowed; if there is enough time, k can be set to $C_{n}^{p}$. A simple heuristic named ECRML, based on p-ECRNJ and hill climbing, is shown in Figure [6](#F6){ref-type="fig"}.
{#F6}
As shown in Figure [6](#F6){ref-type="fig"}, the NJ method is run once for every unresolved node. The time complexity of NJ on n sequences is O(n^3^), so the total time complexity of a p-ECRNJ move is $O\left( {\sum_{i = 1}^{S_{u}}d_{i}^{3}} \right)$, where *S*~*u*~is the number of unresolved nodes and *d*~*i*~is the degree of unresolved node i. As mentioned above, when p edges are deleted from a tree, the adjacency relationships among the deleted edges determine the number of unresolved nodes produced and their degrees. When all p edges are adjacent, only one unresolved node, of degree 2p, is produced, and the time complexity of p-ECRNJ is O(8p^3^), that is O(p^3^); when all p edges are pairwise non-adjacent, p unresolved nodes of degree 4 are produced, and the time complexity of p-ECRNJ is O(4^3^p), that is O(p). In other cases, the time complexity lies between these two extremes. Consequently, it takes at most O(p^3^) to refine the unresolved nodes in every run of p-ECRNJ (step a). After every p-ECRNJ move, the branch lengths must be optimized and the likelihood of the current tree recomputed (step b). The time complexity of this step is O(lmn), where l is the number of iterations in the branch length optimization, and m and n are the number of sites and the number of sequences respectively. So the total time complexity of ECRML is O(k\*(p^3^+ lmn)).
In an actual tree search, besides being used as a topological transformation operation as shown in Figure [6](#F6){ref-type="fig"}, the p-ECRNJ move can be combined with local topological transforms such as NNI, where rounds of NNI and p-ECRNJ are alternated. For example, ECRML+PHYML in Results is based on the combination of p-ECRNJ and NNI.
Competing interests
===================
The authors declare that they have no competing interests.
Authors\' contributions
=======================
JL conceived, designed and performed the study under the supervision of MG and YL. All authors read and approved the final manuscript.
Acknowledgements
================
The work was supported by the Natural Science Foundation of China under Grant No.60741001, No.60671011 and No.60761001, the Science Fund for Distinguished Young Scholars of Heilongjiang Province in China under Grant No. JC200611, the Natural Science Foundation of Heilongjiang Province in China under Grant No. ZJG0705, the Science and Technology Fund for Returnee of Heilongjiang Province in China, and Foundation of Harbin Institute of Technology under Grant No. HIT.2003.53.
This article has been published as part of *BMC Bioinformatics*Volume 9 Supplement 6, 2008: Symposium of Computations in Bioinformatics and Bioscience (SCBB07). The full contents of the supplement are available online at <http://www.biomedcentral.com/1471-2105/9?issue=S6>.
Cheesy Breakfast Strata is perfect for breakfast, brunch, or as a light dinner with a salad. Swiss cheese, chives, and bacon give a kick to bland potatoes and eggs. The flavor is fabulous, and it is even good at room temperature.
Swiss Mushroom Chicken Recipe
Swiss Mushroom Chicken packs a flavorful punch and will soon become one of your go-to poultry main dishes. The chicken is browned, covered in mushrooms, green onions, sour cream sauce, and Swiss cheese, then baked to perfection.
Chicken Cordon Bleu Casserole Recipe
Chicken Cordon Bleu Casserole takes ordinary ingredients that you probably have on hand and turns them into a meal to please the entire family. Here I’ve combined this concept into a casserole using cooked chicken in a stuffing crust, then added broccoli and mushrooms to round it out as a one-dish meal.
Crab Muffins Recipe – Amazing Party Finger Food
These little crab muffins are bursting with fresh, ocean flavor. Serve these bite-sized gems warm or at room temperature for a perfect party finger food.
Salmon Bread Pudding Recipe for Lunch, Brunch, or Dinner
The outstanding flavor of this salmon bread pudding makes it my absolute favorite savory bread pudding. It makes a fabulous brunch dish that may be served hot or at room temperature, but I like it for a light dinner or lunch.
Joanna Catherine Schröder aka Légid, is a 28 years old photographer based in Berlin. She is half German and half from Rwanda. She grew up partly in Rwanda, Burkina Faso, studied in Paris and worked in Germany. She is currently the Online Editor and Social Media Manager for Blonde Magazine.
Her project "Légid" started out of nowhere in 2013 as she was jobless and met an Australian guy, named Shaun, who got her into photography.
"It was winter in Berlin and we would just walk around the city and talk about life. He had a camera, so he would always take pictures. I felt inspired and started doing the same, but with disposables (because I was afraid to lose or break a “real” camera)," she recalls. A few months later her friend gave her a Yashica T3, so she started documenting her Berlin life and friends as well as her travels. Now she just can’t imagine life without her little analog camera.
The series "Bed Stories" was a shoot about Berlin winters. She wanted to show the atmosphere of Berlin during the harshness of winter. "I’m a lazy person and I love when life takes place in bed," she explains. "I used to do my high-school homework in bed, eat in it, live in it, etc. So every time I had my friends over, chilling in my bed, I thought this would look so good in pictures!"
Joanna invited all her girls over on one grey Saturday. They had breakfast, drank Rotkäppchen Sekt and talked about life, boys, hair and society. She would capture each of them in bed. Then she decided to also shoot some boys (friends or Tinder dates).
Jiban Bima Corporation
Jiban Bima Corporation (JBC) was established on 14 May 1973 under the Insurance Corporation Act 1973 with an authorised capital of Tk 200 million divided into 2 million ordinary shares of Tk 100 each. The paid up capital of the corporation is Tk 50 million, fully subscribed by the government. The corporation is engaged in life insurance business under the provisions of the Insurance Act 1938, Insurance Rules 1958, Insurance Corporation's Rules 1977, and other related laws enforceable in Bangladesh.
The total number of insurers registered in Pakistan up to 1968 was 81, of which 40 were constituted or incorporated in Pakistan and the remaining 41 in other foreign countries. Of the 40 indigenous companies, 10 were registered in East Pakistan and 30 in West Pakistan. Of the foreign companies, 21 originated in the UK, 8 in India, 5 in USA, 3 in New Zealand and one each in Australia, Canada, France, and Hong Kong. Ten of the 40 Pakistani companies were exclusively engaged in life insurance business, 21 in life and other business, and 9 in other business only. Foreign companies concentrated more on non-life insurance. Two of them did life insurance business only, another one did life as well as general insurance.
The number of insurance companies that had business in East Pakistan was 75, of which 10 were locally incorporated ones. Following the independence of Bangladesh in 1971, both life and general insurance business in the country was nationalised under the Bangladesh Insurance (Nationalisation) Order 1972. Five corporations were established to absorb, own and control the businesses of the 75 existing insurance companies and these new corporations were Bangladesh Jatiya Bima Corporation, Karnafuli Bima Corporation, Tista Bima Corporation, Surma Jiban Bima Corporation and Rupsa Jiban Bima Corporation. In 1973, the government decided to integrate the life and general insurance companies into two corporations, and accordingly, the Jiban Bima Corporation was formed to take over the undertakings of the Surma Jiban Bima Corporation and Rupsa Jiban Bima Corporation. The Karnafuli Bima Corporation and the Tista Bima Corporation were integrated into sadharan bima corporation. In that year, the government also decided to merge the Bangladesh Jatiya Bima Corporation with the newly formed Sadharan Bima Corporation.
Until 1985, Jiban Bima Corporation was the only institution to handle life insurance business in Bangladesh. Through the Insurance (Amendment) Ordinance 1984 and Insurance Corporations (Amendment) Ordinance 1984, the government allowed the private sector to establish insurance companies. Up to December 2000, at least 17 private sector insurance companies came into being and made the life insurance business competitive, which however, had little impact on the business performance of the Jiban Bima Corporation.
The corporation offers 15 different types of life insurance schemes. These are whole life assurance, endowment assurance, child protection policy, children endowment, anticipated endowment assurance, pension scheme policy, single payment policy, mortgage protection policy, group term insurance policy, group endowment policy, group variable endowment policy, group pension policy, grameen bima policy, joint life endowment policy, and progressive premium policy.
Premium income of the corporation was Tk 2,447 million in 2007, marking an increase of 63% over premium income in 2001. The following is an account of the gross premium income structure, the proportions among expenses, operating profit and net income, the types of policies sold and the sums assured by type, and the management structure of the corporation, presented for the year 1998. The figures are representative enough, since although the absolute figures have changed over time, the relative proportions have remained the same.
In 1998, the corporation earned gross premiums of Tk 1,402.8 million, which comprised first-year premiums (Tk 401.2 million), renewal premiums (Tk 913.0 million), and group insurance premiums (Tk 88.6 million). It paid Tk 493.7 million to settle life insurance claims under various schemes. Business management expenses of the corporation stood at Tk 629.2 million and it earned operating profits of Tk 279.9 million. The net incomes from its investments and other sources were Tk 189.2 million.
In 1998, the corporation sold 65,086 new individual policies and the sum assured was Tk 5,723 million. The number of policies on female lives was 10,244 and the sum assured in these policies was Tk 700.2 million. The number of policies written in the rural areas under its rural business scheme was 44,209, with a sum assured of Tk 3,191.4 million. The corporation issued 47,925 policies with an amount assured of Tk 3,824.4 million under its non-medical business scheme in the year. The total number of organisations and persons covered, the sum assured and the premiums earned under the Group Insurance Scheme were 340 organisations, 707,900 persons, Tk 17,575.6 million and Tk 88.6 million respectively.
At the end of the year 1998, the corporation had 315,735 individual life policies in force with a sum assured of Tk 23,742 million. Of these policies, 310,555 with a sum assured of Tk 23,727.4 million were underwritten by the corporation itself, and the remaining, with a sum assured of Tk 14.6 million, were underwritten by the company's old units. On the other hand, a total of 43,641 individual policies with a sum assured of Tk 3,047.0 million lapsed during the year.
The corporation has reinsurance treaties with the Swiss Re-Insurance Company of Switzerland and the Munich Re-Insurance Company of Germany. The corporation's retention limit in respect of underwritten risk was Tk 1 million up to 1998. Jiban Bima Corporation also works as a re-insurer for 2 private life insurance companies of Bangladesh.
On 31 December 1998, the book value of the corporation's investments and loans, including term deposits with banks, stood at Tk 3,328.6 million. The investment portfolio included loans on mortgage of property, loans against insurance policies, investments in government securities, debentures of companies, bridge finance advances, investments in shares of companies, house property and land, and term deposits with banks. On that date, the total assets of the corporation were valued at Tk 4,340.49 million.
The management of the corporation is vested in a 7-member board of directors appointed by the government. The managing director is the corporation's chief executive. He is assisted by 2 general managers, 6 deputy general managers and 2 assistant general managers. In December 1998, the corporation had 1,772 employees. The corporation has 8 divisions in its head office at Dhaka and 19 zonal/regional offices.
Recently Nicholas Levinge from England visited India and attended many meetings, formal and otherwise, on philatelic matters. His chosen subject is the King George V Silver Jubilee issues of all countries, and he is the founder President of the King George V Silver Jubilee Study Circle. He also undertook research at the National Archives of India for about 3 months on the Silver Jubilee issue of India, alas without much luck.
We were busy during his visit with various exhibitions, including the Stamps of India National Exhibition, where Levinge volunteered and was a great help in mounting and dismounting the exhibits. We had promised that we would search and collate all the information we have access to on the subject and share it with him. The following is the result of our work.
George V became King of the United Kingdom and the British Dominions, and Emperor of India, on May 6, 1910 upon the death of his father, King Edward VII. The 25th year of his reign was celebrated as the Silver Jubilee. On this occasion 59 countries besides India issued commemorative postage stamps; 44 postal administrations issued a common stamp design, a first for a British colonial issue.
India issued its own unique design in a set of 7 stamps on May 6, 1935. The monuments featured on the stamps were the Gateway of India, Bombay; the Victoria Memorial, Calcutta; the Rameswaram Temple, Madras; a Jain temple, Calcutta; the Taj Mahal, Agra; the Golden Temple, Amritsar; and a pagoda in Mandalay.
These stamps were designed by H W Barr, Engraver at Security Printing, India, as the India Security Press at Nasik Road was then known. He had joined the Press at its inception on a five-year contract and, after completing his second five-year contract, left India in March 1935 upon his retirement. He was succeeded in office by T I Archer, a name well known to those interested in Indian philately.
Mr Barr was responsible for the organisation of the work of the Studio from the inception of the Security Press, and for the bulk of the original design work. Apart from his work on the designs of Government currency notes, he redesigned the whole range of higher values of court fee and non-judicial impressed stamps, the series of Government of India cheques, the Inauguration of New Delhi commemorative series of postage stamps and the Silver Jubilee issue, in addition to many other items including work for Indian Native States.
The Director General's Special Circular No. 64 of March 5, 1935 states that these stamps would be printed for the normal consumption of three months only and that the sale of corresponding denominations of the ordinary postage stamps then current would remain suspended so long as the Jubilee stamps were on sale in the post offices. These commemorative stamps were to be used for prepayment of postage and airmail fees on all postal articles and for payment of telegraph charges as well. During the currency of the Jubilee stamps, ordinary postage stamps then current would also be accepted, if used by the public on postal and airmail articles and telegrams. No service commemorative stamps were issued. The Annual Report of the Indian Posts and Telegraphs Department for the year 1935-36 states that these pictorial stamps remained on sale up to December 31, 1935, when the unsold stocks were withdrawn and destroyed. The discovery of the exact quantity printed and sold is the final frontier of India's Silver Jubilee issue.
The accompanying Postal Notice to the above Circular states that a set of 7 stamps would be issued on May 6, 1935 and would be available at all post offices in India and Burma. However, the denominations listed in the Notice are ½ Anna, ¾ Anna, 1 Anna, 2½ Annas, 3½ Annas, 8 Annas, and 1 Rupee. Another Postal Notice of April 17, 1935 modified the above, stating that the stamp in the denomination of 1 Rupee would not be issued and that a stamp in the denomination of 1¼ Annas would be issued instead.
The Post Office Building at the Calcutta GPO and several other places were illuminated on this occasion, although Monday, May 6, 1935 was observed as a Post Office holiday on account of the Silver Jubilee of His Majesty the King Emperor's accession to the throne, vide Postal Notice of April 18, 1935.
A Postal Notice of February 28, 1935 states that a special stamp called the Silver Jubilee Postal Seal had been issued by Their Majesties' Silver Jubilee Fund, India, and would be on sale at selected post offices from April 1, 1935 to May 15, 1935. The proceeds of the sale of the Seals were to be used for the relief of distress and suffering in India. These Seals were not recognized in payment of postage or any other postal or telegraph charges. The Postal Notice of May 8, 1935 extended the date of sale of the Seals to May 31, 1935.
A poster for the 1 Anna Seal was also put up in the post offices.
A severe earthquake of magnitude 7.7 on the Richter scale devastated Quetta on May 31, 1935. A Postal Notice of June 7, 1935 resumed the sale of Silver Jubilee Postal Seals, now for the benefit of His Excellency the Viceroy's Quetta Earthquake Relief Fund. This sale was discontinued with effect from January 1, 1936, vide the Postal Notice of December 13, 1935.
A slogan postmark 'Support the Jubilee Fund' is known in about 3 distinct types: two Duplex types and a boxed type. These were used at selected post offices in India during the year beginning April 1, 1935.
Stephen Smith, the distinguished aero-philatelist and pioneering astro-philatelist, commemorated the Silver Jubilee with a rocket mail flight on March 23, 1935, for which he also produced special labels. Later, between April 7 and 13, Smith launched 9 more rocket mail flights in Sikkim commemorating the Silver Jubilee.
6 denominations, 6d, 1s, 2s 6d, 10s, 15s, and 20s, of the Silver Jubilee issue of British Postal Orders overprinted 'India' were put on sale at post offices in India on May 7, 1935, according to the Postal Notice of May 3, 1935. Jack Harwood, an acknowledged expert on postal orders of the world, has this to say on our request for information and images of these: "Unfortunately, I have never seen a Silver Jubilee Postal Order overprinted for India. Ordinary British SJ issues of any but the 6d and 1/- are difficult enough to locate. Actually, I don't believe I've ever seen the SJ Postal Orders overprinted for any colony or territory. There are no such items in my collection. I've viewed most of the major collections of postal orders in the UK, and do not recall ever seeing an overprinted Jubilee issue in any of them. I have checked with several other postal order collectors, and no one seems to have seen any overprinted SJ examples, India or anywhere else. Only 4 million of each denomination were printed, so numbers sent overseas from the UK must have been very small."
The life cycle of a butterfly is an engaging and interesting concept for children. It can also be used to touch upon the concepts of family, animals and insects.
Lesson plan Details
This resource on the metamorphosis of the butterfly could be used as an extended exploration as part of the Class III EVS theme (as given in the NCERT curriculum) on Family and Friends. It also touches on the themes of animals and insects, to understand what children think of insects and familiarize them with the insect life around them. The Butterfly Life Cycle study is an idea worth exploring and engaging the child with, and could also work as a basis for further explorations in classes IV and V.
- To introduce and familiarize the learner with the basic sequence and stages of the butterfly’s growth.
- To familiarize the learner with the metamorphosis stages using simple terminology: egg, caterpillar, pupa, butterfly.
- To instill in the learner a sense of wonder and sensitivity, and a habit of documenting what strikes one as worthy of preserving.
The resource has three parts to it which could be covered over 3 class sessions (20-25 minutes each) and extended to a required number of hours beyond the classroom.
Session 1
The children could be read the story of Maria Sibylla Merian. She lived more than three centuries ago, from 1647 to 1717, and changed the way one looked at documentation in Science. As a child, she was very keen on observing and painting all that she saw, especially flowers, moths, butterflies and caterpillars.
"I collected all the caterpillars I could find in order to study their metamorphosis, I therefore withdrew from society and devoted myself to these investigations."; Maria at the age of 13 (“Maria Sibylla Merian: Why her Art changed how we see Nature”).
What was stunning about her art was that it was not fanciful imagery but an accurate depiction of the real, the product of long hours of intense observation and eye-squinting! Imagine that, in a time with no cameras or easily accessible microscopes! The bare human eye and a will to express the seen in art! She became the eyes through which the world came to see the life around it in ways that had not been explored yet.
One must definitely display to the child some of Maria’s work. The following are some examples:
- Source: http://www.artsmia.org/index.php?section_id=2&exh_id=2855
- Source: http://www.booktryst.com/2011/11/beautifully-strange-insects-of-maria.html
- Source: http://pattifridayphotography.blogspot.in/2013/04/maria-sibylla-merian.html
- Source: http://www.hdwallcloud.com/maria-sibylla-merian-pictures/sy9ef6/
Further, given the facility for internet and projector access, one could also explore the idea of viewing the following video as a class:
http://www.youtube.com/watch?v=sCfOQF0nAXo&feature=youtu.be
The entire length of the video is about 11 minutes, which might demand too long an attention span (for a picture video) to expect of 7-year-olds. However, parts of the video could be shown and extended or cut short depending on the response of the children. One can never predict which moment of an otherwise simple-seeming experience could inspire a young mind!
Session 2
Using one or more of Maria's depictions of the butterfly metamorphosis cycle (other sources for pictures could also be used if the need is felt; however, real-life pictures rather than stencil diagrams will excite the child and make the process a lot more meaningful and relatable), the facilitator could briefly take the children through the four stages of metamorphosis while narrating the basic changes and functions characteristic of each stage. The following could be used as the underlying structure of the narration:
The life cycle of butterflies has long intrigued and amused layperson and science enthusiast alike. Butterflies, before they come to be the magnificent colourful flutter of life that we gaze at, grow through 4 distinct stages.
The Egg: Butterflies, like many other species, start off as an egg. One can find clusters of round or oval, translucent eggs on leaves. When looked at closely, one can actually see the growing organism within the egg.
The Caterpillar: The egg hatches to give life to what we know as a caterpillar. Butterflies are very careful to lay their eggs on leaves that are edible to the newborn caterpillar, which cannot crawl about looking for food. The caterpillar munches on the leaf it is born on and the neighbouring leaves until it grows to its full plump size.
The Pupa: Having grown plump and strong, the caterpillar now hides and takes shelter within a wall-like structure called the pupa. Within this, it uses up all the energy from the many leaves to transform its body parts into what grows into a winged butterfly with a tiny body and antennae. This transformation is essentially what is termed metamorphosis.
The Butterfly: Having developed the necessary tissues and organs, the fully grown butterfly emerges from the cracking, tearing pupa in its full size. Within a couple of hours, it masters the art of flying and is already on the lookout for a mate so it can lay some eggs!
Having briefly narrated this life cycle, one could also allow the child to meddle and engage with a toy (resource 1) that depicts the changes across the four stages visually. The steps could be extrapolated to make the first transformation, from egg to caterpillar, visual as well.
Session 3
The children, by now, have second-hand knowledge of how the butterfly comes to be. Now for the child to go out there and see this process unfold for themselves! How else would the curious, unsettling mind of the child believe something to be true? The facilitator could now introduce the child to the idea of Nature Journaling. This could be a year-long activity of which the metamorphosis theme is one aspect. The children, basically, are encouraged to go out into their gardens, school grounds, parks, trees, any niche with life, to observe the many instances of life and document them in any form meaningful to them: scribbles, drawings, words, pictures (with the help of parents or any adult with a camera). So this theme could be the start of a way of doing Science that shall be continued. It is a simple three-step process:
- Find a scribble pad/ book and shine it up with colours, scribbles, paper mosaic, anything! This shall be the journal.
- Keep a sharp eye and an ear at all times! Spot a leaf with butterfly eggs and watch it transform to the butterfly! While at it, make little snippets of the observations.
- Keep at this nature journaling for some time to come! And share and exchange journal entries with one another.
Having completed this journaling activity, the maze (resource 2) could be printed out and given to the child as a take-home or in-class activity (whichever suits the class process). This could be a means to gauge whether or not the child has comprehended the process of butterfly metamorphosis.
A Couple of Ideas that the Facilitator could address in conversation with the children:
- The practice of observing nature in nature rather than bringing nature into a cage for convenient observation.
- The practice of sharing ideas and observations with one another in healthy conversation minus the attitude of who-knows-most; who-is-right; who-got-here-first.
- The sensitivity to watch out for the magic nature has to offer; to look out for the diversity that feeds one’s curiosity.
- Encourage further explorations; e.g. Do all butterflies grow exactly the same way; do some take longer to grow? Are the butterfly eggs always in clusters of more than one? Do all caterpillars eat similar leaves? What are the other insects that might eat up the butterfly pupa before it turns out into a butterfly? Do dragonflies go through a similar birth and growing process? How does a frog grow into the hopping creature it is?
The table below is an excerpt of the prescribed NCERT syllabus for class III EVS. It is proposed that this resource be used in relation to this theme. However, the resource could also be used as part of independent Science workshops (wherein the intent might be to introduce a certain way of Science driven by inquiry); Children’s camps.
| Questions | Key Concepts / Issues | Suggested Resources | Suggested Activities |
| --- | --- | --- | --- |
| **Some creepy crawlies – and flyers too:** What different kinds of small crawling animals do you know? Where and from what does each of them hide? Which insects can crawl and also fly? Which ones bite us? Can flies make us ill? Why does a spider make a web? | Exploring children’s ideas of crawling animals, flyers and insects. | Child’s daily life experience, observation, stories/poems on insects, flyers and crawling animals. | Observation of ants, flies, spiders, crickets, cockroaches, earthworms, lizards and other animals. Discussion about them, where they live, what they eat, insect bites (wasp) etc. Drawing some of them. |
Here are some links to resources that the facilitator can use in the classroom:
After reluctantly enrolling in Journalism as a freshman, Kacey Hertan ’16 soon knew that it would become a passion of hers. “As soon as I wrote my first article I knew Inklings was something that I wanted to be involved in,” Hertan said. This Massachusetts native has spent her three years in Inklings as a business manager, where she sells ads and manages the budget. In her free time, Kacey stays busy as the captain of the diving team, which she joined freshman year having never been on a diving board before. Aside from being an impressive athlete, Hertan is the president of the Key Club, the oldest community service club at Staples. While she enjoys covering a variety of stories, her favorites to write are features. More specifically, the unique people she has met while writing her Humans of Staples pieces have been her most rewarding Inklings experience.
Fall Cookies from Inklings News on Vimeo.
Halloween Cupcakes from Inklings News on Vimeo.
Q:
Partial fraction decomposition help
In a text that I am reading, they state that the following partial fraction ($r$ fixed) expansion is "readily computed":
$$f(z) = \frac{z^r}{(1-z)(1-2z)(1-3z)\cdots (1-rz)} = \frac{1}{r!} \sum_{j=0}^r \binom rj \frac{(-1)^{r-j}}{1-jz}$$
I know how to do partial fractions, or at least I thought I did. I tried to set it up with something like
$$\frac{A_1}{1-z} + \frac{A_2}{1-2z} + \cdots + \frac{A_r}{1-rz}$$
but it got messy. Also, the answer given above has a constant term $\frac{(-1)^{r}}{r!}$ when $j=0$, which shouldn't show up when doing this method normally.
Could someone point me in the right direction? Thanks!
I now see why the constant term is necessary; doing a "long division" would result in the $\frac{(-1)^r}{r!}$ term. The remainder would be something like
$$\frac{z^r - \frac{1}{r!}(1-z)(1-2z)\cdots(1-rz)}{(1-z)(1-2z)\cdots(1-rz)}$$
which technically could be solved by partial fractions...
Induction also looks promising, as suggested by Maesumi.
A:
The coefficient of $\frac{1}{1-kz}$ is
$$\left[\frac{z^r}{(1-z)(1-2z)\cdots\widetilde{(1-kz)}\cdots (1-rz)}\right]_{z=\frac{1}{k}}=\frac{\left(\frac{1}{k}\right)^r}{\left(1-\frac{1}{k}\right)\left(1-\frac{2}{k}\right)\cdots\widetilde{\left(1-\frac{k}{k}\right)}\cdots \left(1-\frac{r}{k}\right)}=\frac{1}{k(k-1)(k-2)\cdots\widetilde{0}\cdots(-1)(-2)\cdots(k-r)}=\frac{(-1)^{r-k}}{k!\,(r-k)!}=\frac{1}{r!} \binom rk (-1)^{r-k}$$
Here $\widetilde{\phantom{x}}$ marks the omitted factor, and the formula holds for $k=1,\dots,r$. The $j=0$ term $\frac{(-1)^r}{r!}$ is the constant that remains because the numerator and denominator have equal degree, as you observed.
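As a quick sanity check of the claimed expansion (not part of the original answer), one can verify the identity in exact rational arithmetic for small $r$ using Python's `fractions` module:

```python
from fractions import Fraction
from math import comb, factorial

def lhs(z, r):
    """z^r / ((1-z)(1-2z)...(1-rz)), evaluated exactly."""
    denom = Fraction(1)
    for j in range(1, r + 1):
        denom *= 1 - j * z
    return z**r / denom

def rhs(z, r):
    """(1/r!) * sum_{j=0}^{r} C(r,j) (-1)^(r-j) / (1 - j z)."""
    return sum(Fraction(comb(r, j) * (-1)**(r - j), factorial(r)) / (1 - j * z)
               for j in range(r + 1))

# The two sides agree exactly at arbitrary rational test points
for r in range(1, 7):
    for z in (Fraction(1, 10), Fraction(-3, 7), Fraction(2, 13)):
        assert lhs(z, r) == rhs(z, r)
```

Note that the $j=0$ term in `rhs` contributes the constant $\frac{(-1)^r}{r!}$, which is why the check fails if the sum is started at $j=1$.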
How Is Eye Color Determined?
Chromosome 15 has a region that plays a major role in eye color. There are two genes in this region: HERC2 and OCA2. OCA2 produces the P protein, which contributes to the maturation, storage, and production of melanin cellular structures. The lower the amount of P protein, the less melanin and the lighter the eye.
These genes result in the most common eye colors, including brown, blue, and green. Scientists still do not fully understand how certain other colors, such as hazel and gray, occur.
Caucasian children are often born with blue eyes. By their 3rd birthday, their eye color can change and become darker. This happens when the melanin that causes darker eye colors is not yet present at birth. It is also possible for a child to have an eye color that neither of their parents has.
Darker eye colors tend to dominate over the lighter colors. For example, if one parent has brown eyes and the other blue, there is a higher likelihood that their children will have brown eyes. However, this is not automatic.
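The "higher likelihood, but not automatic" point can be illustrated with a deliberately simplified single-gene model. Real eye-color inheritance is polygenic, as the HERC2/OCA2 discussion above notes, so this sketch is only an illustration of dominant/recessive reasoning, not an accurate genetic model:

```python
from itertools import product

# Simplified model: B = brown allele (dominant), b = blue allele (recessive).
# Each parent passes one of their two alleles to the child at random.

def offspring_colors(parent1, parent2):
    """Return the probability of brown vs. blue eyes for a child."""
    counts = {"brown": 0, "blue": 0}
    for a, b in product(parent1, parent2):
        # One dominant B allele is enough for brown in this toy model
        counts["brown" if "B" in (a, b) else "blue"] += 1
    total = sum(counts.values())
    return {color: n / total for color, n in counts.items()}

# A brown-eyed parent carrying a hidden blue allele ("Bb") and a
# blue-eyed parent ("bb") have 50/50 odds per child:
print(offspring_colors("Bb", "bb"))  # {'brown': 0.5, 'blue': 0.5}
```

Under this model, a "BB" brown-eyed parent always produces brown-eyed children with a blue-eyed partner, while a "Bb" parent does not, which is why brown tends to dominate without being guaranteed.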
It is also possible for genes to cause children to have two different eye colors, such as a blue left eye and a brown right eye. Referred to as heterochromia, this occurs as a result of a benign genetic disorder or faulty developmental transport.
Green Eyes
- About 2 percent of people throughout the world have green eyes.
- People with green and other light-colored eyes tend to have a higher eye cancer risk, specifically intraocular melanoma.
- Green eyes are usually not apparent until a person is at least 6 months old. Green-eyed people are typically born with blue or gray eyes that eventually transition to green.
Blue Eyes
- Blue eyes have less melanin compared to brown, but both colors are relatively common throughout the world. In fact, everyone with blue eyes shares a common ancestor.
- People with blue eyes tend to have greater light sensitivity.
- Night vision is often better among people with blue eyes.
- A genetic mutation is responsible for blue eyes.
- People with blue eyes are more likely to have red eye in photos.
Gray Eyes
- Gray eyes do not contain melanin. There are excess deposits of collagen in the eye’s stroma that essentially block blue hues by interfering with Tyndall scattering.
- Throughout the world, only about 3 percent of people have gray eyes.
- Depending on factors like lighting and clothing, gray eyes may appear to change color.
Brown Eyes
- The most common eye color in the world is brown. However, the shades of brown vary greatly depending on the region where someone is born. Brown-eyed people in Europe tend to have lighter hues while people born in Asia and Africa with brown eyes tend to have darker hues.
- About 41 percent of people in the U.S. have brown eyes.
- Brown eyes contain a high level of melanin. The more melanin present, the deeper the color.
- Having brown eyes may decrease your risk of environmental noise-related hearing issues.
- Women with brown eyes may experience more pain when they are birthing babies.
Hazel Eyes
This color is rare. Each hazel eye is a unique color, with no two people with hazel eyes looking the same.
Eye Color & Your Mood
It is a common myth that people’s eyes change color as a result of their mood. There are times when the eyes may appear to be a different color, but there are explanations for this outside of mood.
For example, eyes can look darker when someone is experiencing extreme happiness or a period of grief. This is not due to their emotions. It happens because the pupil is more dilated at these times.
When the pupil enlarges, the eye can seem darker, but this is only because the black pupil creates that illusion. Once the pupil returns to its typical size, the darkening is no longer apparent.
Contrast is responsible for eyes appearing greener when you are angry. Anger can cause the blood vessels in the eyes to dilate and become redder. This coloring can make it seem like your eyes are greener, but it is just the contrast with the redness that is causing this effect.
Eye Color & Personality
There are articles all over the internet saying that your personality traits are partially intertwined with your eye color. There is some research to suggest that this is possible, but it is not concrete.
Some research suggests the following personality traits are associated with certain eye colors:
- Brown: vivacious, outgoing, and affable
- Green: self-sufficient, impatient, and mysterious
- Hazel: imaginative, adventurous, and determined
- Gray: quiet, conforming, and self-effacing
- Blue: sincere, smart, and sentimental
Another study looked at eye color and personality. Researchers concluded that some of the genes that play a role in eye color also play a role in frontal lobe formation. This means that certain characteristics may be associated with certain eye colors.
These results have been replicated in other studies. One found a correlation between pain tolerance and eye color. This was only a pilot study, but the results are promising for further research.
Other research looked at eye color and how it impacts attractiveness to others. The research found that if a man has blue eyes, he is more likely to seek out women with blue eyes. People with blue eyes seemed to be the most determined to find a partner who had the same eye color as them compared to those with brown and green eyes.
Eye Color & Intelligence
Links between eye color and intelligence are far weaker. There is very limited research that shows certain eye colors may have intelligence as an associated trait.
Those who have associated intelligence with eye color have done so using research methods such as surveys. Since these are largely subjective, there is no scientific proof that certain eye colors tend to be linked to higher levels of intelligence than others. So far, scientists agree that factors like education, home life, and environment are the primary influencers on intelligence scores.
References
The World’s Population by Eye Color Percentages. World Atlas.
Eye Color Linked to Pain Tolerance in Pilot Study at Pitt. (July 2014). Post Gazette.
Eye Color Guide – The Most Common Eye Colors. AC Lens.
Is Eye Color Determined by Genetics? (May 2015). Genetics Home Reference.
Can Your Eyes Change Color? (April 2014). The Wall Street Journal.
Does Eye Color Indicate Intelligence or Personality? What Are Your Eyes Telling the World? (November 2018). Owlcation.
Can Eye Color Predict Pain Tolerance? (July 2014). University of Pittsburgh Medical Center.
Associations Between Iris Characteristics and Personality in Adulthood. (May 2007). Biological Psychology.
3 Things That Your Eye Color Tells the World About You. (March 2018). Psychology Today. | https://www.nvisioncenters.com/education/eye-color-guide/ |
Logistics trends and strategies in 2022
Logistics and supply chain operations continue to be affected today by the vulnerabilities of supply and production chains from the 2020-2021 period, but also from recent periods, determined by:
Which supply chain model do we choose to respond to the current and future operational challenges of the supply chain?
The research carried out by Gartner in February 2021 based on the data of a sample of over 1,300 supply chain professionals showed that:
- 87% plan to invest in supply chain resilience in the next two years
- 89% want to invest in agility
- 60% of the respondents admit that their supply chain models were not designed for resilience but for efficiency, i.e. designed to cut costs in service of the marketing-sales strategy or, worse, treated as a cost center within the company's business support services
Therefore, the challenge of any supply chain manager in the next period will be the redesign, based on the lessons learned from the supply chain vulnerabilities of 2020-2021, but also recent ones, of a SC model that combines cost optimization solutions with of flexibility, sustainability and resilience.
The redesign of a supply chain model is necessary when important changes occur in a company’s business strategy, determined by:
- Activity restriction or business growth through mergers and acquisitions (reallocation of existing resources or allocation of new resources)
- Expanding/changing sales channels (where to buy and where to pick up the goods)
- Entering new markets – the expansion of the distribution/production network or the increase of existing capacities is required
- Expanding or updating the range of products by selecting new suppliers or deselecting existing ones (introducing new product lines or updating existing products by expanding functional characteristics and design changes, where and when)
- Important changes in the client portfolio (appearance of large clients, increase or decrease in the number of small clients, changes in client requirements)
- Changes in the internal and international regulatory environment (the introduction of new restrictions or the emergence of new opportunities for operational expansion)
- The continuous reduction of the level of customer service, the reduced adaptability to new market requirements and the dynamics of product demand models – implies the introduction of new processes or the improvement of existing ones, the improvement of delivery times and/or the correct management of life cycles of the products
- The decrease in the profit margin as a result of the increase in production costs and transport tariffs during the energy crisis – it is necessary to reduce customer service costs by renegotiating supply and transport contracts and optimally reconfiguring the supply chain network in order not to affect the quality of commitments made to supply chain members
- Other situations (pandemic, Brexit, wars, international terrorism, natural disasters, climate change, global financial crises, etc.)
In today’s complex business environment, most companies will want to rethink their supply chain operating models to become more resilient, responsive and efficient. There is NO single model for successful operation of all companies’ supply chains.
The same design principles can lead to very different supply chain operating models. The diversity of operating models of supply chains is determined by the specifics of the following factors:
Therefore, a redesign of the operating model should take into account the business environment, objectives and business strategy, as well as the specific functional characteristics of the company, in the practical implementation of the supply chain strategy. Think strategically when designing the supply chain model; do not get bogged down in optimization or in the timely resolution of individual problems that have arisen.
Michael Hammer, a visionary leader in process redesign and management, said: “High-performance operational processes are necessary but not sufficient for a company’s success”.
In other words, you can have the best operational processes in the world, but if the company's strategy does not give you the right direction, you cannot achieve your business goals. Many companies that had not integrated supply chain strategy into their business strategy experienced major changes and disruptions in 2021.
Regardless of the changes made in the business objectives, i.e. reducing costs, expanding the product range, rationalizing assets, increasing the level of customer service, expanding the market share, you must ensure that the changes made in SC support the achievement of the new objectives of the business strategy. Without strategy you cannot be competitive. Sometimes business and SC strategies are a little out of phase and need to be aligned to focus on what you want to achieve in the next period.
For the current year 2022 and the period that follows, develop a configurable supply chain system. To reconfigure the supply-distribution chain in response to changes in the business environment, the structure of your supply chain network must be flexible with respect to both the physical, tangible assets (factories, distribution centers, retail stores, showrooms, etc.) and the intangible assets represented by software systems and the management of supply chain processes.
Creating a flexible, easily configurable supply chain structure involves strategically balancing the use of your own assets with access to third-party assets.
Examples:
All these examples are based either on outsourcing processes that do not bring competitive advantage, or on developing systems to coordinate and control collaborative supply chain processes.
For example: an in-house distribution center vs. a third-party distribution center. To gain access to third-party distribution assets, a service contract is concluded with 3PL storage and transport providers, and the coordination and control of collaborative fulfillment activities is carried out on a shared software platform.
By developing a supply chain network with both your own and third-party assets, the configuration of the network is not fixed; on the contrary, it is flexible and easily reconfigurable, so a market change such as a drop in demand or an expansion into new markets allows the supply chain to be reconfigured rapidly.
To understand the vulnerabilities of a supply chain when developing a flexible, reconfigurable structure, we recommend that you first draw on your professional experience with best practices focused exclusively on cost reduction at the expense of resilience: Lean and Just-in-Time (JIT), rationalizing the number of suppliers and relying on single sources of supply, outsourcing, and the global centralization of production and distribution.
Later, to respond more easily to future supply chain challenges, complement the efficiency-focused practices mentioned above with resilience and sustainability best practices related to segmentation, flexibility, and agility.
The joint use of efficiency, resilience and sustainability practices will help you in the future to correctly design an efficient, flexible and reconfigurable structure of the new supply chain model.
We expect a paradigm shift from “design-for-efficiency” to “design-for-resilience”.
Note: The design of a supply chain system with a flexible, reconfigurable structure strategically offers the advantages of rapid system updating, thus avoiding the high costs of its total replacement or redesign.
In Fig. 1 you can see a simplified eight-stage plan that helps you increase supply chain performance in the following period:
Fig. 1. The stages of the methodology for updating and implementing the supply chain strategy
You have read Article 1 of the Series “Logistics trends and strategies in 2022”
In the second article we will talk about the stages of establishing the supply chain strategy.
Does the gender of a student’s initial STEM instructor have any causal effect on major choice? Using a panel of student-by-course data from Wellesley College, I measure the effects of female STEM instructors on female student major choice. The main threats to internal validity are non-random student selection in or out of class sections based on observable or unobservable characteristics, along with simultaneity. To mitigate these threats, I employ an instrumental variable that uses the exogenous variation induced by random patterns of faculty hires and leaves within a given department and semester. My analyses find that exposure to female STEM professors significantly increases a female student’s probability of majoring in a STEM department. Female STEM professors increase the likelihood of a student major by 5 to 6 percentage points, which is about a 25% increase from the major declaration rate with male STEM professors. These positive results are especially pronounced for students with high SAT Math scores, where female professors increased the likelihood of majoring in STEM by 11 to 13 percentage points, a 40%-45% increase. I also observe significant negative female professor effects in the Humanities and Non-Economics Social Sciences. These results are consistent with the theories and empirical findings detailed in past literature, that exposure to female professors in traditionally male-dominated fields (i.e. STEM) increases female majors in those areas.
Q:
Groups whose poset of direct factors are lattices
Let $G$ be a finite group. Denote by $\mathcal{N}(G)$ the modular lattice of normal subgroups of $G$ and denote by $\mathcal{D}(G)$ the subposet of $\mathcal{N}(G)$ whose elements are the direct factors of $G$.
In general, $\mathcal{D}(G)$ is not a sublattice of $\mathcal{N}(G)$. For example, take $G=\mathbb{Z}/4\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}$ and $H,K$ the subgroups generated by $(1,0)$ and $(1,1)$ respectively. We can check that $H$ and $K$ are two direct factors of $G$ but that $H\cap K=\langle(2,0)\rangle$ is not.
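This counterexample is small enough to check by brute force. The following Python sketch (ad hoc helper functions written for this post, not taken from any computer algebra library) enumerates the subgroups of $\mathbb{Z}/4\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}$ and confirms that $H$ and $K$ are direct factors while $H\cap K$ is not:

```python
from itertools import product

# Elements of G = Z/4 x Z/2, written additively as pairs (a, b).
G = [(a, b) for a in range(4) for b in range(2)]

def add(x, y):
    return ((x[0] + y[0]) % 4, (x[1] + y[1]) % 2)

def generated(gens):
    """Subgroup generated by the given elements (closure under addition)."""
    S = {(0, 0)} | set(gens)
    while True:
        new = {add(x, g) for x in S for g in gens} - S
        if not new:
            return frozenset(S)
        S |= new

# Every subgroup of Z/4 x Z/2 is generated by at most two elements.
subgroups = {generated(gs) for r in range(3) for gs in product(G, repeat=r)}

def join(H, K):
    return generated(H | K)

def is_direct_factor(H):
    """H is a direct factor iff some subgroup K has trivial
    intersection with H and joins with H to give all of G."""
    return any(H & K == {(0, 0)} and join(H, K) == frozenset(G)
               for K in subgroups)

H = generated([(1, 0)])
K = generated([(1, 1)])
print(is_direct_factor(H), is_direct_factor(K))  # both are direct factors
print(H & K == generated([(2, 0)]))              # H ∩ K = <(2,0)>
print(is_direct_factor(H & K))                   # ... but not a direct factor
```

The same brute-force check can be used to probe statements 1)–3) below on other small groups.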
We will call a $\mathcal{D}$-group every finite group $G$ for which $\mathcal{D}(G)$ is a sublattice of $\mathcal{N}(G)$. Some classes of finite groups are $\mathcal{D}$-groups. For example, we can check that:
1) Every cyclic group is a $\mathcal{D}$-group.
2) More generally, every finite nilpotent group for which the Sylow subgroups are indecomposable is a $\mathcal{D}$-group.
3) Every group (finite or infinite) in which every normal subgroup is a direct factor is obviously a $\mathcal{D}$-group. It is well known that such groups are the restricted direct products of simple groups (see J. Wiegold, On direct factors in groups).
My question is: Can we describe (or, if possible, classify) the finite $\mathcal{D}$-groups?
Remark: If the lattice of normal subgroups of a finite $\mathcal{D}$-group is distributive, then the lattice $\mathcal{D}(G)$ is a boolean algebra.
A:
I have a few observations about this question,
but only time today to write down one of them.
For this I will write $H\cap K$ for the intersection
of two subgroups (just as everyone else does), and write
$H+K$ for the join of the subgroups.
Theorem.
Let $G$ be a group, and let $(P_1,Q_1)$ and
$(P_2,Q_2)$ be two pairs of complementary normal
subgroups (a.k.a. pairs of complementary
direct factor subgroups of $G$).
If $P = P_1\cap P_2$ and $Q = Q_1+Q_2$, then

1. $[G,G]\subseteq P+Q$.
2. $[P\cap Q,P+Q] = \{1\}$.
Therefore, if $G$ is any (finite) centerless, perfect group, then
$G$ is a $\mathcal D$-group.
Proof:
For the first item,
$$
\begin{array}{rl}
[G,G]&=[P_1+Q_1,P_2+Q_2]\\
&=[P_1,P_2]+[P_1,Q_2]+[Q_1,P_2]+[Q_1,Q_2]\\
&\leq [P_1,P_2]+Q\\
&\leq (P_1\cap P_2) + Q = P+Q.
\end{array}
$$
Here I am using the additivity of the commutator, the fact that
$[H,K]\leq H\cap K$, and the fact that $Q_1, Q_2\leq Q$.
For the second item,
$[P,Q_1] \leq P\cap Q_1 \leq P_1\cap Q_1 = \{1\}$.
Similarly
$[P,Q_2] = \{1\}$. By the additivity of the commutator,
$[P,Q]=[P,Q_1+Q_2]=[P,Q_1]+[P,Q_2]=\{1\}$. Now let $Z=P\cap Q$, so that $Z\leq P$ and $Z\leq Q$. From the last two sentences and the monotonicity
of the commutator in each variable we deduce
$[Z,Q]\leq [P,Q] = \{1\}$ and $[Z,P]\leq [Q,P]=[P,Q]=\{1\}$,
so by additivity we get
$$[P\cap Q,P+Q]=[Z,P+Q]=[Z,P]+[Z,Q]=\{1\}.$$
This
is the assertion to be proved.
For the final sentence of the theorem, let $G$
be a perfect group ($[G,G]=G$) that is also a
centerless group ($[G,N]=\{1\}$ implies $N=\{1\}$).
Using the perfectness of $G$,
the first item of the theorem can be written $G\subseteq P+Q$.
Using this (i.e. $G=P+Q$), the second item can be written
$[P\cap Q,G]=\{1\}$, or $P\cap Q\leq Z(G)$. Using the
centerlessness of $G$ we get $P\cap Q=\{1\}$.
Altogether we obtain that $P=P_1\cap P_2$ and $Q=Q_1+Q_2$
are complementary
normal subgroups of $G$.
This shows that the collection of
factor congruences is closed under $\cap$ and $+$,
so $G$ is a $\mathcal D$-group. $\square$
[One can go a bit further and show that the lattice of factor
subgroups of a perfect, centerless group is a
complemented distributive sublattice
of ${\mathcal N}(G)$.]
A:
(edit: Added Theorem 2 below, which gives half the case with abelian direct factors, and classifies the finite abelian $\mathcal{D}$-groups)
Theorem 1. Let $G$ be a non-trivial group satisfying both chain conditions on normal subgroups, and with no non-trivial abelian direct factors. Then $G$ is a $\mathcal{D}$-group if and only if $G$ admits a unique Krull-Schmidt decomposition, up to the order of the factors.
proof: The chain conditions guarantee that $G$ admits a (finite length) Krull-Schmidt decomposition $G= G_1\times \cdots \times G_n$, where the subgroups $G_1,...,G_n$ are all non-trivial and indecomposable. Let $\pi_1,...,\pi_n$ denote the corresponding projections $\pi_i\colon G\to G_i$.
The group $\text{Aut}_c(G) = C_{\text{Aut}(G)}(\text{Inn}(G))$ acts transitively on the Krull-Schmidt decompositions, up to the order of the factors. $G$ admits a unique KS decomposition, up to order of the factors, if and only if $\text{Aut}_c(G) = \prod_{i=1}^n \text{Aut}_c(G_i)$. This is in turn equivalent to $\text{Hom}(G_i,Z(G_j))$ being trivial for all $i\neq j$, which is in turn equivalent to $\pi_j(\phi(G_i))$ being trivial for all $i\neq j$ and $\phi\in\text{Aut}_c(G)$.
So suppose that $G$ admits a unique KS decomposition, up to the order of the factors. Then every direct factor of $G$ is of the form $\prod_{i\in E} G_i$ for some $E\subseteq \{1,...,n\}$. The intersection and join of direct factors are therefore equivalent to the intersection and union of the corresponding subsets of $\{1,...,n\}$. Therefore the direct factors form a sublattice of $\mathcal{N}(G)$, and $G$ is a $\mathcal{D}$-group.
On the other hand, suppose that $G$ does not admit a unique KS decomposition up to the order of the factors. Fix any KS decomposition $G= G_1\times\cdots\times G_n$. Also fix $i,j$ such that $\text{Hom}(G_i,Z(G_j))$ is non-trivial, and let $z\in\text{Hom}(G_i,Z(G_j))$ be non-trivial. We define $\phi\in\text{Aut}_c(G)$ by $\phi(g)=g$ for $g\in G_k\neq G_i$ and $\phi(g)= g z(g)$ for all $g\in G_i$. Then $G_i$ and $\phi(G_i)$ are distinct direct factors of $G$ but $G_i\cap \phi(G_i) =\ker(z)$ is a proper normal subgroup of the indecomposable group $G_i$, so can be a direct factor only if $\ker(z)=1$. This implies $G_i$ is abelian, contradicting the assumptions on $G$. $\square$
The reverse direction did not use the assumption of no abelian direct factors.
Theorem 2. Let $G$ be a non-trivial group satisfying both chain conditions on normal subgroups. Write $G= (A_1\times\cdots\times A_k)\times (G_1\times \cdots \times G_n)$, and $A=A_1\times\cdots \times A_k$, where the $A_i$ are indecomposable abelian groups and the $G_i$ are indecomposable non-abelian groups. If $G$ is a $\mathcal{D}$-group then the following three conditions hold:
(1) The Sylow subgroups of $A$ are either cyclic or elementary abelian. Equivalently, for all $i\neq j$, any non-trivial element of $\text{Hom}(A_i,A_j)$ is an injection.
(2) The Krull-Schmidt decomposition of $G_1\times\cdots\times G_n$ is unique, up to the order of the factors. Equivalently, for all $i\neq j$, $\text{Hom}(G_i,Z(G_j))$ is trivial.
(3) For all $i,j$, any non-trivial element of $\text{Hom}(A_i,Z(G_j))$ is an injection. Given the previous two conditions, this is equivalent to saying that if the Sylow $p$-subgroup of $A$ is not elementary abelian, then the Sylow $p$-subgroup of $Z(G_j)$ is trivial for all $j$.
proof: The proof proceeds essentially as in the forward direction of the previous proof.
We write $G=H_1\times\cdots \times H_m$, where we permit one or more factors to be abelian. For any $i\neq j$ we consider $z\in\text{Hom}(H_i,Z(H_j))$, and define $\phi\in\text{Aut}_c(G)$ by $\phi(g)=g$ for all $g\in H_k\neq H_i$ and $\phi(g)=g z(g)$ for all $g\in H_i$. We consider $H_i\cap \phi(H_i)=\ker(z)$, which is the intersection of two direct factors of $G$.
If $G$ is a $\mathcal{D}$-group, then $\ker(z)$ is a normal subgroup of the indecomposable group $H_i$ which is also a direct factor of $G$. Therefore either $\ker(z)=H_i$ or $\ker(z)=1$. In particular, for all $i\neq j$ any non-trivial element of $\text{Hom}(H_i,Z(H_j))$ must be an injection. If such an injection exists, then $H_i$ must be abelian. As every quotient of a finite abelian group $X$ is isomorphic to a subgroup of $X$, and vice versa, the three conditions then follow. $\square$.
I'm fairly confident the converse also holds, thereby giving the full classification of $\mathcal{D}$-groups with both chain conditions, and so in particular all finite $\mathcal{D}$-groups. But I'm still working on that.
FIELD OF THE INVENTION
BACKGROUND OF THE INVENTION
DETAILED DESCRIPTION
The present invention generally relates to providing haptic feedback to a user, and more particularly to a method and apparatus for providing a haptic feedback to a rotary knob.
As mobile devices incorporate more features, it is increasingly desirable to provide controls such as multi-function knobs to handle the load and to offer the user a way to interact with these features blindly. However, such a knob must allow the user to easily differentiate between modes of operation. Therefore, it would be desirable to have a device rotary knob that is capable of providing a feedback to a user, the feedback identifying specific menu items or device functions.
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions and/or relative positioning of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of various embodiments of the present invention. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are often not depicted in order to facilitate a less obstructed view of these various embodiments of the present invention. It will further be appreciated that certain actions and/or steps may be described or depicted in a particular order of occurrence while those skilled in the art will understand that such specificity with respect to sequence is not actually required.
In order to address the above-mentioned need, a method and apparatus for providing a haptic feedback to a rotary knob is provided herein. During operation the rotary knob will be rotated, causing a display to cycle through menu items or device functions. A haptic feedback is provided to the rotary knob in order to identify a border (transition) between menu items. During operation a different haptic effect may be provided to different borders in order to distinguish between menu items. In addition, an angle of rotation for a particular menu item is allowed to vary/change prior to a border being encountered. The angle can be based on the particular menu item. Because both the haptic effect and the angle of rotation for the rotary knob are allowed to vary, a user may be able to easily identify transitions to various menu items.
FIG. 1 illustrates device 100 having a haptic rotary knob. As shown, device 100 comprises graphical user interface (GUI) 102 and haptic rotary knob 101. In a preferred embodiment, GUI 102 comprises a man-machine interface such as a touch-screen. Rotary knob 101 allows the user to directly manipulate functions and settings of device 100. Knob 101 is approximately a cylindrical object. Knob 101 can alternatively be implemented as a variety of different objects, including conical shapes, spherical shapes, dials, cubical shapes, rods, etc., and may have a variety of different textures on their surfaces, including bumps, lines, or other grips, or projections or members extending from the circumferential surface.
The user 201 (shown in FIG. 2) preferably grips or contacts the circumferential surface of knob 101 and rotates it a desired amount to scroll through menu items. Haptic feedback can be provided to distinguish between borders of menu items (only one menu item 202 labeled in FIG. 2). The haptic feedback is preferably a tactile feedback which takes advantage of a sense of touch by applying forces, vibrations, or motions to the knob.
Menu items include any object that can be displayed to a user, including without limitation, text, web pages, digital images, icons, videos, animations and the like. For example, menu items such as “audio”, “map”, “temperature”, and “cellular phone” can be provided. Once knob 101 is rotated to highlight a menu item, a sub-menu for that item may be displayed by pushing knob 101. Therefore, knob 101 can then be rotated to cycle through a list of menu items 202, select a menu item 202 by pushing the knob, and adjust a value of the selected menu item by again rotating knob 101.
As discussed, knob 101 is preferably provided with haptic feedback to aid user 201 in scrolling through menu items 202 without the need to look at screen 102. That is, by adjusting the feel of the knob 101 to clearly correspond to the context of GUI 102, a user may navigate through menu items without the need to look at GUI 102.
As discussed, the haptic feedback is particularly useful to distinguish the transition, or border, between menu items as knob 101 is rotated. This is illustrated in FIG. 3. More particularly, FIG. 3 illustrates a haptic effect when rotating a knob between menu items. In particular, a graph is shown that plots an intensity of force, vibration, or motion applied to the knob versus angle of rotation for knob 101. As knob 101 is rotated, its angle increases. Little to no haptic effect (forces, vibrations, or motions) is provided to knob 101 until just before a transition to a next menu item occurs. As knob 101 is rotated, and a next menu item is about to be selected, user 201 is notified of the transition by a haptic effect applied to knob 101. The intensity of the haptic effect (e.g., an amount of force, vibration, or motion) increases as the border between menu items is reached. As shown in FIG. 3, once the transition to a next menu item is made, the haptic effect is reduced until a next menu-item border is reached.
In one embodiment of the present invention, menu items may be given different importance. For example, a menu item may be “starred” to indicate a higher importance. This is illustrated in FIG. 4 where menu item 3 is given a higher importance. Higher-importance menu items may be marked by GUI 102, for example, by star 301. As discussed above, a different haptic effect may be provided to different menu-item borders in order to distinguish between select menu items. In addition, the angle of rotation between menu-item borders is allowed to vary in order to distinguish between menu items. Thus, for example, in FIG. 4, the border between menu item 2 and menu item 3 may be identified with a different haptic effect than, for example, the border between menu item 1 and menu item 2. In addition, knob 101 may only need to be rotated a first amount (angle) to transition from menu item 1 to menu item 2, but may need to be rotated a second amount to transition from menu item 3 to menu item 4. Because both the haptic effect and the angle of rotation for the rotary knob are allowed to vary, a user may be able to easily identify transitions to various menu items.
FIG. 5 illustrates a haptic effect versus angle plot for the device shown in FIG. 1. More particularly, the graph of FIG. 5 illustrates how the haptic intensity being applied to knob 101 changes as the knob is rotated. As is evident, different haptic effects may be applied to different menu-item border transitions. As shown in FIG. 5, the haptic effect when transitioning from menu item 2 to menu item 3 (haptic effect 501) varies greatly from all other border transitions. Additionally, the angle of rotation needed to transition through menu item 3 (angle 502) is greater than the rotation needed to transition through any other menu item.
Thus, when a special, favorite, starred, or landmark menu item is rolled over, the haptic signature (i.e., the intensity of the forces, vibrations, or motions with respect to time and/or rotation angle) could be differentiated at the border region in order to indicate a transition to the menu item. In addition, that menu item could have a different ratio of knob rotation for scrolling. For instance a starred menu item may require 30 degrees of rotation to scroll off of, while all other menu items only require 15 degrees of rotation to scroll off of.
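As a non-limiting illustration, the variable scroll ratio and border haptics just described can be modeled in a few lines. The item names, angular widths, intensity values, and the linear ramp below are hypothetical choices, not part of the specification; they merely mirror the 30-degree starred item versus 15-degree default mentioned above:

```python
# Hypothetical model of variable-width menu scrolling with border haptics.
# Item names, widths, intensities, and the linear ramp are illustrative only.

MENU = [
    # (name, angular width in degrees, peak border intensity)
    ("audio",       15.0, 1.0),
    ("map",         15.0, 1.0),
    ("temperature", 30.0, 2.5),  # "starred" item: wider span, stronger pulse
    ("phone",       15.0, 1.0),
]

BORDER_ZONE = 3.0  # degrees before a border over which intensity ramps up

def item_at(angle):
    """Map a cumulative rotation angle to (item name, degrees remaining
    until the next border, that border's peak haptic intensity)."""
    a = angle % sum(width for _, width, _ in MENU)
    for name, width, intensity in MENU:
        if a < width:
            return name, width - a, intensity
        a -= width
    raise AssertionError("unreachable")

def haptic_intensity(angle):
    """Little to no effect mid-item; intensity rises linearly to the
    item's peak over the last BORDER_ZONE degrees before its border."""
    _, to_border, peak = item_at(angle)
    if to_border >= BORDER_ZONE:
        return 0.0
    return peak * (1.0 - to_border / BORDER_ZONE)
```

In this sketch a starred item both takes twice as much rotation to scroll off of and produces a stronger pulse at its border, giving the user two distinct cues for the same transition.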
FIG. 6 illustrates the haptic effect versus angle shown in FIG. 5 as applied to the knob of FIG. 1. As knob 101 is rotated through first angle 601, a transition from menu item 1 to menu item 2 takes place. At the border of this transition, a first haptic effect 602 is applied to knob 101. As knob 101 continues to be rotated through another first angle 601, a transition from menu item 2 to menu item 3 takes place. At the border of this transition, a second haptic effect 603 is applied to knob 101. As knob 101 continues to be rotated through second angle 604, a transition from menu item 3 to menu item 4 takes place, again using the first haptic effect at the border.
FIG. 7 is a block diagram of the device shown in FIG. 1. As shown, device 100 comprises display 102, knob 101, microprocessor (logic circuitry) 703, and haptic module 705. Logic circuitry 703 comprises a digital signal processor (DSP), general purpose microprocessor, a programmable logic device, or application specific integrated circuit (ASIC) and is utilized to provide the functionality described below.
Knob 101 includes an internal sensor (not shown) as known in the art to provide position and direction information to logic circuitry 703 to communicate knob position for selection of menu items. Since the knob is preferably a continuous rotational device having an infinite range of rotational motion, an encoder, rather than a continuous-turn potentiometer, is a suitable sensor due to the encoder's accuracy and lower errors when transitioning between maximum and minimum values. Other types of sensors can, of course, be used in other embodiments, including magnetic sensors, analog potentiometers, etc.
Haptic module 705 provides various haptic effects (such as vibration) to knob 101 that can be perceived by the user. If haptic module 705 generates vibration as a haptic effect, the intensity and the pattern of vibration generated by haptic module 705 may be altered in various manners as discussed above. Haptic module 705 may provide various haptic effects, other than vibration, as long as the haptic effect can be varied for differing menu item transitions. These include, but are not limited to, a haptic effect obtained using a pin array that moves perpendicularly to a contact skin surface, a haptic effect obtained by injecting or sucking in air through an injection hole or a suction hole, a haptic effect obtained by giving a stimulus to the surface of the skin, a haptic effect obtained through contact with an electrode, a haptic effect obtained using an electrostatic force, and a haptic effect obtained by realizing the sense of heat or cold using a device capable of absorbing heat or generating heat.
During operation, knob 101 outputs an angle of rotation to microprocessor 703. In response, microprocessor 703 instructs display 102 to adjust an image accordingly (i.e., cycle through menu items). Additionally, microprocessor 703 will determine if a border between menu items has been reached, and if so, microprocessor will instruct haptic module 705 to provide an appropriate haptic feedback to knob 101.
FIG. 8 is a flow chart showing operation of the device of FIG. 6. More particularly, the logic flow of FIG. 8 illustrates steps (not all steps are necessary) for providing a haptic effect to a rotary knob. The logic flow begins at step 801 where logic circuitry 703 receives feedback from rotating rotary knob 101 and determines an angle traveled for the rotary knob. At step 803 logic circuitry determines a first function associated with a position of the rotary knob. An angular distance to a first border for the first function is then determined by logic circuitry at step 805. At step 807 logic circuitry 703 instructs haptic module 705 to apply a first haptic effect to the rotary knob when rotated to the first border. As discussed above, the first haptic effect is based on the angle traveled, the first function, and the angular distance. The logic flow returns to step 801 where the process repeats.
The above logic flow allows for different border effects to be applied to different transitions between device functions. With this in mind, logic circuitry 703 may determine a second function associated with a position of the rotary knob, determine an angular distance to a second border for the second function, and instruct haptic module to apply a second haptic effect to the rotary knob when rotated to the second border. The second haptic effect is based on the angle traveled, the second function, and the angular distance. As discussed above, the first function differs from the second function, and the first haptic effect may differ from the second haptic effect. The difference between the two haptic effects may be in amplitude, shape, size, etc. In addition, an angular distance that the knob rotates to pass through the first function may differ from an angular distance that the knob rotates to pass through the second function.
While the above description was given with the first and second functions comprising menu items, one of ordinary skill in the art will recognize that any device function that may be manipulated with rotary knob 101 may have its border distinguished as described above. These functions may be taken from the group consisting of a menu item, a device operating parameter, a talkgroup, a channel, and a frequency. The current device function may be displayed on graphical user interface (display) 102.
In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. For example, while the embodiment provided above cycled through menu items, varying a haptic effect at the transition, one of ordinary skill in the art will recognize that any device function may be cycled through in a similar manner. For example, a rotary knob may be used to cycle through channels (frequencies/talkgroups) on a radio, with a different haptic effect given at the boundary/transitions between the channels. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings.
Those skilled in the art will further recognize that references to specific implementation embodiments such as “circuitry” may equally be accomplished via either a general purpose computing apparatus (e.g., CPU) or a specialized processing apparatus (e.g., DSP) executing software instructions stored in non-transitory computer-readable memory. It will also be understood that the terms and expressions used herein have the ordinary technical meaning as is accorded to such terms and expressions by persons skilled in the technical field as set forth above except where different specific meanings have otherwise been set forth herein.
The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The invention is defined solely by the appended claims including any amendments made during the pendency of this application and all equivalents of those claims as issued.
Moreover in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” “has”, “having,” “includes”, “including,” “contains”, “containing” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “comprises . . . a”, “has . . . a”, “includes . . . a”, “contains . . . a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, contains the element. The terms “a” and “an” are defined as one or more unless explicitly stated otherwise herein. The terms “substantially”, “essentially”, “approximately”, “about” or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1% and in another embodiment within 0.5%. The term “coupled” as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is “configured” in a certain way is configured in at least that way, but may also be configured in ways that are not listed.
It will be appreciated that some embodiments may be comprised of one or more generic or specialized processors (or “processing devices”) such as microprocessors, digital signal processors, customized processors and field programmable gate arrays (FPGAs) and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein. Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic. Of course, a combination of the two approaches could be used.
Moreover, an embodiment can be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (e.g., comprising a processor) to perform a method as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory. Further, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein will be readily capable of generating such software instructions and programs and ICs with minimal experimentation.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying figures where like reference numerals refer to identical or functionally similar elements throughout the separate views, and which together with the detailed description below are incorporated in and form part of the specification, serve to further illustrate various embodiments and to explain various principles and advantages all in accordance with the present invention.
FIG. 1 illustrates a device having a haptic knob.
FIG. 2 shows operation of the device of FIG. 1.
FIG. 3 illustrates a haptic effect when rotating a knob between menu items.
FIG. 4 illustrates a preferred menu item existing on the device of FIG. 1.
FIG. 5 illustrates a haptic effect versus angle plot for the device shown in FIG. 1.
FIG. 6 illustrates the haptic effect versus angle shown in FIG. 4 as applied to the knob of FIG. 1.
FIG. 7 is a block diagram of the device shown in FIG. 1.
FIG. 8 is a flow chart showing operation of the device of FIG. 6.
---
abstract: 'In this work we perform a fractal analysis of 160 pieces of music belonging to six different genres. We show that the majority of the pieces reveal characteristics that allow us to classify them as realizations of the physical process called $1/f$ (pink) noise. However, this is not true for classical music, represented here by Frederic Chopin’s works, or for some jazz pieces that are much more correlated than the pink noise. We also perform a multifractal (MFDFA) analysis of these music pieces. We show that all the pieces reveal multifractal properties. The richest multifractal structures are observed for pop and rock music. Also, the variability of multifractal features is most pronounced for popular music genres. This suggests that, from the multifractal perspective, classical and jazz music is much more uniform than pieces of the most popular genres of music.'
address:
- 'Institute of Nuclear Physics, Polish Academy of Sciences, PL–31-342 Kraków, Poland'
- 'Faculty of Mathematics and Natural Sciences, University of Rzeszów, PL–35-959 Rzeszów, Poland'
- 'Faculty of Physics and Applied Computer Science, AGH University of Science and Technology, PL-30-059 Kraków, Poland'
author:
- 'P. Oświȩcimka'
- 'J. Kwapień'
- 'I. Celińska'
- 'S. Drożdż'
- 'R. Rak'
title: Computational approach to multifractal music
---
Fractal, Fractal dimension, Multifractality, Singularity spectrum.
Introduction
============
Since B. Mandelbrot’s “Fractal Geometry of Nature" was published [@mandelbrot82], fractals have had an enormous impact on our perception of the surrounding world. In fact, fractal (i.e. self-similar) structures are ubiquitous in nature, and the fractal theory itself constitutes a platform on which various fields of science, such as biology [@ivanov99; @makowiec09; @rosas02], chemistry [@stanley88; @udovichenko02], physics [@muzy08; @oswiecimka06; @subramaniam08], and economics [@drozdz10; @kwapien05; @matia03; @oswiecimka05; @zhou09], meet. This (statistical) self-similarity concerns irregularly-shaped empirical structures (the Latin word fractus means ’rough’) which often elude classical methods of data analysis. The interdisciplinary character of applied fractal geometry is not confined to science: also in art, which may be treated as some reflection of reality, interesting fractal features might be discerned. Examples are the fractal properties of Jackson Pollock’s paintings [@taylor99] and Zipf’s law describing literary works [@kwapien10; @zanette06; @zipf49]. In the course of time the fractal theory also encompassed the multifractal theory, dealing with structures which are convolutions of different fractals. It turned out that such structures and corresponding processes are not rare in nature, and the proposed multifractal formalism allowed researchers to introduce a distinction between mono- and multifractals [@halsey86]. Development of those intriguing theories would not have been possible, though, if there had not been substantial progress in computer science. On the one hand, fractals - due to their structure - can easily be modelled by using iterative methods, for which computers are ideal tools. On the other hand, however, multifractal analyses require significant computing power. The result of such an analysis is the identification of diverse patterns in different subsets of data, which would be impossible without modern computers.
![Exemplary time series representing the sound wave of the song “Good Times Bad Times" by Led Zeppelin.[]{data-label="serie"}](fig1.eps)
Although relations of mathematics and physics with music date back to ancient times (Pythagoras of Samos considered music a science of numbers), a new impulse for them arrived together with developments in the fractal methods of time series analysis [@bigerelle00; @ro09]. The first fractal analysis of music was carried out in the 1970s by Voss and Clarke [@voss75], who showed that the frequency characteristics of the investigated signals behave similarly to $1/f$ noise. Interestingly, this type of noise (called pink noise or scaling noise) occurs very often in nature [@bak87]. The $1/f$ spectral density is an attribute of, among others, meteorological data series, the electronic noise occurring in almost all electronic devices, statistics of DNA sequences, and the heart beat rhythm [@bak96]. Thus, from this point of view, music imitates natural processes. A note worth making here is that, according to the authors of the above-cited article, the kind of music most pleasant to the ear is just the pink noise. In the 1990s Hsu and Hsu showed that for some classical pieces of Bach and Mozart and for some children’s songs, a power-law relation holds between the number $F$ of pairs of consecutive notes distant from each other by $i$ semitones and the interval size $i$ [@hsu90]: $$F=c/i^D$$ where $c$ denotes a proportionality constant and $D$ is the fractal dimension $(1 < D < 2.25)$.
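As an illustration of the Hsu and Hsu relation above, the sketch below (not from the original study) generates hypothetical note-interval counts that obey $F = c/i^D$ with assumed values $c = 1000$ and $D = 2$, and then recovers $D$ as minus the slope of a straight-line fit in log-log coordinates:

```python
import numpy as np

# Hypothetical counts following F = c / i^D; c = 1000 and D = 2 are
# assumptions made purely for illustration, not values from the paper.
i = np.arange(1, 13)            # interval sizes in semitones
F = 1000.0 / i ** 2.0           # synthetic counts obeying the power law

# In log-log space the relation is linear: log F = log c - D * log i
slope, intercept = np.polyfit(np.log(i), np.log(F), 1)
D_est = -slope                  # recovered fractal dimension
c_est = np.exp(intercept)       # recovered proportionality constant
```

On real data one would instead count, for each $i$, how many pairs of successive notes differ by $i$ semitones, and fit the same straight line.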
![ Exemplary power spectra (in log-log scale) for six pieces of music representing six genres (from top to bottom: classical music, jazz, pop music, electronic music, rock, and hard rock). Power-law trends in each panel are indicated by dashed lines, whose slopes correspond to the corresponding values of $\beta$.[]{data-label="widmo"}](fig2a.eps)
![ Exemplary power spectra (in log-log scale) for six pieces of music representing six genres (from top to bottom: classical music, jazz, pop music, electronic music, rock, and hard rock). Power-law trends in each panel are indicated by dashed lines, whose slopes correspond to the corresponding values of $\beta$.[]{data-label="widmo"}](fig2b.eps)
![ Exemplary power spectra (in log-log scale) for six pieces of music representing six genres (from top to bottom: classical music, jazz, pop music, electronic music, rock, and hard rock). Power-law trends in each panel are indicated by dashed lines, whose slopes correspond to the corresponding values of $\beta$.[]{data-label="widmo"}](fig2c.eps)
In contrast, no similar relation has been observed for some works of Karlheinz Stockhausen, one of the modern composers belonging to the strict musical avant-garde. It should be mentioned that the relation discovered in [@hsu90] can be considered an expression of Zipf’s law in music. In recent years, more advanced multifractal analyses were carried out [@jafari07; @su06]. For example, by substituting both the rhythm and melody by a geometrical sequence of points, [@su06] showed that these quantities can be considered multifractal objects. They also suggested that various genres of music may possess their genre-specific fractal properties. Thus, there might exist a multifractal criterion for classifying a musical piece to a particular genre. Music can be considered a set of tones or sounds ordered in a way which is pleasant to the ear. And although the reception of a musical piece is subjective, music affects a listener irrespective of their sensitivity or musical education [@storr97]. Therefore, a hypothesis which arises in this context is that the fractality of music may refer not only to the structure of a musical piece but also to the way it is perceived.
Power spectral analysis
=======================
In our work we analyzed 160 pieces of music from six popular genres: classical music, pop music, rock, hard rock, jazz, and electronic music. The first one, classical music, was represented by 38 works by Frederic Chopin, divided into three periods of his career. Pop music consisted of 51 songs performed by Britney Spears; rock and hard rock music of 20 songs performed by Led Zeppelin and 20 songs by Steve Vai, respectively; and jazz of 25 compositions performed by Miles Davis or Glenn Miller. Finally, electronic music consisted of 6 pieces by Röyksopp. All the analyzed pieces were stored in the WAV format. In this format the varying amplitude of a sound wave $V(t)$ is encoded by a 16-bit stream sampled with 44.1 kHz frequency. After encoding, the amplitude $V(t)$ was expressed by a time series of length depending on the temporal length of a given piece of music (several million points, on average). An exemplary time series encoding a randomly selected song is displayed in Figure \[serie\]. We started our analysis by calculating the power spectrum $S(f)$ for each piece of music. This quantity carries information on the power density of sound wave components of frequency $(f;df)$. According to the Wiener-Khinchin theorem, $S(f)$ is equal to the Fourier transform of the autocorrelation function or, equivalently, the squared modulus of the signal’s Fourier transform: $$S(f)=|X(f)|^2$$ where $$X(f)=\int\limits_{-\infty}^{\infty} x(t) e^{-2 \pi i ft}dt$$ is the Fourier transform of a signal $x(t)$. If the power spectrum decreases with $f$ as $1/f^\beta$ ($\beta\geq 0$), it means that the signal under study is characterized by long-range autocorrelation within the scales described by the corresponding frequencies $f$. The faster the decrease of $S(f)$ (i.e. the higher the value of $\beta$), the stronger the autocorrelation of the signal. It is worth recalling here that the Brownian motion corresponds to $\beta=2$, while the white noise (an uncorrelated signal) corresponds to $\beta=0$. Since the exponent $\beta$ can easily be transformed into the Hurst exponent (a well-known notion in fractal analysis) or into the fractal dimension, the power spectrum can be classified among the monofractal techniques of data analysis. The power spectra were calculated for each piece of music. In most cases, the graph of $S(f)$ was a power-law decreasing function for frequencies 0.1-10 kHz, with the slope characteristic for a given piece. The notable exceptions were the works of Chopin, for which the graphs were scaling between 1 and 10 kHz. Six representative spectra for different genres are shown in Figure \[widmo\]. To each empirical spectrum $S(f)$ a power function was fitted (the straight lines in Figure \[widmo\]) within the observed power-law regime.
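The procedure described above (a Fourier power spectrum followed by a log-log power-law fit for $\beta$) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; instead of a WAV amplitude series it synthesizes a $1/f^{\beta}$ signal with a known exponent so that the fit can be checked against it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 14
beta_true = 2.0                         # assumed exponent for the test signal

# Synthesize a 1/f^beta signal: Fourier amplitudes f^(-beta/2), random phases.
f = np.fft.rfftfreq(n, d=1.0)
spec = np.zeros(len(f), dtype=complex)
spec[1:-1] = f[1:-1] ** (-beta_true / 2) * np.exp(2j * np.pi * rng.random(len(f) - 2))
spec[-1] = f[-1] ** (-beta_true / 2)    # keep the Nyquist bin real
x = np.fft.irfft(spec, n=n)

# Power spectrum S(f) = |X(f)|^2, then a straight-line fit in log-log space.
S = np.abs(np.fft.rfft(x)) ** 2
mask = f > 0                            # drop the zero-frequency bin
slope, _ = np.polyfit(np.log(f[mask]), np.log(S[mask]), 1)
beta_est = -slope                       # estimated spectral exponent
```

For an actual song one would load the 16-bit WAV samples into `x` and restrict the fit to the observed scaling range (e.g. 0.1-10 kHz).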
![Exponent $\beta$ calculated for each piece of music analyzed in the present work (short horizontal lines). Columns correspond to individual artists, periods of their career (Chopin), or albums. Dotted vertical lines separate different music genres.[]{data-label="beta"}](fig3.eps)
The slope of the fitted function corresponds to the exponent $\beta$. All the calculated values of $\beta$ are exhibited in Figure \[beta\]. As can be seen, the highest values of $\beta$ correspond to the works of F. Chopin (classical music, $\beta_{max}\approx 4.4$) and some works of Glenn Miller (jazz, $\beta_{max}\approx 4.8$). The exponents in these cases are much higher than 2, which means that the underlying processes are more correlated than the Brownian motion. Also several songs by Led Zeppelin (rock) have $\beta>2$, though not as prominently as the pieces from the genres mentioned before. Interestingly, $\beta$ for Led Zeppelin declines with time. For their chronologically first album, the highest exponent is 2.8, while for the subsequent albums it drops to 2.3 and 2.1, respectively. For the other analyzed music genres, i.e. electronic, pop, rock and hard rock music, $1 < \beta < 2$. It is also worth mentioning that for several jazz pieces the exponent $\beta$ drops below 1, which means that they approach white noise. The author of these pieces is Miles Davis, one of the most significant jazz artists, who was often a precursor of new styles and sounds. To summarize this part of our analysis, we can say that from the power spectrum perspective the majority of the analyzed pieces of music can in fact be considered $1/f$ processes. This is even more evident for the more popular music genres like pop and rock than for the rather exclusive genres like jazz and classical music.
Multifractal analysis of musical compositions
=============================================
In order to have a deeper insight into the dynamics of the investigated signals, we also performed a multifractal analysis of the data. We used one of the most popular and reliable methods - the Multifractal Detrended Fluctuation Analysis (MFDFA) [@kantelhardt02]. This method allows us to calculate fractal dimensions and Hölder exponents for individual components of a signal decomposed with respect to the size of fluctuations. Consecutive steps of this procedure are presented below. At the beginning we calculate the so-called profile, which is the cumulative sum of the analyzed signal: $$Y(i)=\sum \limits_{j=1}^{i}[x_j-\langle x\rangle] \quad \hbox{for} \quad i=1,2,...N ,$$ where $\langle x\rangle$ denotes the signal’s mean, and $N$ denotes its length. The subtraction of the mean value is not strictly necessary, because the trend is eliminated in the next steps. Then we divide the profile $Y(i)$ into $N_s$ disjoint segments of length $s$ ($N_s=N/s$). Since some data at the end of the profile might otherwise be neglected, the dividing procedure is repeated starting from the end of $Y(i)$ in order to take all points into account. In consequence, we obtain $2N_s$ segments. In each segment $\nu$, the estimated trend is subtracted from the data. The trend is represented by a polynomial $P_{\nu}^{l}$ of order $l$. The polynomial’s order used in the calculation determines the variant of the method. Thus, for $l=1$ we have MFDFA1, for $l=2$ - MFDFA2, and so on. After detrending the data, the variance has to be calculated in each segment:
![ Exemplary fluctuation function $Fq$ (in $log-log$ scale) for six pieces of music representing six genres (from top to bottom: classical music, jazz, pop music, electronic music, rock and hard rock). Each line represents $Fq$ calculated for particular $q$ values in the range from -4 to 4[]{data-label="Fq"}](fig4a.eps)
![ Exemplary fluctuation function $Fq$ (in $log-log$ scale) for six pieces of music representing six genres (from top to bottom: classical music, jazz, pop music, electronic music, rock and hard rock). Each line represents $Fq$ calculated for particular $q$ values in the range from -4 to 4[]{data-label="Fq"}](fig4b.eps)
![ Exemplary fluctuation function $Fq$ (in $log-log$ scale) for six pieces of music representing six genres (from top to bottom: classical music, jazz, pop music, electronic music, rock and hard rock). Each line represents $Fq$ calculated for particular $q$ values in the range from -4 to 4[]{data-label="Fq"}](fig4c.eps)
$$F^2(s,\nu)=\frac{1}{s}\sum \limits_{i=1}^{s}\{Y[(\nu -1)s+i]-P_{\nu}^{l}(i)\}^2 \quad \hbox{for} \quad \nu=1,2,...N_s$$
or $$F^2(s,\nu)=\frac{1}{s}\sum \limits_{i=1}^{s}\{Y[N-(\nu -N_s)s+i]-P_{\nu}^{l}(i)\}^2 \quad \hbox{for} \quad
\nu=N_s+1,N_s+2,...2N_s$$ The variances are then averaged over all the segments and, finally, one gets the $q$th-order fluctuation function given by: $$F_q(s)=\left \{ \frac{1}{2N_s}\sum \limits_{\nu =1}^{2N_s} [F^2(s,\nu)]^{q/2} \right \}^{1/q} ,$$ where the exponent $q$ belongs to $\mathbb{R} \setminus \{ 0 \}$. This procedure has to be repeated for different values of $s$. If the analyzed signal has any fractal properties, the fluctuation function behaves as: $$F_q(s)\sim s^{h(q)},$$ where $h(q)$ denotes the generalized Hurst exponents. A constant $h(q)$ for all $q$’s means that the studied signal is monofractal and $h(q)=H$ (the ordinary Hurst exponent). For multifractal signals, $h(q)$ is a monotonically decreasing function of $q$. It can easily be noticed that, by varying the parameter $q$, it is possible to decompose the time series into fluctuation components of different character: for $q > 0$ the fluctuation function mostly describes large fluctuations, whereas for $q < 0$ the main contribution to $F_q$ comes from small fluctuations. Knowing the $h(q)$ spectrum for a given set of data, we can calculate its singularity (multifractal) spectrum: $$\alpha = h(q)+qh^{'}(q) \quad \hbox{and} \quad f(\alpha)=q[\alpha-h(q)]+1 ,$$ where $h^{'}(q)$ stands for the derivative of $h(q)$ with respect to $q$, the Hölder exponent $\alpha$ denotes the singularity strength, and $f(\alpha)$ is the fractal dimension of the set of points characterized by $\alpha$. For a monofractal time series, the singularity spectrum reduces to a single point $(H,1)$, while for a multifractal time series the spectrum assumes the shape of an inverted parabola. The multifractal strength is a quantity which describes the richness of multifractality, i.e., how diverse the values of the Hölder exponents in a data set are. It can be estimated by the width of the $f(\alpha)$ parabola: $$\Delta \alpha = \alpha _{max} - \alpha _{min}$$
![The multifractal spectra $f(\alpha)$ calculated for each analyzed piece of music (brown lines) and its mean spectrum (black lines).[]{data-label="falfa"}](fig5a.eps)
![The multifractal spectra $f(\alpha)$ calculated for each analyzed piece of music (brown lines) and its mean spectrum (black lines).[]{data-label="falfa"}](fig5b.eps)
![The multifractal spectra $f(\alpha)$ calculated for each analyzed piece of music (brown lines) and its mean spectrum (black lines).[]{data-label="falfa"}](fig5c.eps)
where $\alpha _{min}$ and $\alpha _{max}$ stand for the extreme values of $\alpha$. The larger $\Delta \alpha$ is, the richer the multifractal. Using MFDFA2, which guarantees stability of the results, we calculated the fluctuation function $F_q$ for all the analyzed signals in the scale range from $s=50$ to $s=100{,}000$ points. The value of $q$ was varied from -4 to 4 in steps of 0.2. Exemplary fluctuation functions for our six music genres are shown in Figure \[Fq\]. All the calculated $F_q$ functions are characterized by a power-law dependence on the scale for all $q$’s. However, the range of scaling varies slightly for different pieces. By looking at the shown examples, it is easy to notice that for F. Chopin, Britney Spears, Glenn Miller, and Steve Vai the scaling involves almost all the considered values of $s$, while for electronic music we can distinguish two scaling ranges: the longer one for the scales $40 < s < 10{,}000$ and the shorter one for $10{,}000 < s < 100{,}000$. Such double scaling also appears occasionally for the other genres of music. However, in most cases we observe only one type of scaling. In Figure \[Fq\] we can also notice a clear dependence of the $h(q)$ exponent (the slope coefficient of $F_q$ in double logarithmic scale) on $q$. And so, the largest values of $h(q)$ correspond to $q < 0$, whereas for $q > 0$, $h(q)$ takes smaller values. Therefore, already at this stage of the calculations, it can be seen that the analyzed signals can have distinct multifractal properties. It is also worth mentioning that for the large scales (e.g. for jazz $s > 40{,}000$, for hard rock $s > 20{,}000$), the scaling loses its multifractal traits and $h(q)$ does not depend on $q$. This is related to the limited range of nonlinear correlations [@drozdz09]. The scale $s$ at which the scaling character of $F_q$ changes sets a limit for the estimation of the multifractal spectrum. For all the fluctuation functions, we estimated the singularity spectra $f(\alpha)$. Figure \[falfa\] presents the multifractal spectra (grey lines) and the corresponding mean multifractal spectra (black lines) for the music genres to which the given pieces belong. All the mean spectra are asymmetric. The right part, which describes the small-amplitude fluctuations, is clearly longer. This effect is best visible for the rock, hard rock, and pop songs. Locations of the extrema of these spectra ($\alpha\approx 0.2$) suggest considerably antipersistent behavior of the analyzed time series. We can easily see that the width of the multifractal spectra for a particular genre fluctuates considerably. Nevertheless, all the spectra are characterized by widths large enough that they can be regarded as multifractal structures. This confirms the observation made above for the $F_q$ function. The narrowest mean multifractal spectrum was observed for electronic music ($\Delta\alpha= 0.85$). Classical music and jazz display mutually comparable widths of, respectively, 1.0 and 1.1. The widest mean $f(\alpha)$ is seen for hard rock (1.22), rock (1.5), and pop (1.8).
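The MFDFA steps described above can be sketched compactly. The following is an illustrative reimplementation (not the authors' code) with a polynomial of order 2 (MFDFA2); it is run here on white noise, for which the generalized Hurst exponent $h(2)$ should come out close to 0.5:

```python
import numpy as np

def mfdfa(x, scales, qs, order=2):
    """Minimal MFDFA sketch after Kantelhardt et al. (2002); qs must exclude 0."""
    Y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))   # step 1: profile
    N = len(Y)
    F = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        ns = N // s
        # Step 2: disjoint segments from the start and, to use all points,
        # the same number of segments counted from the end of the profile.
        starts = list(range(0, ns * s, s)) + list(range(N - ns * s, N, s))
        t = np.arange(s)
        F2 = np.empty(len(starts))
        for k, st in enumerate(starts):
            seg = Y[st:st + s]
            coef = np.polyfit(t, seg, order)                  # step 3: local trend
            F2[k] = np.mean((seg - np.polyval(coef, t)) ** 2) # variance F^2(s, nu)
        for k, q in enumerate(qs):                            # step 4: Fq(s)
            F[k, j] = np.mean(F2 ** (q / 2.0)) ** (1.0 / q)
    return F

# h(q) is the log-log slope of Fq(s); for white noise h(2) is near 0.5.
rng = np.random.default_rng(1)
x = rng.standard_normal(2 ** 13)
scales = np.array([16, 32, 64, 128, 256, 512])
Fq = mfdfa(x, scales, qs=[2.0])
h2 = np.polyfit(np.log(scales), np.log(Fq[0]), 1)[0]
```

For a multifractal signal one would evaluate `mfdfa` over a grid of nonzero $q$ values and obtain $\alpha$ and $f(\alpha)$ from $h(q)$ by (numerical) differentiation, as in the formulas above.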
![Value of $\Delta \alpha$ calculated for each analyzed piece of music (short horizontal lines). Columns correspond to individual artists, periods of their career, or albums. Dotted vertical lines separate different genres of music.[]{data-label="dalfa"}](fig6.eps)
Thus, from this point of view, the richest multifractality (the richest dynamics of the underlying processes) is an attribute of the most popular music genres. The more exclusive genres are characterized by poorer multifractals. Figure \[dalfa\] presents a collection of all the calculated widths of $f(\alpha)$. Vertical lines separate different music genres and each piece is represented by a single horizontal line. As can be seen, the most variable multifractal spectrum widths characterize pop ($0.5 <\Delta\alpha< 2.8$), rock ($0.5 < \Delta\alpha< 2.1$) and hard rock ($0.51 < \Delta\alpha < 2.15$) music. Thus, on account of their multifractal properties, the pieces belonging to these genres differ markedly among themselves. Much more consistent from this point of view are the pieces of classical music, jazz and electronic music. We can therefore draw the conclusion that it is the richness of multifractal forms that distinguishes popular music from the more exclusive and less listened-to musical genres.
Conclusions
===========
To sum up, our work presents the results of a fractal analysis of selected music works belonging to six different genres: pop, rock, hard rock, jazz, classical, and electronic music. The results confirm that the amplitude signals $V(t)$ are characterized by a power spectrum falling off according to a power law: $S(f)\sim 1/f^\beta$. Interestingly, the rate of this falloff can be characteristic for a particular genre. For classical music and some pieces of jazz, $S(f)$ declines the fastest, while for popular music (pop, rock, hard rock, and electronic music) the power spectrum falls more slowly, suggesting less correlated signals. The same signals were also subjected to a multifractal analysis. It turned out that such data demonstrate well-developed multifractality. Interestingly, the most variable widths of the multifractal spectra (and also the widest singularity spectra, and thus the strongest nonlinear correlations) were observed for popular genres like pop and rock. For the remaining genres, the multifractal properties were rather similar among the pieces. Therefore, from this point of view, popular music is characterized by amplitude signals with varying degrees of correlation, whereas the more sophisticated musical genres (classical, jazz) are more consistent in this matter.
Bak, P., Tang, C., Wiesenfeld, K., 1987. Self-organized criticality: an explanation of 1/f noise. Phys. Rev. Lett. 59, 381-384. Bak, P., 1996. How Nature Works: The Science of Self-Organised Criticality. Copernicus. Bigerelle, M., Iost, A., 2000. Fractal dimension and classification of music. Chaos, Solitons & Fractals 11, 2179-2192. Drożdż, S., Kwapień, J., Oświȩcimka, P., Rak, R., 2009. Quantitative features of multifractal subtleties in time series. EPL 88, 60003. Drożdż, S., Kwapień, J., Oświȩcimka, P., Rak, R., 2010. The foreign exchange market: return distributions, multifractality, anomalous multifractality and the Epps effect. New J. Phys. 12, 105003. Halsey, T.C., Jensen, M.H., Kadanoff, L.P., Procaccia, I., Shraiman, B.I., 1986. Fractal measures and their singularities: The characterization of strange sets. Phys. Rev. A 33, 1141. Hsu, K.J., Hsu, A.J., 1990. Fractal geometry of music. Proc. Natl. Acad. Sci. USA 87, 938-941. Ivanov, P.Ch., Amaral, A.N., Goldberger, A.L., Havlin, Sh., Rosenblum, M.G., Struzik, Z.R., Stanley, H.E., 1999. Multifractality in human heartbeat dynamics. Nature 399, 461-465. Jafari, G.R., Pedram, P., Hedayatifar, L., 2007. Long-range correlation and multifractality in Bach’s Inventions pitches. J. Stat. Mech., P04012. Kantelhardt, J.W., Zschiegner, S.A., Koscielny-Bunde, E., Bunde, A., Havlin, Sh., Stanley, H.E., 2002. Multifractal detrended fluctuation analysis of nonstationary time series. Physica A 316, 87-114. Kwapień, J., Oświȩcimka, P., Drożdż, S., 2005. Components of multifractality in high-frequency stock returns. Physica A 350, 466-474. Kwapień, J., Drożdż, S., Orczyk, A., 2010. Linguistic complexity: English vs. Polish, text vs. corpus. Acta Phys. Pol. A 117, 716. Makowiec, D., Dudkowska, A., Gałaska, R., Rynkiewicz, A., 2009. Multifractal estimates of monofractality in RR-heart series in power spectrum ranges. Physica A 388, 3486-3502. Mandelbrot, B.B., 1982. The Fractal Geometry of Nature. W.H.
Freeman and Company, New York. Matia, K., Ashkenazy, Y., Stanley, H.E., 2003. Multifractal properties of price fluctuations of stocks and commodities. EPL 61, 422-428. Muzy, J.F., Bacry, E., Baile, R., Poggi, P., 2008. Uncovering latent singularities from multifractal scaling laws in mixed asymptotic regime. Application to turbulence. EPL 82, 60007. Oświȩcimka, P., Kwapień, J., Drożdż, S., 2005. Multifractality in the stock market: price increments versus waiting times. Physica A 347, 626-638. Oświȩcimka, P., Kwapień, J., Drożdż, S., 2006. Wavelet versus Detrended Fluctuation Analysis of multifractal structures. Phys. Rev. E 74, 016103. Ro, W., Kwon, Y., 2009. 1/f noise analysis of songs in various genres of music. Chaos, Solitons & Fractals 42, 2305-2311. Rosas, A., Nogueira, E. Jr., Fontanari, J.F., 2002. Multifractal analysis of DNA walks and trails. Phys. Rev. E 66, 061906. Stanley, H.E., Meakin, P., 1988. Multifractal phenomena in physics and chemistry. Nature 335, 405-409. Storr, A., 1997. Music and the Mind. HarperCollins Publishers, London. Su, Z.-Y., Wu, T., 2006. Multifractal analyses of music sequences. Physica D 221, 188-194. Subramaniam, A.R., Gruzberg, I.A., Ludwig, A.W.W., 2008. Boundary criticality and multifractality at the two-dimensional spin quantum Hall transition. Phys. Rev. B 78, 245105. Taylor, R.P., Micolich, A.P., Jonas, D., 1999. Fractal analysis of Pollock’s drip paintings. Nature 399, 422. Udovichenko, V.V., Strizhak, P.E., 2002. Multifractal properties of copper sulfide film formed in self-organizing chemical system. Theoretical and Experimental Chemistry 38, 259-262. Voss, R.F., Clarke, J., 1975. 1/f noise in music and speech. Nature 258, 317-318. Zanette, D.H., 2006. Zipf’s law and the creation of musical context. Musicae Scientiae 10, 3-18. Zhou, W.-X., 2009. The components of empirical multifractality in financial returns. EPL 88, 28004. Zipf, G.K., 1949. Human Behavior and the Principle of Least Effort. Addison-Wesley.
Operation Clean Sweep Registration
Please fill in the form below.
Full Name
*
First Name
Last Name
I want to volunteer for a cleanup
*
4/18/2020-Land Lubber / Neighborhood Cleanup (8:30am-11am)
4/18/2020-Beachcomber/ Palmer's Island Cleanup (11:30am-3pm)
4/18/2020-Earth Day Hero / I'll volunteer for both cleanups!
Do you want to be on a Team?
*
Yes
No
I would like to join an existing Team
Team Name
Do you want to be a Team Captain?
Yes
No
Team Captain's Name
Co-Captain's Name
Where do you want to clean? Choose a location from the list below.
E-mail
*
Phone Number
-
Area Code
Phone Number
# of Volunteers up to 12 years old (under 16 must be accompanied by an adult)
*
# of Volunteers 13-15 years old - must be accompanied by an adult
*
# of Volunteers 16-18 years old
*
# of Volunteers over 18
*
Number of Small T-Shirts needed
Small
Number of Medium T-Shirts needed
Medium
Number of Large T-Shirts needed
Large
Number of X-Large T-Shirts needed
X-Large
Number of XX-large T-Shirts needed
XX-Large
Community Service Form Needed
Yes
No
Enter the message as it's shown
*
Waiver
*
Submit Form
Carbohydrate: one of the three major energy sources, the one usually found in grains, fruits, milk, and vegetables, and the one most responsible for raising blood glucose.
Carbohydrate makes your blood glucose level go up. If you know how much carbohydrate you've eaten, you have a good idea what your blood glucose level is going to do. Carb counting can be used by anyone with diabetes - not just people using insulin. This method is also helpful for people who are using more aggressive methods of adjusting insulin to control their diabetes. The amount of carb in a meal or snack is adjusted based on the premeal blood sugar reading. Depending on the reading, more or less carb may be eaten. Likewise, insulin may be adjusted based on what the person wants to eat. For example, if you want to eat a much larger meal, this approach can guide you in determining how much extra insulin to take.
Step 1: Know your meal plan.
Indicate on the chart below the number of servings from each group planned as part of your meal plan. The last row will be completed in Step 2.
Step 2: Know your carbs.
To make things easy, many people begin carb counting by rounding the carb value of milk up to 15. In other words, one serving of starch, fruit or milk each contains about 15 grams of carb, or one carb serving. Three servings of vegetables also contain 15 grams. One or two servings of vegetables do not need to be counted. Each meal and snack will contain a total number of grams of carb.
Look back at your meal plan in Step 1. Total up the number of grams of carb for each meal and snack and write the totals in the last row. It is more important to know your carb allowance for each meal and snack than it is to know your total for the day. The amount of carb eaten at each meal should remain consistent (unless you learn to adjust your insulin for a change in the amount of carb eaten).
Here is an example to show how carb counting can make meal planning easier. Let's say your dinner meal plan contains 5 carb servings, or 75 grams of carb. (This is based on a meal plan of 3 starch servings, 4 protein, 1 vegetable, 1 fruit, 1 milk and 3 fat.) The label on a frozen dinner of beef enchiladas says it contains 62 grams of carbohydrate. Instead of calculating how many exchanges that converts to, just figure out how many more grams of carb you need to meet your 75 gram total. Add about 15 more grams of carb (one serving of fruit or milk, for example) and you have almost matched your total.
Try another example. If you want to have chili for lunch, what else can you have with it? The label on the chili says it contains 29 grams of carbohydrate per 1-cup serving.
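The arithmetic in both examples is simple subtraction: allowance minus the grams already on your plate. A small illustrative helper (the 60-gram lunch allowance below is an assumed figure, since the text does not state a lunch total):

```python
def carbs_remaining(meal_allowance_g, eaten_g):
    """Grams of carb still available in a meal, never below zero."""
    return max(meal_allowance_g - eaten_g, 0)

# Dinner plan of 5 carb servings (75 g) with the 62 g frozen entree:
print(carbs_remaining(75, 62))   # 13 g left - about one more carb serving

# Hypothetical lunch plan of 60 g with the 29 g cup of chili:
print(carbs_remaining(60, 29))   # 31 g left - about two more carb servings
```

So with the chili you could add roughly two carb servings, such as a slice of bread and a piece of fruit, to reach an assumed 60-gram lunch total.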
Where do you get carbohydrate information?
In order to count carbs, you must begin by knowing your meal plan and the average carb values of the food groups. Start by making sure you know the average amount of carb per serving in each food group. It is helpful to have a carb counting reference book.
The Nutrition Facts panel on the food label is the best way to get carbohydrate information.
Protein and fat don't raise your blood glucose level as high as carbohydrate does. That's why you don't actually have to count them. But there are more calories in foods that contain fat than in most carbohydrate foods. Don't eat too much protein and fat or you may gain weight.
How do you count carbohydrate?
Carbohydrate is measured in grams (g). A gram is a unit of weight in the metric system. One ounce (oz) is about 30 grams (30 g). But don't confuse this with the gram weight of the food. A food may weigh 220 g but contain only 15 g of carbohydrate.
How do you know what size portions to eat?
Practice, practice, practice. Don't rely on measuring once and then just "guesstimating". Pull out the scales at least once a week to check yourself and reinforce your skills. Use a glass that you know holds only 4 or 8 ounces to better control your portion. Check your cereal portion using measuring cups; the cereal label will give you more precise nutrition information, such as calories, carb, and fat grams, than the food group averages. You have to weigh or measure your portions to know how much carbohydrate is in a serving. Before you can get to the point where you can just look at a portion of food and estimate the grams of carbohydrate in it, you have to practice weighing and measuring a lot of servings of individual food items. When you're used to seeing proper portion sizes, you'll be ready to count by looking.
How do you develop a carbohydrate/insulin ratio?
The amount of carbohydrate you eat determines how much insulin you need to cover a meal. Your carbohydrate/insulin ratio will cover your usual amount of protein and fat, as well as your carbohydrate in that meal. There are several approaches you can take to learning your individual carbohydrate/insulin ratios. Two commonly used methods are the Carbohydrate Gram Method and the Carbohydrate Choices Method.
The carbohydrate gram method allows you to see the difference in your ratios from one meal to another. For example, some people find their ratio at dinner is different from their ratio at breakfast. Many people have a lower carbohydrate-to-insulin ratio at breakfast than they have at dinner. For example, at breakfast the ratio may be 10/1, while at dinner the ratio is 15/1. This means that at breakfast 1 unit of insulin covers 10 grams of carbohydrate, while at dinner 1 unit covers 15 grams of carbohydrate. This happens when early morning hormones affect your sensitivity to insulin, causing high BG (often called "dawn phenomenon") and a greater need for insulin in the morning.
Please note that the lower the carbohydrate/insulin ratio, the more insulin you need to cover your food. Using the carbohydrate gram method to figure your carbohydrate/insulin ratio is simple if you have met certain criteria.
First, you should be eating a consistent amount of carbohydrate at the meal for which you want to find your ratio.
Second, your insulin dose for that meal should have been fine-tuned so that your pre-meal and post-meal BGs are within your target range. For example, your pre-meal target range may be 3.8-8.3 mmol/L, and your 1½ to 2 hour post-meal target may be within plus or minus 2 points of that but less than 10 mmol/L.
Once you meet these criteria, select 3 days of records of the same meal when both your pre-meal BG and your BG by the next meal were in the target range. Divide the average grams of carbohydrate eaten at that meal by the units of insulin that covered it; if, for example, 45 grams were consistently covered by 3 units, this would be a 15/1 ratio.
You may now want to experiment with the ratio you have calculated. Using the previous example, you might try eating a 60-gram carbohydrate breakfast.
Divide the total number of grams of carbohydrate by 15 to find the number of units you'll need.
You would take 4 units for premeal BG in the target range.
Remember also to figure how much more or less insulin you need whenever your BG is not in the target range (your insulin supplement).
When you use a carbohydrate/insulin ratio, you first figure your dose based on the amount of carbohydrate that you plan to eat. Then you add or subtract the amount of insulin needed to bring your BG into the target range.
The ratio is found by dividing the number of units R or H by the number of carbohydrate choices that insulin dose covers.
If you eat a meal with 6 carbohydrate choices and take 6 units R for a BG in the target range, then your ratio would be 6 units R ÷ 6 carbohydrate choices = 1 unit of insulin per carbohydrate choice.
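Both methods above reduce to a single division, sketched below. This is an illustration of the arithmetic only, not dosing advice: the function names are made up, the 45-gram/3-unit record is a hypothetical example consistent with a 15/1 ratio, and a real dose must also include the insulin supplement for an out-of-range BG described above.

```python
def carb_ratio(grams_covered, insulin_units):
    """Carbohydrate/insulin ratio: grams of carb covered by 1 unit."""
    return grams_covered / insulin_units

def dose_for_meal(carb_grams, grams_per_unit):
    """Units of mealtime insulin for a meal, given a grams-per-unit ratio."""
    return carb_grams / grams_per_unit

print(carb_ratio(45, 3))       # 15.0 -> a 15/1 ratio
print(dose_for_meal(60, 15))   # 4.0  -> the 60-g breakfast example
```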
The carbohydrate gram method offers you more precise insulin adjustments for the grams of carbohydrate that you eat. To fine-tune your adjustments using this method, it is important to weigh and measure foods and use food label information and carbohydrate reference books.
Estimating the carbohydrate to be eaten based on the Exchange Lists or the carbohydrate choices method is simpler but not as accurate.
Bacterial antibiotic resistance is an important concept in the process of understanding the nature and adaptability of bacterial species. By mating a kanamycin resistant strain with a non-resistant strain, students will observe the reproductive process and transfer of antibiotic resistance and therefore generate understanding of bacterial life cycles and evolution.
Benchmarks
1) Become familiar with microbiology laboratory techniques.
2) Observe the bacterial resistance resulting from the mating process.
3) Develop questions and theories based on the scientific conclusions determined in the experiment.
Learning Resources and Materials
Per group of four students:
1) pOX38-Km bacteria strains and ED24 bacteria strains
2) 2 TSB plates containing Km and Spc antibiotics (1 for control, 1 for growing transformed ED24)
3) 1 TSB plate containing Km Sm antibiotics (for growing pOX38 donors)
4) Eppendorf tubes
5) transfer loop
6) Bunsen burner or alcohol lamp
7) 1xSSC stock solution
8) water bath (37 deg C)
9) micropipettor and tips
Per class:
1) incubator
2) discard container
Development of Lesson
Introduction
Students will work in a group of four to complete the experiment after listening to a lecture on bacterial resistance. This lecture will focus on the mechanics of gene recombination i.e. bacterial mating and the development and spread of bacterial resistance.
Methods/Procedures
First 20 minutes of class:
1) Discuss with the class different types of bacterial infections and the many antibiotics used to treat them.
2) Lecture about the process of bacterial reproduction and the way in which bacteria are able to develop resistance and then transfer that resistance through conjugation.
3) Encourage student questions about the subject and then demonstrate the lab procedure.
Next 30 minutes of class:
1) Students will obtain a test tube containing 800 µl TSB (broth for bacteria) and label it with the group name and "ED24 + pOX38".
2) Using a micropipettor and fresh tip, add 100 µl ED24 to the test tube.
3) Attach a fresh tip and then add 100 µl pOX38 to the test tube.
4) Place the tube in the water bath for 30 minutes.
5) While conjugation occurs, set up the conjugation plates as follows:
6) Label the KmSpc plate with the group name, five points at which to spot the dilution samples, and the type of bacteria that should grow ("recipients--ED24"). Repeat this step for a second, "donor" plate.
7) Using the micropipettor and a fresh tip, fill each of five Eppendorf tubes with 180 µl 1xSSC to dilute the bacteria.
8) After 30 minutes, remove the test tube with the conjugating bacteria from the water bath and gently tap the tube to interrupt the mating process.
9) Using a micropipettor and fresh tip, fill the first Eppendorf tube with 20 µl of your sample, close, and tap to mix well.
10) Draw 20 µl from the first Eppendorf tube and add it to the second tube. Mix well.
11) Repeat this procedure for tubes 3-5.
12) Starting with Eppendorf tube #5, draw 10 µl of the sample and spot it at the fifth position on the KmSpc "recipient" plate. Repeat with 10 µl samples from tubes 4-1.
13) Cover the plate and set it aside; do not disturb until the samples are dry.
14) Place in the incubator for 24 hours.
15) After 24 hours, remove your plates and draw the results of each plate. Write a summary for each picture explaining your results.
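The serial dilution in steps 7-11 is tenfold at each transfer: 20 µl of sample into 180 µl of 1xSSC gives 20/200 = 0.1 of the previous concentration, so tube n holds a 10^-n dilution of the mating mix. A quick sketch of that arithmetic (function and variable names are illustrative only):

```python
def dilution_factor(tube, carry_ul=20.0, diluent_ul=180.0):
    """Fraction of the original mating mix present in a given tube."""
    per_step = carry_ul / (carry_ul + diluent_ul)   # 20/200 = 0.1 per transfer
    return per_step ** tube

# Tube 1 is a 1:10 dilution, tube 5 a 10^-5 dilution of the mix:
for tube in range(1, 6):
    print(tube, dilution_factor(tube))
```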
Accommodations/Adaptations
1) For students with disabilities, allow them to participate in the experiment with additional teacher supervision or aid.
2) Encourage group involvement if possible.
Assessment/Evaluation
1) Ensure student comprehension by evaluating summaries and diagrams.
2) Assess group participation as the experiment is carried out.
Closure
Once the students bring back their summaries and diagrams a discussion of the expected results will ensue and explanations to student questions will be provided.
Depending on the accuracy and nature of the results of this experiment, it will be evaluated for future use. This evaluation will also take into account student feedback and the possibility of more effective derivatives.
Teacher Reflection
I feel that the benchmarks of this lesson were well represented by this experiment. The ability of bacteria to pass on antibiotic resistance through conjugation was demonstrated. Accommodations were made when necessary. I learned how to present this experiment and how best to explain the procedure to students who have not encountered these lab techniques before. In the future this lab will be used for this concept; however, a pre-lab explanation of techniques may be helpful because of the time constraint.
Congratulations to Carol McBride and Jamie Sherwin, who have won the two prizes of tickets to next year's Bodies From The Library conference in the competition jointly sponsored with HarperCollins.
They correctly guessed the answer to the question set by Tony Medawar, which required them to identify the author of one of the stories included in the third Bodies From The Library collection of rare and little-known short stories by leading writers of the Golden Age of Detective Fiction, which goes on sale on 9 July, based on its opening sentence:
“Adrian Belford, emerging from the offices of Messrs Golding & Moss, Financiers, hesitated uncertainly at the corner of Conduit Street and Bond Street.”
The correct solution is Dorothy L. Sayers, whose short story The House of The Poplars is one of the many highlights of the collection.
Other authors whose works feature in Bodies From The Library 3 are:
Anthony Berkeley
Josephine Bell
Nicholas Blake
Lynn Brock
Christopher Bush
John Dickson Carr
Peter Cheyney
Agatha Christie
William A. R. Collins
Joseph Commings
Cyril Hare
David Hume
Ngaio Marsh
Stuart Palmer
John Rhode
Christopher St John Sprigg
Ethel Lina White
Full Competition Rules were as follows:
Answers should be submitted by email to [email protected]
Maximum one entry per person.
Entries must include the answer in the subject/headline/title and your name.
The competition will close at midnight on 4 July 2020
All correct entries received by the deadline will be entered into a draw. The first two names drawn will be the winners.
In the event that there is only one correct entry then that person will win one prize ticket. All other entries will then be entered into a draw to select the second winner.
In the event that there are no correct entries then all entries will be entered into the draw and the first two names drawn will be the winners.
The draw will take place after the competition closing date and will be conducted by an independent representative from HarperCollins Publishers.
The prizes are donated by HarperCollins Publishers and Bodies From The Library.
The prizes are a ticket for each winner to the Bodies From The Library 2021 conference to be held at the British Library. In the event that the conference does not take place the “Early Bird” ticket value will be awarded instead.
The winners will be notified by email. The names of the two winners will be published on the Bodies From The Library website.
No employee of HarperCollins or Bodies From The Library, nor any employee’s family member, may enter the competition.
By entering the competition you confirm acceptance of these rules and for your name to be published if you should be the winner. | https://bodiesfromthelibrary.com/ |
Q:
How to calculate the confidence interval of a regression prediction given a particular value for a binary predictor?
I have a regression predicting wage (dependent variable) from whether the participant is a college graduate (dummy variable, independent).
I have the regression coefficients and their standard errors, the R-squared of the regression, and the covariance between the intercept and the coefficient on the dummy variable.
How do I calculate a confidence interval for the average wage of a college graduate (given that dummy = 1)?
A:
Understanding what's going on comes down to appreciating the distinctions between parameters, random variables, and realizations of random variables. Getting an answer comes down to using this understanding to identify the pieces of information that are needed and knowing enough about the computer output to find those pieces.
Your model is in the form
$$\text{wage} = \beta_0 + \beta_1 \text{[college graduate]}.$$
This assumes $\text{[college graduate]}$ is coded as $1$ for grads and $0$ for non-grads.
The first step in solving such problems is to figure out what combination of coefficients corresponds to what you're estimating. Because that's the easy part, and it will be hard to go on without answering it, I will point out that the "average wage of a college graduate" is obtained by plugging in the dummy value for college grads, giving $\beta_0 + \beta_1 \times 1$ = $\beta_0 + \beta_1$.
The second step is recognizing that you do not know either $\beta_0$ or $\beta_1$. You estimate them from the data. The regression results include their estimates, which we may call $b_0$ and $b_1$, respectively. Because they are likely to differ from the betas, we model the wages as random variables. Because both $b_0$ and $b_1$ are computed from these data, they are realizations of random variables, too. Let's call these random variables $B_0$ and $B_1$. Got that?
OK, if not, let's note that if you have standardized your x-values in the regression (to make the formulas simpler), then the formula for $b_0$ is that it's the average of all the $y$-values:
$$b_0 = \frac{1}{n}\sum_i y_i.$$
We are viewing the $y_i$ as realizations of random variables $Y_i$ (the wages for each subject in the dataset). Thus, $b_0$ is the realization of the average random variable
$$B_0 = \frac{1}{n}\sum_i Y_i.$$
When you make assumptions about the distributions of the $Y_i$, this formula lets you determine how those assumptions affect the distribution of $B_0$. You would usually be most interested in the expected value and the variance of $B_0$: the expectation tells you what $\beta_0$ ought to be, more or less, and the variance (once you take its square root) tells you how close $b_0$ ought to come to $\beta_0$.
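The point that $B_0$ is itself a random variable can be made concrete with a tiny simulation. The numbers below (a wage mean of 5 and standard deviation of 2, in arbitrary units) are hypothetical stand-ins; the only claim is the textbook one that the sample mean has expectation $\mu$ and standard error $\sigma/\sqrt{n}$.

```python
import random
import statistics

random.seed(42)
n, mu, sigma = 10_000, 5.0, 2.0          # hypothetical wage distribution

# One realization of B0 = (1/n) * sum(Y_i): the sample mean of the wages.
ys = [random.gauss(mu, sigma) for _ in range(n)]
b0 = statistics.fmean(ys)

# Theory: E[B0] = mu and Var(B0) = sigma**2 / n, so the standard error
# of the intercept shrinks like 1/sqrt(n).
se_b0 = sigma / n ** 0.5                 # 0.02 here
print(b0, se_b0)                         # b0 lands within a few SEs of mu
```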
Similar reasoning (but with more complicated formulas) holds for $\beta_1$.
(Fortunately, you will not need to work through all the mathematics to relate the expectations and variances of the $B_i$ to those of the $Y_i$: the regression procedure does that for you.)
The upshot is that the fitted regression coefficients $b_0$ and $b_1$ are realizations of random variables. You are interested in how much $b_0 + b_1$ might vary, because (obviously) this is your estimate of $\beta_0 + \beta_1$. To that end you would want to find out two things (the third step):
What is the expected value of $B_0 + B_1$? (Hint: regression theory tells you what this is in terms of $\beta_0$ and $\beta_1$.)
How much should $B_0 + B_1$ vary around its expectation?
To answer #2, you would like to find the variance of the random variable $B_0+B_1$ (and then take its square root). You know, from basic principles governing covariances, that the variance of $B_0+B_1$ can be computed from the variance-covariance matrix of $(B_0, B_1)$, and you should know how to do this.
This reasoning reduces the problem to:
Using the regression output, how can you reconstruct the variance-covariance matrix of the coefficients?
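Putting the pieces together, here is a sketch of the final computation. The coefficient and variance-covariance numbers are hypothetical placeholders; in practice you would read them off your regression output, since most packages report or can print the full vcov matrix. The 1.96 critical value assumes a large-sample normal approximation; with few degrees of freedom you would substitute a t critical value.

```python
import math

def mean_wage_ci(b0, b1, vcov, crit=1.96):
    """CI for the average wage of a graduate: the estimate is b0 + b1.

    Uses Var(B0 + B1) = Var(B0) + Var(B1) + 2*Cov(B0, B1), where `vcov`
    is the 2x2 variance-covariance matrix of (b0, b1).
    """
    est = b0 + b1
    se = math.sqrt(vcov[0][0] + vcov[1][1] + 2 * vcov[0][1])
    return est - crit * se, est + crit * se

# Hypothetical regression output:
b0, b1 = 12.0, 5.0                       # intercept, graduate premium
vcov = [[0.25, -0.10], [-0.10, 0.36]]    # [[Var(b0), Cov], [Cov, Var(b1)]]
lo, hi = mean_wage_ci(b0, b1, vcov)
print(lo, hi)                            # interval centered at 17.0
```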
Work with each individual salesperson to find out what will work best for them.
Base salary can also vary from company to company, depending on how much support and service the sales rep is expected to provide while the customer learns how to use or integrate the product.
In contrast, independent contractors usually have the right to substitute other people's services for their own in fulfilling their contracts. It may also simply be the wrong time to sell in the market.
Some sales workers do not receive commissions, while others do not receive salaries and rely entirely upon sales commissions for cash income.
Once they were in, it was up to us to sell to them.
Passionate, hardworking, experienced and talented people deserve to be treated, in some ways, differently. To meet the requirements of a generally exempt employee, the employee must meet all of the following: be paid on a salary basis, be paid a monthly salary of at least twice the state minimum wage for full-time employment, and be primarily engaged in the duties of white-collar employees such as professionals, administrators, or executives.
How Commissions Are Calculated: there are many ways in which commissions can be computed.
Salaries and Wages: many sales employees receive a fixed amount of hourly compensation, called a wage, or a fixed amount of monthly compensation, known as a salary.
Lately, my sock drawer has really been in need of some organizing.
Let me show you a picture of what it looked like before I decided to reorganize it…
Notice how everything is neatly folded. I have all my regular socks over to the left, my ankle socks on the right in front of that open box and my tights and fluffy thick socks in the open box.
I’m guessing it looks ok to you.
But, here’s the truth.
After having my daughter, who is now two, I sometimes find it hard to neatly put everything away at the end of the day. I just don’t always have the time.
What really happens is that I end up tossing them into the drawer with the intention to eventually get back to them. That “eventually” doesn’t happen for another few days and by then a few more socks have ended up joining the mess.
So I figured just stop folding them all together and either roll them into a ball or toss them into the drawer unfolded.
I gave that a try and it just looked messy to me. I wanted some type of organization.
So after moving things around a few times and thinking about what would make most sense I eventually came up with this…
Ok here we go…
Organized Sock Drawer with a Few Random Items
First, those random items.
I decided to use three different items to organize my sock drawer:
A plastic shoebox from The Container Store
A small organizer from Dollar Tree
A few mini Sephora shopping bags (I used a total of 5 bags)
Let’s briefly talk about what I’m doing now so that it’s easier to put away the socks that I use every day (my regular socks and knee-highs).
I decided that the fastest way I can put them away, short of just tossing them in the drawer, was to roll them tightly together. (You can see them rolled up below in the finished pictures)
I love rolling small items.
I also do this with my scarves and I find that it’s faster and takes up less space compared to folding. (I also use this rolling method in this post where I organized my daughters outgrown clothes)
I also wanted to keep some separation between all of my different socks. I had a few ankle socks, tights, a few knee-high socks, some thicker fluffy socks that I only pull out on very cold nights and then of course, my regular socks.
So here’s how I ended up organizing everything:
My knee highs and regular socks ended up in the mini Sephora bags.
First, I cut the bags down to fit my drawer.
This step is optional but I also decided to place some good old scotch tape right along the rim of the bags to give them some extra protection. I was worried that after some time the bags would start falling apart from me pulling the socks in and out.
This was kind of hard for me to photograph for you but I’m hoping you can see what I did here. I’m just running it all around the edge and then folding it in so as to cover the top part of the bags.
Then I filled my bags.
Knee-high socks went in the top left-hand bag.
Regular socks went into the rest of the bags and I tried to keep similar colors together as best as possible.
Here you can see that they are now rolled up:
Now my tights (and a few extra items like shoelaces)
Those ended up in the shoebox from The Container Store.
I don’t reach for those items very often so I figured it made more sense to keep them in a closed container and then make use of the top part of the shoebox with the Dollar Tree container which you will see pictured below. I am also keeping them folded like they were before so not much is changing here aside from where they are being stored in the drawer.
Moving on to my ankle socks…
I’m now storing those in the Dollar Tree container.
I love these containers! I use them all around my house. You can see here in an older post how I would make use of them in our old apartment.
Keeping them in this container is so much nicer because there’s more room for them compared to where they were before and now I don’t even feel the need to neatly fold them. I just give them a half fold and that’s it.
So much better now!
As for my fluffy socks.
Those were too thick to go into the little bags, so I just folded them and placed them right next to the top row of the Sephora bags (on the right-hand side).
Below you can see how I placed the shoebox under the dollar tree container. This really helped to open up some space and make it easier to get my socks in and out. Before it felt as if I was constantly forcing everything to fit.
Well, that’s it. I hope you enjoyed this and found a few ideas that can help you get your sock drawer organized too!
ok friends, thanks for stopping by and checking out my blog! | http://thesemiorganizedant.com/tag/socks/ |
Traumatic Brain Injuries (TBI) Have Lifelong Consequences - Make Sure You Have Adequate Representation.
A person who suffers a blow or jolt to the head (a closed head injury) or a penetrating head injury may frequently develop a condition that disrupts the function of the brain, known as a traumatic brain injury (“TBI”). TBI is a leading cause of death and disability in the United States. Each year, 2.8 million people sustain a traumatic brain injury. 50,000 of those die from the TBI, and 282,000 people require hospitalization. Each year, 80,000 to 90,000 people will sustain a long-term disability as the result of a TBI. The Centers for Disease Control and Prevention estimate that at least 5.3 million Americans currently have a long-term or lifelong need for help to perform activities of daily living as a result of TBI.
The leading causes of traumatic brain injury are falls, motor vehicle accidents, being struck by or against an object, and assaults or abuse by another person. TBI can also result from a violent jolt of the head such as one might experience in a rear-end automobile collision (“whiplash”) may result in serious brain injury. In a violent collision, the head snaps forward and the brain hits the front of the skull; then the head snaps backward and the brain hits the back of the skull. These impacts can cause serious brain injury. Shaken-baby syndrome is an example of a serious brain injury without a direct blow to the head.
Even in this age of advanced medicine, there is no cure for a TBI. Recovery from a brain injury depends on the brain’s “plasticity,” that is, the brain’s ability for other areas of the brain to take over the functions of the damaged areas, to “rewire” itself.
If you have suffered a blow or jolt to the head, it is important that you receive medical evaluation and treatment as soon as possible. There are drugs and procedures available that can limit the “secondary” damage caused by swelling of the brain, when administered soon after the injury.
Traumatic brain injuries fall into three categories: mild, moderate, and severe. Doctors classify the injury using a test called the Glasgow Coma Scale (GCS). They examine the patient and measure responses to visual, vocal, and physical stimuli.
A person with a mild TBI scores 14 or above on the GCS. Typically, this person has experienced a traumatic blow to the head and a short disruption in brain function, such as loss of consciousness, and/or confusion and disorientation for less than 30 minutes.
Common symptoms include fatigue, headaches, visual disturbances, memory loss, poor attention and/or concentration, sleep disturbances, dizziness and/or loss of balance, irritability, feelings of depression, and, rarely, seizures.
Less-common symptoms associated with mild TBI include nausea, loss of smell, sensitivity to sound and lights, becoming lost or confused, and slowness in thinking.
Mild traumatic brain injury is the most common TBI, and people often fail to identify the cognitive symptoms at the time of the injury, but instead may notice them as the person returns to work, school, or housekeeping. Friends and colleagues may notice changes in the person’s behavior before the injured person realizes anything is wrong. 15% of people with mild TBI have symptoms that last one year or more.
The GCS classifies brain injuries as moderate with a score between 9 and 12. Moderate TBI sufferers usually experience a loss of consciousness between 30 minutes to six hours, and/or post-traumatic amnesia of greater than 30 minutes but less than 24 hours. Moderate TBI may be the result of a skull fracture.
Symptoms are similar to those listed for mild TBI, but there may be long-term physical or cognitive deficits from a moderate TBI. Much will depend on the type and location of the specific injuries to the brain. Rehabilitation will help to overcome some deficits and help to provide skills to cope with any remaining deficits.
A severe brain injury is one with a GCS score lower than 9. Severe brain injuries are life-threatening and often accompanied by a loss of consciousness of more than 6 hours or post-traumatic amnesia lasting more than 24 hours. A severe TBI survivor will typically face long-term physical and cognitive impairments. The range of deficits can vary widely, from a vegetative state to more minor impairments that may still allow the person to function independently. The person will require extensive rehabilitation to try to overcome some of the deficits and learn strategies to cope with others.
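The three severity bands described above can be written as a small lookup. The cutoffs follow this article (severe below 9, moderate 9-12, mild above that); note that some clinical sources draw the mild boundary at 13 rather than 14, so treat the exact edges as this article's convention rather than a universal standard, and the function name as illustrative.

```python
def classify_tbi(gcs_score):
    """Map a Glasgow Coma Scale score to the severity bands used above."""
    if not 3 <= gcs_score <= 15:
        raise ValueError("GCS scores range from 3 to 15")
    if gcs_score < 9:
        return "severe"     # life-threatening; LOC often > 6 hours
    if gcs_score <= 12:
        return "moderate"   # LOC 30 minutes to 6 hours
    return "mild"           # brief disruption of brain function

print(classify_tbi(15), classify_tbi(10), classify_tbi(6))   # mild moderate severe
```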
The long-term effects of TBI depend on a number of factors, including the severity of the initial injury, the rate and completeness of physiological healing, the types of functions affected, the resources available to aid in the recovery of function, and other factors.
Moderate to severe TBI can cause a wide range of functional changes affecting thinking, language, learning, emotions, behavior, and sensation.
These include executive functions: the processing of substantial amounts of complex information involved in planning, time management, decision making, coordinating events, and adapting to change.
TBI may cause the person to have a seizure, which may or may not reoccur. In a personal injury accident, a seizure is more likely to occur due to brain trauma, especially “open” or penetrating wounds to the brain.
TBI can also increase the risk for such conditions as epilepsy, Alzheimer’s disease, Parkinson’s disease, and other brain disorders that become more likely as the person grows older.
One of the most common problems among people who sustain a TBI is fatigue. There are three types of fatigue: (1) physical fatigue, feeling tired and sleepy; (2) psychological fatigue, in which the person can’t be motivated to do anything; and (3) mental fatigue, or difficulties with concentration and focus.
Physical fatigue is due to muscle weakness and can result from working harder to do things that were previously easy, including dressing, housework, and even walking. Physical fatigue tends to worsen in the evening, after a busy day, but improves after a good night’s sleep. Often, physical fatigue will decrease as you become stronger, and more active.
To treat psychological fatigue, it is necessary to find its cause. For instance, if the psychological fatigue is due to depression, medication and therapy may be necessary to help treat the condition. Major depression in the general population runs at about 5 or 6%; it is ten times greater with TBI survivors.
Anxiety occurs in twice as many TBI sufferers than in the general population. This may take the form of post-traumatic stress disorder, in which the person has “flashbacks” in which they relive the event, or the development of phobias, in which the person experiences dread centered on a specific situation, such as being in a car, or hearing loud noises.
Persons who experience a mood disorder such as depression or an anxiety disorder, such as PTSD, a specific phobia, or generalized anxiety, may need to take appropriate psychiatric medication and undergo months, even years, of psychotherapy.
Mental or “cognitive” fatigue after a TBI happens because the person must concentrate harder to do tasks. The more the person must concentrate, the more mentally fatigued he or she may become. In some people, mental fatigue causes irritability; others have headaches. Mental or cognitive fatigue is the least-known type of fatigue associated with TBI.
Most spontaneous improvement from a TBI occurs within the first month after a brain injury. Some additional gains may occur over the next three to six months. The long-term effects of a TBI are different for every person. Some may experience only subtle difficulties, while others will have moderate dysfunction, while to still others the TBI may be life-threatening.
With TBI, the systems in the brain that control our social-emotional lives often are damaged. Thus, after injury, individuals with TBI may be unable to function well in their prior social roles. They may be easily confused when there is a change in normal routines, and unable to switch to a different tactic or a new task when encountering difficulties. Some TBI survivors may jump at the first solution they see, substituting impulsive responses for considered actions. They may be unable to go beyond a concrete appreciation of situations to understand abstract principles needed to carry learning into new situations.
Personality can change substantially or subtly following injury. The consequences for the individual and his or her significant others may be difficult, as “the person who once was” is no longer there. The person who was an optimist may now be depressed. The previously tactful and socially skilled negotiator may now blurt embarrassing comments. The person may exhibit a variety of other behaviors: over-dependence, mood swings, lack of motivation, irritability, aggression, lethargy, disinhibition, or inability to modify behavior to varying situations.
The severity and effects of the injury may not predict the impact in a person’s life. Each of us draws on various parts of our brains in diverse ways. For example, a severe injury to the frontal brain area may have less impact on an agricultural worker’s job performance than a mild frontal injury would have on a physicist’s work. The associated damage to any person’s life will depend on pre-injury lifestyle, personality, goals, values, resources, as well as the individual’s ability to adapt to changes and to learn techniques for minimizing the effects of brain injury.
Some suggest that “recovery” is a misnomer and that “improvement” better describes what happens eventually after TBI. The word recovery suggests that that the effects of TBI will disappear, while truthfully, improvement is all that happens in most cases. With TBI, some effects may disappear after a couple of years, but more frequently these long-term changes linger over the course of the person’s life. Brains do not heal like broken limbs, and everybody’s brain is different. Although they may superficially appear alike, no two brain injuries are the same and the consequence of two similar traumatic brain injuries may be vastly different.
Monetary damages you may receive when you have sustained a traumatic brain injury include all your medical and rehabilitation costs, lost wages because you were unable to return to work, pain and suffering, emotional distress, and loss of enjoyment of life due to your impaired condition.
If you or a loved one has suffered a traumatic brain injury due to another person’s carelessness – such as an automobile accident caused by another person’s inattentiveness, or a slip and fall on a store’s slippery floor – it is important that you promptly seek representation by a personal injury law firm experienced in this type of injury.
Kerala's electric vehicles draft policy okayed by state finance department
The policy, which is part of a NITI Aayog initiative, aims to reduce the number of vehicles on the road by introducing modern shared transport systems such as air-conditioned e-buses and e-autorickshaws.
Published: 01st July 2018 04:07 PM | Last Updated: 01st July 2018 04:07 PM
THIRUVANANTHAPURAM: The electric vehicles (EV) or e-mobility policy draft received sanction, after a few corrections, from the state Department of Finance on Saturday. The draft has now been forwarded to Chief Minister Pinarayi Vijayan for his consent.
The EV policy draft was forwarded to the government for sanction by the Transport Department, which is currently handling the policy. Discussion on version 2 of the policy began with the trial service of an electric bus in the capital city on June 18.
The policy aims to reduce the number of vehicles on the road by introducing modern shared transport systems such as air-conditioned e-buses and e-autorickshaws. It is part of the NITI Aayog (Jhunjhunwala commission) plan, which aims to move towards an all-electric fleet by 2030.
With over ten million vehicles on the road, Kerala's high vehicle population has made mobility a challenge, accompanied by an increase in road accidents and air pollution. The state government has taken several measures to ensure sustainable development for its citizens, such as upgrading and widening the national highways to 45 meters and constructing a coastal highway.
The transition to electric vehicles is the choice for the state in line with its development ideas as there is an urgent need to accelerate the use of clean energy technologies in various sectors, in order to address the global challenges of energy security, climate change and sustainable development.
The other members involved with the policy are chief officers and secretaries of various departments, including the Kerala State Road Transport Corporation (KSRTC), the Kerala State Electricity Board (KSEB), the Kerala Infrastructure Investment Fund Board (KIIFB), and the Finance and Industries departments. Each one has its own role in this policy.
"The KSRTC has proposed for the transition of corporation's 50 per cent fleet of more than 6000 buses into EV by 2030. This is expected to substantially reduce heavy outflow due to fuel cost. The electricity cost per unit will be Rs 5. The corporation has also asked for permitting charging facilities for other public vehicles on payment," said Tomin J Thachankary, KSRTC CMD.
KSRTC currently procures around 1,000 new buses annually and is planning to replace these buses with EVs, along with charging infrastructure and an innovative electricity tariff. To get these vehicles, the corporation is waiting to be included in the central government's FAME II list, which provides subsidies for EVs.
"The KSEB is responsible to set up charging stations and provide electricity at feasible prices for EVs. Initially it aims to establish charging stations at Thiruvananthapuram, Ernakulam and Kozhikode," said Pradeep, Chief Engineer of KSEB.
According to the policy, the role of Industries will be to establish high-tech manufacturing industries in areas like design, power electronics and IT components for EVs. Kerala focuses on growing its internal manufacturing ecosystem and turning away from being an export-dependent, consumption-driven economy.
Air pollution due to particulate matter in Kerala, though it does not exceed the national guideline value, is substantially higher than the guidelines recommended by the WHO.
India’s transport sector is responsible for about 15 per cent of the country’s energy-related CO2 emissions and the accompanying impacts on air quality, public health, road safety, and sustainable urban development. Determined action therefore needs to be initiated to curb air pollution in Kerala.
New York recently became the second state in the country to ban racial discrimination based on hairstyles. California passed a similar law, known as the Crown Act, in July. Given that other states, including New Jersey, are exploring their own legislation, employers in all states should review their grooming or appearance policies for provisions that limit or otherwise restrict natural hair or hairstyles.
The push to extend anti-discrimination protections to hairstyles can be traced back to New Jersey. In 2018, Andrew Johnson, a high school wrestler from Buena Regional High School, was forced to cut his dreadlocks immediately prior to his scheduled competition. The official gave Johnson 90 seconds to either cut his hair off or forfeit the match. In the wake of the incident, state legislatures have moved to curtail such forms of indirect racial discrimination.
New York’s Ban on Hairstyle Discrimination
New York’s new law (S.6209A/A.7797A) amends the state’s Human Rights Law and Dignity for All Students Act to specify that discrimination based on race includes hairstyles or traits associated with race. Both laws now include subsections that define race to include "traits historically associated with race, including but not limited to hair texture and protective hairstyles." The new law defines "protective hairstyles" to include, but not be limited to, such hairstyles as braids, locks, and twists.
Gov. Andrew Cuomo signed S.6209A/A.7797A into law on July 12, 2019. The provisions took effect immediately. "For much of our nation's history, people of color - particularly women - have been marginalized and discriminated against simply because of their hairstyle or texture," Governor Cuomo said. "By signing this bill into law, we are taking an important step toward correcting that history and ensuring people of color are protected from all forms of discrimination."
New Jersey Proposed Hair Bias Legislation
In New Jersey, legislation has been introduced that prohibits discrimination on the basis of hair in the workplace, housing, and schools under the state’s Law Against Discrimination (LAD). The provisions are largely modeled after California’s Crown Act and New York’s new law.
The bill (A-5564/S-3945) specifically amends the LAD so that the term “race” includes “traits historically associated with race, including, but not limited to, hair texture, hair type, and protective hairstyles.” Under the bill, the term “protective hairstyles” includes, but is not limited to, hairstyles such as braids, locks, and twists.
“It is a violation of their civil rights to tell you how long your hair should … it has nothing to do with how you perform in the workplace or on a wrestling mat,” said co-sponsor Senator Shirley Turner. The Senate and Assembly Labor Committees are currently considering the legislation.
Next Steps for Employers
In light of the New York and California laws (and the prospect of new regulations on the horizon), employers should review their dress, grooming and/or appearance policies to verify that hairstyles, such as afros or dreadlocks, are not prohibited. At the same time, employers should also ensure that seemingly “neutral” policies, such as those that require workers to maintain a “neat and professional appearance,” are not enforced in a way that could be construed as racial discrimination. While claims of discrimination based on hairstyles are relatively rare, the increased regulatory attention on potential hair bias is likely to fuel an uptick in litigation.
If you have questions, please contact us
If you have any questions or if you would like to discuss the matter further, we encourage you to contact us at 201-806-3364 or visit Scarinci Hollenbeck's Attorneys page to learn more about our attorneys and their legal experience.
AAEI committees respond to specific global trade concerns, bringing needed information to members and members’ views to the attention of key policy makers and administrators. Members’ common concerns and potential areas of difficulty are addressed through the committee framework. Our substantive committees include:
Chemicals and Bulk Commodities
Focuses on issues of interest to importers and exporters of chemical and petroleum products, other bulk commodities, and related transportation companies.
Customs Policy and Procedures
Focuses on projects dealing with automation, entry revision, global customs harmonization, trade security, tariff data reporting, ISF, bonds and legislation impacting the trade community.
Drawback and Duty Deferral
Focuses on regulatory changes and administrative concerns regarding all types of duty drawback and special duty deferral programs. Its goal is to get the Drawback Modernization Act passed immediately and to work with CBP to automate the entire drawback process in ACE.
Export Compliance and Facilitation
Focuses on U.S. export controls, unilateral sanctions programs, and foreign barriers to U.S. exports.
Healthcare Industries
Focuses on trade security, facilitation and compliance issues for the pharmaceuticals and medical devices industries, as well as ISA.
International Policy
Focuses on the global trade facilitation and compliance issues and needs of multinational companies and their service providers with global trade compliance teams overseas.
Member Services
Focuses on recruitment of new members, retention of existing members and development of member services and benefits.
Regulated Industries
Focuses on the unique problems of importers subject to the regulations of FDA, USDA, EPA, CSPC, DOT, FWS, ATF and others.
Textiles, Apparel and Footwear
Focuses on the importing and exporting of textiles, apparel and footwear.
Trade Policy
Focuses on regional and special trade agreements, multilateral trade negotiations, dumping and countervailing duty issues, competition policy, legislation and macroeconomic policy as it affects international trade.
Western Regional Committee
Represents the interests and concerns of member companies doing business in the western region of the U.S., and plans a bi-annual meeting focusing on Western Regional issues.
An evaluation of the contribution of critical care outreach to the clinical management of the critically ill ward patient in two acute NHS trusts.
This paper reports on an evaluation of the role and contribution of outreach in the management of the critically ill ward patient using Stake's Responsive Model (Stake, 1975) and case study methodology (Simons, 1980). Twenty cases were examined, purposefully sampling all staff involved in the case identified by an initial interview with the outreach nurse. In total, 80 interviews were carried out, 20 with the outreach nurses and 54 with other members of health care teams involved in the cases, and six further targeted in-depth interviews with senior anaesthetic and nursing staff. The outreach contribution which emerged from the data analysis consisted of four core categories: action (getting things done, getting decisions made and following through), focus and vision (concentrating on one patient and having a vision of what action was needed to meet their care needs), orchestration (a communication and co-ordinating role) and expertise (bringing critical care skills and experience to the bedside). These categories were validated and developed in the six in-depth interviews. Three themes emerged from the data describing aspects of the acute care context in which outreach operates. The interviews revealed a battle-weary workforce overwhelmed by the complex and increased demands of the critically ill ward patient. The medical and nursing teams at the bedside are inexperienced and often unsupported by senior clinical decision-makers. This is dealt with by 'passing the buck', creating gaps and delays in care management, which are the problems addressed by the outreach contribution. Outreach may solve problems for the critically ill ward patient, but the underlying causes remain poorly understood.
What’s the Difference Between Physical Therapy & Chiropractic Care?
If you’ve never been to a chiropractor before, you may find yourself drawing comparisons between physical therapy and chiropractic care. While both are focused on treating the physical body, there are significant differences between the two wellness modalities that are important to understand. By recognizing the effects and purposes of each of these therapeutic interventions, you will be better equipped to determine which one (if not both) would best suit your needs.
Here are four differences between physical therapy and chiropractic care:
#1. Chiropractors are experts on the neuromusculoskeletal system.
Chiropractors’ area of expertise is on the relationship between three major body systems: the nervous system, the muscular system, and the skeletal system. Together, these three body systems make up the neuromusculoskeletal system, which is responsible for a whole host of functions within the human body.
The nervous system in particular is an area of focus for chiropractors, who work to decrease inflammation, compression, and inhibited nerve function through their various treatments. When it’s functioning properly, the nervous system is responsible for carrying sensory feedback from the external environment via the peripheral nervous system to the central nervous system, which comprises the spine and brain. As messages are received and sent through the nervous system, messages are also given to the body to regulate various processes. Directly or indirectly, the nervous system is responsible for almost all resting processes that require no conscious effort: i.e., blinking, respiration, cardiovascular function, immune function, hormone release, etc.
When the nervous system is disrupted—which it can be through a variety of mechanisms which create imbalance or interruption to the muscular or skeletal systems—the tasks for which it is responsible can be jeopardized. Additionally, a disrupted nervous system can cause pain throughout the body, both locally and globally. In conjunction with providing messages about the environment to the brain and dispersing directives from the central nervous system throughout the body, the nervous system is also responsible for sensing imbalances and creating pain signals when something is amiss.
When there is a physical imbalance within the body, nerves transmit pain signals from the area to the central nervous system. With regard to the neuromusculoskeletal system, pain signals in nerves often mean that there is a joint or bone misalignment or a muscular imbalance. Expert chiropractors for pain can distinguish where this imbalance originates using manual manipulation, advanced digital imaging, and x-ray technology. Once the area is treated, the pain often diminishes or disappears completely. Nerve bundles which were compressed, constricted, or crushed by misalignment, inflammation and other imbalances are then returned to a state of wholeness and balance.
#2. Chiropractors realign bones and joints to create multi-system healing.
A large part of a chiropractor’s expertise and knowledge base is in realigning bones and joints within the body. As previously explored, the neuromusculoskeletal system is made up of the nervous system, the muscular system, and the skeletal system. Together, these three systems are responsible for many of the crucial bodily functions that allow human bodies to move through and interact with their environment comfortably. When any of these three systems are disturbed, the others are also compromised in their ability to function optimally.
Chiropractic care focuses on realigning bones and joints, which play an enormous role in muscular and nervous system function. When bones and joints become misaligned, or experience subluxation (when joints shift out of their ideal position in the global skeletal structure) this creates a host of imbalances throughout the body. Bones or joints experiencing subluxation create inflammation in the nearby area, constrict and crush local nerves, force the muscles of the body to attempt to compensate for the imbalance, and impair nerve communication and nerve energy transmission and flow.
To distinguish what areas of the skeletal system have been imbalanced by either accident, injury, posture, or genetic predisposition, an expert chiropractor may execute a visual analysis, employ manual manipulation, utilize digital imaging technology, and take x-rays. Each of these tools allow your chiropractic alignment expert to determine what bones and joints require attention, what the impact is on that area of the body (and on your lifestyle, pain levels, and overall comfort), and what the best mode of treatment is for your unique body.
#3. Chiropractors improve your baseline level of health.
While you may seek out a physical therapist for the purpose of rehabilitation after an injury or accident, chiropractors emphasize improving your baseline level of health. A chiropractor can absolutely help your body recover after some kind of injury, or a lifestyle that has induced pain and discomfort—but a chiropractor is equally well-equipped to assist you in raising your baseline of health by making adjustments and improvements to your global health and body structure.
For those who are interested in seeing how much they can improve their already-functional agility, mobility, and vitality, a chiropractor is an ideal wellness advocate. In addition to making significant changes and improvements to the overall structure and function of your body systems, an expert chiropractor can also aid in making minute tweaks and adjustments, the effects of which can be vastly influential on the body’s overall feeling of well-being and function. In attending to the neuromusculoskeletal system, including making just subtle changes, an expert chiropractor can improve athletic performance, overall vitality and clarity, cognition, and general comfort in the body.
#4. Chiropractors take a holistic, lifestyle-based approach.
Physical therapists are often enlisted to address local issues on the body—this may be rehabilitation for a specific body part or condition, restoring function after an accident or injury, or simply strengthening a part of the body that has been weakened due to lifestyle conditions, a genetic predisposition, or general wear and tear.
In contrast, chiropractors look at your health holistically, examining every facet of lifestyle to determine the ways in which your daily habits and routine may be affecting your symptoms or overall wellness. A chiropractor is equipped to provide lifestyle guidance, including exercises to aid and strengthen your body, take-home tools, and in-office treatments that may be supplemented with specific activities recommended for your unique body.
Which should I choose—physical therapist, or chiropractor?
Ultimately, physical therapists and chiropractors serve two very distinct purposes, with mildly overlapping services and outcomes. Ideally, you shouldn’t have to choose between either service—both physical therapists and chiropractors offer deeply therapeutic and healing-promoting care which are highly complementary to one another.
At Advanced Spine and Posture in Grand Rapids, MI, we proudly serve the entire Grand Rapids area and offer the most cutting edge chiropractic care and physical therapy to all of our visitors. We believe in taking a holistic approach that doesn’t exclude any healing modality, and instead, combines the best of both worlds to ensure the ideal wellness outcome for every individual who walks through our door.
If you or a loved one lives in the Grand Rapids, MI area and is seeking highly curated, individualized treatment designed exclusively for you—which includes both chiropractic care and physical therapy—then book your appointment today.
In 2009, our team of 12 outstanding classroom teachers and CTQ’s founder Barnett Berry debated the future of education among ourselves and with some of the greatest thinkers, researchers, and practitioners in education and future studies. Our book, Teaching 2030: What We Must Do for Our Students and Our Public Schools Now and in the Future, (2010) offered practical and controversial options for the direction of education reform.
Among the trends we identified was the potential for a new learning ecology for students and teachers. Emily Vickery predicted, “The creation of personalized learning experiences will grow more sophisticated, challenging current and future teachers to redefine what should be learned and what learning is.” Consequently, we advocated for development of more comprehensive and accurate measures of student learning than Carnegie units and standardized tests.
We also hoped for revived interest and investment in community schools as hubs for education and other services.
We vigorously rejected the de-professionalization of teaching, and we criticized how archaic systems stifled teacher creativity and isolated teaching expertise to the detriment especially of our most disadvantaged students. We urged a shifting (as countries with highly successful educational outcomes had already done) to create stable, interlocking teams of expert teachers, generalists and specialists, who work together to serve students and their families. Such teams, anchored by highly accomplished professional teachers, would support and mentor novice teachers, and be supported by a variety of specialists and volunteers.
We envisioned development of career lattices (not ladders) for the teaching profession across which teachers’ expertise could be developed and equitably distributed, allowing teachers to pursue leadership and other hybrid roles, without having to give up teaching.
T2030 teammate, Ariel Sacks, coined the phrase “teacherpreneurs” to capture our vision of “teacher leaders of proven accomplishment, with a deep knowledge of how to teach, a clear understanding of what strategies must be in place to make schools highly successful, and the skills and commitment to share their expertise to others—all the while keeping at least one foot firmly in the classroom” (137).
We sought to increase the numbers of risk-taking, still-teaching teachers who would be recognized and rewarded for their innovation and commitment to spreading expertise across the profession not just monetarily, but also in terms of policy input and school leadership.
So, we went to work, as a network and in our respective areas.
Recently, some of the T2030 team reflected on our vision and on the prospects for U.S. public education.
Originally, we saw hope in then new technologies that could offer unprecedented opportunity to reimagine school. Now, Emily insists that the Open Educational Resources movement (OER) has not taken off as expected; Massive Open Online Courses (MOOCs) are dead; and blended learning, despite all its potential, is mired in poor implementation.
However, she sees reason for hope in the calls for more civics education and grassroots social movements (e.g., #BlackLivesMatter, #MeToo, #NeverAgain), in wearable technology, and in development of projects like Learning Cities.
Happenings since the publication of Teaching 2030 convince Emily even more strongly of the need for teacher leaders to focus on state and local policy, in coalition with students, parents, and communities.
Ariel Sacks shares that, for the first 3-5 years, she was hopeful, especially working in NYC’s public charters to create new roles for teachers. However, she became less hopeful when she realized teachers were not being paid commensurate with their new responsibilities.
Ariel sees teachers having more input over classroom curriculum than in the past and that more teachers are spreading their expertise through social media.
Another veteran NYC teacher from the T2030 team, Jose Vilson, co-founder of #EduColor, also has a sobering analysis of current realities.
Despite these gloomy realities, what the 2030 team members remember and value most was the collaboration and the opportunity to insert views from the classroom into the education reform conversation.
Since writing Teaching 2030, I have not wavered in my perspective on what matters for students in the future of education. I still believe that a transformed learning ecology is perpetually on the horizon… I still believe that teachers will continue to push the boundaries of their classrooms and their profession through innovation and digital tools. I still believe that differentiated pathways and teacherpreneurs are the way the profession will cross the threshold from missionary work to self-governing profession.
Halfway to 2030, as CTQ prepares to celebrate 20 years of working with teacher leaders to effect quality change in public education, we’re re-visiting this conversation. This roundtable will look beyond the uncertainties of what might lie ahead to how we could respond to emergent issues.
How are you preparing to respond to what is next for public schools? What do you think might be keys to scaling up the bright spots being created (maybe you’re part of one of these!) to build a new kind of future for education?
Renee’s post is part of a roundtable blogging discussion exploring the future of education. We want to hear your thoughts! Join the conversation by commenting on and sharing this blog and by reading the other blogs in this series. Follow CTQ on Facebook and Twitter to see when each new blog is posted and use #CTQCollab to join the discussion on social media.
Using authors you know to find books you'll love: why reading beyond an author's most famous work can help you find a hidden gem.
Tag: harper lee
5 Books On Empathy For ‘To Kill A Mockingbird’ Fans To Read
Please enjoy this selection of five books fit for an audience of 13 years and older – basically a target audience of mainly YA lovers – so that you can not only grow to understand what others go through, but also grow to understand yourself.
On This Day in 1960: ‘To Kill a Mockingbird’ Published
On July 11, 1960, Nelle Harper Lee published her first novel, the incredibly renowned 'To Kill a Mockingbird'.
7 Harper Lee Quotes We Love
For Harper Lee's birthday, we have compiled seven of our favorite quotes from her novels and beyond.
Remembering Harper Lee With 11 Facts About Her Life
Today we remember Harper Lee, and her 1960 masterpiece, with eleven lesser-known facts about her life.
3 Money Myths …. Exposed!
Have you ever caught yourself saying something, just because it was what everybody else was saying? We all swear we won’t become our parents, and then we say things like, “Money doesn’t grow on trees!” when, in fact, it does.
There are a lot of things people say about money that simply aren’t true.
These are the sayings that form the basis of our thoughts about money. When our thoughts, feelings, and actions all line up, we experience success. We are able to accomplish anything, be it personal, business, or financial goals. But when what we are thinking isn’t true, the actions that follow can lead to results we never intended. In other words,
What you think about money is just as important as what you do with it!
Our thoughts about money lead to our actions. We spent a lot of time talking about that this past Saturday on the RichLife Show. The recording of that show will be available a bit later on in the week. Take the time to listen to it. We cover a HIGHLY controversial topic when it comes to our thinking and what we do with our money. But for right now, let’s take a look at a few money myths to help get us on track for a successful financial year.
Money Myth # 1: High Risk Equals High Return
This myth was part of my training as a financial planner, and I’m still hearing it today. We have all been taught to accept that in order to increase our chances of winning, we must increase our chances of losing.
Yet in practice, this means one thing – you will have a greater chance of losing!
It follows that the younger you are, the more risk you can tolerate. You have more earning years ahead of you, and so you are lulled into thinking that it’s okay to be in a losing position, because you can always make it up again. But why should you have to?
Ask yourself: why would you want to increase your chances of losing? You worked hard to earn your money!
When financial institutions invest, they go to great lengths to minimize all risk. They would never operate on the principle that you can make greater profits by increasing your chances of losing. Please allow me to introduce a new principle:
To increase your chances of winning,
you must decrease your chances of losing.
You do this by transferring as much risk to your investments as possible. This principle should be the foundation for all your RichLife investments.
Money Myth #2: The more money I accumulate, the more financially secure I will be
Most people go to a financial planner with one goal in mind — to invest earnings for the greatest returns. Because of this mindset, most financial planners focus mainly on the financial statements – the bottom line of monetary accumulation.
But ask yourself: is that the best goal? Should accumulation be the highest priority?
People rationalize to themselves that they must accumulate in order to be happy. But building a RichLife is about all of our assets – human, physical, and financial – and money only plays one part of that.
Upon closer inspection, most people realize it’s not having to worry about money that truly brings peace and happiness.
When you realize this as your bottom line, you can choose investments that accomplish your financial goals without putting them unnecessarily at risk. You won’t go for the “high risk equals high return,” because that won’t bring you peace of mind. If a product that will give you 6% growth over 10 years is enough to do the trick, you won’t get sidetracked, and you won’t take a big hit because you thought a big return was worth the risk.
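For a sense of scale, here is a quick back-of-the-envelope check of that example in Python (the $1 principal is just for illustration):

```python
# Future value under fixed annual compounding: FV = P * (1 + r)^n
def future_value(principal, annual_rate, years):
    return principal * (1 + annual_rate) ** years

# The article's example: 6% growth over 10 years.
print(f"$1.00 grows to ${future_value(1.0, 0.06, 10):.2f}")  # $1.79
```

In other words, a "modest" 6% product grows your money by almost 80% over a decade, without taking on outsized risk to get there.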
Money Myth #3: That will never happen to me!
Operating under this premise, we self-insure, deciding to pay for things as they come up.
We tell ourselves, “There is no way we could ever lose our job, become disabled, or, ultimately, die.”
And so we don’t apply the principles of Risk Transfer. We don’t invest in an adequate health insurance plan, we don’t plan for anyone losing their job, and we certainly don’t plan for what will happen in the event of death. Planning may not make life tragedies any less painful, but it will certainly help to make them less stressful.
What do you think about money? Has Life ever caught you by surprise? Help spread the RichLife message by sharing your story. I’d love to hear it!
Replacing application code logic with SQL statements is very common. It saves developers time, effort, and also mistakes. However, substituting code with SQL can oftentimes result in complex and long SQL statements that are not performing as fast as expected. If your query is running slow and you want to understand why - you’ve come to the right place.
In this article, we will guide you through the process of identifying the performance bottlenecks in your query in just 5 simple steps. Keep on going through the steps until you identify a query part that takes a significant amount of time out of the total execution time. Keep in mind that there could be more than one bottleneck in your query so you might need to repeat this process a few times.
We will be using the following query to demonstrate each of the steps:
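The original example query did not survive extraction, so the sketch below is a hypothetical stand-in, reconstructed only from the elements the article references later (a subquery_a joining table_a, table_b and table_c with a filtered WHERE IN, a subquery_b containing a UNION of GROUP BY subqueries, and a final SELECT with a window function and an ORDER BY). All table and column names are illustrative.

```sql
WITH subquery_a AS (
    SELECT a.col1, a.col2, b.col3, c.col4
    FROM table_a a
    JOIN table_b b ON a.col1 = b.col1
    JOIN table_c c ON b.col2 = c.col2
    WHERE a.col2 > 0
      AND a.col1 IN (SELECT col1 FROM table_d WHERE col5 = 'relevant')
),
subquery_b AS (
    SELECT col4,
           SUM(col2) AS total,
           COUNT(DISTINCT CASE WHEN col3 > 0 THEN col1 END) AS actives
    FROM subquery_a
    GROUP BY col4
    UNION
    SELECT col4, SUM(col2), COUNT(DISTINCT col1)
    FROM table_e
    GROUP BY col4
)
SELECT col4,
       total,
       RANK() OVER (ORDER BY total DESC) AS total_rank
FROM subquery_b
ORDER BY total_rank;
```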
Step 1: make sure no other queries are running
It’s important to verify that no other queries are running in the background while you are debugging your query. Other queries might compete for the same resources as your query (RAM, CPU, and even disk), thus impacting its execution time. Run this query to make sure no other queries are running:
SELECT * FROM catalog.running_queries;
Step 2: measure compilation duration
Before getting to the execution phase, every query needs to be compiled by the engine. In most cases, compilation runs in sub-seconds, but in some cases, it might take longer, and even take the majority of the query run time. To get an approximate estimation of the query compilation time, run an EXPLAIN over your query, like this:
EXPLAIN <ORIGINAL QUERY>;
Step 3: measure the fetch duration
The process of clients retrieving the result-set from the server (AKA fetch) is heavily impacted by many factors such as the client’s network, the number of records, the number of columns, and the size of the column values. We are going to measure the fetch duration: the time from when the query execution is finished to when the result-set is available to the client.
Remember that FETCH is extremely inefficient for very large result sets. To get the number of fetched records, run this query: SELECT COUNT(*) FROM (<ORIGINAL QUERY>);
To understand the FETCH duration of your query - compare the original query run time with the run time of the following query: SELECT CHECKSUM(*) FROM (<ORIGINAL QUERY LINES>);
Step 4: set up a warmup execution baseline
Next, we want to get the execution time of the query. It’s important to run the query at least twice in order to ensure at least one warm execution (where the relevant data has already been loaded from cold storage into the engine’s local cache). We recommend running this baseline benchmark from a fresh session to ensure that there aren’t any unwanted settings that impact the performance (unless they are planned to be used).
The most accurate way to get a query execution time is by fetching its duration from the catalog.query_history table, like this:
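A sketch of such a lookup follows; the exact column names in catalog.query_history vary between engine versions, so treat them as assumptions:

```sql
SELECT query_text, start_ts, duration_usec
FROM catalog.query_history
ORDER BY start_ts DESC
LIMIT 10;
```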
While you’re at it, it is also recommended to keep an eye on some other metrics that can point you to the actual bottleneck:
- A high number of scanned rows might indicate that the bottleneck is scan time.
- A high number of bytes scanned from cold storage might indicate that the bottleneck is cold execution.
- A high number of RAM bytes consumed might indicate that the bottleneck is some memory-based process, like aggregation or join.
Once you’ve got a consistent execution time for the query, you are ready to move on to the next step: diving deeper into the syntax of your SQL query to identify the bottleneck.
Step 5: query reduction
We will now start stripping elements from the query, top to bottom. The idea is to reduce the query until a certain reduction results in significant performance improvement - which means the last reduced element is the bottleneck. To remove certain query parts, we recommend commenting the part out so it is easy to bring it back when required. In case of a more complex query, which contains subqueries, we’ll start with the top-level, and then make our way down until we find the bottleneck. Let’s now analyze the query step by step and list the potential bottlenecks.
ORDER BY clause
Not a common time-waster, and yet something we have to rule out before moving forward. There are multiple factors that might affect ORDER BY step duration:
- The size of the sorted result set.
- Whether or not the sorted fields are part of the primary index.
- The number of fields in the ORDER BY and their granularity.
In our specific example, there is an ORDER BY in the final SELECT statement.
As the ORDER BY step is executed as one of the very last steps of the query execution, we can simply comment it out and check performance without it:
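Using the article’s illustrative names, that reduction could look like this:

```sql
SELECT col4,
       total,
       RANK() OVER (ORDER BY total DESC) AS total_rank
FROM subquery_b
-- ORDER BY total_rank   -- commented out to measure the sort's cost
;
```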
SELECT clause
Next, we should inspect the result set target list. A straightforward SELECT should not take too much time, but there are some functionalities that take longer to compute and might end up as bottlenecks:
Note: aggregation functions part of the SELECT statement, accompanied by GROUP BY clause, are addressed separately in the GROUP BY clause section.
- Window functions or analytical functions - can be huge time consumers. Keep in mind that the actual under-the-hood implementation of window functions often includes aggregations (NEST and UNNEST is one approach), thus exceeding the boundaries of SELECT clause computation.
- JSON functions - JSON parsing and extract functions durations might accumulate to a bottleneck, depending on the number of parsed JSONS, the size of the JSON, and the number of extracted keys from each JSON.
- Lambda functions - might also accumulate to a bottleneck, depending mainly on the size of the array/arrays it is running over.
- Case statements with long conditions list.
- Fields with very long values.
- A large number of fields in the target list.
In the case of a long target list, we recommend commenting out half of the list recursively, until you are able to focus on the problematic fields. In case a function is involved in the field that is suspected as the bottleneck, replace the function with a plain fetch of the processed underlying value. In our example, we should examine reducing the window function into a regular fetch:
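A hypothetical version of that reduction, with the window function swapped for a plain column fetch (names are illustrative):

```sql
SELECT col4,
       total,
       total AS total_rank   -- was: RANK() OVER (ORDER BY total DESC)
FROM subquery_b
ORDER BY total;
```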
Then check the performance again.
UNION clause
There are 2 typical patterns that might cause a UNION clause to become a bottleneck:
- A UNION is performed (rather than UNION ALL) over very large data sets, thus the engine takes a long time to compute the final deduplicated result set.
- The UNION is performed over complex or numerous subqueries, thus the computation of all the subqueries (even if executed in parallel) accumulates into a performance bottleneck.
Going back to our example: we finished analyzing the final SELECT, so now we go one level in, into subquery_b.
As a first inspection, we should replace the UNION with UNION ALL and check if it improves run time. In case you are seeing some improvement - run SELECT COUNT(*) over the UNION ALL clause, to see how many records are part of the UNION operation.
Then, we go over the UNION subqueries and execute them one by one separately. If nothing stands out, we should look into the accumulated time spent on the UNION by adding more made-up, similar subqueries to the query. In our example, we could do:
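One way to sketch this with made-up duplicate branches (table and column names are illustrative):

```sql
SELECT col4, SUM(col2) FROM table_e GROUP BY col4
UNION
SELECT col4, SUM(col2) FROM table_e GROUP BY col4   -- synthetic duplicate branch
UNION
SELECT col4, SUM(col2) FROM table_e GROUP BY col4;  -- synthetic duplicate branch
```

The branches return identical rows, so the result set is unchanged; only the per-branch computation cost scales, which is exactly what we want to measure.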
If adding more subqueries results in a linear increase in run time - that means the UNION is indeed a bottleneck. In this case, we recommend checking whether your UNIONs are essentially subqueries accessing the same data set each time for different results, or each subquery accesses a different data set (that would impact the optimization technique).
GROUP BY clause
We can divide the typical aggregation-related performance issues into 2 groups:
- Functions-related issues - performance issues related to the aggregation functions themselves:
- The number of calculated functions.
- The complexity of the aggregated values - aggregation over values that are computed on the fly like case statements, text parsing, and semi-structured data parsing.
- The specific aggregation function that is being used - some functions are more complex and require more resources to compute than others (for example, COUNT vs COUNT DISTINCT).
- Group by related issues - performance issues related to the fields listed in the GROUP BY clause:
- Number of fields in the clause.
- The granularity of the listed fields.
- The types and length of the listed fields.
Back to our query, let’s take one of the subqueries under the union in subquery_b as an example:
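The original snippet was not preserved; a plausible shape for that branch, consistent with the elements the text mentions (a SUM, a COUNT(DISTINCT) over a CASE statement, and a GROUP BY including col4), might be:

```sql
SELECT col4,
       SUM(col2) AS total,
       COUNT(DISTINCT CASE WHEN col3 > 0 THEN col1 END) AS actives
FROM subquery_a
GROUP BY col4;
```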
We start by inspecting the aggregation functions. While the SUM is straightforward and does not reflect special complexity, a COUNT(DISTINCT) over a CASE statement just might. To verify whether the function itself is a potential bottleneck, we can replace the COUNT(DISTINCT) with a simpler COUNT and/or replace the computed CASE statement with a plain value, like this:
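A hypothetical simplification along those lines (illustrative names):

```sql
-- COUNT(DISTINCT CASE ...) swapped for a plain COUNT over a plain value:
SELECT col4,
       SUM(col2) AS total,
       COUNT(col1) AS actives
FROM subquery_a
GROUP BY col4;
```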
Next, move on to inspect the GROUP BY clause itself. We start by identifying how many unique values the clause produces by running this type of query:
If we suspect that one of the fields in the GROUP BY clause is the main contributor to the clause’s complexity, we can try to comment it out or replace it with another field. For example, in our query the field col4 contains long string values, so we can replace it and see the effect on the execution time:
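Two illustrative probes (the names, and the choice of LENGTH as a cheap stand-in for a long string key, are assumptions):

```sql
-- How many groups does the GROUP BY produce?
SELECT COUNT(*) FROM (SELECT col4 FROM subquery_a GROUP BY col4);

-- Swap the long-string col4 for a cheaper stand-in and re-measure:
SELECT LENGTH(col4) AS col4_stand_in,
       SUM(col2)    AS total
FROM subquery_a
GROUP BY LENGTH(col4);
```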
JOIN clause
This is considered one of the most common bottlenecks. The idea of the JOIN clause inspection is to verify whether the actual operation of joining the 2 or more data sets is what’s hurting the performance (rather than the individual scanning of each data set). There are multiple factors that can affect JOIN performance:
- The number of joined data sets.
- The number of rows of each data set.
- The complexity of the JOIN condition.
- The JOIN type (INNER / LEFT / CROSS)
Back to our query, as we finished inspecting subquery_b it’s now time to inspect subquery_a:
As we can see, there is a JOIN between 3 tables in this subquery. As a first step, we recommend replacing the JOIN with a UNION ALL which should provide a clear indication of whether the issue is within the actual JOIN operation (and based on this operation we’ll know if we need to continue diving in). So in our query, we can do something like this:
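A hedged sketch of that substitution, using the article’s illustrative table names; note the last branch pads its column list, which is the compromise on the table_c scan the next paragraph mentions:

```sql
SELECT a.col1, a.col2 FROM table_a a
UNION ALL
SELECT b.col1, b.col2 FROM table_b b
UNION ALL
SELECT c.col2, c.col2 FROM table_c c;  -- table_c scan compromised to align column counts
```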
Note that this method isn’t flawless, and you might need to make some compromises regarding each table scan (in our example we had to compromise table_c scan to make the UNION work). Nevertheless, if the UNION query is significantly faster than the JOIN query, then it’s most likely that the JOIN itself is the bottleneck, and it’s recommended to continue with its inspection.
As a next step, we can try to identify whether a specific table is responsible for the bottleneck, by commenting out one table at a time. First, table_b:
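For example (illustrative names; the join condition is rewired to keep the reduced query valid, which is itself an approximation):

```sql
SELECT a.col1, a.col2, c.col4
FROM table_a a
-- JOIN table_b b ON a.col1 = b.col1
JOIN table_c c ON a.col2 = c.col2   -- condition rewired after removing table_b
WHERE a.col2 > 0;
```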
And then table_c:
Once you identify the specific table JOIN that takes most of the time, try inspecting the JOIN condition (in case it’s not straightforward) by commenting out complex conditions:
If there is no issue with the JOIN condition itself, the bottleneck is probably a result of the number of records that need to be crossed as part of the JOIN operation. Run SELECT COUNT(*) over each side of the JOIN starting with table_a:
Then table_c:
Finally, run a count over the JOIN product, to check whether there is a one-to-many / many-to-many JOIN:
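The three counts could be sketched like this (illustrative names and filters):

```sql
SELECT COUNT(*) FROM table_a WHERE col2 > 0;   -- one side of the JOIN
SELECT COUNT(*) FROM table_c;                  -- the other side

SELECT COUNT(*)                                -- the JOIN product
FROM table_a a
JOIN table_b b ON a.col1 = b.col1
JOIN table_c c ON b.col2 = c.col2
WHERE a.col2 > 0;
```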
In case the result of the COUNT on the JOIN product is significantly higher than the COUNT on each side of the JOIN - the JOIN itself is most likely a bottleneck.
FROM and WHERE CLAUSES
The last 2 clauses we inspect are the FROM and the WHERE clauses. We should inspect them together, as they are tightly connected, and are essentially the heart of every query execution.
We refer to the time spent on these 2 clauses as scan time, which is determined by the number of records that have to be scanned in order to return the desired results. If you have managed to narrow your query down to a single scan and performance still does not improve, there is only one more potential bottleneck to rule out before you can determine that the bottleneck is the actual scan: a WHERE clause related bottleneck, i.e. an applied filter that makes the query run slower. Some typical situations where you might encounter this:
- WHERE IN clauses which are based on another subquery - where the subquery computation might be a bottleneck.
- WHERE IN clauses which are based on very long lists of values - where the long list of values parsing might be a bottleneck.
- Complex text searches over long strings - like regular expressions and other string functions.
- Semi-structured objects filtering - filters applied on JSONs / arrays
Back to our query and subquery_a:
As we cleared the JOIN clause earlier, it’s now time to inspect each table scan individually. In terms of potential WHERE clause related bottlenecks, we can see that only table_a is filtered, so we can focus on it:
Our goal is to identify whether there is a filter that is applied and has a bad impact on the overall run time. We do that by simply commenting out one of the filters at a time, and checking if the run time improves. For example:
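For instance, with hypothetical filters on table_a (names are illustrative; the WHERE IN subquery is a stand-in for whatever subquery feeds your filter):

```sql
SELECT a.col1, a.col2
FROM table_a a
WHERE a.col2 > 0
  -- AND a.col1 IN (SELECT col1 FROM table_d WHERE col5 = 'relevant')
;
```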
If run time indeed improved by this move, the filtering subquery is most likely your bottleneck (you can check run time of it to verify):
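Timing the filtering subquery on its own could look like this (table_d and col5 are hypothetical stand-ins for the subquery behind your WHERE IN):

```sql
SELECT col1 FROM table_d WHERE col5 = 'relevant';
```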
If you followed all the steps and got here without finding your bottleneck (your reduced query is still performing slowly), then by this process of elimination it is most likely that your query bottleneck is the actual table scan.
Both occupancy rate and revenue per available room (RevPAR) of hotels in HCM City fell slightly in the third quarter due to an increase in room numbers and lower demand.
According to a survey by property services provider Savills Vietnam, the average occupancy was 60 per cent, down 2 per cent from the previous quarter and 5 per cent year-on-year.
The average room rate (ARR) was at VND1.75 million (US$83)/room/ night, a slide of 1 per cent and 6 per cent and the lowest rate in the last four years.
RevPAR for all three grades decreased, with the three-star segment seeing the sharpest drop of 8 per cent quarter-on-quarter, followed by 6 per cent for the four-star segment, and 2 per cent for five-star.
One new four-star hotel and 95 additional rooms in an existing three-star hotel in District 1 entered the market.
As of the third quarter there were more than 12,100 rooms in 90 hotels, a 2 per cent increase quarter-on-quarter and 5 per cent year-on-year.
International visitor numbers to the city also reduced, falling 1 per cent year-on-year in the third quarter to 892,540.
In the next two quarters the market is expected to add more than 1,300 new rooms in four hotels, all five-star and located in District 1.
A report released by Grant Thornton on October 16 said the hotel industry in the country as a whole achieved a year-on-year growth of 4.2 per cent in RevPAR in the first half to $56.6.
The increase was attributed to an average occupancy growth rate of 4.1 per cent and a small increase in average room rate of 0.1 per cent, the report said.
The five-star segment achieved a better performance in the period than a year earlier with a significant RevPAR surge of 15.6 per cent, mainly due to a rise in occupancy rate of 5.5 per cent.
A 0.4 per cent decrease in RevPAR for three-star hotels was attributed to a reduction of 4.1 per cent in the occupancy rate.
The four-star segment increased its average rate by 4.7 per cent.
Overall, average room rates rose marginally to $88.35.
But they were down by 2.3 per cent compared with the full-year average room rate in 2012.
When classifying average room rates by region, the Central and Highlands saw the highest rise of 5.8 per cent to $92.20.
The north and south saw declines of 5.3 and 3.4 per cent.
Room rates were down 5.9 and 1.8 per cent in Hanoi and HCM City. But Phan Thiet saw a jump of 8.3 per cent to $106.95.
As the recipient in 2016 of the Clore Visual Artist Fellowship supported by a-n, I had the incredible opportunity to visit India and met many people working within the arts in Bangalore and Kochi. My impression – from a limited view of this vast country – is that artists in India, where there is little in the way of institutional arts infrastructure, often create their own platforms out of an implicit understanding that no-one is likely to do it for them.
It’s in sharp contrast to my own experience in the UK, where a conversation between artists is never too far from descending into a moan-fest. It’s something I’m guilty of myself, but afterwards I often think: “Why was I so negative? Is that really how I feel?” Is it possible that we have unwittingly developed a culture of complaint that has become so natural that we often aren’t aware when we’re perpetuating it?
One example of an artist-initiated platform I visited in India is the country’s first biennial art exhibition, the Kochi Muziris Biennale, started six years ago by Bose Krishnamachari and Riyaz Komu. They have overseen its rapidly growing local and international reputation while retaining its artist-led focus: each biennale is curated by an artist with many of the featured emerging artists exhibiting at a biennale for the first time.
In 2009, Archana Prasad, also a 2016-17 Clore Fellow, initiated Jaaga in Bangalore. A multi-platform support structure for emerging artists, designers and technologists, it has a particular focus on engaging broad audiences in a public context in the absence of institutional spaces.
Compared with Bangalore and Kochi, we have a vast array of arts infrastructure in the UK: museums, galleries, collections, awards, residencies, funding, talent development programmes. According to statistics from the British Council: “The UK has the largest creative sector of the European Union. In terms of GDP it is the largest in the world, and according to UNESCO it is, in absolute terms, the most successful exporter of cultural goods and services in the world, ahead of even the US.”
Yet despite having this highly developed arts infrastructure, it seems that it is still incredibly difficult for artists in the UK to sustain a career from their art practice. Research undertaken by a-n in 2013 revealed that 72% of the respondents earned £10,000 or less per annum from their work as artists.
Could it be that the very presence of a well-developed arts infrastructure is in itself a barrier, inhibiting us from finding creative solutions to our careers? Do we hold off on developing our own platforms to share our work while we wait for existing structures to do it for us? Moreover, does the institutional art world dominate our perceptions, understanding and expectations of what art is? In doing so, does it present narrowly-defined models for how to be an artist?
Artistic success continues to be benchmarked via solo exhibitions in institutional spaces, despite the fact that much of today’s artistic production simply doesn’t fit with this limited format. Could institutions do more to support work being made for outside of their walls – online spaces, homes and workplaces for example? Is it a problem for diversity if the majority of artists working in these institutional contexts have graduated from the same handful of (mostly London-based) graduate programmes?
Could it also be that a deep desire among many artists for their work to eventually be held in a public collection is something that limits other possibilities for the existence of art in the world? In one of the largest surveys of artists’ attitudes to the art market, Arts Council England’s 2004 ‘Taste Buds’ report stated: “Art is like no other commodity in that the ultimate desired resting place for an artwork is within a public collection. The dynamic within a large part of the art sector is the aspiration, by artists and their intermediaries, for their art to attain a place in museum or gallery collections.”
There are no statistics for what percentage of all art produced is accessioned into public collections, but my guess is that it is very, very low. What would change if artists shifted this value system, not making art to last forever but for right now?
So, what is it that we actually want an arts infrastructure to do? Is it to turn out a handful of big success stories or is it about enabling access to the arts for more people? While we may say that we want more resources to help us make art and develop careers, is there really any point in more artists making more work without more audiences having also been nurtured and developed? When we’re done with complaining, are artists clear and united on what we want?
Maurice Carlin was the 2016-17 Clore Visual Artist Fellow supported by a-n. Read a Q&A with him here
They might not look it, but sea urchins are the underwater equivalent of Mongol hordes. Left unchecked, they can strip a kelp forest clean, leaving a barren seafloor that supports little life. That also makes them good markers of an ocean habitat’s health: too many of the spiny little critters means that something is out of whack.
Most sea urchins are no more than a few inches across. They have a hard, dome-like shell that’s covered with spines that can jab your foot if you step on them.
The urchin’s underside is covered with small tube feet that are controlled by pumping water through the body. The urchin scoots along the bottom and eats algae and dead organisms. And in fact, keeping algae in check is an important service — it keeps the algae from overrunning coral reefs and other habitats.
One of the urchin’s favorite foods, though, is kelp — green strands of seaweed. Kelp forests are an important ocean habitat. They provide food and shelter for many species of fish and other creatures.
Normally, sea urchins are kept under control by predators such as giant lobsters and sea otters. The otters crack open the shells on rocks, then put them on their bellies while they float on their backs and eat the tasty innards.
But when the number of predators drops, the urchins thrive. They can sweep across a kelp forest by the millions, wiping out entire beds in months. That kills or chases away the other creatures that live in the kelp beds — leaving a seafloor that’s almost bereft of life. | https://www.scienceandthesea.org/program/200811/sea-urchins |
Cite this as: Bernal Rodríguez CE, García AC, Ponce-Palafox JT, Spanopoulos-Hernández M, Puga-López D, et al. (2017) The Color of Marine Shrimps and Its Role in the Aquaculture. Int J Aquac Fish Sci 3(3): 062-065. DOI: 10.17352/2455-8400.000030
In the present review, we describe aspects of the color of marine shrimp of importance in aquaculture (mainly Penaeus japonicus, Litopenaeus vannamei and Penaeus monodon) around the world. We describe, in a general way, ecological aspects and factors that affect the color of shrimp, as well as specific topics such as color change, the importance of pigments in color, and the effect of cooking and storage processes on shrimp color. We also discuss strategies that have been used to improve color during recent decades, as well as the ability to select shrimp genetic lines for color.
CRCN: Crustacyanin; H: Homogeneous Individuals; ST: Striped Translucent Shrimp
The color in aquatic organisms has been studied mainly in an ecological and evolutionary context . It has been found that color is perceived by aquatic organisms differently than humans, so this has motivated effort to quantify both color traits and their visual environments . Color traits are studied for understanding morphological adaptation, visual orientation, communication, and deception as well as for exploring processes such as speciation and mimicry or camouflage .
The yellow, orange and red pigmentation present in aquatic organisms is mostly caused by carotenoids. They occur in free form, esters, glycosides, sulfates and carotenoproteins, and the oxidized derivatives are called xanthophylls. Among the 750 carotenoids reported in nature, more than 250 are of marine origin, and marine animals contain carotenoids that show great structural diversity. De novo synthesis of carotenoids is confined to certain microorganisms, fungi, algae, and higher plants; in general, marine animals (such as crustaceans) do not synthesize carotenoids de novo, so those found in shrimp are either directly accumulated from food or partly modified through metabolic reactions. Shrimp can, however, convert dietary precursors such as β-carotene into astaxanthin, and astaxanthin into astaxanthin esters. Free astaxanthin in shrimp is bound within a multimeric protein called CRCN. CRCN is widespread amongst crustaceans, producing the dark blue/slate colorations of the carapace that are common in this phylum. The interaction of CRCN and astaxanthin shifts the naturally red carotenoid to blue or other colors, producing the diverse array of colors in shrimp. Crustaceans present a wide range of species-specific colors and patterns, which are used for protection through cryptic coloration, reproduction, and communication. Color plays a role in consumer acceptability, perceived quality and the price paid for commercial crustacean species, currently mainly L. vannamei (Figure 1). The color may reside in the exoskeleton or in pigment-containing structures within the underlying hypodermal layer known as chromatophores. The amount and distribution of pigment depend on dietary, environmental and genetic factors. The main purpose of this study was to review the importance of color in the marine shrimps cultivated in ponds around the world.
It examines the ecological and evolutionary bases, as well as practical aspects of importance in the marketing of shrimp.
The habitats of shrimp are very heterogeneous and exhibit wide diversity in color, brightness, and pattern. It has been found that decapod crustaceans, among other invertebrates, have the ability to change color over both short and long timescales: in the first case within seconds, minutes or hours, and in the second over longer periods associated with phenotypic plasticity and development [15,16]. The main function is to reduce the risk of detection and recognition by predators through camouflage.
It has been found that marine shrimps mainly present two camouflage strategies, morph-specific coloration (based on color and polymorphism) and habitat selection, and can be classified as H individuals with varying coloration, tending toward greenish-brown or pink, and ST individuals. Shrimp of the genera Penaeus and Litopenaeus belong to category H, where camouflage employs a strategy specialized to a limited number of backgrounds at any time. They are capable of changing color in just a few days toward the type of background on which they sit, and habitat affinity is higher for H shrimp, whereas swimming activity is higher for the ST morph, which indicates that strategy-H shrimp tend to have a more benign lifestyle. It is known that shrimps are able to change body color in relation to environmental conditions.
Color change
The color changes are under the control of eyestalk hormones; they are rapid, reversible and rhythmic [13,14]. Others are slower and more permanent, with modifications of exoskeletal pigment composition or concentration. Several questions have been raised about the nature, mechanisms, evolution, and adaptive value of color changes and plasticity for concealment. Color changes in shrimp occur for several reasons, including camouflage, thermoregulation, signaling, stress and ultraviolet light protection. In most shrimp, color change can involve physiological processes of contraction and dispersion of pigments within the chromatophore cells. Most studies have focused on the physiological, functional and ecological aspects, and scarcely on pigmentation patterns and the importance of color in the commercialization of the main culture species in the world [17,21], due to increased consumer acceptance, improved product quality and the price paid for commercial shrimp species [22,23], with dark red colored shrimp attracting premium prices.
The colors of aquatic animals are derived from natural compounds such as chlorophyll, porphyrins, and carotenoids. Shrimp color is largely dependent on the amount of pigment (mainly astaxanthin) present in the exoskeleton and the epidermal layer. Shrimp pigmentation is influenced by the interaction of several factors, among which are the amount of dietary carotenoid, the distribution of hypodermal pigments, background substrate color, photoperiod, light intensity, stress, temperature, heavy metals (mainly copper) and genetics [15,25,26]. Body color is one of the factors that determine the quality, preference and price of shrimp, and the concentration of astaxanthin controls the color of the shrimp and prevents damage caused by excess light. Niamnuy et al. determined that the degradation of astaxanthin and color follows a first-order kinetic reaction, and that the temperature dependence is explained by the Arrhenius relationship. In addition, adequate correlations between astaxanthin degradation and color changes were also observed.
Background: It has been found that diet and background color in combination affect shrimp color. Shrimp grown on white substrates show poor color. However, when placed on a dark substrate they show an intermediate color, which improves when astaxanthin is supplied in the diet; short-term exposure to black substrates can have positive effects on shrimp color, and dietary inclusion of astaxanthin improves shrimp pigmentation. The body color of P. monodon weakens when grown indoors. A disparity between ponds in shrimp color response to black or white substratum exposure has been found in aquaculture farms, and this may influence the shrimp’s response to the color of the substrate. An animal that is already dark may not present a strong response to dark substrates compared with a shrimp that was less pigmented prior to exposure.
The reddish color of shrimp usually occurs through thermal effects or hypoxic stress, but the effect can be reversed when the stress is eliminated. There is also the finding that redder color may result from exposure to copper, which challenges the concept that highly pigmented shrimp are healthier than pale shrimp.
When shrimp are cooked, the interaction between CRCN and astaxanthin is disrupted, which causes the different red colorings of cooked shrimp. It is recommended that shrimp, when harvested, are kept alive prior to cooking; the subsequent application of salt brine is very good at maintaining color and flavor. Wade et al. found that post-cooking storage in ice had a minimal, if not slightly beneficial, effect on prawn color.
The production of dried shrimp consists of three steps: boiling shrimp in salt solution, drying, and storage. It has been found that shrimp dried by vacuum drying had greater astaxanthin content than shrimp dried by hot air drying. Drying at 70 °C is recommended because the color and sensory quality of the dried shrimp were most acceptable. The first-order kinetic model was the best fit for the astaxanthin degradation and color change data. The increase in lightness and decrease in redness and yellowness of dried shrimp correlated well with the loss of astaxanthin during storage, so there is a relationship between astaxanthin degradation and color changes.
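The first-order degradation model and its Arrhenius temperature dependence referred to above take the standard form (this is a generic statement of the model, not the fitted parameters from the cited studies):

```latex
C(t) = C_0 \, e^{-kt}, \qquad k(T) = A \, e^{-E_a/(RT)}
```

where \(C(t)\) is the astaxanthin concentration after storage time \(t\), \(C_0\) the initial concentration, \(k\) the temperature-dependent rate constant, \(A\) the pre-exponential factor, \(E_a\) the activation energy, \(R\) the gas constant, and \(T\) the absolute temperature.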
A common practice to improve shrimp color is supplementation of synthetic astaxanthin in the diet. Several other rearing and harvesting factors (particularly pre- and post-slaughter conditions) such as transportation, color of holding containers, handling, conditioning, fasting, killing method, chilling and storage may influence shrimp color. In contrast to the abundant evidence that shrimp color can be improved through manipulation of environmental factors and husbandry practices, there has been a paucity of scientific research into the quantitative genetic aspects of shrimp color.
Color in shrimp can be improved by supplying astaxanthin in the diet, reflected in the abundance of epithelial astaxanthin and astaxanthin esters. Shrimp have the metabolic ability to convert canthaxanthin and β-carotene into astaxanthin. In general, astaxanthin supplementation between 25 and 100 mg/kg of feed for about one month has been found to produce adequate pigmentation for commercialization of several shrimp species such as P. japonicus, L. vannamei and P. monodon [15, 38-40]. Krill oil/meal, crawfish oil, Pleuroncodes red crab, Phaffia yeast, the microalga Haematococcus pluvialis, Capsicum paprika, Tagetes marigold, and Carophyll Pink synthetic astaxanthin are used alone or combined as carotenoid sources in shrimp feeds. Consumer demand for natural products makes synthetic pigments much less desirable. The accumulation of carotenoids and the formation of specific astaxanthin fatty acid esters are related to the metabolism, storage, mobilization, or deposition of astaxanthin within various tissues.
The body color of shrimp can respond positively to genetic selection. Selection for dark color is also expected to increase the redness of cooked shrimp. The association of the body color of raw and cooked shrimp with morphometric traits was positive, suggesting that both body color and morphometric traits can be improved in breeding programs.
Color changes in shrimp serve several purposes, including camouflage, thermoregulation, signaling, stress response, and protection from ultraviolet light. Carotenoids are used for pigmentation in shrimp aquaculture: synthetic astaxanthin, natural astaxanthin from Phaffia yeast and Haematococcus algae, and lutein from marigold are widely used to pigment marine shrimp, among other natural colorants. Color in shrimp can be improved by supplying astaxanthin in the diet, reflected in the abundance of epithelial astaxanthin and astaxanthin esters. It is recommended that harvested shrimp be kept alive prior to cooking, and a subsequent salt-brine application is effective at maintaining color and flavor. New carotenoids may still be found to pigment marine shrimp, and body color and morphometric traits can be improved in breeding programs.
The authors are thankful to the Postgraduate Program in Biological and Agricultural Sciences (CBAP) for funding the research work.
Are You Tracking Your Top Blog Posts?
What does an uncertain economy mean for the bottom line of nonprofit fundraising efforts? "Overall, the news is good," says Sarah DiJulio, Executive Vice President of M+R Strategic Services. The economy may be down, but online donations are up -- so far.
Search engine optimization is all about helping people to find your website when they search for information online. Clearly, you want your website to show up at the top of the search engine results pages when potential supporters look for information about your organization or your cause -- but many small nonprofits simply don't have the budget for an SEO / SEM company to help make that happen. | https://www.wildapricot.com/blogs/newsblog/2008/12
Individuals may be permitted to invest in foreign shares, insurance products
China is likely to achieve a breakthrough this year that would permit personal investment in securities and insurance products overseas, an indication that policymakers are willing to see more active two-way capital flows, experts said on Monday.
The experts also hinted at the possibility of further removing limits on personal cross-border investment, with a senior official from the foreign exchange regulator suggesting the revision of some relevant rules in an article published in China Forex, a magazine, on Friday.
Ye Haisheng, director of the Capital Account Management Department at the State Administration of Foreign Exchange, said regulators are considering allowing individuals to invest in overseas securities and insurance products within the annual quota of $50,000.
The SAFE will also study easing restrictions on outbound personal investment in an orderly way, amend the management regulations for individuals participating in equity incentive plans of overseas-listed companies, and optimize the management procedure, Ye wrote in the article.
The easier rules will encourage more capital outflows, which in turn will lead to a more flexible renminbi exchange rate, the experts said.
Li Zongguang, chief economist of China Renaissance, an investment bank, said the new regulations, when implemented, will increase Chinese mainland investors' purchases of shares of listed companies in Hong Kong and the United States.
Due to the country's improved balance of payments indicators by the end of last year, especially for the trade of goods and services, conditions are improving steadily for freeing up cross-border capital flows and this will boost the global usage of the renminbi, said Li Chao, chief economist of Zheshang Securities.
According to the monthly RMB tracker from the Society for Worldwide Interbank Financial Telecommunication, a global provider of financial messaging services, the renminbi's share in global payments accounted for 2.42 percent in January, up from 1.88 percent in December and 2.15 percent in the same period in 2019, the highest level in five years.
The Chinese currency also retained its ranking as the fifth most attractive currency for global payments in value terms in January. The total value of RMB payments increased by 21.34 percent on a monthly basis last month, according to SWIFT data.
Wang Chunying, a SAFE spokeswoman, said on Saturday that China maintained net inflows of FDI in January, and foreign investors increased their net holdings of onshore bonds and stocks by $41.6 billion, while domestic investors increased overseas investments mostly through southbound trading in the Shanghai- and Shenzhen-Hong Kong Stock Connect programs.
Buoyed by the sustained and stable recovery of China's economy and the further opening up of the financial market, two-way cross-border capital flows have become more active recently and this will help in the further development and stability of the foreign exchange market, said Wang.
China registered another current account surplus last year, as exports of goods and services exceeded imports. The surplus widened by 112 percent on a yearly basis to $298.9 billion, the highest level in five years. It also accounted for 2 percent of the GDP, compared with 1 percent in 2019, according to SAFE data issued on Friday.
The wider current account surplus was also a result of the better-than-expected 4 percent year-on-year growth in exports, China's early work recovery and significant increase in export prices. It was also aided by the narrow deficit in services trade, largely due to the 47 percent slump in outbound tourism spending, according to official data.
Experts from the International Monetary Fund said China's surplus has been trending down from the peak of 10 percent of GDP in 2007. This reflects strong investment growth, the appreciation of the real effective exchange rate of the renminbi, weak external demand and progress in rebalancing.
Experts said that over the medium term, a further opening of the capital account will create substantially larger two-way capital flows, which would mean strengthening domestic financial stability.
The possible pilot programs to expand outbound personal investment may, however, be limited to some developed cities such as Shanghai, Shenzhen in Guangdong province, and Tianjin. The qualified investors for these pilot programs may also be limited to people with higher incomes and risk tolerance, said Li from Zheshang Securities.
In the near term, a possible expansion of outbound personal investment will not threaten the stability of the A-share market, as policy details are still being worked out without a specific launch timetable. The renminbi-denominated assets, including stocks and bonds, will remain attractive and the authorities are making efforts to encourage more inward foreign direct investment, said Li.
Could you please explain to me the meaning of the geometric shape that contains an equilateral triangle inside of a Vesica Piscis?
~~~~
ohhh now we're getting into the fun stuff! Okay, well, as I wrote previously, the Vesica Piscis is about creating the balance between the dark/light and the masculine/feminine energy to create new earth. The equilateral triangle inside of the Vesica Piscis is a star tetrahedron. At this density we see it as two triangles together; from a 3rd dimensional view it looks like they lie flat. However, from the higher dimensions the tetrahedrons interlock with one another, creating the merkaba, the light body and grand central heart of ascension, which also plays a central role inside of the crystalline energy that is directly inside the center of the earth.
Inside the center lies the eye, the galactic center of the alignment within the core of the universe, merging together all the dimensions as one. It creates vector equilibrium, the dodecagon, also known as the 12-around-one theory.
When I say we're all entwined together within this web, connected at the heart of the earth, I'm not being metaphorical. At the core center of the geometrical sequence is the crystalline energy that is the seed of creation. The Flower of Life is the 13 dimensions; the Seed of Life is our chakras.
the Seed of life inside the flower of life is the merging of the dimensions within our chakra system. The process of ascension. Seed of life inside the flower of life creates the Tree of life of hermetic qabalah, the grand map of the universe. | https://theawakenedstate.net/could-you-please-explain-to-me-the-meaning-of-the/ |
Elastic/acoustic metamaterials comprise a new class of composite materials that possess unique effective material properties owing to their locally resonant substructures. Because of the inherent local resonance mechanism, elastic metamaterials have the ability to completely stop the propagation of elastic/acoustic waves over specific frequency regions (band gaps). However, the band gap regions of non-dissipative metamaterials are fixed, and therefore, broadband absorption is not possible. This drawback can be overcome by incorporating dissipative components into metamaterial substructures. Furthermore, the elastic/acoustic wave absorption can be enhanced through optimization of the metamaterial's structural topology and constituent material properties. In this study, a dissipative elastic metamaterial with a multi-resonator substructure capable of broadband acoustic/elastic wave absorption is analyzed. Investigation of the wave absorption properties over a broad frequency range is then carried out via dispersion relations obtained from an analytical model as well as numerical modeling using the finite element method. Then, a biological evolution inspired algorithm is employed in order to carry out parameter optimization. By optimizing the dissipative metamaterial's structural geometry and constitutive material properties, the absorption amplitude is significantly enhanced over a broad frequency range. A continuum microstructural design of the dissipative elastic metamaterial is then proposed and numerical simulations are conducted where the absorption/attenuation of elastic stress waves is used as the measure of performance. Finally, two optimal designs are chosen for fabrication using hybrid 3D printing and injection molding manufacturing methods.
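The evolution-inspired parameter search described above can be pictured with a minimal (1+1)-style hill climber. The toy "absorption" objective and every parameter below are invented for illustration and are not the thesis's actual model:

```java
import java.util.Random;

public class EvolutionarySketch {
    // Toy "absorption" objective peaked at (0.3, 0.7) -- purely illustrative,
    // standing in for an expensive dispersion/FEM evaluation.
    static double absorption(double a, double b) {
        return Math.exp(-((a - 0.3) * (a - 0.3) + (b - 0.7) * (b - 0.7)) * 10);
    }

    // (1+1) evolution strategy: mutate the current best, keep the child only
    // if it improves the objective.
    static double[] optimize(int generations, long seed) {
        Random rnd = new Random(seed);
        double[] best = {rnd.nextDouble(), rnd.nextDouble()};
        double bestScore = absorption(best[0], best[1]);
        for (int g = 0; g < generations; g++) {
            double a = best[0] + rnd.nextGaussian() * 0.05;
            double b = best[1] + rnd.nextGaussian() * 0.05;
            double s = absorption(a, b);
            if (s > bestScore) {
                bestScore = s;
                best = new double[] {a, b};
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double[] p = optimize(500, 1L);
        System.out.println(p[0] + " " + p[1]);
    }
}
```

A full genetic algorithm adds a population, crossover, and selection on top of this mutate-and-keep-if-better loop, but the core idea — evolving design parameters toward higher absorption — is the same.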
Degree
M.S. | https://mospace.umsystem.edu/xmlui/handle/10355/57419 |
This blog post provides a preview of the content the Performance Architects team will discuss during a free, live webinar on Tuesday, March 26, 2019 at 12:30 PM EST entitled “Groovy-ETL: Building Better Oracle PBCS & OAC Integrations Using REST API.” Register here to attend the webinar!
Integrating systems that do not natively communicate with each other is the single biggest technical challenge in deploying a solution that uses more than one tool set. Oracle Planning and Budgeting Cloud Service (PBCS) and Oracle Analytics Cloud (OAC) are two industry darlings that each provide complete solutions on their own. However, when a solution calls for using both of these products, integrating them together is not for the faint of heart.
One could go for another purpose-built tool, like an off-the-shelf integration toolset, to build a custom solution…but sometimes integrating the integration toolset creates even more effort! This is especially true if the solution includes Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) or Infrastructure-as-a-Service (IaaS) cloud capabilities, because then one needs to deploy software subsets or agents on staging hardware to connect to on-premises systems, adding to cost and complexity. Take a moment to guess where this will end…
While PBCS and OAC expose a programming interface like REST API, it is simpler to connect to relational sources with technology like Java Database Connectivity (JDBC). Then again, sometimes data needs to be sourced from or written to data files. So how does one put all these technologies together?
How about looking at a solution that is based on open-source software and that can be deployed on existing hardware behind corporate firewalls to reach out to the cloud solutions of the world? Groovy-ETL functionality offers this type of approach.
Based on the Groovy scripting language, which can use REST API to communicate with PBCS and OAC, Groovy-ETL uses JDBC to communicate with relational databases like Oracle RDBMS or MS SQL Server. It also uses native interfaces to read or write to files. Groovy-ETL standardizes these communication interfaces so that they can be reused across multiple environments to meet different business needs, while at the same time providing the means to extend the solution capabilities with additional functions. The best part is that it can run on Windows and Linux without modifications, so the functionality delivered by a packaged solution can run the same way regardless of the operating system it is deployed on.
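To make the "standardized communication interface" idea concrete, here is a hedged sketch in Java (Groovy, which Groovy-ETL actually uses, would look almost identical). The interface and class names are mine, not Groovy-ETL's real API, and the in-memory connector stands in for the REST, JDBC, and file transports:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// One read/write contract for every source and target, regardless of transport.
interface Connector {
    List<Map<String, Object>> read(String query);
    void write(String target, List<Map<String, Object>> rows);
}

// In-memory stand-in; a real deployment would back this with REST API or JDBC calls.
class InMemoryConnector implements Connector {
    private final List<Map<String, Object>> store = new ArrayList<>();

    public List<Map<String, Object>> read(String query) {
        return new ArrayList<>(store);
    }

    public void write(String target, List<Map<String, Object>> rows) {
        store.addAll(rows);
    }
}

public class EtlSketch {
    public static void main(String[] args) {
        Connector source = new InMemoryConnector();
        source.write("staging", List.of(Map.of("account", "Revenue", "amount", 100)));
        // An ETL job only ever sees the Connector contract:
        System.out.println(source.read("select *").size());
    }
}
```

Because every connector satisfies the same contract, the same job definition can run unchanged whether the endpoint is PBCS over REST, an Oracle schema over JDBC, or a flat file.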
One person’s “perfect” solution may not be the same as another’s. The goal of a methodology like Groovy-ETL is to provide a single solution for integration needs. Such a solution is based on open-source technologies and is extensible to incorporate connections to anything that has a “query friendly” interface, like REST API or JDBC. While it takes some programming knowledge to add functionality, it requires almost none to use existing functions.
If you want to find out more or have questions, please sign up and attend our upcoming webinar that covers this topic in a bit more detail. If you have other questions that aren’t addressed in this blog post or the webinar content, please don’t hesitate to contact us at [email protected] or to leave a note below and we’ll be in touch to address your interests. | https://performancearchitects.com/groovy-etl-building-better-oracle-pbcs-oac-integrations-using-rest-api-webinar-preview/ |
A sample of young adults and seniors viewed one of two mock-theft videos in which the culprit either changed or did not change during the video; change blindness was not found to affect identification accuracy.
The Impact of Attention on Eyewitness Identification and Change Blindness (Psychology, 2015). The current study investigated whether differences exist in eyewitness identification and change blindness when manipulating attention. 126 undergraduate students were randomly assigned to either a…

Age differences in eyewitness memory for a realistic event (The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, 2014). Older adult suggestibility was no worse than that of younger adults, and younger adults had higher recall scores than older adults, although younger adults may have used strategic processing to encode misinformation to their detriment.

Crime Blindness: How Selective Attention and Inattentional Blindness Can Disrupt Eyewitness Awareness and Memory (Policy Insights from the Behavioral and Brain Sciences, 2018). Most people are not constantly watching for crimes and accidents. They are instead focused on other tasks. When people are focused on other tasks, they may fail to see crimes that should be obvious,…

Reevaluating the role of verbalization of faces for composite production: Descriptions of offenders matter! (Journal of Experimental Psychology: Applied, 2019). The results have real-world but counterintuitive implications for witnesses who construct a face 1 or 2 days after a crime: after having recalled the face to a practitioner, an appreciable delay should be avoided before starting face construction.

CCTV Observation: The Effects of Event Type and Instructions on Fixation Behaviour in an Applied Change Blindness Task (Psychology, 2018). Little is known about how observers' scanning strategies affect performance when monitoring events in closed-circuit television (CCTV) footage. We examined the fixation behaviour of change detectors…
A small flickering at dusk near the trees, a pinprick of light rising from the ground – fireflies bring an air of magic and wonder to long summer evenings. The twinkling conversation between beetles is commonly associated with mating patterns; and new data explains how swarms are able to achieve this communication.
Researchers Julie Hayes (Moses Biological Computation Lab, UNM), Raphaël Sarfati (BioFrontiers Institute, University of Colorado Boulder) and Orit Peleg (University of Colorado Boulder, BioFrontiers Institute, and the Santa Fe Institute) recently published their findings in Science Advances, titled Self-organization in natural swarms of Photinus carolinus synchronous fireflies. The article investigates firefly collective behavior while examining the internal structure of social interactions. The researchers demonstrated that firefly density induces a transition from uncorrelated flashing to synchrony and that information waves can propagate across the swarm.
There are thousands of firefly species in the world, but the Photinus carolinus is one of the few in North America that are known to synchronize their flash patterns. It is this twinkling spectacle that draws countless people to Tennessee every year to watch the Great Smoky Mountains Firefly Viewing – and a perfect opportunity for scientists to observe how swarms respond to synchronicity.
Countless studies are dedicated to theories of communication through oscillating patterns. However, only recently have researchers gathered spatiotemporal (collected across both space and time) data needed to understand the connection between actual natural patterns and communication theory.
"We used stereoscopic recordings to capture 10 days of peak firefly season in Great Smoky Mountains National Park," explained Hayes. "Those recordings from two slightly different angles, reconstructed in 3D over time, allowed us to dynamically graph a portion of the firefly swarm based on their flash patterns."
The researchers used their multi-camera recordings to construct a cone-shaped portion of the firefly swarm each night – about 98 feet long and up to 33 feet wide. Each night, they recorded roughly half a million space-time coordinates.
“It appears that flashes tend to correlate strongly with terrain geometry, indicating that fireflies localize primarily in a thin layer about 3 feet above ground,” the research states. “This layer is crowded with bushes and short vegetation.”
The results of the study suggest that fireflies interact with each other in short proximity. When gathered in large numbers, they can relay information longer distances across the swarm, using oscillating flash patterns. The dynamic network of visual signals helps the swarm recognize changes in terrain and vegetation. Although Photinus carolinus males synchronize their rhythmic flashing with their peers locally, a global swarm synchronization is only possible if enough fireflies are active to transport the information.
“This model illuminates the importance of the environment in shaping self-organization and collective behavior,” the research states. “This self-organization would allow for the possibility for an individual to position itself to be more or less connected, for example, by flying above the swarm to be more visible and carry flashing information further.”
"We've had models of synchrony inspired by firefly synchronization for some time now,” Hayes concluded. “But we're finally starting to see how they're really achieving this emergent behavior."
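Those synchrony models are commonly Kuramoto-style phase-oscillator systems. The sketch below is a generic illustration of that idea — all parameters are invented, and this is not the study's actual model — showing how strong enough coupling pulls randomly initialized "fireflies" into a common rhythm:

```java
import java.util.Random;

public class KuramotoSketch {
    // Simulates n phase oscillators with all-to-all coupling and returns the
    // order parameter r (0 = incoherent flashing, 1 = perfect synchrony).
    static double simulate(int n, double coupling, int steps, double dt, long seed) {
        Random rnd = new Random(seed);
        double[] theta = new double[n];
        double[] omega = new double[n];
        for (int i = 0; i < n; i++) {
            theta[i] = rnd.nextDouble() * 2 * Math.PI; // random initial phase
            omega[i] = rnd.nextGaussian() * 0.5;       // natural flash frequency
        }
        for (int s = 0; s < steps; s++) {
            double[] next = new double[n];
            for (int i = 0; i < n; i++) {
                double sum = 0;
                for (int j = 0; j < n; j++) {
                    sum += Math.sin(theta[j] - theta[i]);
                }
                next[i] = theta[i] + dt * (omega[i] + coupling / n * sum);
            }
            theta = next;
        }
        double re = 0, im = 0;
        for (double t : theta) {
            re += Math.cos(t);
            im += Math.sin(t);
        }
        return Math.hypot(re, im) / n;
    }

    public static void main(String[] args) {
        // strong coupling: r is typically close to 1; no coupling: r stays low
        System.out.println(simulate(50, 4.0, 2000, 0.01, 42L));
        System.out.println(simulate(50, 0.0, 2000, 0.01, 42L));
    }
}
```

The order parameter r rises with coupling strength much as the article describes synchrony rising with firefly density: global synchrony emerges only when enough oscillators interact to relay the rhythm.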
Photo by Mac Stone. | https://news.unm.edu/news/research-illuminates-the-language-of-fireflies |
Q:
Salaries exercise
Hey there guys, I have just completed an exercise for uni and I would really like some feedback, as I'm pretty new to Java. Areas where I could especially use input in this instance: readability of the code, the names of the variables, the JavaDoc comments, correct use of error statements, and MOST IMPORTANTLY the not3TimesHigher method, which took me ages to do; I'm still not sure whether this is the best way to do it.
A warning before anybody writes anything here:
Please do not suggest any overly complex updates to my code, I'm a beginner and I'm required to work within those limitations.
Anyway here is the problem sheet for reference:
A company stores the salaries of its employees in an ArrayList
allSalaries, an ArrayList of arrays of type double. Each entry in allSalaries is an array of the 12
salaries of an employee for the 12 months of the year.
Write a class Salaries with the constructor
• public Salaries() having no parameters to create an initially empty ArrayList.
And write the following methods:
• public void add(double[] employeeSalaries) to add the salaries of one employee to the field variable allSalaries.
• The method public static double average(double[] employeeSalaries) computes the average
salary for an employee. Note that any 0 entry should be disregarded, since a 0 means that the employee was
not employed in that particular month. For instance, the average of {1000,1000,2000,2000,0,0,0,0,0,0,0,0}
should be 1500.0 as the sum of the four non-zero values divided by 4.
If all values in the annualSalaries array are zero, the method should throw an IllegalArgumentException.
• The method public ArrayList averageSalaries() generates an ArrayList storing the average salaries of all employees that have at least one non-zero monthly salary. Make use of the method
average.
Hint: You need to catch possible exceptions thrown by the method average.
• The method public boolean not3TimesHigher() checks whether for each employee with at least one
non-zero monthly salary their average salary is not higher than three times the overall average salary of
the other employees. That is, you need here the average of the averages.
import java.util.ArrayList;
public class Salaries {
private ArrayList<double[]> allSalaries;
public Salaries() {
allSalaries = new ArrayList<double[]>();
}
public void add(double[] employeeSalaries) {
allSalaries.add(employeeSalaries);
}
/**
* @param takes an array of doubles; each index of the array represents
* the earnings of an employee for that particular month.
*
* @return average salary of an employee.
*/
public static double average(double[] employeeSalaries) {
double totalSalary = 0;
int totalMonths = 0;
for (int i=0; i<employeeSalaries.length; i++) {
if (employeeSalaries[i] > 0) {
totalSalary += employeeSalaries[i];
totalMonths++;
}
if (totalSalary == 0) {
throw new IllegalArgumentException("This chump didn't earn any money!");
}
}
return totalSalary / totalMonths;
}
/**
* Method traverses allSalaries calculating the average salary for each employee
* and appending it to a newly a instantiated ArrayList.
*
* @return ArrayList containing average salaries for all employees with at least one
* monthly salary above 0.
*/
public ArrayList<Double> averageSalaries() {
ArrayList<Double> averageSalaries = new ArrayList<Double>();
try {
for (int i=0; i<allSalaries.size(); i++) {
double avgEmployeeSalary = average(allSalaries.get(i));
averageSalaries.add(avgEmployeeSalary);
}
} catch (IllegalArgumentException e) {
System.out.println("Warning, attempted to add employee with zero earnings.");
}
return averageSalaries;
}
/**
* Method creates a new instance of averageSalaries which it then traverses,
* comparing each index (average employee salary) with the total average value
* of all other indexes * 3.
*
* @return false if any employee average salary is greater than the average of
* all other employee salaries * 3, true otherwise.
*/
public boolean not3TimesHigher() {
ArrayList<Double> avgS = averageSalaries();
for (int i=0; i<avgS.size(); i++) {
double employee = avgS.get(i);
avgS.remove(i);
double[] allOtherEmployees = new double[avgS.size()];
for (int j=0; j<allOtherEmployees.length; j++) {
allOtherEmployees[j] = avgS.get(j);
}
if (employee > (average(allOtherEmployees) * 3)) {
return false;
}
} return true;
}
public static void main(String[] args) {
//double[] bowie = {2456, 1330, 0, 5470};
double[] paul = {5, 8, 4, 6};
double[] ringo = {5, 7, 4, 6};
double[] john = {0, 0, 0, 0};
double[] george = {3, 7, 9, 8};
Salaries a = new Salaries();
//a.add(bowie);
a.add(paul);
//a.add(john);
a.add(ringo);
a.add(george);
//System.out.println(Salaries.average(john));
System.out.println(a.averageSalaries());
System.out.println(a.not3TimesHigher());
}
}
A:
Thanks for sharing your code.
I think you did a good job on this exercise.
Therefore I have only a few points to mention:
Correctness
I think your method averageSalaries() is not correct, since it returns after the first exception is caught and no other input rows are processed.
As I understand the exercise the "unpayed" entries should be ignored and all other entries should be processed.
This means, the code should be like this:
public ArrayList<Double> averageSalaries() {
    ArrayList<Double> averageSalaries = new ArrayList<Double>();
    for (int i = 0; i < allSalaries.size(); i++) {
        try {
            double avgEmployeeSalary = average(allSalaries.get(i));
            averageSalaries.add(avgEmployeeSalary);
        } catch (IllegalArgumentException e) {
            System.out.println("Warning, attempted to add employee with zero earnings.");
        }
    }
    return averageSalaries;
}
Readability
Code Format
Use an Integrated Development Environment (like eclipse, intelliJ NetBeans or alike (if you don't do already) and use it's auto formatter feature.
Naming
Make your names as specific as possible.
In your method average() you have a variable totalMonths.
By this name I'd expect the total number of months processed, but it contains the count of months with non-zero payment.
In my view monthsWithPayment would be a better name.
avoid single letter names.
In your main() method the variable to refer to the Salaries object is named a.
Finding good names is the hardest part in programming.
So you should not give away a chance to practice good naming even it it is only a test methods for your exercise solution.
for loop
At many if not at all places the "for each" form of the for loop would improve readability:
for (double[] employeeMonthlyPayments : allSalaries) {
    try {
        double avgEmployeeSalary = average(employeeMonthlyPayments);
averageSalaries.add(avgEmployeeSalary);
// ...
Off Topic
I think this exercise is not a good one:
naming
The exercise requests you to write a method named not3TimesHigher returning a boolean value.
In my view this name is bad in three ways:
according to the [Java Code Conventions](https://www.oracle.com/technetwork/java/codeconventions-135099.html), names of variables of type boolean and methods returning a boolean should start with is, has, can or alike.
names of variables of type boolean and methods returning a boolean should express positive conditions.
when using digits in names they should be the last part in the name
Therefore name of this method had better been isLessThanAverageTimes3()
Referring to the same [Java Code Conventions](https://www.oracle.com/technetwork/java/codeconventions-135099.html), names of methods should start with a verb.
But the method names requested in this exercise (average(), averageSalaries()) are nouns.
They should better be calculateAverage(), calculateAverageSalaries().
use of the static keyword
The use of the static keyword has more drawbacks than advantages.
It should be used with care and for a good reason.
I cannot see such a "good reason" for the use of the static keyword in the method average().
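One more observation, offered as a sketch rather than a definitive fix: not3TimesHigher() removes element i from avgS and never restores it, so the list shrinks and later employees are skipped by the loop bound. Working from a running total avoids mutating the list at all (the class and method names here are mine, not from the exercise):

```java
import java.util.List;

public class SalaryCheckSketch {
    // Assumes at least two employees, matching the "average of the others" idea.
    static boolean isWithinThreeTimesOthersAverage(List<Double> averages) {
        double total = 0;
        for (double a : averages) {
            total += a;
        }
        for (double a : averages) {
            // Average of everyone except the current employee.
            double othersAverage = (total - a) / (averages.size() - 1);
            if (a > 3 * othersAverage) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isWithinThreeTimesOthersAverage(List.of(5.75, 5.5, 6.75)));
        System.out.println(isWithinThreeTimesOthersAverage(List.of(100.0, 5.0, 6.0)));
    }
}
```

This checks every employee exactly once and runs in O(n) instead of rebuilding an array per employee.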
The general format for JavaDocs is:

/**
 * Description
 *
 * @tag
 */
class or method signature, or variable declaration.
The tags you will be required to use are:
- author (class comment)
- param (method comment)
- return (method comment)
- exception (method comment)
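Putting the method-comment tags together, here is a hedged example of a fully documented method (the method itself is invented for illustration):

```java
public class JavadocExample {
    /**
     * Computes the average of the non-zero entries in values.
     *
     * @param values the values to average
     * @return the average of the non-zero entries
     * @exception IllegalArgumentException if every entry is zero
     */
    public static double averageNonZero(double[] values) {
        double sum = 0;
        int count = 0;
        for (double v : values) {
            if (v != 0) {
                sum += v;
                count++;
            }
        }
        if (count == 0) {
            throw new IllegalArgumentException("all entries are zero");
        }
        return sum / count;
    }

    public static void main(String[] args) {
        System.out.println(averageNonZero(new double[] {1000, 1000, 2000, 2000, 0, 0}));
    }
}
```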
Single-line or multi-line comments may be added for clarity. Comments should have a blank line before them and be directly above the code they are commenting.

// Single-line comment
<code being commented>

/*
 * Multi-Line Comment
 */
<code being commented>
Rule of thumb: if a line or lines of code belong inside of another structure (class signature, method signature, code block), then indent one more tab.
Leave a blank line to separate logical sections of your code.
Adhere to the naming conventions for Classes, Methods, and Variables as discussed in earlier lectures.
Package names are always lower-case.
PROBLEM TO BE SOLVED: To provide a rubber composition for a tire which is excellent in ozone resistance and low fuel consumption property while taking the environment into consideration, and can prevent whitening, and a pneumatic tire using the same.
SOLUTION: The rubber composition for a tire contains a rubber component, carbon black, silica and different waxes (1) and (2). The rubber component contains a rubber (1) having a glass transition temperature of ≥-55°C and a polar group. With respect to 100 pts.mass of the rubber component, the content of the carbon black is 1-10 pts.mass, the content of the silica is 15-150 pts.mass, and the total content of the different waxes (1) and (2) is 0.5-8 pts.mass. The wax (1) is a natural wax containing a component having a softening point of <40°C, and the wax (2) is a polar natural wax containing a component having a softening point of ≥40°C.
COPYRIGHT: (C)2012,JPO&INPIT
👐 It's not every day that you get to exchange ideas with expert, humble, and enterprising therapists!
❤ ..And also a lovely Easter gift from quite a remarkable champion!
William Garner Sutherland biography: The founder of Cranial Osteopathy.
1897: 24 years old: he hears about Osteopathy. He is surprised by the conflicting comments about it. He attends lectures given by Still's students (Edward C. Pickler, and Charles Still, A.T. Still's son). One day, he decides to visit Kirksville to form his own opinion. He is very surprised by what he finds: a place packed with patients arriving by entire trainloads, obvious and unexpected results, an indescribable enthusiasm.
Imagine the feeling of such a scene: a new dawn of hope for healthcare, in a world where medicine was still highly approximate, born on the ground of the American Civil War in a traumatized population.
Look at his intelligence, the sparks in his eyes: he was a survivor.
A wish for a different world 🎐, whether you belong to one religion or the other ⛪🕌🕍⛩.
Training for the therapists of the Hotel Zlatibor MONA Wellness Centre: Thai massage on the table and on the floor. OSTEAS Academy and HABER Bodywork Academy at Zlatibor.
In addition, this belt allows the patient to practice on their own, provides relief from load/pain, and is also suitable for women during pregnancy and after childbirth. Happy to give my patients the best and most professional treatment.
Knowledge is power 🎯💪🏽 Happy spring holiday 🌻!
Posted @withrepost • @briangfox CARs or Controlled Articular Rotations have so many benefits. If you’d like to learn more or get a Rx, I recommend seeking out a Functional Range Systems Provider.
1️⃣Increase and promote joint health and longevity.
5️⃣Improve joint functionality.
Hope everyone is enjoying their Easter!
These are the least things that will make the most difference.
That’s what true health care needs to look like.
Finally got here #redsquare #moscov #russia .
✨My favourite way to keep on top of muscular pain and joint niggles when I'm travelling and working!
The carpal tunnel is a narrow passageway formed between the small bones of your wrist (carpal bones) and tendons from the muscles of your forearm. Within the carpal tunnel lies the median nerve; this nerve supplies sensation to your thumb, index, and middle fingers and half of the ring finger, as well as some of the muscles in your hand.
Compression of the median nerve within the carpal tunnel, AKA carpal tunnel syndrome (CTS), often results in numbness and/or tingling in the thumb and first 3 fingers of the hand. You may also experience hand/wrist pain and weakness.
Treatment is usually directed towards activity modification, symptom management to help reduce irritation, and exercises; some of which we will be going through later this week!
It can be good losing control when you have a solid team around you. Group-assisted Resistance Flexibility large intestine stretch. Perfection is one of the higher traits associated with the large intestine, with obsessive compulsiveness being the lower side. This energy helps complete things by bringing the right amount of structure into your life; letting go of it having to be just right will keep you in the perfect flow of life. Life is constantly moving and can never be held down to being exactly right. The best part is that you get to keep having more experiences. Physically, when you do this pose you are doing a completing pattern of the upper and lower halves of the body, an exaggerated version of walking. When you do this stretch, scissor your legs to gain stability so you can rotate through the torso. I always check in with this move, especially before any hike or jog, to get more cross-lateral symmetry. Basically, if one side is tighter than the other in this position, you will create more restrictions with every step. This move can give you more efficient direction.
Today I am missing Easter celebration with my loved ones. But thankfully, I have another family here - where we are all together in this sacrifice of putting in the hours and the effort needed to give our patients the best care possible.
☝️The four thenar muscles make up the intrinsic muscles of the thumb. They include the abductor pollicis brevis, adductor pollicis, opponens pollicis, and flexor pollicis brevis.
🎥In this video you can get a great visualization of how thumb motion is facilitated through the coordination of these intrinsic muscles, and how the thumb musculature dynamically allows for precision pinching and power gripping.
1️⃣ IASTM to his lats using my new @hawkgrips_usa HG PRO tool, having him flex his arm and bend away so he can breathe into his rib cage on the side I am working on, a strategy I love using. Caution: some bruising may occur with IASTM however that is NOT the goal.
👀 ATH GET A HVLA ADJUSTMENT FOR THE FIRST TIME AT THE LEFT RIBS 3,4,1,2. FLAT HAND CONTACT LINE OF DRIVE A-P.
#pregnancy #prenatalwellness #prenatalyoga #prenatalmassage #bodywork #manualtherapy #backpainwhilepregnant #preggoproblems #healthandwellness #groundwellness #groundedmama #prenatalfitness #igmotherhood #thebump #fitbump #selfcare Important note: please consult with your health care professional prior to performing any unfamiliar movements or techniques.
Political relations are good and bilateral cooperation is harmonious and friendly.
Germany was the only Western country to provide substantial support for the country’s democratisation process in the early 1990s by seconding a constitutional expert and supplying technical assistance for elections. German interests in Seychelles are represented by an honorary consul in Victoria, whose main task is assisting German tourists.
Economic Relations
Seychelles suffered a serious economic crisis in 2008, and has since implemented a successful reform programme with the help of the IMF. The country is making considerable progress on economic consolidation and in implementing its domestic political and economic reform agenda. The IMF and the World Bank give the country good marks for its efforts but regularly point to the susceptibility of the country’s economy to external shocks (e.g. the euro crisis).
German tourists as well as businesses active in the field of renewable energy form the backbone of direct economic exchange between Germany and Seychelles. Accounting for some 30 percent of gross domestic product, tourism remains the principal economic sector and creates the most jobs, along with the fishing industry. 350,000 international tourists visited Seychelles in 2017, setting a new record (2016: 305,000, 2015: 275,000). Some 56,000 Germans visited the country in 2018, making them the largest group of foreign visitors and – along with French, Italian and Russian tourists – one of the country’s principal sources of foreign currency.
Seychelles is successfully promoting the use of renewable energy: the country’s first environmentally friendly electricity has been produced by wind turbines since 2013. There are already some successful solar energy projects being conducted by German companies as part of the Federal Ministry for Economic Affairs and Energy’s Renewable Energy Export Initiative.
In 2017, bilateral trade was worth 30.9 million US dollars. Germany exported goods worth 16.6 million US dollars to Seychelles, mainly machinery, electrical and optical goods. German imports from Seychelles, mainly fish and fish products, amounted to 14.3 million US dollars.
Development cooperation
On account of the country’s relatively high per capita income compared with sub-Saharan Africa (over USD 15,859 in 2017), Seychelles is not a partner country of German development cooperation. It does, however, have cooperation agreements with the IMF and the World Bank as well as the EU and the African Development Bank.
Cultural relations
In September 2018, the Rombergpark Botanical Gardens in Dortmund signed an agreement with the Seychelles National Botanical Gardens on the exchange of scientists, experts and technical staff and on mutual assistance in the fields of horticulture, ecology, education, conservation of diversity and research. Work is also progressing on cooperation in the field of vocational training. The Ambassador assumed the patronage of an exhibition of works by the German artist Gabriele Schnitzenbaumer, who is partially based in Seychelles. Germany is viewed positively in Seychelles, and German products are held in high regard.
Disclaimer:
This text is intended as a source of basic information. It is regularly updated. No liability can be accepted for the accuracy or completeness of its contents.
The average US life expectancy dropped by a year in the first half of 2020 according to a new report from the National Center for Health Statistics, a part of the Centers for Disease Control and Prevention.
Life expectancy at birth in the total US population was 77.8 years — a decline of one year from 78.8 in 2019. For males, the life expectancy at birth was 75.1 — a decline of 1.2 years from 2019. For females, life expectancy declined to 80.5 years, a 0.9 year decrease from 2019.
Deaths from COVID-19 are the main factor in the overall drop in US life expectancy between January and June 2020, the CDC says. But it's not the only one: a surge in drug overdose deaths is part of the decline, too.
Women tend to live longer than men and in the first half of 2020, that margin grew: The difference in their life expectancy widened to 5.4 years, from 5.1 in 2019.
The report estimated life expectancy in the United States based on provisional death counts for January to June 2020.
Because the NCHS wanted to assess the effects of the 2020 increase in deaths, for the first time it published its life expectancy tables based on provisional death certificate data rather than final counts.
The authors point out a few limitations in these estimates.
One is that the data cover only the first six months of 2020, so they do not reflect the entirety of the COVID-19 pandemic. There is also seasonality in death patterns, with more deaths generally happening in winter than in summer; this half-year of data does not account for that.
Another limitation is that the COVID-19 pandemic struck different parts of the US at different times in the year. The areas most affected in the first half of 2020 are more urban and have different demographics than the areas hit hard by the virus later in the year.
As a result, the authors write, “life expectancy for the first half of 2020 may be underestimated since the populations more severely affected, Hispanic and non-Hispanic black populations, are more likely to live in urban areas.”
The report parallels the findings published last month by researchers at the University of Southern California and Princeton University, which found that the deaths caused by COVID-19 have reduced overall life expectancy by 1.13 years.
In the US, more than 488,000 people have died from COVID-19. The latest estimates from the University of Washington’s Institute for Health Metrics and Evaluation predict 614,503 US deaths by June 1. | https://www.avpress.com/opinion/editorial/life-expectancy-drops-by-full-year/article_928598c4-7587-11eb-89f2-bf85520e819e.html |
VUB-CYBERLEGs CYBATHLON 2016 Beta-Prosthesis: case study in control of an active two degree of freedom transfemoral prosthesis
Journal of NeuroEngineering and Rehabilitation volume 15, Article number: 3 (2018)
Abstract
Background
Here we present how the CYBERLEGs Beta-Prosthesis was modified with a new control system to participate in the Powered Leg Prosthesis event, and report on our experience at CYBATHLON 2016, which was held in Zurich, Switzerland, in October 2016. The prosthesis has two active degrees of freedom which assist the user with extra joint power at the knee and ankle to complete tasks. The CYBATHLON is a championship for people with disabilities competing in six disciplines using advanced assistive devices. Tasks for CYBATHLON 2016 were chosen to reflect everyday tasks such as sitting down on and standing up from a chair, obstacle avoidance, stepping stones, slope walking and descent, and stair climbing and descent.
Methods
The control schemata were presented along with the description of each of the six tasks. The participant of the competition, the pilot, ran through each of the trials under lab conditions and representative behaviors were recorded.
Results
The VUB CYBERLEGs prosthesis was able to accomplish, to some degree, five of the six tasks and here the torque and angle behaviors of the device while accomplishing these tasks are presented. The relatively simple control methods were able to provide assistive torque during many of the events, particularly sit to stand and stair climbing. For example, the prosthesis was able to consistently provide over 30 Nm in arresting knee torque in the sitting task, and over 20 Nm while standing. Peak torque of the device was not sufficient for unassisted stair climbing, but was able to provide around 60 Nm of assistance in both ascent and descent. Use of the passive behaviors of the device were shown to be able to trigger state machine events reliably for certain tasks.
Conclusions
Although the performance of the CYBERLEGs prosthesis during CYBATHLON 2016 did not compare to that of the other top-of-the-market designs with regard to speed, the device performed all of the tasks that were deemed possible by the start of the competition. Moreover, the pilot was able to accomplish tasks in ways his personal microcontrolled prosthesis could not, with limited powered-prosthesis training. Future studies will focus on decreasing weight, increasing reliability, incorporating better control, and increasing the velocity of the device. This is only a case study, and actual benefits to clinical outcomes are not yet understood and need to be further investigated. This competition was a unique experience that illuminated problems which future versions of the device will be able to solve.
Background
The CYBERLEGs Beta-Prosthesis is a transfemoral prosthesis with two active degrees of freedom, one in the knee and one in the ankle, designed primarily to help those whose ambulation with standard prostheses is limited by weakness from advanced age or complicating illness. The prosthesis was originally created as part of the larger CYBERLEGs Project, which combines this prosthesis system, replacing the lost limb, with an exoskeleton assisting the sound leg and hips, and a sensory array controlling both systems. The end goal of the complete CYBERLEGs system was to help those who have both lost a limb and have weakness in the remaining limb to regain walking function and improve walking behavior. Here we have taken the CYBERLEGs prosthesis out of the complete CYBERLEGs environment and adapted it to function independently, including an entirely new control system, for use in the CYBATHLON 2016 competition held in Zurich, Switzerland, in October 2016.
Although the device has two powered joints, it is designed to allow a high level of passive behavior during the gait cycle through the use of passive components, either built into series elastic actuators or springs that are inserted into and removed from interaction by locking mechanisms. Through the use of these passive energy storage components it is possible, with simple control, to create energy-efficient gait cycles for normal walking [2, 3]. Moreover, the prosthesis is capable of providing the full ankle and knee torques during walking, as well as a large percentage of the torque required for normal sit-to-stand and stair climbing activities.
The CYBERLEGs Beta-Prosthesis was originally controlled using a gait intention detection system, which incorporated an array of IMUs and pressure insoles for accurate center-of-pressure measurements of both feet. A system comprising so many sensors and requiring many processing techniques was deemed too complicated for the competition, and was replaced by the new, simpler control system described below.
The CYBATHLON 2016 competition was designed to test the ability to perform everyday activities that anyone might face during the day, such as sitting down on and rising from a chair, maneuvering through obstacles, walking up and down steep slopes, and stair climbing and descent. By comparing performance in a parallel-track obstacle course race, the competition was designed to gauge state-of-the-art systems in accomplishing these tasks. The competing teams used a variety of currently available active (Power Knee, Össur), microcontrolled (Rheo Knee XC, Össur, and Genium X3, Otto Bock), and passive (Total Knee, Össur) devices, and the competition also showcased a few new devices, such as the Rise Legs (Rise), AMP-Foot 4 (VUB), Xiborg, and Ortokosmos (Metiz Hyperknee) offerings.
This paper first presents a brief overview of the workings of the CYBERLEGs Beta-Prosthesis, as well as some key aspects of the design that were adapted specifically for the tasks of the Powered Leg Prosthesis event of CYBATHLON 2016. The control and representative behavior of the prosthesis during each of the tasks of the CYBATHLON is then presented, followed by a discussion of the particular design choices and results from the CYBATHLON controller, including implications for future developments.
Methods
The CYBERLEGs Beta-Prosthesis is not built like a standard passive prosthesis in use by most people today, but includes motors in both the knee and the ankle for active energy input to the joint. It utilizes a unique combination of series elastic motors and also exploits locking spring mechanisms to achieve energy efficient regular walking with enough capability to perform other tasks. A short description of the joint construction is followed by the electronics system which was completely redone for the CYBATHLON. The Pilot is an integral part of the system, introduced after the electronics, followed by the state machine based control system and how it was run for each task.
The CYBERLEGs Beta-Prosthesis
The CYBERLEGs Beta-Prosthesis is an integrated transfemoral prosthesis containing independent active drives in both the knee and the ankle. These active drives allow the joints to provide both positive and negative work during a motion. Both the knee and the ankle are designed with series elastic actuators, allowing dynamic forces from the device to have a larger influence over its behavior. In this version, spring stiffnesses for both the knee and the ankle were chosen based on the torque-angle characteristics of an 80 kg person walking at the 'normal' velocity of 4.8 km/h, as defined by Winter. The prosthesis weighs around 6.5 kg including the socket, shoe, electronics, and cover, which is considerably more than most prostheses, especially considering the batteries are external, but the device itself has about the same weight and inertial distribution as a normal leg. An image showing the device can be found in Fig. 1, with the major components labeled.
Ankle design
The ankle is a design based on a MACCEPA actuator with a parallel spring system. The actuator of this device has been previously discussed in [8, 9]. The additional parallel spring was added to this system to provide stability when unpowered as well as reduce the peak torque required by the ankle actuator which allowed for a reduction of the gear ratio of the actuator and increased velocities. A schematic of the ankle actuator can be found in Fig. 2.
In this ankle, the main motor is housed within the shank of the device. This motor is attached to a 33:1 planetary gearbox, which in turn drives a 10:1 hypoid gear. The shank can be slid relative to the knee to adjust for height, as well as rotated for ankle and knee joint parallelism. The motor drives a moment arm which drives a crank slider to compress the series spring, creating the joint torque of the device. The parallel spring is unilateral and engages at approximately 3 degrees of dorsiflexion. Key component values are found in Table 1.
Knee design
The knee of the system is composed of two major components, the Knee Actuator (KA) and the Weight Acceptance (WA). The WA is a stiff spring that is driven by a non-backdrivable screw feed so it can be positioned to either interact or avoid contact with the knee joint. The non-backdrivability allows it to create large extension torques without requiring power. This device is used for stiff knee behaviors, such as the weight acceptance phase of the gait cycle or when a straight and stiff leg is desired. The WA can be seen on the back side of the prosthesis in Fig. 1.
The KA provides the main flexion and extension torques for the majority of the gait cycle. This is done through a series elastic actuator acting on a push/pull rod that flexes the knee joint. This actuator has two different spring constants, which provide different stiffness behaviors between flexion and extension torques. This type of architecture has been shown in simulation and on the test bench to have a lower energy consumption than a stiff system, due to the capability of storing and releasing energy in the series springs of both the WA and the KA systems. A schematic of this device can be found in Fig. 3. In this figure, it can be seen that changing the position of the carriage (KA z ) can create an extension or flexion torque, but the WA position (WA z ) can only provide an extension torque due to the unilateral constraint at the WA spring.
Prosthesis attitude detection
The prosthesis was controlled by a finite state machine driven by inputs from the prosthesis and from the thigh of the pilot. The majority of the state changes required by the controller were determined by inertial rate gyros on the pilot's thigh. This device was used to detect a number of behaviors, for example an intentional hip eversion to initiate stair climbing. The signal was analyzed using a Phase Plane Invariant method to determine the position of the hip while reducing error due to gyro drift. For many of the states, the prosthesis kinematic values, such as knee or ankle angles, could be used to determine state transitions. The ankle MACCEPA actuator was also used to estimate ankle torque from foot placement, which was used as a trigger for some of the states. Exactly how these signals are used to trigger state transitions can be found in “Events and control methods for the CYBATHLON” section.
Note that the prosthesis starts and can at any time be commanded, either through an error detection or deliberate intention, into the idle state. The idle state is the extended locked position with the WA raised and the knee carriage at full extension, which is considered to be the safest, most stable, and most predictable prosthesis state.
Prosthesis electronics
The prosthesis utilizes four custom-made EtherCAT slaves which are capable of reading all of the sensors of the system, including SPI, digital I/O, and analog I/O interfaces. Three of the boards are also populated with an ESCON 50/5 Module (Maxon Motor AG, Sachseln, Switzerland) for motor driving. The fourth board was used for additional sensor input and provided a backup that could replace one of the other driver boards if necessary. The EtherCAT master was a laptop computer running Simulink (MathWorks, Natick, MA, USA) and TwinCAT software (Beckhoff Automation, Verl, Germany) to create a real-time EtherCAT master on standard PC hardware. The EtherCAT control loop was run at 1000 Hz, reading the entire prosthesis state and creating velocity commands for the motor drivers. The low-level motor drivers were configured in a closed-loop velocity mode sampling at 5.36 kHz, tracking the velocity signal created by the main controller. Incremental encoders were located on each motor, and joint outputs were measured by 14-bit magnetic absolute encoders. Angular velocity of the hip was measured by two analog-output 1500 deg/s 2-DOF rate gyros oriented with a common axis along the longitudinal axis of the leg. The laptop was worn in the backpack of the system when running autonomously, and was run from the bench during tethered experiments. The high-level control of the prosthesis was directed by a wrist-worn touchscreen system which allowed the pilot to select the high-level action he wished to use, or to perform actions such as reinitializing or disabling the prosthesis. This touchscreen diagram can be found in Fig. 4, and an image of how the touchscreen was worn can be found in Fig. 5.
The prosthesis was run with a 24V battery housed in the backpack, which is half of the original design voltage. This was done to reduce battery size and leave overhead for the motor drivers to protect from over voltage conditions during regenerative periods such as slope and stair descent. This limited the maximum velocity of the device to approximately half of the original design velocity. An emergency stop was placed on the strap of the backpack and a current limiting breaker was placed on the backpack for the competition, both of which would immediately cut all power to the system.
The pilot
The subject of the tests, who in the parlance of the CYBATHLON is named the pilot, was 58 year old Michel De Groote seen in Fig. 5, a transfemoral amputee since having osteosarcoma treatment in 1989. Michel weighs 60 kg without his prosthesis and stands 1.70 m tall. His current prosthetic limb is an Otto Bock 3C98-3 C-Leg paired with a standard passive ESR ankle. The pilot was recruited by our sponsor, VIGO International (Wetteren, Belgium), who also provided the socket system and prosthesis alignment for CYBATHLON 2016.
Michel has a relatively high femoral amputation limiting his ability to balance or apply large hip torques. This makes it extremely difficult to take stairs step over step or to balance on one leg with his current prosthesis, but in terms of the goals of CYBERLEGs this makes him an interesting test candidate. He was able to come to the lab and use the prosthesis around 14 h total, split across 5 different sessions of training and tuning. This amount of training is relatively short especially considering the amount of trust the pilot must have in the prosthesis to make it function correctly and the large weight and difference in functionality from his standard prosthesis.
Events and control methods for the CYBATHLON
The CYBATHLON 2016 Leg Prosthesis Race allowed pilots to compete on parallel tracks to complete several tasks related to daily life. These six different tasks consisted of the Sit-to-Stand (StS), hurdle navigation, slope climbing and descent, stepping stones, tilted path, and stair climbing and descent. Pilots were allowed 4 min to complete the entire parkour. Here we discuss the behavior and control of the prosthesis while performing each of these tasks.
At the beginning of each task the pilot selected an appropriate state machine for the task using the touchscreen. This allowed us to change the behavior of the prosthesis without having to develop a new gait intention detection system, and to give the pilot a concrete indication of which state machine was in operation. Each of these state machines consisted of trajectory generators for the KA, ankle actuator, and WA systems. These trajectories were either a torque or position trajectory, depending on the type of controller the state machine desired. The generator used a piecewise linear calculator that, upon entry of a new state, used the current position of the device to create the new trajectories and avoid discontinuities in the desired motor position. The torque or position rise rate, fall rate, and amplitude were determined by experiment or estimation from modeling. Initial estimates of the actuator positions were calculated by looking at human data and dividing the task into states where the behavior of the system did not drastically change; the threshold for each of the states was then determined experimentally after initial guesses were made.
While the prosthesis was in position control mode, the motor position KA z , the ankle moment arm position (ϕ), or WA z , rather than the output kinematics or output torque of the system, was controlled with closed-loop feedback. This method tracks a predetermined SEA rest position, allowing the passive spring and device geometry to determine the overall joint impedance. This differs from the technique of many powered prostheses, which rely on output trajectory tracking with a true impedance controller [12, 13]; here, the natural impedance of the system is instead left to dominate.
The use of torque control mode was determined to be necessary during some tasks when position control mode failed to produce satisfactory results. Sit to stand was the first task where it was determined that being able to change the velocity of sitting to stand and stand to sit would be beneficial, which the position control system would not allow.
The following sections describe each of these state machines for each of the events, including the type of controller used for each state as well as the required conditions for state transitions.
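The piecewise-linear setpoint generation described above can be sketched as follows. This is a hypothetical reconstruction under stated assumptions, not the paper's code: on state entry the generator is seeded with the actuator's current position, then ramps toward the state's target at a configured rate so the commanded setpoint never jumps; all names and numbers are our own:

```java
// Hypothetical sketch of a piecewise-linear setpoint generator: seeded
// with the measured position on state entry, it ramps toward the target
// at a fixed rate, avoiding discontinuities in the commanded position.
public class PiecewiseLinearTrajectory {

    private final double target;      // desired end value (position or torque)
    private final double ratePerSec;  // maximum change per second
    private double current;           // latest commanded setpoint

    public PiecewiseLinearTrajectory(double startValue, double target, double ratePerSec) {
        this.current = startValue;    // current device position on state entry
        this.target = target;
        this.ratePerSec = ratePerSec;
    }

    /** Advances the setpoint by one control tick of dtSec seconds. */
    public double step(double dtSec) {
        double maxDelta = ratePerSec * dtSec;
        double error = target - current;
        if (Math.abs(error) <= maxDelta) {
            current = target;         // close enough: snap exactly to the target
        } else {
            current += Math.signum(error) * maxDelta;
        }
        return current;
    }

    public static void main(String[] args) {
        // Ramp from an assumed current carriage position (10 mm) to 40 mm
        // at 20 mm/s, ticked at the paper's 1000 Hz control rate.
        PiecewiseLinearTrajectory traj = new PiecewiseLinearTrajectory(10.0, 40.0, 20.0);
        double setpoint = 0.0;
        for (int i = 0; i < 2000; i++) { // 2 s of ticks
            setpoint = traj.step(0.001);
        }
        System.out.println(setpoint);    // settled at the 40.0 target
    }
}
```

The snap-to-target branch mirrors the requirement that trajectories end exactly at the commanded amplitude rather than oscillating around it.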
Sit to stand
The pilot must sit and stand from a standardized chair, fully removing the feet from the ground when sitting. After each standing attempt, the pilot must then take a step ahead 1.20 m to a line and step back to the chair before sitting again. Use of hands is allowed to rise from the seat, but the seat back should not be used.
Figure 6 shows the sit-to-stand state machine, which contained two different torque profiles depending on whether the pilot was standing or sitting. Both states provide an extension torque, assisting during sit-to-stand and braking during stand-to-sit. The WA was not used during this task and so was set to its lowest position. The ankle was moved under position control to a slightly plantarflexed position, with the ankle moment arm angle (ϕ in Fig. 2) set to -5 degrees with respect to neutral, so that the foot lay flat on the ground while sitting and returned to neutral while standing. The states were switched based on the knee angle.
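A minimal sketch of this two-state machine is shown below. Both states command an extension torque (negative by the sign convention used later in the Results), assisting the rise and braking the descent; the transition thresholds and torque magnitudes are invented placeholders, not the tuned competition values.

```python
SIT, STAND = "sit", "stand"
STAND_THRESHOLD_DEG = 30.0   # knee nearly extended -> pilot has risen (assumed)
SIT_THRESHOLD_DEG = 80.0     # knee deeply flexed -> pilot is sitting (assumed)

def next_state(state, knee_angle_deg):
    """Switch between the two profiles on the measured knee angle."""
    if state == SIT and knee_angle_deg < STAND_THRESHOLD_DEG:
        return STAND    # pilot has risen: switch to the standing profile
    if state == STAND and knee_angle_deg > SIT_THRESHOLD_DEG:
        return SIT      # pilot is lowering: switch to the braking profile
    return state

# Illustrative extension-torque targets (Nm, negative = extension) per state.
TORQUE_TARGET_NM = {STAND: -20.0, SIT: -35.0}
```

In practice each target would feed a rate-limited trajectory rather than being applied as a step.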
Hurdle navigation
This section consisted of four hurdles. The first and last each consisted of a horizontal bar 200 mm from the floor and a second bar 1500 mm from the floor; the middle two hurdles each consisted of a single horizontal bar 350 mm from the floor. The hurdles were 900 mm wide and spaced 600 mm apart. The pilot was required to pass through the obstacles without knocking down any of the horizontal bars and without using the hands.
Hurdle navigation consisted of bending the prosthesis knee when the hip was flexed so the prosthesis would clear the hurdle. This action was triggered by a threshold on the hip flexion velocity (H ω), which then commanded the knee to bend by relating the hip angle (H θ) to a position of the KA carriage. The relationship between hip angle and carriage position was different for the lift and extension states. A full schematic of the hurdle navigation, including thresholds and commanded positions, can be found in Fig. 7.
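The trigger and mapping described above can be sketched as follows. A hip flexion velocity threshold arms the lift, after which the knee carriage target is a linear function of hip angle with a different slope for the lift and extension states. The gains and threshold here are illustrative assumptions, not the values from Fig. 7.

```python
HIP_VEL_TRIGGER_DEG_S = 60.0   # assumed trigger on hip flexion velocity
LIFT_GAIN_MM_PER_DEG = 0.8     # assumed hip-angle-to-carriage slope, lift state
EXTEND_GAIN_MM_PER_DEG = 0.4   # assumed slope, extension state

def hurdle_knee_command(hip_angle_deg, hip_vel_deg_s, lifting):
    """Return (carriage target in mm, updated lifting flag)."""
    if not lifting and hip_vel_deg_s > HIP_VEL_TRIGGER_DEG_S:
        lifting = True          # fast hip flexion starts the lift
    gain = LIFT_GAIN_MM_PER_DEG if lifting else EXTEND_GAIN_MM_PER_DEG
    return max(0.0, gain * hip_angle_deg), lifting
```

Driving the knee from the hip in this way keeps the pilot in direct control of foot clearance without any terrain sensing.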
Ramp climbing and descent
The ramp climbing and descent section included climbing a steep 20° incline, opening and closing a door on the platform, then descending a 15° slope without the use of handrails.
Entering the slope state machine from the idle state, the prosthesis started in slope descent mode. While descending the slope, once the knee flexed beyond a set angle the slope descent extension phase would begin and apply a different torque profile to the knee joint. During slope descent the ankle angle was set to neutral, but could adapt to the slope through the passive compliance of the system. To trigger slope ascent, the pilot performed a hip abduction movement, which placed the leg in the slope swing phase. The slope swing phase is a position-controlled state with predetermined positions for KA z, WA z, and A ϕ. To trigger the stance state of slope climbing, the ankle angle must deflect beyond a set angle. Because the motor position is constant, this corresponds to a known ankle torque, ensuring the ankle is on the surface and weight has transferred to the prosthesis. At this moment the KA applies a torque profile to the knee to assist with climbing the slope and reaching full leg extension. The WA is also raised so the pilot can push on it during pushoff, and the ankle remains highly dorsiflexed. The pushoff phase is entered at a set knee extension, at which point the ankle is plantarflexed to provide pushoff. Note that if the device remains in any of the slope ascent states for longer than a timeout period (t), it returns to the slope descent state. A full schematic of the ramp climbing and descent control, including thresholds and commanded positions, can be found in Fig. 8.
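The timeout fallback at the end of this sequence can be sketched as a simple guard evaluated every control cycle. The state names and timeout value are illustrative, not those of the actual controller.

```python
# If the machine lingers in any ascent state longer than the timeout,
# fall back to the default slope descent behavior (assumed names/values).
ASCENT_STATES = {"slope_swing", "slope_stance", "slope_pushoff"}
T_TIMEOUT_S = 3.0   # assumed timeout period t

def apply_timeout(state, state_entry_time_s, now_s):
    """Return the (possibly overridden) state after the timeout check."""
    if state in ASCENT_STATES and (now_s - state_entry_time_s) > T_TIMEOUT_S:
        return "slope_descent"
    return state
```

A guard like this keeps the leg from remaining stuck in a powered ascent state if an expected trigger (for example, the ankle deflection threshold) never arrives.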
Stepping stones
The stepping stones task was a path of seven half cylinders placed with 600 mm intervals in the direction of walking and 750 mm in lateral movements. Only one foot could touch a stone, and the pilot was not allowed to touch the ground between the stones or any other hand rails.
Because the stepping stone task was not possible for our pilot to maneuver safely, owing to the aforementioned balance problems from a short residual limb and the lack of balance-specific adaptations such as ankle inversion/eversion, we did not attempt it in the competition and therefore did not include a corresponding section in the state machine.
Tilted path
The tilted path was a series of two platforms with a leading and trailing edge sloped at 18° and a width of 2000 mm. The center of the platform was sloped from the floor on one side to 300 mm height at the other side. The center slopes were alternated first sloping down toward the right and then toward the left. The two platforms were separated by 300 mm.
The tilted path could be handled by the pilot through normal walking or, if he preferred, navigated with the leg in the idle state; there was therefore no tilted-path-specific state machine.
Stair climbing and descent
The stair climbing task required the pilot to climb and then descend a set of six standardized stairs without using a handrail, with only one foot allowed on each stair. Upon first completing an ascent and descent, the pilot was to pick up two plates with items on them from a table, carry them over the staircase and place them on another table, and finally return over the staircase one last time.
The state machine for stair climbing, shown in Fig. 9, was similar to the one for slope climbing (see Fig. 8), mainly because the slope section was so steep that climbing it was essentially like climbing stairs with a different ankle angle. The ankle angle was held neutral for stance and pushoff, while during swing it was changed to 20 degrees of dorsiflexion. All other commands were essentially the same between the two systems. Here again the compliance of the ankle was used to determine proper weight transfer to the new stance leg: the ankle served as a torque sensor to detect foot fall and weight transfer onto the new stair, and foot liftoff.
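Using the ankle as a torque sensor works because the ankle motor position is held constant in these states: deflection of the compliant joint away from its commanded angle corresponds, in the linear region of the spring, to a known torque. A hedged sketch, with invented stiffness and threshold values, is:

```python
ANKLE_STIFFNESS_NM_PER_DEG = 1.5   # assumed linearized series stiffness
CONTACT_TORQUE_NM = 6.0            # assumed weight-transfer threshold

def foot_loaded(ankle_angle_deg, commanded_angle_deg):
    """Detect foot contact/weight transfer from ankle spring deflection alone."""
    deflection_deg = ankle_angle_deg - commanded_angle_deg
    est_torque_nm = ANKLE_STIFFNESS_NM_PER_DEG * deflection_deg
    return abs(est_torque_nm) > CONTACT_TORQUE_NM
```

This avoids any dedicated force sensor in the foot: the joint encoder and the known spring behavior provide the contact signal.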
Results
The tasks attempted at the CYBATHLON were performed in the lab of the Vrije Universiteit Brussel in Brussels, Belgium, and the behavior of the prosthesis was recorded. The computer did not record data during the actual competition, to eliminate the small possibility of errors caused by the saving functions and to reduce the load on the computer, ensuring it ran at peak performance. The tests were designed to emulate the behavior during the actual competition as closely as possible, and were performed with the approval of the VUB Medical Ethics Commission (B.U.N. 143201526629). All data from the prosthesis were collected at 100 Hz and analyzed in MATLAB. The current values were filtered using a low-pass, zero-phase-shift, two-pole Butterworth filter with a cutoff frequency of 10 Hz.
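The original analysis was done in MATLAB; as an illustration of the filtering step, a dependency-free Python sketch of a zero-phase, two-pole Butterworth low-pass (coefficients via the bilinear transform, filtered forward and then backward so the net phase shift is zero) is given below. This is an offline reconstruction of the method, not the original analysis script.

```python
import math

def butter2_lowpass(fc, fs):
    """Second-order Butterworth low-pass coefficients (bilinear transform)."""
    c = 1.0 / math.tan(math.pi * fc / fs)
    d = c * c + math.sqrt(2.0) * c + 1.0
    b = (1.0 / d, 2.0 / d, 1.0 / d)                 # numerator
    a = ((2.0 - 2.0 * c * c) / d,                   # a1
         (c * c - math.sqrt(2.0) * c + 1.0) / d)    # a2
    return b, a

def _filt(b, a, x):
    """Direct-form difference equation, seeded with the first sample."""
    y = []
    x1 = x2 = x[0]
    y1 = y2 = x[0]
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y.append(yn)
    return y

def zero_phase_filter(x, fc=10.0, fs=100.0):
    """Forward-backward filtering: zero net phase shift at the cost of
    squaring the magnitude response (as in MATLAB's filtfilt)."""
    b, a = butter2_lowpass(fc, fs)
    forward = _filt(b, a, x)
    backward = _filt(b, a, forward[::-1])
    return backward[::-1]
```

At 100 Hz sampling and a 10 Hz cutoff, a constant signal passes unchanged while content near the Nyquist frequency is almost entirely removed.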
The knee torque was determined using two different methods. The first was an inverse kinematics model of the knee: because the knee actuator is a series elastic device, measuring the drive-side and output-link positions allows the joint torque to be determined within the linear region of the series elastic spring. Outside of this region, the torque can instead be estimated from the motor current: the current determines the force applied by the ballscrew on the actuator, which is directly related to the output knee torque through the kinematics of the knee. The two methods agree well when the motor is being driven, but when backdriven the current does not correspond to the output torque, owing to unmodeled efficiency losses during backdriving and the reverse current capability of the driver, so the two methods deviate substantially. It should also be noted that when the knee carriage is at its lowest position, there is a slight extension torque on the knee joint; this adds a small amount of stiffness in the fully extended position when the WA is not in place.
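The two estimates can be sketched as follows. All constants (spring stiffness, moment arm, motor torque constant, ballscrew transmission) are invented placeholders, and the real device maps these quantities through the actual knee kinematics, which vary with joint angle.

```python
SPRING_K_N_PER_MM = 20.0    # assumed series-spring stiffness
MOMENT_ARM_M = 0.04         # assumed effective knee moment arm
KT_NM_PER_A = 0.05          # assumed motor torque constant
BALLSCREW_N_PER_NM = 500.0  # assumed ballscrew force per unit motor torque

def torque_from_deflection(drive_side_mm, output_side_mm):
    """SEA method: spring deflection -> force -> joint torque (linear region)."""
    force_n = SPRING_K_N_PER_MM * (drive_side_mm - output_side_mm)
    return force_n * MOMENT_ARM_M

def torque_from_current(current_a):
    """Current method: motor current -> ballscrew force -> joint torque.
    Not valid while backdriven (unmodeled losses, driver reverse current)."""
    force_n = KT_NM_PER_A * current_a * BALLSCREW_N_PER_NM
    return force_n * MOMENT_ARM_M
```

Comparing the two estimates sample by sample is what exposes the backdriving regions in the figures: wherever they diverge, the motor is being backdriven and only the deflection-based estimate is trustworthy.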
Sit to stand
The pilot followed the sit-to-stand procedure and the knee angles and knee torque are presented in Fig. 10. The knee flexion is defined as a positive angular displacement, and therefore extension torques are defined as negative. Large negative torque can be seen during the sitting phase in the kinematic displacement model, but because this motion backdrives the knee motor, the actual motor current is very low and the current model does not show the correct output torque. While standing the prosthesis gives a modest 20 Nm assistive torque, and because this is a net positive work action, the current model agrees with the kinematic model.
The ankle moment arm is placed with a slight plantarflexion while in the sitting position. This allows the foot to sit flat on the ground while in the chair. The larger peak torques seen at the ankle are due to the parallel spring during the step forward and step back that was required for the task.
Although not seen in this example, when the sit-to-stand action becomes too fast the torque assistance decreases due to the limited velocity of the knee motor. In this example the only time the knee motor fails to track the desired position is at the beginning of the stand state, partially because of reduced motor velocity due to a lower bus voltage, and partially because the motor must travel a long distance to reach the desired torque target given the geometry of the highly bent knee. The limited velocity of the actuators poses a particular problem for the goal of completing the CYBATHLON in minimal time, but under normal use this limitation is much less of an issue.
Hurdle navigation
During the hurdle navigation the knee is flexed as a function of the hip flexion angle, allowing the pilot to control knee flexion and extension by swinging his hip. Figure 11 shows the desired and actual knee and ankle behaviors during the test period. The hurdle navigation illustrates the limited knee motor velocity, with some tracking error between the desired and actual knee positions as the pilot swings his hip quickly. A slight undulation of the knee also occurs near full flexion. This is due to the limited torque authority of the knee joint at high flexion, a consequence of the knee kinematics: at high flexion the knee baseline spring (K BL in Fig. 3) stiffness dominates the behavior of the system, and the motor must travel long distances to change the knee torque. This, coupled with the limited velocity of the knee motor, makes the knee prone to vibrations at large flexion when it is off the ground and the WA is not engaged. The ankle was held in the neutral position for the entire traverse, using only its passive behavior to provide torque and compliance.
Ramp ascent and descent
Figure 12 shows the slope being climbed in four steps and descended in two. Once again, during the descent there is a large difference between the two methods of calculating the joint torque, due to backdriving of the system. This is also a task where the WA system was used to provide a stiffer knee while flexed. The blue trace in Fig. 12 shows the torque due to the sum of the KA and WA systems. During the swing phase, the KA provides a flexion torque by actuating against the WA. The net result is an extension torque while the leg is loaded during early stance, at a higher stiffness than would otherwise be the case.
The ankle is commanded to maximally dorsiflex against the parallel spring to provide large clearance of the foot during the swing phase. Then the ankle is set back to the neutral position during stance and pushoff. The result is decent clearance and the ability to provide high pushoff torque. The end rest position was determined by experiment.
Stepping stones
The stepping stone task was not possible to safely maneuver for our pilot. This event requires that the pilot have excellent balance on the prosthetic limb, or have some sort of active control mechanism for accurate center of pressure. Because of the short residual limb of the pilot, he has limited balance control through the socket, and the prosthesis does not have inversion/eversion balance compensation to assist in this fashion. Adding active inversion and eversion of the ankle could potentially be very helpful for overall balance in this event.
Tilted path
The tilted path could be handled by the pilot through normal walking, or if he desired it could be navigated with a leg that was in the idle state. Due to inconsistent initiation of the standard walking gait, the pilot chose to use the Idle state during the competition. Although stiff, using the Idle state to walk is possible through the passive compliance of the leg, as well as through the use of exaggerated hip motions. The passive flexibility of the ankle allowed the pilot to keep the foot flat against the surface in the fore/aft direction. The slope was not significant enough to require much evasive action. By approaching the task at an angle, the path could be as easily navigated as a flat floor. During the competition, some participants simply skipped over the obstacle with their device, only using the sound foot on the sloped surface and swinging the prosthesis over the entire obstacle. It is possible that this obstacle was not long enough or simply not steep enough to really provide a challenge to the pilots.
Stair climbing and descent
Our pilot could perform this task only by using the handrail, and therefore went over the staircase once, step over step, with the handrail. Figure 13 shows a cycle of six steps up and five steps down. Here the velocity limitation of the knee joint is apparent and limits the torque output, except on the first step, which was taken more slowly and reached the maximum knee torque at that angle. The knee motor drivers were limited to 8 A during this test, and the knee reaches this limit during the first step. The actual maximum extension torque of the device is about 60 Nm, at about 30 degrees of knee flexion.
Once again the WA was used during this task to provide some assistance with the bent knee, though the result is only a modest 5 Nm of extension at full flexion. The figure also shows how the ankle was used to detect the transition from the swing phase to early stance, and how the ankle provides pushoff during stair ascent. It is possible that better control techniques could increase performance in this task, although implementing such controllers may run into the limitations of the series elastic actuators.
Discussion
CYBATHLON 2016 provided a perfect opportunity to improve the CYBERLEGs Beta-Prosthesis and to gain a better understanding of what our device lacked with respect to real-world behavior by performing a standardized set of tasks. The competition also showed how a number of state-of-the-art devices compared with our device and with each other. It was apparent to us at the onset that our device was never intended for a competition of such high intensity, and that initial design decisions based on an entirely different target population would never allow the device to be highly competitive. Regardless, we determined that certain modifications could allow us to complete a number of the obstacles and to gain insight into the benefits of powered prostheses in aggressive, active tasks.
Therefore the goal for competing in the CYBATHLON was never to win with this device, but rather to perform some of the tasks better than would be possible with a state-of-the-art passive device. Performing better not just in terms of task completion speed, but in terms of providing assistance to perform tasks more naturally and determining how to apply assistance to help perform these tasks for a regular user, and not necessarily a well trained athlete. In this goal there were definitely some things that were done well, and others that show limitations of the device and illuminate deficiencies that otherwise might have been missed.
Mechanically, the prosthesis performed as designed and expected, without major failure. The controller, based on a limited set of sensors combined with user input, was able to perform the tasks without a large amount of training. A necessary future addition to this device is an intention detection system, as manually selecting state machines for each task is not ideal. Training time also has a large influence on the outcome of tests such as this; we believe that with much more time on a fixed controller, our pilot would have been able to use the device far more efficiently. In particular, we expect better use of the WA system during high-extension-torque operations. Regardless of these issues, we succeeded in creating a reliable state-machine-based controller that was able to perform most of the CYBATHLON tasks, and we have shown the active components of the device to be helpful in at least one aspect of each task.
It is very difficult to compare the behavior of the CYBERLEGs Beta-Prosthesis to the other prostheses used in the competition because of a lack of data from those devices performing the competition tasks. It would be interesting to understand with empirical data how other pilots accomplished these tasks, possibly using the CYBATHLON tasks as standard benchmarks for future studies. Another issue is that the user's level of fitness and familiarity with the device has a large influence on performance. Where possible, comparisons have been made to studies of these devices in the literature.
In the sit-to-stand task, the device performed quite well, providing a good amount of resistance while sitting down and solid assistance while rising from the chair. Only one other powered device, the Össur Power Knee, has been compared to current microcontroller-based systems [17, 18], but these papers show no benefit to the user in performing this task. These findings run counter to our experience with powered knee devices, where patients find that any assistance at all from the prosthetic limb in the stand-to-sit and especially the sit-to-stand motion makes a noticeable difference in their ability to perform the action. It should be noted that in these papers the low-level control of the prostheses, whether powered, microcontroller-based, or passive, could not be modified, which may account for part of the difference in experience. Wolf et al. noted that the subjects who participated in the study were relatively healthy, young, and without underlying complications; a different group, with a larger strength deficit for example, might gain more benefit from active assistance. These papers contain no detailed analysis of what limitations the Power Knee might have had from a control or technical point of view, focusing instead on clinical outcomes. Other devices have been tested for sit-to-stand behavior, but no direct comparisons of how the joint torque related to the behavioral outcome were reported.
Current prostheses, with the exception of the Power Knee, cannot provide any positive torque while rising from a chair, requiring the sound leg to provide all of the assistance. Michel has reported that when the assistive torque of the prototype is set correctly, it feels as though he is being thrown out of the chair, greatly assisting the motion. Too much assistance can be a bit unsettling, but this illustrates that the powered prosthesis has a real effect on at least the feel of rising from a chair. The foot is also able to adapt to the ground level, allowing a more natural foot position while seated and while rising. Whether these benefits appear as a reduction of the work done by the sound limb or as greater body symmetry during the action remains to be determined.
During the hurdle navigation the prosthesis performed quite well, extending and contracting exactly as intended. There are issues with the speed at which it can flex, and the weight of the device is a further issue for all tasks in which the prosthesis must be held high off the ground for extended periods. This was partly mitigated by a waist strap system, but during movements with high hip flexion it was necessary to hold the socket with the hands to keep it from slipping. The behavior of the knee was good for this task compared to other devices in the competition, where some pilots pulled on their knees with their hands to obtain the correct knee flexion. For a race such as the CYBATHLON this is an effective way to get through quickly, but as a general solution it is a clumsy action to have to perform, particularly if the user's sound limb is not very strong.
During slope descent, there was a high sensitivity to torque rate due to the way the torque method was implemented. The balance between too much and too little initial torque and torque trajectory changed the behavior of the knee dramatically, although once a good setting was found the behavior was reliable, as long as the pilot could commit to the step. Hesitation at the beginning of the step would cause a reduction of knee torque and cause a stiff behavior. In descent cases such as this it may be better to model the knee as a damper and use techniques from current microcontroller devices to handle this behavior. Indeed these types of dissipative actions are where microcontroller controlled damping systems excel.
Slope climbing also notably did not show a large extension peak at the pushoff phase as stair climbing does, but this may be expected from biomechanical data, where there is an initial extension torque that changes into a flexion torque at the end of the stance phase. It is possible that slope behavior could be greatly improved with better control, possibly including a slope estimator, and with training. The pilot did not use the WA system as much as expected for this task. We expected a high extension torque to be created by it at the beginning of step ascent, using the spring to initiate leg extension through a counter motion. This may simply be a training issue, or the behavior may not be required for the task.
It was possible to perform step-over-step stair climbing and descent using a handrail, and the torque curves in Fig. 13 show that the knee was able to provide a large assistive torque during climbing and to dissipate substantial work during descent. One issue is that the knee flexion at the beginning of stair ascent was not as large as it could have been, which may be caused by a combination of the prosthesis limitations and the pilot's training. As configured during the competition, the knee rests upon the WA when flexed during swing, so that the pilot can load it at the beginning of the step up while the main actuator builds torque. This was done because the main actuator cannot provide large torques at full flexion, and it was hoped the WA could provide this during the early step up. The pilot did not use this feature as much as we expected, and it is possible this could change with additional training. That said, the pilot cannot navigate stairs step over step at all with his everyday prosthesis; even though he had to relearn this task, the powered prosthesis made it possible.
It should be noted that a well-trained, strong individual can climb stairs step over step with all of the passive prostheses presented at the CYBATHLON; pilots using most other devices (Genium, Orthokosmos, Rise, and three Össur knees) completed this task without using handrails. Regardless, stair climbing is one function where a powered knee is known to have a significant effect, reducing the required power generation of the sound limb, while performing slightly worse than the C-Leg in descent.
One omission from this summary is a discussion of level-ground walking, which has been left out for two reasons. First, during the CYBATHLON, pilots were only required to take one or two steps between the different tasks; the course was very task oriented, and switching to the walking state without an intention detection system would have meant manually switching state machines many times. Second, the level-ground walking methods are more complex and deserve a more detailed analysis, which for brevity is left out of this document.
Conclusions
This case study described the adaptation of an active prosthesis for use in CYBATHLON 2016, a competition held in October 2016 in Zurich, Switzerland. An existing prototype, the CYBERLEGs Beta-Prosthesis, was modified, and new high- and low-level control systems and electronics were designed and built for the competition. This allowed us to focus on making the prototype reliable enough for testing sessions and the competition, and on completing real-world tasks that displayed the functionality of the simplified controller and the overall mechanics of the device. The competition served as a strong motivation for getting our device functioning well enough to complete the tasks, and it illuminated problems that future versions of the device will be able to solve.
While we officially completed only four of the six tasks, step-over-step stair climbing was possible with the assistance of a railing, a great improvement over previous implementations. Of the five tasks we were able to complete, each had aspects that we feel characterize the increased capability of a powered prosthesis. For example, rising from a seat is difficult for someone who is weak, and we were able to experimentally measure an assistive torque that would not be present with passive devices. Assistance can be measured for stair climbing and obstacle avoidance as well. Measuring these assistive torques will allow a better understanding of how different torque profiles can help in performing tasks and normalizing gait. In addition, the use of compliant actuators allowed automatic joint adaptation to sloped surfaces and allowed the ankle to be used as a torque estimator for state triggers. All of these things are possible with the device, albeit at low velocity. In the future we hope to bring these capabilities to a device that can compete with the current state of the art in speed and control, through weight reduction and actuator redesign.
Abbreviations
- τ : Torque
- A : Ankle
- A α : Ankle moment arm angle with respect to the foot
- A ϕ : Ankle moment arm angle with respect to the shank, measured from the neutral position
- A θ : Ankle angle
- H : Hip
- H ω : Hip angular velocity
- H θ : Hip angle
- IMU : Inertial Measurement Unit
- K θ : Knee angle
- KA : Knee Actuator
- KA z : Position of the knee carriage from the bottom of the ball screw
- t : Time in seconds
- WA : Weight Acceptance
- WA z : Position of the WA nut from the bottom of the actuator
- VUB : Vrije Universiteit Brussel
References
Riener R. The cybathlon promotes the development of assistive technology for people with physical disabilities. J NeuroEngineering Rehabil. 2016; 13(1):49. doi:10.1186/s12984-016-0157-2.
Geeroms J, Flynn L, Jimenez-Fabian R, Vanderborght B, Lefeber D. Design and energetic evaluation of a prosthetic knee joint actuator with a lockable parallel spring. Bioinspiration Biomimetics. 2017; 12(2):026002.
Geeroms J, Flynn L, Jimenez-Fabian R, Vanderborght B, Lefeber D. Energetic analysis and optimization of a MACCEPA actuator in an ankle prosthesis. Auton Robot. 2017. doi:10.1007/s10514-017-9641-1.
Flynn L, Geeroms J, Jimenez-Fabian R, Vanderborght B, Lefeber D. Cyberlegs beta-prosthesis active knee system. In: 2015 IEEE International Conference on Rehabilitation Robotics (ICORR): 2015. p. 410–5. doi:10.1109/ICORR.2015.7281234.
Ambrozic L, Gorsic M, Geeroms J, Flynn L, Lova RM, Kamnik R, Munih M, Vitiello N. CYBERLEGs: A User-Oriented Robotic Transfemoral Prosthesis with Whole-Body Awareness Control. IEEE Robot Autom Mag. 2014; 21(4):82–93. doi:10.1109/MRA.2014.2360278.
Cherelle P, Grosu V, Cestari M, Vanderborght B, Lefeber D. The amp-foot 3, new generation propulsive prosthetic feet with explosive motion characteristics: design and validation. Biomed Eng OnLine. 2016; 15(Suppl 3):145.
Winter DA. Biomechanics and Motor Control of Human Movement, 4th ed.United States of America: Wiley; 2009, p. 384.
Jimenez-Fabian R, Flynn L, Geeroms J, Vitiello N, Vanderborght B, Lefeber D. Sliding-Bar MACCEPA for a Powered Ankle Prosthesis. J Mech Robot. 2015; 7(March):1–2. doi:10.1115/1.4029439.
Flynn L, Geeroms J, Jimenez-Fabian R, Vanderborght B, Vitiello N, Lefeber D. Ankle-knee prosthesis with active ankle and energy transfer: Development of the CYBERLEGs Alpha-Prosthesis. Robot Auton Syst. 2014; 73:4–15. doi:10.1016/j.robot.2014.12.013.
Holgate MA, Sugar TG, Alexander WB. A Novel Control Algorithm for Wearable Robotics using Phase Plane Invariants. IEEE Int Conf Robot Autom. 2009; May:3845–50.
Grosu V, Guerrero CR, Brackx B, Grosu S, Vanderborght B, Lefeber D. Instrumenting complex exoskeletons for improved human-robot interaction. IEEE Instrum Meas Mag. 2015; 18(5):5–10. doi:10.1109/MIM.2015.7271219.
Sup F, Varol HA, Mitchell J, Withrow TJ, Goldfarb M. Preliminary Evaluations of a Self-Contained Anthropomorphic Transfemoral Prosthesis. IEEE/ASME Trans Mechatron Joint Publ IEEE Ind Electron Soc ASME Dyn Syst Control Div. 2009; 14(6):667–76. doi:10.1109/TMECH.2009.2032688 .
Rouse EJ, Mooney LM, Herr HM. Clutchable series-elastic actuator: Implications for prosthetic knee design. Int J Robot Res. 2014; 33(13):1611–25. doi:10.1177/0278364914545673.
Verstraten T, Mathijssen G, Furnmont R, Vanderborght B, Lefeber D. Modeling and design of geared dc motors for energy efficiency: Comparison between theory and experiments. Mechatronics. 2015; 30:198–213. doi:10.1016/j.mechatronics.2015.07.004.
Ledoux ED, Lawson BE, Shultz AH, Bartlett HL, Goldfarb M. Metabolics of stair ascent with a powered transfemoral prosthesis. In: 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC): 2015. p. 07–310. doi:10.1109/EMBC.2015.7319589.
Vallery H, Veneman J, van Asseldonk E, Ekkelenkamp R, Buss M, van Der Kooij H. Compliant actuation of rehabilitation robots. IEEE Robot Autom Mag. 2008; 15(3):60–9. doi:10.1109/MRA.2008.927689.
Highsmith MJ, Kahle JT, Carey SL, Lura DJ, Dubey RV, Csavina KR, Quillen WS. Kinetic asymmetry in transfemoral amputees while performing sit to stand and stand to sit movements. Gait Posture. 2011; 34(1):86–91. doi:10.1016/j.gaitpost.2011.03.018.
Wolf EJ, Everding VQ, Linberg AA, Czerniecki JM, Gambel CJM. Comparison of the Power Knee and C-Leg during step-up and sit-to-stand tasks. Gait and Posture. 2013; 38(3):397–402. doi:10.1016/j.gaitpost.2013.01.007.
Varol HA, Sup F, Goldfarb M. Powered Sit-to-Stand and Assistive Stand-to-Sit Framework for a Powered Transfemoral Prosthesis. In: Proceedings of the 2009 IEEE International Conference on Rehabilitation Robotics, Kyoto, Japan: 2009. p. 645–51. doi:10.1109/ICORR.2009.5209582.
Wolf EJ, Everding VQ, Linberg AL, Schnall BL, Czerniecki M, Gambel JM. Assessment of transfemoral amputees using C-Leg and Power Knee for ascending and descending inclines and steps. JRRD. 2012; 49(6):831–42.
Lay AN, Hass CJ, Gregor RJ. The effects of sloped surfaces on locomotion: A kinematic and kinetic analysis. J Biomech. 2006; 39:1621–8. doi:10.1016/j.jbiomech.2005.05.005.
Sup F, Varol HA, Goldfarb M. Upslope walking with a powered knee and ankle prosthesis: initial results with an amputee subject,. IEEE Trans Neural Syst Rehabil Eng Publ IEEE Eng Med Biol Soc. 2011; 19(1):71–8. doi:10.1109/TNSRE.2010.2087360.
Acknowledgements
The second author is supported by a PhD grant from Flanders Innovation & Entrepreneurship (VLAIO). The VUB CYBERLEGs team would also like to thank the Brussels Innoviris and AG Insurance for financial support for materials and travel costs related to the CYBATHLON. Thank you to Vigo International for help in providing people, sockets, hardware, and fitting support. This work has been partially funded by the European Commission 7th Framework Program as part of the project CYBERLEGs under grant no.287894, CYBERLEGs PlusPlus (H2020-ICT-2016-1 Grant Agreement #731931), and by the Research Foundation-Flanders (FWO) under grant number G.0262.14N. Thank you to our pilot Michel de Groot for all the time and effort.
Funding
CYBERLEGs under grant no.287894. ERC CYBERLEGs PlusPlus (H2020-ICT-2016-1 Grant Agreement #731931). The second author is supported by a PhD grant from Flanders Innovation & Entrepreneurship (VLAIO). Research Foundation-Flanders (FWO) under grant number G.0262.14N. Travel and costs related to materials for the competition were funded by grants from AG Insurance and Brussels Innoviris. None of the funding bodies had any role in the design of the study, collection, analysis, and interpretation of the data, or the writing of the manuscript.
Availability of data and materials
Please contact author for data requests.
Ethics declarations
Ethics approval and consent to participate
Written, informed consent was obtained from each subject. The study was approved by the VUB Medical Ethics Commission (B.U.N. 143201526629).
Consent for publication
The consent of all participants, in particular our pilot Michel De Groot, was obtained before publication.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article
Flynn, L.L., Geeroms, J., van der Hoeven, T. et al. VUB-CYBERLEGs CYBATHLON 2016 Beta-Prosthesis: case study in control of an active two degree of freedom transfemoral prosthesis. J NeuroEngineering Rehabil 15, 3 (2018). https://doi.org/10.1186/s12984-017-0342-y
Published: | https://jneuroengrehab.biomedcentral.com/articles/10.1186/s12984-017-0342-y |
4. Quality Education
The focus of the global goal “Quality Education” is accessibility of quality education and opportunities to participate in lifelong learning. Going to school, levels of education, teachers’ education, educational institutions’ compliance with requirements (incl. sanitary) and differences in study results are observed together.
The 2030 Agenda sets the target to ensure free, equitable and quality primary and secondary education for all. Before school, small children require early childhood development so that they would be ready to enter primary education. The number of youths and adults with technical and vocational skills for employment must be increased.
A target to reach is that all learners have knowledge and skills for promoting sustainable development. It is important to achieve sustainable lifestyles, value gender equality, peace and non-violence, global citizenship and cultural diversity. Educational institutions must create favourable learning environments, be child-friendly and consider the needs of persons with disabilities.
The Estonian sustainable development strategy emphasises that the system of education and training is the foundation of economic development: education is the prerequisite of well-being. The global goal “Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all” is linked in Estonia with the following indicators of different forms of education and participation of population groups in the educational system: | https://www.stat.ee/index.php/en/find-statistics/statistics-theme/sustainable-development/4-quality-education
Many professional and fringe theatre designers make us their first port of call when sourcing costumes for their new productions, valuing the attention to detail and professional quality of the costumes and props.
For film, theatre, opera and television projects, the range of items is invaluable: from period and contemporary costumes and props to an extensive collection of distressed or broken-down costumes for peasant crowd scenes.
Film & Television projects include:
Harry Potter, Braveheart, Charlotte Gray, Wild Wild West, An Ideal Husband, The Mighty Boosh, Casualty 1906, and City of Vice. | https://www.nationaltheatre.org.uk/costume-and-props-hire/theatre-film-and-tv |
You may have worked hard to build up the assets in your estate. You do yourself and your family a disservice if you do not seek legal advice from a Wills and Probate lawyer for the purpose of estate planning, ensuring you have a valid Will and that your wishes are properly recorded and carried out. It is therefore advisable to have your Will professionally drafted.
For more information or advice concerning Wills and Probate, contact our experienced team at Rockliffs Lawyers today. | https://www.rslaw.com.au/another-example-problems-caused-will-kits/ |
National Science Foundation (NSF) programs supporting Arctic research greatly appreciate the formal and informal feedback recently provided by local and Indigenous communities and Arctic researchers on how NSF can improve inclusion of local and Indigenous voices, as well as Indigenous Knowledge (IK), in Arctic research. In a recent Dear Colleague Letter, NSF has outlined immediate actions being taken to support Indigenous individuals and organizations.
Immediate Clarifications, Revisions, and Improvements to Arctic Programs
NSF has revised and clarified the Arctic Research Opportunities (NSF 21-526) and Navigating the New Arctic (NNA; NSF 21-524) solicitations to highlight ethical conduct of research in the Arctic. The updated solicitations also provide guidance on how to build true collaborations with local and Indigenous peoples in NSF-funded research and education.
NSF has also recently funded a Community Office for the Navigating the New Arctic Program, which is supporting the continued collaboration of Arctic scientists with local and Indigenous organizations and individuals.
Ongoing Outreach and Communication
NSF representatives continue to communicate frequently with local and Indigenous individuals and organizations to better understand their specific needs and potential ways to support more inclusive NSF-funded activities in the Arctic. These conversations have taken place with individual PIs, as well as larger audiences, including dedicated sessions at Arctic-focused and international science conferences.
NSF-wide Efforts
NSF's commitment to improvement in work with Indigenous individuals and organizations is aligned with one of the three pillars of Director Dr. Sethuraman Panchanathan's vision for NSF, 'ensuring inclusivity.' In addition, the Office of Polar Programs has created a new subcommittee of its Advisory Committee dedicated to Diversity and Inclusion, which is engaged with the NSF Committee on Equal Opportunities in Science and Engineering (CEOSE). The goal of these activities is to better understand the ongoing opportunities, challenges, and needs to continue to improve diversity, equity, and inclusion for NSF programs.
Resources for PIs and Arctic Communities
To increase the visibility and accessibility of these opportunities to a broader community, NSF has launched a landing page on Arctic Community Engagement (ACE). This page highlights solicitations and resources encouraging the inclusion of local Arctic communities and Indigenous Knowledge in NSF-funded projects.
NSF recognizes that these efforts are not the endpoint, but rather part of growing momentum toward increasing inclusion and equitable participation in research. NSF plans to continue to evolve procedures to ensure broadened support for equitable local and Indigenous participation in Arctic research projects. Please reach out to ACE [at] nsf.gov to submit feedback or provide input. | https://www.arcus.org/witness-the-arctic/2021/1/article/32191 |
- Recommended mattress height range: 12"-14"
- Bolt on bed rails
- Distressed pattern
- Square tapered legs
- Wood slats
Materials
- Hardwood solids
Description
The Granite Falls Twin Panel Bed is perfectly designed to create the focal point of your kid's bedroom without requiring you to change any existing furniture you may have. Constructed of kiln-dried Asian hardwood with a ladder-design headboard, distressed finish, and raised frame moldings, this twin bed's simple, transitional design adds the finishing touch.
Care Instructions
Care: When you pick out your favorite case goods and take them home with you, you want to take care of these pieces and ensure that they will last in your home for many years. The first thing you should do is keep your case goods out of direct sunlight, which helps protect the surfaces from heat damage and fading.
Get into the habit of using a soft dust-free cloth for routine maintenance in order to remove the accumulation of dust. Do not use any furniture cleaners or abrasives. This can damage the finish of the piece. | https://www.eldoradofurniture.com/granite-falls-twin-panel-bed.html |
This relaxation audio CD offers children, adolescents, and adults a variety of techniques for creating inner calmness, mental clarity, and beneficial physiological changes. Based on empirically-supported approaches to promote self-regulation, it is the perfect accompaniment to the Resilience Builder Program, or effective on its own. Tracks of varying lengths are devoted to calm and attentive breathing techniques, visualization, progressive muscle relaxation, mindfulness meditation, and self-talk. | https://www.cavershambooksellers.com/search/0878226575 |
How to Free Yourself From Emotional Baggage After a Breakup
What doesn't kill you may make you stronger, but it can also leave you with emotional baggage that you carry throughout your life. The aftermath of breakups can entail so many hurtful thoughts and negative assumptions that you are unable to thrive in future relationships. Sorting through your unresolved issues can help you grow as a person and will lead to better relationships with others in the future.
1 Give Yourself Time to Heal
One of the biggest mistakes people make is rushing from relationship to relationship without giving themselves time to heal. After a difficult breakup, allow yourself to grieve the loss of the relationship. It's normal to feel sad and hurt after a breakup. In her Psychology Today article "The 5 Stages of Grieving the End of a Relationship," clinical psychologist Jennifer Kromber explains that grieving a breakup is similar to mourning the loss of a loved one, and similar stages of denial, anger, bargaining, depression and acceptance are experienced. Giving yourself time to sort through the pain will help you heal and prepare you for your next relationship.
2 Grow from this Experience
Although a breakup can leave you with feelings of inadequacy, low self-esteem and disappointment, it is also an opportunity to grow and become a better person. A study published in the September 2013 issue of PLOS ONE found that individuals who experienced the pain and ruminating thoughts of a breakup grew stronger, wiser and more self-cultivated, while attachment-avoidant individuals did not experience this growth. Turn recurring thoughts and regrets about your old relationship into something positive by asking yourself what you will do differently next time.
3 Learn to Forgive
Whether you need to forgive yourself or someone from your past, let go of anger and resentment. A study published in the March 2012 issue of "Psychological Science" found that unforgiving thoughts raise stress levels, blood pressure and heart rate and can deteriorate a person's health if they become chronic. Let go of resentment and grudges that you have been holding for years. Leave the past behind and understand that people often make mistakes because of ignorance or immaturity or as a result of their own emotional baggage. Try to see the other person's perspective and to empathize with that. Letting go of resentment will help you enter new relationships with a better mindset.
4 Reframe Your Thoughts
Your emotional baggage makes its way into your daily life through negative and self-limiting thoughts. Don't allow baggage to take control of you: challenge these thoughts. When you are having self-doubts or feelings of inadequacy, replace those thoughts with "I am a very valuable person" and "Others are lucky to have me." Stop yourself from making assumptions about other people based on your past experiences. Repeat statements to yourself such as "Although I have been let down by others, this is a different person" or "Not every person will betray me; there are good people in the world." | https://classroom.synonym.com/yourself-emotional-baggage-after-breakup-20405.html |
Yu's Printed Paper Sensors Talk Wins GRID SessionFischell Department of Bioengineering graduate student Wei W. Yu, advised by Assistant Professor Ian White, won the "Technology in the 21st Century" division at the University of Maryland's 2011 Graduate Research and Interaction Day (GRID). Yu took first place for his presentation of a technique that employs an ordinary inkjet printer to make an inexpensive biosensor component for use in surface-enhanced Raman spectroscopy (SERS).
GRID, which is run by the Graduate Student Government, is a campus-wide event in which graduate students from all parts of the university present and discuss their work with faculty and fellow students, enabling them to receive feedback from a broader audience and perfect their conference presentation skills. Participants make oral and poster presentations that are judged in a variety of categories by faculty, postdoctoral fellows, administrators, and other specialists from around campus.
Yu's talk, "Inkjet Printed Paper Sensors for Chemical and Biomolecular Analysis," explained that while the market for biosensors is growing rapidly, current technology limits their wider use. Lab-on-a-chip solutions are popular and state-of-the-art, but complicated, costly, and fragile. SERS, a highly sensitive technique that uses a laser to detect the presence of mere molecules, is also effective, but one of its key components, a nanoparticle-laced substrate used to amplify the signals generated by the laser, is technically difficult to fabricate, and pre-made examples are both expensive and have a shelf life of only a few days.
Yu and White's goal was to create a sensitive, portable and inexpensive biosensor that requires no expertise to manufacture and can be used in a variety of applications, including the detection of food contamination, infections, cancer, pesticides, DNA, or even explosives. To accomplish this, they printed their sensors in silver nanoparticle ink on hydrophobically pretreated paper using a consumer-grade inkjet printer. Using the paper substrate in conjunction with a portable Raman spectroscopy system, they successfully identified the presence of target Rhodamine 6G tracer dye molecules present in quantities as low as 10 attomoles.
"The fabrication method is extremely simple," says Yu. "Additionally, we have leveraged the ability to modify paper to allow for microfluidic techniques such as small sample requirements and analyte concentration." | https://bioe.umd.edu/news/story/yus-printed-paper-sensors-talk-wins-grid-session |
Problem characteristics in AI refer to the features of a problem that help us choose an appropriate, optimal way to solve it. Problem solving is one of the key concerns in AI: since AI is a vast branch of computer science, the problems it addresses can be complex.
Let us take a look at some of the major problem characteristics.
Major Problem characteristics
Firstly, we need to know whether the problem is decomposable. For instance, the blocks-world problem or the Tower of Hanoi is easily decomposable. In the Tower of Hanoi, rules are defined for moving the disks from source to destination, and each movement is divided into steps.
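The Tower of Hanoi decomposition described above can be sketched in a few lines of Python. This is an illustrative sketch (the function name and peg labels are our own): moving n disks decomposes into moving n−1 disks aside, moving the largest disk, and moving the n−1 disks back on top.

```python
def hanoi(n, source, target, spare):
    """Decompose moving n disks from `source` to `target` into
    three smaller subproblems, returning the list of moves."""
    if n == 0:
        return []
    return (hanoi(n - 1, source, spare, target)   # move n-1 disks out of the way
            + [(source, target)]                  # move the largest disk
            + hanoi(n - 1, spare, target, source)) # put n-1 disks on top of it

moves = hanoi(3, "A", "C", "B")
print(len(moves))  # 2**3 - 1 = 7 moves
```

Each recursive call is an independent, smaller instance of the same problem, which is exactly what makes the problem decomposable.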
Secondly, problems are categorized as follows:
- Ignorable
- Recoverable
- Irrecoverable
Ignorable problems are those whose solution steps can be ignored, so they can be solved using a simple control structure; for instance, mathematical theorem proving.

Recoverable problems are those whose steps can be undone, so we can use backtracking to solve them; for instance, the 8-puzzle problem.

Irrecoverable problems are those where backtracking is not possible; for instance, chess.
Thirdly, we should know whether the universe of the problem is predictable. Let's take a look:
- Is a good solution to the problem absolute or relative? Solutions fall into two categories: any path and best path. The water jug problem and the 8-puzzle come under any path, whereas the Travelling Salesman Problem comes under best path.
- Is the solution a state of the world or a path to it? This determines whether consistent information about the sequence of steps is required.
- How much knowledge is available for solving the problem, and does solving it require interaction with a person?
Problem characteristics Steps
- Define the problem precisely.
- Give the initial input required.
- Apply knowledge
- Choose best optimal technique for problem solving.
Moreover, some problems are also solved using a state-space representation. The state space is the set of all states reachable from the initial state.
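The water jug problem mentioned earlier is a classic example of state-space search. The sketch below is a minimal illustration, assuming the classic formulation of a 4-litre and a 3-litre jug with a goal of measuring 2 litres; the function name and state representation are our own choices, not from the text.

```python
from collections import deque

def water_jug_bfs(cap_a=4, cap_b=3, goal=2):
    """Breadth-first search over the state space (a, b) of water
    levels in two jugs, until jug A holds `goal` litres."""
    start = (0, 0)
    parents = {start: None}  # visited set doubling as path record
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == goal:
            # Reconstruct the path of states from the initial state.
            path, s = [], (a, b)
            while s is not None:
                path.append(s)
                s = parents[s]
            return path[::-1]
        successors = [
            (cap_a, b), (a, cap_b),                       # fill a jug
            (0, b), (a, 0),                               # empty a jug
            (min(cap_a, a + b), max(0, a + b - cap_a)),   # pour B into A
            (max(0, a + b - cap_b), min(cap_b, a + b)),   # pour A into B
        ]
        for s in successors:
            if s not in parents:
                parents[s] = (a, b)
                queue.append(s)
    return None  # goal unreachable in this state space

path = water_jug_bfs()
print(path)
```

Because breadth-first search explores the reachable states level by level, the returned path is a shortest sequence of states from the initial state to a goal state.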
Summary
In conclusion, we have learnt about the major problem characteristics in Artificial Intelligence, how to classify problems, and the major steps involved in problem solving.
COMMUNICATION BETWEEN SCHOOL/TEACHERS AND PARENTS/GUARDIANS
Parents/Guardians are the primary educators of their children. Teachers provide children with a formal professional education. Close cooperation and communication between all parties is the optimum approach for a successful time in school.
In St. Sylvester’s Infant School we have always maintained close contact with parents/guardians regarding their children’s progress in the following ways:
- Introductory meetings for the parents of children attending our school for the first time. This usually takes place in April / May of the year in which the child starts school
- The new incoming Junior Infant children are also invited to meet their class teacher in June of the year in which the child starts school
- All Junior Infant teachers meet the parents of their pupils in the 4th week of September to give a brief account of the programme the children will be following during that academic year. Senior Infant and First Class teachers meet the parents in the 2nd week of September
- Formal Parent/Teacher Meetings will take place the last week in October each year starting from the school year 2020/21.
- Parents attending meetings for children who have Special Education Needs will be invited early in the school year to meet the Special Education Teacher and the Class Teacher
- Informal meetings may take place by appointment if a child is experiencing any difficulties. In the interest of health and safety for all pupils, teachers would like to minimise unnecessary classroom disruptions. It is requested that parents send in a letter or email if they need a formal meeting with a teacher to relate relevant information about the child’s health, dental/hospital visit, holidays, etc. Informal contact can always be briefly made after school.
- Communication is made via notes, emails, texts, and our School Newsletter, which is sent home at regular intervals. Each child has an A5 plastic folder provided by the school for notes between school and home. Parents are encouraged to check this regularly. Details of school holidays for the school year are sent home early in September. In 2009 we introduced “Text a Parent”, whereby one designated parent receives important text messages about early/emergency closures, pending events, etc.
- There are two School Notice Boards, one in each yard. These are an important means of communication between teachers and parents
- Formal school reports are sent to parents at the end of the school year or during the year in the event of transfer to another school. Parents receive only one formal report per year.
- Each class has a Class Rep who organises a WhatsApp Group to communicate information to the class regarding school events e.g. non uniform days and unplanned fun events. The class rep does not act as a liaison person between the class group and the class teacher/school. The WhatsApp group is not a forum for discussions that are personal to any individual (gossip). It is not an agony aunt discussion hub. Please maintain respectful dialogue when using this WhatsApp Group.
Contact Us
We are always keen to work alongside parents so please use this form to get in contact with us. You can also contact us by telephone. Walk in appointments are not possible at the moment due to Covid – 19 restrictions. We will get back to you as soon as we can. | https://stsylvestersinfantschool.com/communication/ |
TORRES, P.; MOYA, R. y LAMILLA, J.. Anisakid nematodes of interest in public health in fishes commercialized in Valdivia, Chile. Arch. med. vet. [online]. 2000, vol.32, n.1, pp.107-113. ISSN 0301-732X. http://dx.doi.org/10.4067/S0301-732X2000000100014.
In Chile, infection by anisakid nematodes has been reported in humans in association with consumption of raw ("cebiche") and smoked marine fishes. During 1994, 125 fresh marine fishes commercialized in markets in Valdivia, Chile, were microscopically examined for anisakids in the musculature. Of the 10 species examined, the following fish species were infected (number of infected/examined fishes) with Anisakis simplex (As), Pseudoterranova decipiens (Pd) and Hysterothylacium sp. (H sp.): the Chilean hake, Merluccius gayi (As 1/17; Pd 4/17); the tail-hake, Macrouronus magellanicus (Pd 1/4; H sp. 1/4); the red-conger-eel, Genypterus chilensis (Pd 9/18); the flat-fish, Paralichthys microps (As 1/10; Pd 7/10); and the Chilean mackerel, Trachurus murphyi (As 2/16; Pd 5/16). All isolated anisakid larvae were alive. The highest number of anisakids per fish (4 larvae) was detected in M. gayi and T. murphyi, but the highest density, 3.3 worms/100 g of muscle, was observed in P. microps. The number of parasites was scarce, but their presence in fishes commercialized in Valdivia without freezing or sanitary inspection represents a potential risk.
Keywords: anisakid; fishes; muscles. | https://www.scielo.cl/scielo.php?script=sci_abstract&pid=S0301-732X2000000100014&lng=es&nrm=iso&tlng=en
Recently, there’s been a rise in the number of young unmarried couples living together and older couples merely “shacking up” instead of getting married. What these couples may not understand is that simply being in a long-term, committed relationship does not give them the rights and benefits of a married couple.
Misconception – California is a common law state
Though several states do recognize common law marriage (under which, as long as you live together in a committed relationship for a set number of years, you receive the same rights as a married couple), California is not one of them. Living together, or cohabitating, does not guarantee the same rights to property and inheritance. In the eyes of the law, these couples are nothing more than “legal strangers.”
In other words, the rules governing community property and intestacy do not apply to unmarried couples. This can make some things easier, but others, such as the division or inheritance of real estate, can become quite sticky. Having a cohabitation agreement covering financial obligations during and after cohabitation, a will, and/or a living trust is a must to guarantee the correct distribution of property.
Misconception – Custody and Paternity automatically belong to both biological parents
Despite what you may believe, unwed fathers have very little, if any, rights when it comes to custody. Regardless of whether someone is the biological father or is in a long-term, committed relationship with the mother, the mother is automatically granted one hundred percent physical custody of a child born out of wedlock.
In California, though, courts do take into consideration the health, safety and welfare of the child. If the father is the primary caregiver or has signed a Declaration of Paternity at the time of birth, he may have more rights when it comes to custody and visitation.
Misconception – Cohabitants have rights in health care decisions
Unlike married couples, cohabitants do not have any rights when it comes to making medical decisions on behalf of their partner. Spouses, adult children and biological parents are treated as top-priority decision-makers, regardless of how long someone has lived with their partner.
Therefore, individuals must have a medical or healthcare directive in place. This includes a Health Care Declaration indicating how you are to be treated in emergency situations, as well as a Durable Power of Attorney for Healthcare designating who has the right to make medical decisions on the individual’s behalf.
Misconception – Cohabitants can dictate funeral arrangements
Although cohabitants may have told their partner what they want when they die, it doesn’t mean that person has the right to carry out those wishes. These decisions automatically go to the next of kin, and only a spouse, parent, child, grandparent, sibling or legally authorized representative may order a certified copy of the death certificate.
Having a will or other notarized legal affidavit granting these rights to the cohabitant is the only way to keep family members from going against the deceased’s wishes. | https://thelawyerking.com/california-law/common-misconceptions-unmarried-couples/ |
Hollywood Penthouse by Smith Firestone Associates
Designed in 2017 by Smith Firestone Associates, this contemporary penthouse is situated in Los Angeles, California.
Description
Inspired by the Brooklyn-esque nature of the surrounding historic buildings, this three-bedroom, four-bathroom penthouse pairs sleek, urban-contemporary design with luxury finishes and materials, reflecting industrial-luxe aesthetics throughout the 4,708-square-foot home. The glass-walled residence also features a secret bunk room, and striking vistas from downtown LA to the Pacific and the Hollywood Hills command attention from the 827 additional square feet of outdoor living space. An array of diverse raw materials creates dramatic contrasts in design: unique combinations of rich wood grains, hints of metallic glazes, and hammered metal cabinetry provide a relaxing and comfortable pied-à-terre, the epitome of modern, sophisticated urban living.
First and foremost, culture and a team-centric mentality are the most important attributes we look for in a future team member. We are focused on creating and maintaining a comfortable, collaborative, and inclusive environment, where everyone has a voice and all suggestions will be given consideration, as long as they are delivered in a productive and respectful manner. This ideology is at the core of who we are and want to be. If you're still interested, keep reading...
The primary responsibility of this position is to provide onsite technical support, mainly to small and medium-size businesses with a presence in the NYC Metro Area. The future team member would work mostly at client sites in the NYC Metro Area and occasionally from our Englewood Cliffs, NJ office. Currently we support 300+ clients in a wide array of industries. Access to transportation is required to travel to clients in NY and NJ.
In our industry, and more specifically the MSP sector, it's difficult to capture all facets of a position, but we'll try our best, and if this opportunity sounds like something all parties are interested in exploring further, we'll move things along to the next steps.
Qualifications*
*While years of experience and certifications are pluses, we recognize that intelligence, ability and potential are far more important.
Some other technical facets, details or requirements of the position may include, but not limited to:
Manage and execute projects to meet deadlines and client expectations.
Main Duties
Work with hardware and software vendors to verify timely product delivery and ensure that new equipment is installed and ready to operate on schedule.
Attributes
| https://newyork.us.jobs/jobs/network-doctor/field-systems-engineer-new-york/1573301552672867141
Clay minerals are important reactive centers in the soil system. Their interactions with microorganisms are ubiquitous and wide-ranging, affecting growth and function, interactions with other organisms, including plants, biogeochemical processes and the fate of organic and inorganic pollutants. Clay minerals have a large specific surface area and cation exchange capacity (CEC) per unit mass, and are abundant in many soil systems, especially those of agricultural significance. They can adsorb microbial cells, exudates, and enzymes, organic and inorganic chemical species, nutrients, and contaminants, and stabilize soil organic matter. Bacterial modification of clays appears to be primarily due to biochemical mechanisms, while fungi can exhibit both biochemical and biomechanical mechanisms, the latter aided by their exploratory filamentous growth habit. Such interactions between microorganisms and clays regulate many critical environmental processes, such as soil development and transformation, the formation of soil aggregates, and the global cycling of multiple elements. Applications of biomodified clay minerals are of relevance to the fields of both agricultural management and environmental remediation. This review provides an overview of the interactions between bacteria, fungi and clay minerals, considers some important gaps in current knowledge, and indicates perspectives for future research. | https://discovery.dundee.ac.uk/en/publications/microbial-biomodification-of-clay-minerals |
This year marks the 20th anniversary of the Security Council taking up the protection of civilians in armed conflict on its agenda, as well as two important resolutions passed in 1999: Resolution 1265 on the protection of civilians in armed conflict and Resolution 1270, which included the first explicit protection of civilians mandate for a United Nations (UN) peacekeeping operation. This year also marks the 70th anniversary of the 1949 Geneva Conventions. We collectively urge Security Council members, the UN Secretary-General, and all UN Member States to take full advantage of the opportunity of these important anniversaries to meaningfully improve civilian protection in country-specific situations and advance an ambitious vision for the protection of civilians agenda.
There have been important strides in advancing the protection of civilians over the past twenty years, including through Security Council resolutions, the development of policy by the UN, and actions taken at the national level by governments and determined civil society actors to prioritize protection. These developments have been buoyed by the robust framework of international humanitarian law (IHL) and international human rights law (IHRL), which were developed to limit the impact of war on civilians and safeguard the security and dignity of human beings.
Yet, as we mark these important developments, civilians continue to suffer disproportionately from the devastating consequences of armed conflict. In Afghanistan, Central African Republic, Libya, Myanmar, Nigeria, South Sudan, Syria, Yemen and far too many other conflict situations, civilians are paying the highest price for the failure of parties to armed conflict – and those Member States that support them – to abide by the norms and laws that safeguard humanity.
Civilians are routinely targeted, as are the places in which they live, work, study, worship, or seek or provide medical care or humanitarian aid. Explosive weapons with wide-area effects are employed in populated areas, with devastating and generational consequences. Conflict-related sexual violence and gender-based violence are occurring at shocking levels, with women and girls facing heightened risk of sexual and gender-based violence during conflict. We are also witnessing a worrying retreat from multilateralism and the rules-based international order, which creates a permissive environment for violations and abuses against civilians in conflict zones.
The international community must collectively turn this worrying tide. We urge Security Council Members, the UN Secretary-General, and all UN Member States to take determined action to strengthen the protection of civilians and stand up for the norms and laws that are essential to safeguard civilians in conflict.
The upcoming UN Security Council Open Debate on the Protection of Civilians on May 23 is a crucial opportunity for Security Council members, the UN Secretary-General, and all UN Member States to make concrete commitments and pledges to strengthen the protection of civilians in armed conflict during the anniversary year and over the years to come. The following issues and recommendations should be the focus of collective action:
To Members of the Security Council: Use your voice and vote to prioritize the protection of civilians in the decisions and deliberations of the Council.
● Publicly recognize and affirm the protection of civilians in armed conflict as one of the core issues on the agenda of the Security Council. Recommit to fully implementing the provisions of Council resolutions on the protection of civilians, including resolutions 1894, 2175, 2286, and 2417, as well as thematic resolutions on children and armed conflict, women, peace and security, and sexual violence in armed conflict. Systematically call on all parties to armed conflict to take all feasible steps to ensure the protection of civilians. Respect and ensure respect for IHL by ceasing support for parties to armed conflict where there are serious allegations or risks of violations of IHL and violations or abuses of IHRL.
● Unequivocally condemn violations of IHL and violations or abuses of IHRL by all parties to armed conflict. This should include consistently condemning direct and indiscriminate attacks on civilians, deliberate targeting of schools, hospitals and other civilian infrastructure, and arbitrary denial of humanitarian access. Ensure that there are consequences for state and non-state actors who deliberately violate or disregard their obligations, including through accountability mechanisms. Consistently support the creation of international, independent investigative mechanisms in situations of armed conflict where there are significant civilian casualties. Commit to make the reports of such mechanisms public to bring greater transparency to the Security Council’s work in pursuit of accountability for grave violations and to deter future violations. Encourage parties to armed conflict to decisively and transparently investigate allegations of civilian harm committed by their forces.
● Strengthen the ability of UN peacekeeping operations to protect civilians by providing political support to these missions and ensuring they have adequate resources and capabilities to match their mandates, including Protection of Civilians Advisors, civilian and uniformed Gender Advisors, Women’s Protection Advisors, Child Protection Advisors, and the appropriate number of qualified human rights monitors. Proactively assess the performance of UN peacekeeping operations in delivering on protection of civilians mandates, including specific tasks for the protection of children, women, and people with disabilities, and ensure the full and effective implementation of the provisions of Security Council Resolution 2436 (2018). Ensure that the protection of civilians is prioritized in the context of downsizing, readjustment, or transition of peacekeeping operations.
● Support timely and decisive action aimed at preventing or ending the commission of genocide, crimes against humanity or war crimes. Publicly pledge not to vote against a credible draft resolution before the Security Council on timely and decisive action aimed at halting or preventing such crimes, in line with the Accountability Coherence and Transparency Group’s Code of Conduct (A/70/621, 2015).
● Regularly convene specific briefings or informal meetings on the protection of civilians in the context of country-specific situations on the Council’s agenda. Regularly invite UN officials with specific protection mandates and experts from local, national and international civil society to brief the Council on these issues, including speakers who can provide a gender- and age-specific analysis.
To the UN Secretary-General: Deliver on commitments to lead a “global effort” in support of the protection of civilians. Speak truth to power for civilians caught in conflict.
● Follow through on the commitment in your 2017 report on the protection of civilians in armed conflict to launch a “global effort” in support of the agenda. Deliver an ambitious vision to strengthen the protection of civilians in armed conflict today and over the next twenty years. Mobilize senior UN leaders and the agencies, offices, and departments of the UN behind this effort.
● Demand an end to attacks against civilians and strongly and publicly condemn violations of IHL and violations and abuses of IHRL by all parties to armed conflict. Press parties to armed conflict to transparently investigate and thoroughly report on allegations of civilian harm.
Spare no effort in promoting accountability for violations of IHL and violations and abuses of IHRL through national, regional, ad hoc, and international judicial mechanisms, including the International Criminal Court.
● Speak out forcefully against conflict-related sexual violence, gender-based violence, disability-based violence, and all grave violations of children’s rights in armed conflict. Fully exercise your authority in listing in your reports all parties to armed conflicts found responsible for perpetrating conflict-related sexual violence and any of the six grave violations against children in armed conflict. Use your influence, good offices, and the development of Action Plans to ensure these parties take meaningful steps to address the reasons for their listing.
● Ensure UN peacekeeping operations fully implement their mandates to protect civilians and take a comprehensive and whole-of-mission approach to protection. Vigorously address any incidents of underperformance or failure to protect civilians, including through accountability measures. Take steps to ensure that peacekeeping operations minimize harm to civilians, including through support to national security forces or parallel military operations, and ensure the full implementation of the UN Human Rights Due Diligence Policy on UN Support to Non-UN Security Forces. Ensure that UN peacekeeping operations safely and meaningfully engage local communities on their protection needs, taking care to ensure that all groups, including women, youth, children, and people living with disabilities, are proactively engaged so that their perspectives and capacities shape mission efforts to respond to protection threats.
● Establish a system-wide approach to record civilian harm and ensure that UN peacekeeping operations, special political missions, and other relevant UN agencies or offices in the field have the capacity and guidance to proactively monitor, analyze trends, and publicly report on civilian harm. Regularly share gender-, disability- and age- disaggregated information and analysis on protection of civilians trends with the Security Council to better inform its deliberations and decision-making.
To All UN Member States: Prioritize the protection of civilians at the national level, share and systematize good practices, and ensure full compliance with IHL and IHRL.
● Re-state your full commitment to upholding obligations under the 1949 Geneva Conventions and their Additional Protocols, as well as all relevant IHRL conventions. Accede to and implement any outstanding relevant treaties and conventions, including Additional Protocol I and II to the Geneva Conventions and the Optional Protocol on the Involvement of Children in Armed Conflict (OPAC). Publicly commit to prioritize the protection of civilians at the national level, including through the adoption and implementation of a national policy framework on the protection of civilians, and the establishment of specific policies and mechanisms to mitigate harm to civilians and respond to civilian harm. Further commit to the systematic collection of information and disaggregated data regarding civilian harm, and accept and encourage information from civil society regarding threats to civilians and civilian harm incidents. Fully promote and ensure accountability and transparency for violations of IHL and IHRL.
● Adopt and implement key policies and political declarations related to the protection of civilians agenda, including: developing, implementing and financing National Action Plans on Women, Peace and Security, and endorsing and implementing the Paris Principles and the Safe Schools Declaration.
● Support efforts towards the adoption of a multilateral political declaration on explosive weapons in populated areas during the 20th anniversary year. Such a declaration should commit states to avoid the use of explosive weapons with wide-area effects in populated areas given their devastating humanitarian impact on individuals and communities, including deaths, injuries and damage to vital civilian infrastructure, and the high likelihood of indiscriminate effects. Commit to develop strong national standards and restrictions on the use of explosive weapons with wide-area effects in populated areas. Review and strengthen policies and practices with a view to avoiding the use of explosive weapons in populated areas. Gather and make available relevant data, including through civilian harm tracking and civilian casualty recording processes. Contribute to assisting victims and their communities in addressing civilian harm from the effects of explosive weapons.
● Publicly recognize that the protection of civilians must be a priority objective in any security partnership and share best practices that would enable improvements in the protection of civilians by partner security forces. Clearly identify conditions regarding the protection of civilians that would trigger downgrading or termination of security partnerships. Strictly comply with the Arms Trade Treaty, which can help protect civilians in even the most difficult situations by placing IHL and IHRL at the center of decisions on whether or not to transfer arms.
● Reaffirm the core humanitarian principles, including that of impartiality which makes no distinction in the protection of rights of those at risk on the basis of nationality, race, gender, religious belief, class or political opinions, and states that humanitarian action should be independent and free from political influence. Recommit to facilitating timely and safe access to humanitarian assistance and protection to affected civilians, without any obstacles created by disproportionate military tactics or unreasonable bureaucratic impediments. Include humanitarian exemptions in any counter-terrorism legislation and policies to prevent unintended consequences or restrictions on humanitarian assistance. Explicitly condemn instances of killings and attacks on humanitarian and medical workers and ensure accountability for such attacks.
● Publicly recognize the importance of UN peacekeeping operations fully delivering on mandates to protect civilians. Take steps to implement the provisions of the Declaration of Shared Commitments on UN Peacekeeping Operations, particularly those commitments on strengthening the protection of civilians, improving performance and accountability, and sustaining peace, in order to ensure that momentum behind peacekeeping reform is maintained. Endorse and implement the Kigali Principles on the Protection of Civilians and the Vancouver Principles on Peacekeeping and the Prevention of the Recruitment and Use of Child Soldiers.
Endorsing Organizations:
Action Against Hunger
Amnesty International
Article 36
CARE
Center for Civilians in Conflict
Child Fund Alliance
Concern Worldwide US
FIDH
Global Centre for the Responsibility to Protect
Global Coalition to Protect Education from Attack
Human Rights Watch
Humanity & Inclusion
InterAction
The International Network on Explosive Weapons
International Rescue Committee
Norwegian Refugee Council
Oxfam
PAX
Save the Children
War Child
Watchlist on Children and Armed Conflict
World Vision International
Using Artist Spotlight posts and KhanAcademy resources as examples, present biographical and contextual information about a SCULPTOR whose work supports and inspires the sculpture that YOU are currently making. REMEMBER:
- The point of this research is to support your artistic process and to strengthen the development of your ideas. Use this artist exemplar to better understand and explain your choices of SUBJECT, COMPOSITION, CONTENT, and FORM (style, medium, technique).
- Additionally, this research should also be helpful to OTHERS. These posts will be used by your classmates to learn about sculptors beyond those presented by the teacher. These investigations should allow both you and your classmates to further understand how art history plays an ongoing role in contemporary art.
INCLUDE THE FOLLOWING COMPONENTS:
- Heading. "AWARENESS: Artist Spotlight #15 - Artist's name and birth/death dates"
- A picture of the artist.
- At least ONE of their most well-known sculptures, including credit line.
- At least ONE of an earlier and/or other sculpture, including credit line, so we can get a sense of their depth/breadth/development as an artist.
- A statement that explains the direct connection to YOUR sculpture. Why did you choose THIS sculptor to research? What is it about THIS sculptor's work (subject, composition, content, medium, process, etc.) that will help you with YOUR sculpture?
- Important facts/background info. (biographical, stylistic, contextual, etc.); a brief "report" to summarize their life and work - the things that made them famous and important to art history (writing & honor rules apply; cite/list your sources).
- Supporting information (videos, articles, websites, etc.) that will help explain the "who/what/where/why/how, etc." In addition to the information presented in your report, these resources should contain facts needed to answer the questions that you will write. Be selective - there will likely be many resources to choose from, but consider the time that you will be asking your classmate to spend when reviewing them (for example, don't include videos that are an hour long unless you can direct the reader to a specific, short segment that contains the info. that you need them to understand). NOTE: all links must be active.
- THREE questions that, when answered, will allow for thorough comprehension and review of the provided resources, the artist's place in art history, their impact on the art world, the reason(s) they work in 3D, AND the reader's ability to make meaning and find inspiration from their life and work.
Groundbreaking Study Maps Key Brain Circuit
Biologists have long wondered how neurons from different regions of the brain actually interconnect into integrated neural networks, or circuits. A classic example is a complex master circuit projecting across several regions of the vertebrate brain called the basal ganglia. It’s involved in many fundamental brain processes, such as controlling movement, thought, and emotion.
In a paper published recently in the journal Nature, an NIH-supported team working in mice has created a wiring diagram, or connectivity map, of a key component of this master circuit that controls voluntary movement. This groundbreaking map will guide the way for future studies of the basal ganglia’s direct connections with the thalamus, which is a hub for information going to and from the spinal cord, as well as its links to the motor cortex in the front of the brain, which controls voluntary movements.
This 3D animation drawn from the paper’s findings captures the biological beauty of these intricate connections. It starts out zooming around four of the six horizontal layers of the motor cortex. At about 6 seconds in, the video focuses on nerve cell projections from the thalamus (blue) connecting to cortex nerve cells that provide input to the basal ganglia (green). It also shows connections to the cortex nerve cells that input to the thalamus (red).
At about 25 seconds, the video scans back to provide a quick close-up of the cell bodies (green and red bulges). It then zooms out to show the broader distribution of nerve cells within the cortex layers and the branched fringes of corticothalamic nerve cells (red) at the top edge of the cortex.
The video comes from scientific animator Jim Stanis, University of Southern California Mark and Mary Stevens Neuroimaging and Informatics Institute, Los Angeles. He collaborated with Nick Foster, lead author on the Nature paper and a research scientist in the NIH-supported lab of Hong-Wei Dong at the University of California, Los Angeles.
The two worked together to bring to life hundreds of microscopic images of this circuit, known by the unusually long, hyphenated name: the cortico-basal ganglia-thalamic loop. It consists of a series of subcircuits that feed into a larger signaling loop.
The subcircuits in the loop make it possible to connect thinking with movement, helping the brain learn useful sequences of motor activity. The looped subcircuits also allow the brain to perform very complex tasks such as achieving goals (completing a marathon) and adapting to changing circumstances (running uphill or downhill).
Although scientists had long assumed the cortico-basal ganglia-thalamic loop existed and formed a tight, closed loop, they had no real proof. This new research, funded through NIH’s Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative, provides that proof showing anatomically that the nerve cells physically connect, as highlighted in this video. The research also provides electrical proof through tests that show stimulating individual segments activate the others.
Detailed maps of neural circuits are in high demand. That’s what makes results like these so exciting to see. Researchers can now better navigate this key circuit not only in mice but other vertebrates, including humans. Indeed, the cortico-basal ganglia-thalamic loop may be involved in a number of neurological and neuropsychiatric conditions, including Huntington’s disease, Parkinson’s disease, schizophrenia, and addiction. In the meantime, Stanis, Foster, and colleagues have left us with a very cool video to watch.
About the NIH Director
Appointed the 16th Director of NIH by President Barack Obama and confirmed by the Senate, he was sworn in on August 17, 2009. On June 6, 2017, President Donald Trump announced his selection of Dr. Collins to continue to serve as the NIH Director.
Over £10,000 raised for Grief Encounter
Gloucestershire Cricket, together with Rainbow @ Grief Encounter, raised £10,495 in August, along with increased awareness for the Bristol-based charity, with more still to come.
Rainbow Day was a charity day held on August 4 at The Bristol County Ground as Gloucestershire faced Sussex Sharks in the Vitality Blast.
Limited-edition Rainbow Day shirts and caps were sold in aid of the charity with the shirts selling out online and in the Club shop on the day with 100% of proceeds going to the charity.
Raffles for the signed Rainbow Day shirts of Michael Klinger, Andrew Tye and James Bracey took place on the day as well as bucket collections which raised an additional £2,000.
Gloucestershire players David Payne, Jack Taylor and Miles Hammond put their match-worn shirts up for auction on social media, adding to the already impressive sum.
Tom Smith, who initiated the charity day, said: “The Rainbow Centre are the real winners of this one. It’s been amazing to spread awareness about the charity.”
If you are reading this blog, it is because you, like a high percentage of the population, have started to look for information on the Internet instead of in books and printed media. This use of technology has been increasing for the last two decades, but since 2020 its growth has been exponential. Today, children seem to be born with this knowledge, learning to use a tablet or a cell phone before they can talk.
Therefore, it is essential to have a protection strategy: a plan that allows us to live securely in the digital world. According to the Inter-American Development Bank (IDB), only 14 of its 26 member countries in Latin America have a National Digital Strategy. The IDB document focuses on how countries can better organize the use of technology and close access gaps in order to keep children, adolescents, and adults safer.
With the help of this document, countries are expected to develop a digital strategy that covers:
- Making the rights of children and teenagers effective in the digital environment.
- Using technology according to ethical and responsible principles.
- Providing cybersecurity guidance.
- Managing the content and data shared by children and adolescents.
- Teaching parents to use parental controls on their children’s devices.
- Informing children and teenagers about governmental services that can help when there are problems or digital crimes to report.
What makes this population group more vulnerable? Their age, their innocence, and their need to assert themselves before the world and build themselves as individuals are more pronounced in children and teenagers than in adults. Like adults, children and adolescents seek social acceptance, interpersonal connection, and fun through digital media. How can we protect them? First, by learning ourselves and protecting those of us who care for and educate them.
It is important to explain to them why they should not share personal information on digital platforms and be able to guide them in good practices. Doing these activities together can help build a bond of trust. Invite them to create an avatar instead of posting a picture of themselves and do the same yourself, lead by example. Teach them how to use privacy settings, choose the right people to share their information, photos, and activities with, and talk to them about the risks that exist in the virtual world.
In addition to adjusting security settings and activating parental controls to ensure they don’t access websites or content they shouldn’t be seeing, we should teach our kids about posting online. Deleting a post does not mean it is permanently gone; all of their online posts, comments, and shares are part of their digital footprint. We should also make sure they understand the importance of privacy and how much personal information is too much to share online. Remember that identifying information (names, dates of birth, school names, and hometown), if exposed in a data breach, could make them vulnerable. We recommend establishing a series of rules, such as only sharing images with close family members, asking before sending a picture, and using other names on their online profiles.
How do we build digital care for the well-being of children and adolescents?
Communication is key, so the first step is to talk. Sharing personal experiences that happen to us even as adults helps to create a bond of trust and receptivity.
When talking to children it is important to approach the subject through metaphors and situations or concepts that are familiar to them, for example, caring for a pet or personal or physical care. With this, we are communicating the message that they should take care of themselves not only individually but also collectively, because, just as in the pandemic, when using a mask, we take care of the people we love, the same happens in the digital world: if we have good security, we are taking care of our family and friends.
- Communication in the cyber world is here to stay, so prohibition will not be effective. It is much better to help them find and choose safe spaces where they can communicate and play. This will also allow us to know what kinds of applications, platforms, and tools they use on their devices.
- Another recommended practice is to set fixed times for device use. When they are found to be accessing content that is not appropriate for their age, it is recommended to have a conversation in which the risks are explained and, if necessary, to block access to the site, game, application, etc.
- Establish a VPN connection in your home. This connection helps prevent strangers from identifying members of your family through their online traffic.
- Finally, it is important to teach them the parallels of the dangers between the physical and digital worlds. For example, if they are taught that it is dangerous to talk to strangers in the physical world, the same should apply to the web.
Today, technology occupies a very important place, and with the arrival of the metaverse it will become more and more important. Let’s treat cybersecurity as a shared responsibility: we must help build cybersecurity awareness from an early age, which will reduce the risks to which children are exposed.
Kimbal Musk, co-founder of The Kitchen Community, speaks during the annual Milken Institute Global Conference in Beverly Hills, California, U.S., on Tuesday, May 3, 2016.
Patrick T. Fallon | Bloomberg | Getty Images
Kimbal Musk, brother of Tesla CEO Elon Musk, sold 30,000 shares of the electric vehicle maker this month for roughly $25.6 million, according to a securities filing.
The younger Musk is a member of the automaker’s board. Another director, Antonio Gracias, sold more than 150,000 shares earlier this month, according to securities filings.
The sales come after a dramatic run for Tesla, which joined the S&P 500 late last year. Its stock price is up more than 400% over the past 12 months but has stalled in recent weeks and is down slightly over the last month.
Kimbal Musk’s sales came at just above $850 per share, according to securities filings. Tesla’s stock closed at just under $805 per share on Wednesday.
The sale represents about 5% of the younger Musk’s stake in Tesla. He held 600,000 shares as of October, according to FactSet, making him the fifth-largest insider stockholder. He also sold a large amount of stock last September.
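The reported figures are internally consistent; here is a quick back-of-the-envelope check (share counts and dollar amounts taken from the article, rounded as reported):

```python
# Sanity-check the arithmetic behind the reported sale figures.
shares_sold = 30_000         # shares sold this month, per the filing
total_proceeds = 25_600_000  # roughly $25.6 million, as reported
stake_before = 600_000       # shares held as of October, per FactSet

avg_price = total_proceeds / shares_sold    # implied average sale price
fraction_sold = shares_sold / stake_before  # share of his stake sold

print(f"Implied average price: ${avg_price:,.2f}")     # ~$853.33, i.e. "just above $850"
print(f"Fraction of stake sold: {fraction_sold:.0%}")  # 5%
```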
Musk, a restaurateur, was previously a board member of Chipotle Mexican Grill.
Coaching Cafe
July 20, 2017 @ 3:00 pm - 4:30 pm (Free)
“Introducing Culture in Organizations” (for Fordham IT staff only)
- Explore the concept of “culture”
- Reflect on how cultures emerge and develop
- Discuss how culture affects your job and our organization.
Develop your professional skills with Coaching Cafe, a small, discussion-based group that explores work-related challenges together. Open to all Fordham IT staff.
As we conclude our first week back at Proctor, there remains a cloak of uncertainty over the campus. Although students and faculty alike are as connected as ever, the current state of the school in regards to COVID-19 is new to everyone. Faculty and Staff spent the better part of the summer planning every aspect of school in hopes that we would be able to return to in-person academics this fall. Aside from classes, one of the staples of Proctor life needed to be adjusted as well: afternoon activities.
Every student who attends Proctor is required to participate in an afternoon activity each trimester. These activities range from sports, such as traditional team sports like soccer, field hockey, and football, to less traditional sports like mountain biking and cross country running to non-competitive activities like dance or the Woods Team. Each part of Proctor’s afternoon activities’ routine needed to be modified in order to keep the community safe from disease, one aspect being the Athletic Trainers. During the normal year, the afternoon brings a swarm of athletes to Proctor’s athletic training room in need of all kinds of treatment: tape, stim therapy, crutches, ice, etc. But with COVID added to the mix, the school cannot allow a large number of students to crowd into one room, so they have come up with a number of solutions. When asked how the Athletic Training Room has adjusted to the numerous COVID safety protocols, Kelly Griffin-Brown, one of the Athletic Trainers, confirmed how much has changed, “First thing is that we’re seeing kids by appointment for injuries, which we’ve never done before. Usually, it was an open-door, ‘come on in’ policy. With COVID we’ve had to separate six feet per patient, so we can only accommodate about four at a time.”
In order to account for all students so that nobody would be late for practice, the Athletic Trainers not only set up an appointment system but also managed to maintain the old policy by creating an outside station where athletes can stop by whenever and get the basic treatment they need. In addition, Kelly and Chris Jones (Proctor’s second athletic trainer) are asking kids to be as responsible as possible for their own basic treatment, as COVID has made the system a little less time-efficient. And although it may seem unnecessarily troublesome to some, Kelly insists that Afternoon Activities are crucial to Proctor’s core, “I think that Afternoon Activity is more than just sports. It’s what you can find passion in in your life and I think a lot of students identify with either a team or an activity that really gives them that sense of purpose. We are about experiential education and afternoon activities are as experiential as learning gets.”
In addition to the Athletic Training Room protocols, there were also a lot of changes made to the sports schedule, one of which was to reduce the number of competitions scheduled for the fall. When asked about the specific precautions taken by Proctor’s athletic teams, Director of Athletics Gregor Makechnie ‘90 shared, “Proctor collaborated with peer schools in the Lakes Region to create a set of shared practices that safeguard the health of students, coaches, and training staff.” Some of these practices include a mask mandate for all staff involved in games, numerous health screenings for all parties involved, and a very limited number of spectators. Perhaps most importantly, Gregor made it clear that these necessary precautions would not dampen the Afternoon Activity experience. “During the afternoons, meaningful relationships develop through shared experience. We build community. We develop the habits and skills necessary for physical, mental, and emotional health. And, importantly, we have fun!”
In times of uncertainty and dismay, such as this pandemic that continues to affect our lives, it is not surprising that the Proctor family has managed to continue to enjoy each and every day we share on this beautiful campus. And as long as this community continues to persevere, we will be able to preserve Proctor’s authenticity, in and out of the classroom. | https://blogs.proctoracademy.org/proctor-athletics-embracing-a-new-normal |
When undertaking pool removal, you normally have to drain the pool, drill holes at the bottom, demolish the top part, and fill it in with the rubble and extra soil. Removal of an above ground pool is less involved and only requires you to drain the pool, tear it down, and carry it away. Generally, demolishing an inground pool costs about $6,500, while removing an above ground pool costs about $2,200.
There are many benefits that may accrue from removing your pool.
This is the most popular method of demolishing a pool. It entails draining the pool, boring holes at the bottom, tearing down the pool's uppermost layer (18 to 36 inches), depositing the rubble at the bed of the pool, adding topsoil and dirt, and compacting the soil. This process can be done without the supervision of an engineering technician unless your local authorities require it.
Advantages:
Disadvantages:
This method of partial pool removal also entails draining the pool, boring holes at the bed of the pool, demolishing the topmost pool section (18 to 36 inches), and putting rubble at the bottom before filling and compacting. But unlike the previous method, the filling process is overseen by an engineering technician. Remember, this method is normally used only when the authorities make it mandatory, but it is highly recommended if you are not confident about the skills of your contractor.
Advantages:
Disadvantages:
This entails draining the pool and then removing all the materials (for example, liner, fiberglass, rebar, and concrete). After they are carried away, the pool area is filled in and compacted. All of this is done without the supervision of an engineer.
Advantages:
Disadvantages:
This method involves draining the pool, removing all materials (for example, fiberglass, liner, rebar, and gunite/concrete), and carrying them away. Next, the space formerly occupied by the pool is filled in and compacted under the direction of an engineer. The engineer then carries out a density test before issuing a final report certifying that the area is suitable for building.
Advantages:
Disadvantages:
There are various kinds of above ground pools, but the removal process is essentially the same for all of them. They are also much easier to remove than inground pools. Normally, it is advisable to hire a contractor with a good reputation to manage the removal of your pool.
But if you prefer to do it yourself, here is how to do it:
1) Draining the Pool: The easiest method is to use a pump, and normally there is a sewer point within a hundred feet of the pool.
2) Tearing it Down: This process depends on the type of pool on your property, but it usually entails unscrewing the bolts and tearing down the walls with a sledgehammer.
3) Carrying it Away: Get a junk removal company or rent a dumpster to remove the resulting debris, but be sure to recycle what you can. This will minimize costs and help keep debris out of the landfill.
4) Conducting Site Repair: When the pool has been removed, a strip of dead grass (or sometimes stone or sand) will be left where the pool once stood. If a new pool will replace the old one, the area can be left as it is. But if you prefer to cover the area with grass, ask whether your contractor can repair the lawn at a reasonable cost.
This depends on a number of factors such as:
Inground pools
On average, demolishing an inground pool costs between $3,500 and $7,000 for an average-sized pool that is relatively easy to access. Be aware, though, that costs can soar well beyond $10,000 for a large pool with a big deck that is hard to access. Full demolition methods typically cost between $7,000 and $15,000.
Above ground pools
The cost of removing an above ground pool varies significantly, but it normally costs less than inground pool removal.
While this depends on your local authorities, you need a permit in the majority of cases. The cost may range from nothing to a few hundred dollars depending on local municipal requirements. Also note that most local authorities have regulations stating how pools should be removed. Some have zoning codes or ordinances that require property owners to completely remove pools, not just fill them in. Where the authorities permit partial pool removal and demolition, there may be specific rules on the method to be used when filling in the pool.
Depending on how much heavy equipment is used during pool removal, various structures on your property such as driveways, sewer connections, landscaping, and septic tanks can get damaged. That is why it is critical to work with a knowledgeable pool removal contractor. The contractor will carefully think about the best way to access the pool and the type and size of equipment that is best suited to your particular yard and swimming pool.
Inground pool removal projects are expensive and so it is advisable to get a couple of estimates and several opinions on the best method of carrying out the pool demolition project.
You should get a written quote that includes details such as: | https://lakenormanexcavating.com/removing-demolishing-pool/ |
Veterinary Medicine Overview
If you hear the word "veterinary," you likely immediately think of animals. And rightly so, since veterinary medicine is the branch of medicine that deals with the prevention, diagnosis, and treatment of disease in animals of all types, from family pets to farm livestock and zoo animals.
What you may not know is that veterinary health care workers also contribute to human public health by working to control zoonotic diseases (those passed from non-human animals to humans), such as Lyme disease and West Nile virus.
The scope of veterinary medicine is wide, covering all animal species, both domesticated and wild, with a range of conditions that can affect different species. Veterinary medical workers include:
- Veterinarians: Doctors who protect the health of both animals and humans, veterinarians may have their own practice caring for companion animals; work in zoos, wildlife parks, or aquariums; focus on public health and regulatory medicine; enter academia or research; or pursue other career paths.
- Veterinary technicians: These workers assist veterinarians with surgery, laboratory procedures, radiography, anesthesiology, treatment and nursing and client education. Almost every state requires a veterinary technician to pass a credentialing exam to ensure a high level of competency.
- Veterinary assistants: They support the veterinarian and/or the veterinary technician in their daily tasks. The assistant may be asked to perform kennel work, assist in the restraint and handling of animals, feed and exercise the animals, or spend time on clerical duties.
Other roles in a veterinary office may include a receptionist and a practice manager (someone who manages the office’s business functions).
Animal behaviorists are also a part of the veterinary field though they are not usually found in a veterinarian’s office. They study the way animals behave and try to determine what causes certain types of behavior and what factors can prompt behavior change. Most animal behaviorists are employed in academic settings, usually in biology or psychology departments, where they teach and engage in high-level research.
Learn More
- Explore the veterinary career paths you can take.
- Read a brochure about becoming a veterinarian.
- Read a brochure about becoming a veterinary technician.
- Read a brochure about the veterinary health care team.
- Read profiles of veterinarians.
The Association of American Veterinary Medical Colleges has reviewed this overview. | https://explorehealthcareers.org/field/veterinary-medicine/ |
We are pleased to offer many wonderful classes to our youngest congregants, with the help and guidance of many compassionate adults and youth, that stimulate the development of creative and critical thinking skills as they ponder some of life’s biggest questions: Who am I? To whom do I belong? What do I believe? Is what I believe moral and ethical? May we all engage in this important spiritual work together!
We Are Many We Are One lessons encourage children to learn and play cooperatively, express their feelings, appreciate how we are all alike yet different, view nature as a source of gifts that needs our care, and celebrate different religions and cultures of the world.
This curriculum helps children develop a sense of home that is grounded in faith. Leaders ask questions about the purpose of having a home and the functions a home serves, for us as humans and for other animals. The program speaks of home as a place of belonging and explores the roles each of us play in the homes where we live. The program introduces the concept of a “faith home”—your congregation—which shares some characteristics with a family home. Like a family home, a faith home offers its members certain joys, protections, and responsibilities.
Participants embark on a pilgrimage of faith, exploring how Unitarian Universalism translates into life choices and everyday actions. In each session, they hear historic or contemporary examples of Unitarian Universalist faith in action. Stories about real people model how participants can activate their own personal agency – their capacity to act faithfully as Unitarian Universalists – in their own lives, and children have regular opportunities to share and affirm their own stories of faithful action. Through sessions structured around the Unitarian Universalist Principles, Faithful Journeys demonstrates that our Principles are not a dogma, but a credo that individuals can affirm with many kinds of action. Over the course of the program, children discover a unity of faith in the many different ways Unitarian Universalists, including themselves, can act on our beliefs.
The purpose of Riddle and Mystery is to assist children in their own search for understanding. Each of the sessions introduces and processes a Big Question. The first three echo Paul Gauguin’s famous triptych: Where do we come from? What are we? Where are we going? The next ten, including Does God exist? and What happens when you die?, could be found on almost anyone’s list of basic life inquiries. The final three are increasingly Unitarian Universalist: Can we ever solve life’s mystery? How can I know what to believe? What does Unitarian Universalism mean to me?
The success of our religious education program depends on the volunteer support of all of our congregants. We encourage everyone to become part of the excitement and learning that happens when our children get together in classes and activities. To volunteer in the RE program, sign up here.
If you have questions about our children’s classes or volunteering, contact the Director of Religious Education. | http://uurochmn.org/religious-education/childrens-religious-ed/sunday-classes-chiild/