content
stringlengths 0
1.88M
| url
stringlengths 0
5.28k
|
---|---|
---
abstract: |
We consider the strong Ramsey-type game $\mathcal{R}^{(k)}(\mathcal{H}, \aleph_0)$, played on the edge set of the infinite complete $k$-uniform hypergraph $K^k_{\mathbb{N}}$. Two players, called FP (the first player) and SP (the second player), take turns claiming edges of $K^k_{\mathbb{N}}$ with the goal of building a copy of some finite predetermined $k$-uniform hypergraph $\mathcal{H}$. The first player to build a copy of $\mathcal{H}$ wins. If no player has a strategy to ensure his win in finitely many moves, then the game is declared a draw.
In this paper, we construct a $5$-uniform hypergraph $\mathcal{H}$ such that $\mathcal{R}^{(5)}(\mathcal{H}, \aleph_0)$ is a draw. This is in stark contrast to the corresponding finite game $\mathcal{R}^{(5)}(\mathcal{H}, n)$, played on the edge set of $K^5_n$. Indeed, using a classical game-theoretic argument known as *strategy stealing* and a Ramsey-type argument, one can show that for every $k$-uniform hypergraph $\mathcal{G}$, there exists an integer $n_0$ such that FP has a winning strategy for $\mathcal{R}^{(k)}(\mathcal{G}, n)$ for every $n \geq n_0$.
author:
- 'Dan Hefetz [^1]'
- 'Christopher Kusch [^2]'
- 'Lothar Narins [^3]'
- 'Alexey Pokrovskiy [^4]'
- 'Clément Requilé [^5]'
- 'Amir Sarid [^6]'
title: 'Strong Ramsey Games: Drawing on an infinite board'
---
Introduction
============
The theory of positional games on graphs and hypergraphs goes back to the seminal papers of Hales and Jewett [@HJ] and of Erdős and Selfridge [@ES]. The theory has enjoyed explosive growth in recent years and has matured into an important area of combinatorics (see the monograph of Beck [@TTT], the recent monograph [@HKSSbook] and the survey [@Krivelevich]). There are several interesting types of positional games, the most natural of which are the so-called *strong games*.
Let $X$ be a (possibly infinite) set and let $\mathcal{F}$ be a family of finite subsets of $X$. The *strong game* $(X, \mathcal{F})$ is played by two players, called FP (the first player) and SP (the second player), who take turns claiming previously unclaimed elements of the *board* $X$, one element per move. The winner of the game is the *first* player to claim all elements of a *winning set* $A \in \mathcal{F}$. If no player wins the game after some finite number of moves, then the game is declared a *draw*. A very simple but classical example of this setting is the game of Tic-Tac-Toe.
Unfortunately, strong games are notoriously hard to analyze and to date not much is known about them. A simple yet elegant game-theoretic argument, known as *strategy stealing*, shows that FP is guaranteed at least a draw in any strong game. Moreover, using Ramsey Theory, one can sometimes prove that draw is impossible in a given strong game and thus FP has a winning strategy for this game. Note that these arguments are purely existential and thus even if we know that FP has a winning/drawing strategy for some game, we might not know what it is. Explicit winning strategies for FP in various natural strong games were devised in [@FH] and in [@FHkcon]. These strategies are based on fast winning strategies for *weak* variants of the games in question. More on fast winning strategies can be found in [@HKSS] and [@CFGHL].
In this paper we study a natural family of strong games. For integers $n \geq q \geq 3$, consider the strong Ramsey game $\mathcal{R}(K_q, n)$. The board of this game is the edge set of $K_n$ and the winning sets are the copies of $K_q$ in $K_n$. As noted above, by strategy stealing, FP has a drawing strategy in $\mathcal{R}(K_q, n)$ for every $n$ and $q$. Moreover, it follows from Ramsey’s famous Theorem [@Ramsey] (see also [@GRS] and [@CFSsurvey] for numerous related results) that, for every $q$, there exists an $n_0$ such that $\mathcal{R}(K_q, n)$ has no drawing position and is thus FP’s win for every $n \geq n_0$. An explicit winning strategy for FP in $\mathcal{R}(K_q, n)$ is currently known (and is very easy to find) only for $q = 3$ (and every $n \geq 5$). Moreover, for every $q \geq 4$, we do not know what is the smallest $n_0 = n_0(q)$ such that $\mathcal{R}(K_q, n)$ is FP’s win for every $n \geq n_0$. Determining this value seems to be extremely hard even for relatively small values of $q$.
Consider now the strong game $\mathcal{R}(K_q, \aleph_0)$. Its board is the edge set of the countably infinite complete graph $K_{\mathbb{N}}$ and its winning sets are the copies of $K_q$ in $K_{\mathbb{N}}$. Even though the board of this game is infinite, strategy stealing still applies, i.e., FP has a strategy which ensures that SP will never win $\mathcal{R}(K_q, \aleph_0)$. Clearly, Ramsey’s Theorem applies as well, i.e., any red/blue colouring of the edges of $K_{\mathbb{N}}$ yields a monochromatic copy of $K_q$. Hence, as in the finite version of the game, one could expect to combine these two arguments to deduce that FP has a winning strategy in $\mathcal{R}(K_q, \aleph_0)$. The only potential problem with this reasoning is that, by making infinitely many threats (which are idle, as he cannot win), SP might be able to delay FP indefinitely, in which case the game would be declared a draw. As with the finite version, $\mathcal{R}(K_3, \aleph_0)$ is an easy win for FP. The question whether $\mathcal{R}(K_q, \aleph_0)$ is a draw or FP’s win is wide open for every $q \geq 4$. In fact, using different terminology, it was posed by Beck [@TTT] as one of his “7 most humiliating open problems”, where he considers even the case $q = 5$ to be “hopeless” (see also [@Leader] and [@Bowler] for related problems).
Playing Ramsey games, we do not have to restrict our attention to cliques, or even to graphs for that matter. For every integer $k \geq 2$ and every $k$-uniform hypergraph $\mathcal{H}$, we can study the finite strong Ramsey game $\mathcal{R}^{(k)}(\mathcal{H}, n)$ and the infinite strong Ramsey game $\mathcal{R}^{(k)}(\mathcal{H}, \aleph_0)$. The board of the finite game $\mathcal{R}^{(k)}(\mathcal{H}, n)$ is the edge set of the complete $k$-uniform hypergraph $K_n^k$ and the winning sets are the copies of $\mathcal{H}$ in $K_n^k$. As in the graph case, strategy stealing and Hypergraph Ramsey Theory (see, e.g., [@CFS]) shows that FP has winning strategies in $\mathcal{R}^{(k)}(\mathcal{H}, n)$ for every $\mathcal{H}$ and every sufficiently large $n$. The board of the infinite game $\mathcal{R}^{(k)}(\mathcal{H}, \aleph_0)$ is the edge set of the countably infinite complete $k$-uniform hypergraph $K_{\mathbb{N}}^k$ and the winning sets are the copies of $\mathcal{H}$ in $K_{\mathbb{N}}^k$. As in the graph case, strategy stealing shows that FP has drawing strategies in $\mathcal{R}^{(k)}(\mathcal{H}, \aleph_0)$ for every $\mathcal{H}$. Hence, here too one could expect to combine strategy stealing and Hypergraph Ramsey Theory to deduce that FP has a winning strategy in $\mathcal{R}^{(k)}(\mathcal{H}, \aleph_0)$ for every $\mathcal{H}$.
Our main result shows that, while it might be true that $\mathcal{R}(K_q, \aleph_0)$ is FP’s win for any $q \geq 4$, basing this solely on strategy stealing and Ramsey Theory is ill-founded.
\[th::main\] There exists a $5$-uniform hypergraph ${\mathcal H}$ such that the strong game $\mathcal{R}^{(5)}(\mathcal{H}, \aleph_0)$ is a draw.
Apart from being very surprising, Theorem \[th::main\] might indicate that strong Ramsey games are even more complicated than we originally suspected. We discuss this further in Section \[sec::openprob\].
The rest of this paper is organized as follows. In Section \[sec::notation\] we introduce some basic notation and terminology that will be used throughout this paper. In Section \[sec::sufficientCondition\] we prove that $\mathcal{R}^{(k)}(\mathcal{H}, \aleph_0)$ is a draw whenever $\mathcal{H}$ is a $k$-uniform hypergraph which satisfies certain conditions. Using the results of Section \[sec::sufficientCondition\], we construct in Section \[sec::example\] a $5$-uniform hypergraph ${\mathcal H}_5$ for which $\mathcal{R}^{(5)}(\mathcal{H}_5, \aleph_0)$ is a draw, thus proving Theorem \[th::main\]. Finally, in Section \[sec::openprob\] we present some open problems.
Notation and terminology {#sec::notation}
========================
Let $\mathcal{H}$ be a $k$-uniform hypergraph. We denote its vertex set by $V(\mathcal{H})$ and its edge set by $E(\mathcal{H})$. The *degree* of a vertex $x \in V(\mathcal{H})$ in $\mathcal{H}$, denoted by $d_{\mathcal H}(x)$, is the number of edges of $\mathcal{H}$ which are incident with $x$. The *minimum degree* of $\mathcal{H}$, denoted by $\delta(\mathcal{H})$, is $\min \{d_{\mathcal H}(u) : u \in V(\mathcal{H})\}$. We will often use the terminology *$k$-graph* or simply *graph* rather than $k$-uniform hypergraph.
A *tight path* is a $k$-graph with vertex set $\{u_1, \ldots, u_t\}$ and edge set $e_1, \ldots, e_{t-k+1}$ such that $e_i = \{u_i, \ldots, u_{i+k-1}\}$ for every $1 \leq i \leq t-k+1$. The *length* of a tight path is the number of its edges.
We say that a $k$-graph $\mathcal{F}$ has a *fast winning strategy* if a player can build a copy of $\mathcal{F}$ in $|E(\mathcal{F})|$ moves (note that this player is not concerned about his opponent building a copy of $\mathcal{F}$ first).
Sufficient conditions for a draw {#sec::sufficientCondition}
================================
In this section we list several conditions on a $k$-graph $\mathcal{H}$ which suffice to ensure that $\mathcal{R}^{(k)}(\mathcal{H}, \aleph_0)$ is a draw.
\[th::HypergraphProperties\] Let $\mathcal{H}$ be a $k$-graph which satisfies all of the following properties:
(i)
: $\mathcal{H}$ has a degree $2$ vertex $z$;
(ii)
: $\delta(\mathcal{H} \setminus \{z\}) \geq 3$ and $d_{\mathcal H}(u) \geq 4$ for every $u \in V({\mathcal H}) \setminus \{z\}$;
(iii)
: $\mathcal{H} \setminus \{z\}$ has a fast winning strategy;
(iv)
: For every two edges $e, e' \in \mathcal{H}$, if $\phi : V(\mathcal{H} \setminus \{e, e'\}) \longrightarrow V(\mathcal{H})$ is a monomorphism, then $\phi$ is the identity;
(v)
: $e \cap r \neq \emptyset$ and $e \cap g \neq \emptyset$ holds for every edge $e \in \mathcal{H}$, where $r$ and $g$ are the two edges incident with $z$ in $\mathcal{H}$.
(vi)
: $|V(\mathcal{H}) \setminus (r \cup g)| < k-1$.
Then $\mathcal{R}^{(k)}(\mathcal{H}, \aleph_0)$ is a draw.
Before proving this theorem, we will introduce some more notation and terminology which will be used throughout this section. Let $e \in \mathcal{H}$ be an arbitrary edge, let $\mathcal{F}$ be a copy of $\mathcal{H} \setminus \{e\}$ in $K_{\mathbb{N}}^k$ and let $e' \in K_{\mathbb{N}}^k$ be an edge such that $\mathcal{F} \cup \{e'\} \cong \mathcal{H}$. If $e'$ is free, then it is said to be a *threat* and $\mathcal{F}$ is said to be *open*. If $\mathcal{F}$ is not open, then it is said to be *closed*. Moreover, $e'$ is called a *standard threat* if it is a threat and $e \in \{r,g\}$. Similarly, $e'$ is called a *special threat* if it is a threat and $e \notin \{r,g\}$.
Next, we state and prove two simple technical lemmata.
\[lem::missing1edge\] Let $\mathcal{H}$ be a $k$-graph which satisfies Properties (i), (ii) and (iv) from Theorem \[th::HypergraphProperties\]. Then, for every edge $e \in \mathcal{H}$, if $\phi : V(\mathcal{H} \setminus \{e\}) \longrightarrow V(\mathcal{H})$ is a monomorphism, then $\phi$ is the identity.
Fix an arbitrary edge $e \in \mathcal{H}$ and an arbitrary monomorphism $\phi : V(\mathcal{H} \setminus \{e\}) \longrightarrow V(\mathcal{H})$. It follows by Properties (i) and (ii) that there exists an edge $f \in \mathcal{H} \setminus \{e\}$ such that $V(\mathcal{H} \setminus \{e, f\}) = V(\mathcal{H})$. Hence, $\phi$ equals its restriction to $V(\mathcal{H} \setminus \{e, f\})$ which is the identity by Property (iv).
\[lem::uniquerg\] Let $\mathcal{H}$ be a $k$-graph which satisfies Properties (i) and (iv) from Theorem \[th::HypergraphProperties\]. For any given copy $\mathcal{H}'$ of $\mathcal{H} \setminus \{z\}$ in $K^k_{\mathbb{N}}$ and any vertex $x \in V(K^k_{\mathbb{N}}) \setminus V(\mathcal{H}')$, there exists a unique pair of edges $r', g' \in K^k_{\mathbb{N}}$ such that $x \in r' \cap g'$ and $\mathcal{H}' \cup \{r', g'\} \cong \mathcal{H}$.
Let $\mathcal{H}'$ be an arbitrary copy of $\mathcal{H} \setminus \{z\}$ in $K^k_{\mathbb{N}}$ and let $x \in V(K^k_{\mathbb{N}}) \setminus V(\mathcal{H}')$ be an arbitrary vertex. It is immediate from the definition of $\mathcal{H}'$ and Property (i) that there are edges $r', g' \in E(K^k_{\mathbb{N}})$ such that $x \in r' \cap g'$ and $\mathcal{H}' \cup \{r', g'\} \cong \mathcal{H}$. Suppose for a contradiction that there are edges $r'', g'' \in E(K^k_{\mathbb{N}})$ such that $\{r'', g''\} \neq \{r', g'\}$, $x \in r'' \cap g''$ and $\mathcal{H}' \cup \{r'', g''\} \cong \mathcal{H}$. Let $\phi : V(\mathcal{H}' \cup \{r',g'\}) \rightarrow V(\mathcal{H}' \cup \{r'', g''\})$ be an arbitrary isomorphism. The restriction of $\phi$ to $V(\mathcal{H}')$ is clearly a monomorphism and is thus the identity by Property (iv). Since $x$ is the only vertex in $(r' \cap g') \setminus V(\mathcal{H}')$ and in $(r'' \cap g'') \setminus V(\mathcal{H}')$, it follows that $\phi$ itself is the identity and thus $\{r',g'\} = \{r'',g''\}$ contrary to our assumption.
We are now in a position to prove the main result of this section.
*Proof of Theorem \[th::HypergraphProperties\]*. Let $\mathcal{H}$ be a $k$-graph which satisfies the conditions of the theorem and let $m = |E(\mathcal{H})|$. At any point during the game, let $\mathcal{G}_1$ denote FP’s current graph and let $\mathcal{G}_2$ denote SP’s current graph. We will describe a drawing strategy for SP. We begin by a brief description of its main ideas and then detail SP’s moves in each case. The strategy is divided into three stages. In the first stage SP quickly builds a copy of $\mathcal{H} \setminus \{z\}$, in the second stage SP defends against FP’s threats, and in the third stage (which we might never reach) SP makes his own threats.
**Stage I:** Let $e_1$ denote the edge claimed by FP in his first move. In his first $m-2$ moves, SP builds a copy of $\mathcal{H} \setminus \{z\}$ which is vertex-disjoint from $e_1$. SP then proceeds to Stage II.
**Stage II:** Immediately before each of SP’s moves in this stage, he checks whether there are a subgraph $\mathcal{F}_1$ of $\mathcal{G}_1$ and a free edge $e' \in K^k_{\mathbb{N}}$ such that $\mathcal{F}_1 \cup \{e'\} \cong \mathcal{H}$. If such $\mathcal{F}_1$ and $e'$ exist, then SP claims $e'$ (we will show later that, if such $\mathcal{F}_1$ and $e'$ exist, then they are unique). Otherwise, SP proceeds to Stage III.
**Stage III:** Let $\mathcal{F}_2$ be a copy of $\mathcal{H} \setminus \{z\}$ in $\mathcal{G}_2$ and let $z'$ be an arbitrary vertex of $K^k_{\mathbb{N}} \setminus (\mathcal{G}_1 \cup \mathcal{G}_2)$. Let $r', g' \in K^k_{\mathbb{N}}$ be free edges such that $z' \in r' \cap g'$ and $\mathcal{F}_2 \cup \{r', g'\} \cong \mathcal{H}$. If, once SP claims $r'$, FP cannot make a threat by claiming $g'$, then SP claims $r'$. Otherwise he claims $g'$.
It readily follows by Property (iii) that SP can play according to Stage I of the strategy (since $K^k_{\mathbb{N}}$ is infinite, it is evident that SP’s graph can be made disjoint from $e_1$). It is obvious from its description that SP can play according to Stage II of the strategy. Finally, since SP builds a copy of $\mathcal{H} \setminus \{z\}$ in Stage I and since $K^k_{\mathbb{N}}$ is infinite, it follows that SP can play according to Stage III of the strategy as well.
It thus remains to prove that the proposed strategy ensures at least a draw for SP. Since, trivially, FP cannot win the game in less than $m$ moves, this will readily follow from the next three lemmata which correspond to three different options for FP’s $(m-1)$th move.
\[lem::noThreat\] If FP’s $(m-1)$th move is not a threat, then he cannot win the game.
Assume that SP does not win the game. We will prove that, under this assumption, not only does FP not win the game, but in fact he does not even make a single threat throughout the game. We will prove by induction on $i$ that the following two properties hold immediately after FP’s $i$th move for every $i \geq m-1$.
(a)
: FP has no threat.
(b)
: Let ${\mathcal G}'_1$ denote FP’s graph immediately after his $(m-1)$th move. Then ${\mathcal G}_1 \setminus {\mathcal G}'_1$ consists of $i-m+1$ edges $e_m, \ldots, e_i$, where, for every $m \leq j \leq i$, $e_j$ contains a vertex $z_j$ such that $d_{{\mathcal G}_1}(z_j) = 1$.
Properties (a) and (b) hold for $i = m-1$ by assumption. Assume they hold for some $i \geq m-1$; we will prove they hold for $i+1$ as well. Since FP’s $(m-1)$th move is not a threat, SP’s $i$th move is played in Stage III. By the description of Stage III, in his $i$th move SP claims an edge $e' \in \{r', g'\}$, where both $r'$ and $g'$ contain a vertex $z'$ which is isolated in ${\mathcal G}_1$. If FP does not respond by claiming the unique edge of $\{r', g'\} \setminus \{e'\}$ in his $(i+1)$th move, then SP will claim it in his $(i+1)$th move and win the game contrary to our assumption (by Property (a), FP had no threat before SP’s $i$th move and thus cannot complete a copy of $\mathcal{H}$ in one move). It follows that Property (b) holds immediately after FP’s $(i+1)$th move. Suppose for a contradiction that Property (a) does not hold, i.e., that FP makes a threat in his $(i+1)$th move. As noted above, in his $i$th move, SP claims either $r'$ or $g'$ and, by our assumption that Property (a) does not hold immediately after FP’s $(i+1)$th move, in either case FP’s response is a threat. Hence, immediately after FP’s $(i+1)$th move, there exist free edges $r''$ and $g''$ and copies ${\mathcal F}^r$ and ${\mathcal F}^g$ of ${\mathcal H} \setminus \{z\}$ in ${\mathcal G}_1$ such that ${\mathcal F}^r \cup \{r', g''\} \cong {\mathcal H}$ and ${\mathcal F}^g \cup \{r'', g'\} \cong {\mathcal H}$. By Property (ii) and since, by the induction hypothesis, Property (b) holds for $i$, we have ${\mathcal F}^r \subseteq {\mathcal G}'_1$ and ${\mathcal F}^g \subseteq {\mathcal G}'_1$. Suppose for a contradiction that $e_1 \in {\mathcal F}^r$. Since ${\mathcal F}^r \cup \{r'\}$ is a threat, with $z' \in r'$ in the role of $z$, it follows by Property (v) that $r' \cap e_1 \neq \emptyset$. However, SP could have created a threat by claiming $r'$ in his $i$th move which, by Stages I and III of SP’s strategy, implies that $r' \cap e_1 = \emptyset$. Hence $e_1 \notin {\mathcal F}^r$ and an analogous argument shows that $e_1 \notin {\mathcal F}^g$. Since $|E({\mathcal G}'_1) \setminus \{e_1\}| = m-2$, it follows that ${\mathcal F}^r = \mathcal{G}'_1 \setminus \{e_1\} = {\mathcal F}^g$. Therefore, by Lemma \[lem::uniquerg\] we have $\{r', g''\} = \{r'', g'\}$. Since, clearly $r' \neq g'$, it follows that $\{r'', g''\} = \{r', g'\}$ contrary to our assumption that both $r''$ and $g''$ were free immediately before FP’s $(i+1)$th move. We conclude that Property (a) holds immediately after FP’s $(i+1)$th move as well.
\[lem::specialThreat\] If FP’s $(m-1)$th move is a special threat, then he cannot win the game.
Assume that SP does not win the game. We will prove that, under this assumption, FP does not win the game. We begin by showing that he does not win the game in his $m$th move. Let $e'$ be a free edge such that $\mathcal{G}_1 \cup \{e'\} \cong \mathcal{H}$. Playing according to the proposed strategy, SP responds to this threat by claiming $e'$. Let $f'$ denote the edge FP claims in his $m$th move. Suppose for a contradiction that, by claiming $f'$, FP completes a copy of $\mathcal{H}$. Note that $(\mathcal{G}_1 \setminus \{f'\}) \cup \{e'\} \cong \mathcal{H}$ and so there exists an isomorphism $\phi : V((\mathcal{G}_1 \setminus \{f'\}) \cup \{e'\}) \rightarrow V(\mathcal{G}_1)$. The restriction of $\phi$ to $V(\mathcal{G}_1 \setminus \{f'\})$ is clearly a monomorphism and is thus the identity by Lemma \[lem::missing1edge\]. However, $V((\mathcal{G}_1 \setminus \{f'\}) \cup \{e'\}) = V(\mathcal{G}_1 \setminus \{f'\})$ and so $\phi$ itself is the identity. It follows that $e' \in \mathcal{G}_1$ and thus $e' \in \mathcal{G}_1 \cap \mathcal{G}_2$ which is clearly a contradiction. We conclude that indeed FP does not win the game in his $m$th move. Next, we prove that, in his $m$th move, FP does not even make a threat. Suppose for a contradiction that by claiming $f'$ in his $m$th move, FP does create a threat. Immediately after FP’s $m$th move, let $f'' \in {\mathcal G}_1$ and $f''' \in K_{\mathbb{N}}^k \setminus ({\mathcal G}_1 \cup {\mathcal G}_2)$ be edges such that ${\mathcal H}' := ({\mathcal G}_1 \setminus \{f''\}) \cup \{f'''\} \cong {\mathcal H}$. Recall that ${\mathcal H}'' := ({\mathcal G}_1 \setminus \{f'\}) \cup \{e'\} \cong {\mathcal H}$ as well. Let $\phi : V({\mathcal H}'') \rightarrow V(\mathcal{H}')$ be an isomorphism. The restriction of $\phi$ to $V({\mathcal H}'' \setminus \{e', f''\})$ is clearly a monomorphism and is thus the identity by Property (iv). Since FP’s $(m-1)$th move was a special threat, it follows that $V({\mathcal H}'' \setminus \{e', f''\}) = V({\mathcal H}'')$ and thus $\phi$ itself is the identity. Therefore $e' \in {\mathcal H}'$. Since $e' \neq f'''$ we then have $e' \in \mathcal{G}_1$ and thus $e' \in \mathcal{G}_1 \cap \mathcal{G}_2$ which is clearly a contradiction. We conclude that indeed FP does not make a threat in his $m$th move.
It remains to prove that FP cannot win the game in his $i$th move for any $i \geq m+1$. We will prove by induction on $i$ that the following two properties hold immediately after FP’s $i$th move for every $i \geq m$.
(a)
: FP has no threat.
(b)
: ${\mathcal G}_1$ contains at most one copy of ${\mathcal H} \setminus \{z\}$.
Starting with the induction basis $i = m$, note that Property (a) holds by the paragraph above. Moreover, since FP’s $(m-1)$th move is a special threat, immediately after this move, there exists a vertex $u$ of degree two in ${\mathcal G}_1$. By Property (ii), this vertex and the two edges incident with it cannot be a part of any copy of ${\mathcal H} \setminus \{z\}$ in ${\mathcal G}_1$ immediately after FP’s $m$th move. Property (b) now follows since FP’s graph contains only $m-2$ additional edges. Assume Properties (a) and (b) hold immediately after FP’s $i$th move for some $i \geq m$; we will prove they hold after his $(i+1)$th move as well. As in the proof of Lemma \[lem::noThreat\], we can assume that in his $(i+1)$th move FP claims either $r'$ or $g'$. Since both edges contain a vertex which was isolated in ${\mathcal G}_1$ immediately before FP’s $(i+1)$th move, neither edge can be a part of a copy of ${\mathcal H} \setminus \{z\}$ in ${\mathcal G}_1$. Hence, Property (b) still holds. As in the proof of Lemma \[lem::noThreat\], if FP does make a threat in his $(i+1)$th move, then ${\mathcal G}_1$ must contain two copies ${\mathcal F}^r \neq {\mathcal F}^g$ of ${\mathcal H} \setminus \{z\}$ contrary to Property (b). We conclude that Property (a) holds as well.
\[lem::standardThreat\] If FP’s $(m-1)$th move is a standard threat, then he cannot win the game.
The basic idea behind this proof is that either FP continues making simple threats forever or, at some point, he makes a move which is not a standard threat. We will prove that, assuming SP does not win the game, in the former case there is always a unique threat which SP can block, and in the latter case, by making his own standard threats, SP can force FP to respond to these threats forever, without ever creating another threat of his own.
We first claim that, if FP does win the game in some move $s$, then there must exist some $m \leq i < s$ such that FP’s $i$th move is not a threat. Suppose for a contradiction that this is not the case. Assume first that, for every $m-1 \leq i < s$, FP’s $i$th move is a standard threat. We will prove by induction on $i$ that, for every $m-1 \leq i < s$, immediately after FP’s $i$th move, ${\mathcal G}_1$ satisfies the following three properties:
(a)
: ${\mathcal G}_1$ contains a unique copy ${\mathcal F}_1$ of $\mathcal{H} \setminus \{z\}$;
(b)
: Let $e_{m-1}, \ldots, e_i$ denote the edges of ${\mathcal G}_1 \setminus {\mathcal F}_1$. Then, for every $m-1 \leq j \leq i$, there exists a vertex $z_j \in V({\mathcal G}_1)$ such that $\{z_j\} = e_j \setminus V({\mathcal F}_1)$ and $d_{{\mathcal G}_1}(z_j) = 1$;
(c)
: ${\mathcal F}_1 \cup \{e_i\}$ is open and ${\mathcal F}_1 \cup \{e_j\}$ is closed for every $m-1 \leq j < i$.
Properties (a), (b) and (c) hold by assumption for $i = m-1$. Assume they hold for some $i \geq m-1$; we will prove they hold for $i+1$ as well. Immediately after FP’s $i$th move, let $e'_i$ be a free edge such that ${\mathcal F}_1 \cup \{e_i, e'_i\} \cong {\mathcal H}$. Note that $e'_i$ exists by Property (c) and is unique by Lemma \[lem::uniquerg\]. According to his strategy, SP claims $e'_i$ thus closing ${\mathcal F}_1 \cup \{e_i\}$. By assumption, in his $(i+1)$th move FP makes a standard threat by claiming an edge $e_{i + 1}$. It follows that $e_{i + 1} \setminus V({\mathcal F}_1) = \{z_{i + 1}\}$, where, immediately after FP’s $(i+1)$th move, $d_{{\mathcal G}_1}(z_{i + 1}) = 1$. Hence, Property (b) is satisfied immediately after FP’s $(i+1)$th move. Since $\delta({\mathcal H} \setminus \{z\}) \geq 3$ holds by Property (ii), it follows that Property (a) is satisfied as well. Finally, ${\mathcal G}_1$ satisfies Property (c) by Lemma \[lem::uniquerg\]. Now, by Properties (a), (b) and (c), for every $m-1 \leq i < s$, immediately after FP’s $i$th move there is a unique threat $e'_i$. According to his strategy, SP claims $e'_i$ in his $i$th and thus FP cannot win the game in his $(i+1)$th move. In particular, FP cannot win the game in his $s$th move, contrary to our assumption.
Assume then that there exists some $m \leq i < s$ such that FP makes a special threat in his $i$th move. We will prove that this is not possible. Consider the smallest such $i$. As discussed in the previous paragraph, immediately before FP’s $i$th move, ${\mathcal G}_1$ contained a unique copy ${\mathcal F}_1$ of ${\mathcal H} \setminus \{z\}$, and every vertex of ${\mathcal G}_1 \setminus {\mathcal F}_1$ had degree one in ${\mathcal G}_1$. If FP makes a special threat in his $i$th move by claiming some edge $f'_1$, then there exists a free edge $f'_2$ such that, by claiming $f'_2$ in his $(i+1)$th move, FP would complete a copy ${\mathcal H}_1$ of ${\mathcal H}$. Since $|V({\mathcal F}_1)| < |V({\mathcal H})|$, there is some vertex $u \in V({\mathcal H}_1) \setminus V({\mathcal F}_1)$. Immediately after FP’s $(i+1)$th move, the degree of $u$ in ${\mathcal G}_1$ is at most three. Hence, by Property (ii), $u$ must play the role of $z$ in ${\mathcal H}_1$. Therefore, ${\mathcal H}_1 = ({\mathcal F}_1 \cup \{f'_1, f'_2, e'_u\}) \setminus \{f'_3\}$, where $e'_u$ is the first edge incident with $u$ which FP has claimed and $f'_3$ is some edge of ${\mathcal F}_1$. Since, at some point in the game, $e'_u$ was a standard threat, and, at that point, ${\mathcal F}_1$ was the unique copy of ${\mathcal H} \setminus \{z\}$ in ${\mathcal G}_1$, there exists an edge $e''_u$ such that ${\mathcal H}' := {\mathcal F}_1 \cup \{e'_u, e''_u\} \cong {\mathcal H}$. Let $\phi : V({\mathcal H}') \rightarrow V(\mathcal{H}_1)$ be an isomorphism. It is evident that ${\mathcal H}' \setminus \{e''_u, f'_3\} = {\mathcal H}_1 \setminus \{f'_1, f'_2\}$ and that the restriction of $\phi$ to $V({\mathcal H}' \setminus \{e''_u, f'_3\})$ is a monomorphism and is thus the identity by Property (iv). However, $V({\mathcal H}' \setminus \{e''_u, f'_3\}) = V({\mathcal H}') = V({\mathcal F}_1) \cup \{u\} = V({\mathcal H}_1)$ and thus $\phi$ itself is the identity entailing $e''_u \in {\mathcal G}_1$. However, $e''_u \in {\mathcal G}_2$ holds by the description of the proposed strategy. Hence $e''_u \in {\mathcal G}_1 \cap {\mathcal G}_2$ which is clearly a contradiction.
We conclude that there must exist some $m \leq i < s$ such that FP’s $i$th move is not a threat. Let $\ell$ denote the first such move. In order to complete the proof of the lemma, we will prove by induction on $i$ that the following two properties hold immediately after FP’s $i$th move for every $i \geq \ell$.
(1)
: FP has no threat.
(2)
: Let ${\mathcal G}'_1 = {\mathcal F}_1 \cup \{f\}$, where ${\mathcal F}_1$ is the unique copy of ${\mathcal H} \setminus \{z\}$ FP has built during his first $m-1$ moves and $f$ is the edge FP has claimed in his $\ell$th move. Then ${\mathcal G}_1 \setminus {\mathcal G}'_1$ consists of $i-m+1$ edges $e_m, \ldots, e_i$, where, for every $m \leq j \leq i$, $e_j$ contains a vertex $z_j$ such that $d_{{\mathcal G}_1}(z_j) = 1$.
Properties (1) and (2) hold for $i = \ell$ by assumption, by the choice of $\ell$ and by Properties (a) – (c) above. Assume they hold for some $i \geq \ell$; we will prove they hold for $i+1$ as well. Proving Property (2) can be done by essentially the same argument as the one used to prove Property (b) in Lemma \[lem::noThreat\]; the details are therefore omitted. Suppose for a contradiction that Property (1) does not hold immediately after FP’s $(i+1)$th move. As in the proof of Property (a) in Lemma \[lem::noThreat\], it follows that there are free edges $r''$ and $g''$ and graphs ${\mathcal F}^r \subseteq {\mathcal G}'_1$ and ${\mathcal F}^g \subseteq {\mathcal G}'_1$ such that ${\mathcal F}^r \cup \{r', g''\} \cong \mathcal{H} \cong {\mathcal F}^g \cup \{r'', g'\}$. Since ${\mathcal F}^r \subseteq {\mathcal G}'_1$ and ${\mathcal F}^g \subseteq {\mathcal G}'_1$, it follows by Property (ii) that $V({\mathcal F}^r) = V({\mathcal F}^g)$. Let ${\mathcal F}_2 \subseteq {\mathcal G}_2$ be such that ${\mathcal F}_2 \cup \{r', g'\} \cong \mathcal{H}$ and let $z'$ be the unique vertex in $r' \setminus V({\mathcal F}_2)$. Note that $r' \setminus \{z'\} \subseteq V({\mathcal F}^r)$ and $g' \setminus \{z'\} \subseteq V({\mathcal F}^g)$. Hence $(r' \cup g') \setminus \{z'\} \subseteq V({\mathcal F}^r)$. By Property (vi), we then have $|V({\mathcal F}_2) \setminus V({\mathcal F}^r)| \leq |V({\mathcal F}_2) \setminus (r' \cup g')| < k-1$. However, $e_1 \cap V({\mathcal F}_2) = \emptyset$ holds by the description of the proposed strategy and $|e_1 \cap V({\mathcal F}^r)| \geq k-1$ holds by our assumption that FP’s $(m-1)$th move was a threat. This implies that $k-1 \leq |e_1 \cap V({\mathcal F}^r)| \leq |V({\mathcal F}^r) \setminus V({\mathcal F}_2)| = |V({\mathcal F}_2) \setminus V({\mathcal F}^r)| < k-1$ which is clearly a contradiction. We conclude that Property (1) does hold immediately after FP’s $(i+1)$th move.
Since FP’s $(m-1)$th move is either a standard threat or a special threat or no threat at all, Theorem \[th::HypergraphProperties\] follows immediately from Lemmata \[lem::noThreat\], \[lem::specialThreat\] and \[lem::standardThreat\]. [$\Box$\
]{}
An explicit construction {#sec::example}
========================
In this section we will describe a $5$-graph ${\mathcal H}$ which satisfies Properties (i) – (vi) from Theorem \[th::HypergraphProperties\] and thus $\mathcal{R}^{(5)}(\mathcal{H}, \aleph_0)$ is a draw. The vertex set of $\mathcal{H}$ is $\{{
\ifnum\pdfstrcmp{z}{z}=0
z
\else
v_{z}
\fi
}, {
\ifnum\pdfstrcmp{1}{z}=0
z
\else
v_{1}
\fi
},{
\ifnum\pdfstrcmp{2}{z}=0
z
\else
v_{2}
\fi
},{
\ifnum\pdfstrcmp{3}{z}=0
z
\else
v_{3}
\fi
},{
\ifnum\pdfstrcmp{4}{z}=0
z
\else
v_{4}
\fi
},{
\ifnum\pdfstrcmp{5}{z}=0
z
\else
v_{5}
\fi
},{
\ifnum\pdfstrcmp{6}{z}=0
z
\else
v_{6}
\fi
},{
\ifnum\pdfstrcmp{7}{z}=0
z
\else
v_{7}
\fi
},{
\ifnum\pdfstrcmp{8}{z}=0
z
\else
v_{8}
\fi
},{
\ifnum\pdfstrcmp{9}{z}=0
z
\else
v_{9}
\fi
}\}$, and its edges are $$\begin{aligned}
{
\ifnum\pdfstrcmp{r}{r}=0
r
\else
\ifnum\pdfstrcmp{r}{g}=0
g
\else
\ifnum\pdfstrcmp{r}{a}=0
a
\else
\ifnum\pdfstrcmp{r}{9,4}=0
b
\else
\ifnum\pdfstrcmp{r}{4,9}=0
b
\else
\ifnum\pdfstrcmp{r}{1,5}=0
e_1
\else
\ifnum\pdfstrcmp{r}{2,6}=0
e_2
\else
\ifnum\pdfstrcmp{r}{3,7}=0
e_3
\else
\ifnum\pdfstrcmp{r}{4,8}=0
e_4
\else
\ifnum\pdfstrcmp{r}{5,9}=0
e_5
\else
\ifnum\pdfstrcmp{r}{9,5}=0
e_5
\else
e_{?r?}
\fi\fi\fi \fi\fi\fi \fi\fi\fi \fi\fi
}&=\{{
}, {
}, {
}, {
}, {
}\},\\
{
\ifnum\pdfstrcmp{g}{r}=0
r
\else
\ifnum\pdfstrcmp{g}{g}=0
g
\else
\ifnum\pdfstrcmp{g}{a}=0
a
\else
\ifnum\pdfstrcmp{g}{9,4}=0
b
\else
\ifnum\pdfstrcmp{g}{4,9}=0
b
\else
\ifnum\pdfstrcmp{g}{1,5}=0
e_1
\else
\ifnum\pdfstrcmp{g}{2,6}=0
e_2
\else
\ifnum\pdfstrcmp{g}{3,7}=0
e_3
\else
\ifnum\pdfstrcmp{g}{4,8}=0
e_4
\else
\ifnum\pdfstrcmp{g}{5,9}=0
e_5
\else
\ifnum\pdfstrcmp{g}{9,5}=0
e_5
\else
e_{?g?}
}&=\{{
}, {
}, {
}, {
}, {
}\},\\
{
\ifnum\pdfstrcmp{a}{r}=0
r
\else
\ifnum\pdfstrcmp{a}{g}=0
g
\else
\ifnum\pdfstrcmp{a}{a}=0
a
\else
\ifnum\pdfstrcmp{a}{9,4}=0
b
\else
\ifnum\pdfstrcmp{a}{4,9}=0
b
\else
\ifnum\pdfstrcmp{a}{1,5}=0
e_1
\else
\ifnum\pdfstrcmp{a}{2,6}=0
e_2
\else
\ifnum\pdfstrcmp{a}{3,7}=0
e_3
\else
\ifnum\pdfstrcmp{a}{4,8}=0
e_4
\else
\ifnum\pdfstrcmp{a}{5,9}=0
e_5
\else
\ifnum\pdfstrcmp{a}{9,5}=0
e_5
\else
e_{?a?}
\fi\fi\fi \fi\fi\fi \fi\fi\fi \fi\fi
}&=\{{
}, {
}, {
}, {
}, {
}\},\\
{
\ifnum\pdfstrcmp{9,4}{r}=0
r
\else
\ifnum\pdfstrcmp{9,4}{g}=0
g
\else
\ifnum\pdfstrcmp{9,4}{a}=0
a
\else
\ifnum\pdfstrcmp{9,4}{9,4}=0
b
\else
\ifnum\pdfstrcmp{9,4}{4,9}=0
b
\else
\ifnum\pdfstrcmp{9,4}{1,5}=0
e_1
\else
\ifnum\pdfstrcmp{9,4}{2,6}=0
e_2
\else
\ifnum\pdfstrcmp{9,4}{3,7}=0
e_3
\else
\ifnum\pdfstrcmp{9,4}{4,8}=0
e_4
\else
\ifnum\pdfstrcmp{9,4}{5,9}=0
e_5
\else
\ifnum\pdfstrcmp{9,4}{9,5}=0
e_5
\else
e_{?9,4?}
\fi\fi\fi \fi\fi\fi \fi\fi\fi \fi\fi
}&=\{{
}, {
}, {
}, {
}, {
}\},\\
{
\ifnum\pdfstrcmp{1,5}{r}=0
r
\else
\ifnum\pdfstrcmp{1,5}{g}=0
g
\else
\ifnum\pdfstrcmp{1,5}{a}=0
a
\else
\ifnum\pdfstrcmp{1,5}{9,4}=0
b
\else
\ifnum\pdfstrcmp{1,5}{4,9}=0
b
\else
\ifnum\pdfstrcmp{1,5}{1,5}=0
e_1
\else
\ifnum\pdfstrcmp{1,5}{2,6}=0
e_2
\else
\ifnum\pdfstrcmp{1,5}{3,7}=0
e_3
\else
\ifnum\pdfstrcmp{1,5}{4,8}=0
e_4
\else
\ifnum\pdfstrcmp{1,5}{5,9}=0
e_5
\else
\ifnum\pdfstrcmp{1,5}{9,5}=0
e_5
\else
e_{?1,5?}
}&=\{{
}, {
}, {
}, {
}, {
}\},\\
{
\ifnum\pdfstrcmp{2,6}{r}=0
r
\else
\ifnum\pdfstrcmp{2,6}{g}=0
g
\else
\ifnum\pdfstrcmp{2,6}{a}=0
a
\else
\ifnum\pdfstrcmp{2,6}{9,4}=0
b
\else
\ifnum\pdfstrcmp{2,6}{4,9}=0
b
\else
\ifnum\pdfstrcmp{2,6}{1,5}=0
e_1
\else
\ifnum\pdfstrcmp{2,6}{2,6}=0
e_2
\else
\ifnum\pdfstrcmp{2,6}{3,7}=0
e_3
\else
\ifnum\pdfstrcmp{2,6}{4,8}=0
e_4
\else
\ifnum\pdfstrcmp{2,6}{5,9}=0
e_5
\else
\ifnum\pdfstrcmp{2,6}{9,5}=0
e_5
\else
e_{?2,6?}
\fi\fi\fi \fi\fi\fi \fi\fi\fi \fi\fi
}&=\{{
}, {
}, {
}, {
}, {
}\},\\
{
\ifnum\pdfstrcmp{3,7}{r}=0
r
\else
\ifnum\pdfstrcmp{3,7}{g}=0
g
\else
\ifnum\pdfstrcmp{3,7}{a}=0
a
\else
\ifnum\pdfstrcmp{3,7}{9,4}=0
b
\else
\ifnum\pdfstrcmp{3,7}{4,9}=0
b
\else
\ifnum\pdfstrcmp{3,7}{1,5}=0
e_1
\else
\ifnum\pdfstrcmp{3,7}{2,6}=0
e_2
\else
\ifnum\pdfstrcmp{3,7}{3,7}=0
e_3
\else
\ifnum\pdfstrcmp{3,7}{4,8}=0
e_4
\else
\ifnum\pdfstrcmp{3,7}{5,9}=0
e_5
\else
\ifnum\pdfstrcmp{3,7}{9,5}=0
e_5
\else
e_{?3,7?}
\fi\fi\fi \fi\fi\fi \fi\fi\fi \fi\fi
}&=\{{
}, {
}, {
}, {
}, {
}\},\\
{
\ifnum\pdfstrcmp{4,8}{r}=0
r
\else
\ifnum\pdfstrcmp{4,8}{g}=0
g
\else
\ifnum\pdfstrcmp{4,8}{a}=0
a
\else
\ifnum\pdfstrcmp{4,8}{9,4}=0
b
\else
\ifnum\pdfstrcmp{4,8}{4,9}=0
b
\else
\ifnum\pdfstrcmp{4,8}{1,5}=0
e_1
\else
\ifnum\pdfstrcmp{4,8}{2,6}=0
e_2
\else
\ifnum\pdfstrcmp{4,8}{3,7}=0
e_3
\else
\ifnum\pdfstrcmp{4,8}{4,8}=0
e_4
\else
\ifnum\pdfstrcmp{4,8}{5,9}=0
e_5
\else
\ifnum\pdfstrcmp{4,8}{9,5}=0
e_5
\else
e_{?4,8?}
\fi\fi\fi \fi\fi\fi \fi\fi\fi \fi\fi
}&=\{{
}, {
}, {
}, {
}, {
}\},\\
{
\ifnum\pdfstrcmp{5,9}{r}=0
r
\else
\ifnum\pdfstrcmp{5,9}{g}=0
g
\else
\ifnum\pdfstrcmp{5,9}{a}=0
a
\else
\ifnum\pdfstrcmp{5,9}{9,4}=0
b
\else
\ifnum\pdfstrcmp{5,9}{4,9}=0
b
\else
\ifnum\pdfstrcmp{5,9}{1,5}=0
e_1
\else
\ifnum\pdfstrcmp{5,9}{2,6}=0
e_2
\else
\ifnum\pdfstrcmp{5,9}{3,7}=0
e_3
\else
\ifnum\pdfstrcmp{5,9}{4,8}=0
e_4
\else
\ifnum\pdfstrcmp{5,9}{5,9}=0
e_5
\else
\ifnum\pdfstrcmp{5,9}{9,5}=0
e_5
\else
e_{?5,9?}
\fi\fi\fi \fi\fi\fi \fi\fi\fi \fi\fi
}&=\{{
}, {
}, {
}, {
}, {
}\}.\end{aligned}$$
\[fig::H5\] 
It readily follows from the definition of $\mathcal{H}$ that it satisfies Properties (i), (ii), (v) and (vi) from Theorem \[th::HypergraphProperties\]. We claim that it satisfies Properties (iii) and (iv) as well. We start with Property (iii).
\[lem::property3\] ${\mathcal H} \setminus \{{
}\}$ has a fast winning strategy.
We describe a strategy for SP to build a copy of ${\mathcal H} \setminus \{{
}\}$ in seven moves. The basic idea is to build a tight path of length $5$ in five moves, and then to use certain symmetries of ${\mathcal H} \setminus \{{
}\}$ in order to complete a copy of ${\mathcal H} \setminus \{{
}\}$ in two additional moves. Our strategy is divided into the following three stages.
**Stage I:** In his first move, SP claims an arbitrary free edge $e_1 = \{v_1, v_2, v_3, v_4, v_5\}$. For every $2 \leq i \leq 5$, in his $i$th move SP picks a vertex $v_{i+4}$ which is isolated in both his and FP’s current graphs and claims the edge $e_i = \{v_i, v_{i+1}, v_{i+2}, v_{i+3}, v_{i+4}\}$. If in his $6$th move FP claims either $\{v_1, v_2, v_3, v_4, v_9\}$ or $\{v_1, v_4, v_6, v_8, v_9\}$ or $\{v_1, v_3, v_5, v_8, v_9\}$, then SP claims $\{v_1, v_6, v_7, v_8, v_9\}$ and proceeds to Stage II. Otherwise, SP claims $\{v_1, v_2, v_3, v_4, v_9\}$ and skips to Stage III.
**Stage II:** If in his seventh move FP claims $\{v_1, v_2, v_4, v_6, v_9\}$, then SP claims $\{v_1, v_2, v_5, v_7, v_9\}$. Otherwise, SP claims $\{v_1, v_2, v_4, v_6, v_9\}$.
**Stage III:** If in his seventh move FP claims $\{v_1, v_4, v_6, v_8, v_9\}$, then SP claims $\{v_1, v_3, v_5, v_8, v_9\}$. Otherwise, SP claims $\{v_1, v_4, v_6, v_8, v_9\}$.
It is easy to see that SP can indeed play according to the proposed strategy and that, in each of the possible cases, the graph he builds is isomorphic to ${\mathcal H} \setminus \{z\}$.
It remains to prove that ${\mathcal H}$ satisfies Property (iv). We begin by introducing some additional notation. If $\phi : V(\mathcal{H}) \rightarrow V(\mathcal{H})$ is a monomorphism, and $e = \{a_1, a_2, \ldots, a_5\} \in \mathcal{H}$, then we set $\phi(e) := \{\phi(a_1), \phi(a_2), \ldots, \phi(a_5)\}$. For two edges $e, f \in \mathcal{H}$, let $\mathcal{H}_{ef} = \mathcal{H} \setminus \{e, f\}$.
Next, we observe several simple properties of $\mathcal{H}$ and of monomorphisms. Table \[DegreeTable\] shows the degrees of the vertices in $\mathcal H$ and Table \[IntersectionTable\] shows the sizes of intersections of pairs of edges in $\mathcal H$.
[|l\*[10]{}[|p[0.6cm]{}]{}|]{} Vertex &${
}$&${
}$&${
}$&${
}$&${
}$&${
}$&${
}$&${
}$&${
}$&${
}$\
Degree & 4 & 4 & 5 & 7 & 6 & 5 & 4 & 4 & 4 & 2\
[|l\*[9]{}[|p[0.6cm]{}]{}|]{} &${
}$&${
}$&${
}$&${
}$&${
}$ &${
}$&${
}$&${
}$&${
}$\
${
}$ & & 1 & 2 & 2 & 3 & 2 & 2 & 2 & 2\
${
}$ & 1 & & 2 & 3 & 2 & 2 & 2 & 2 & 2\
${
}$ & 2 & 2 & & 3 & 2 & 2 & 2 & 3 & 3\
${
}$ & 2 & 3 & 3 & & 4 & 3 & 2 & 1 & 1\
${
}$ & 3 & 2 & 2 & 4 & & 4 & 3 & 2 & 1\
${
}$ & 2 & 2 & 2 & 3 & 4 & & 4 & 3 & 2\
${
}$ & 2 & 2 & 2 & 2 & 3 & 4 & & 4 & 3\
${
}$ & 2 & 2 & 3 & 1 & 2 & 3 & 4 & & 4\
${
}$ & 2 & 2 & 3 & 1 & 1 & 2 & 3 & 4 &\
\[obs::H5properties\] The hypergraph $\mathcal{H}$ satisfies all of the following properties:
(1)
: $V(\mathcal{H}) \setminus \{{
}, {
}\} = \{{
}\}$ and ${
} \cap {
} = \{{
}\}$.
(2)
: ${
}$ is the unique edge satisfying $|{
} \cap {
}| = 3$ and $|{
}| = 2$.
(3)
: ${
\ifnum\pdfstrcmp{4,9}{r}=0
r
\else
\ifnum\pdfstrcmp{4,9}{g}=0
g
\else
\ifnum\pdfstrcmp{4,9}{a}=0
a
\else
\ifnum\pdfstrcmp{4,9}{9,4}=0
b
\else
\ifnum\pdfstrcmp{4,9}{4,9}=0
b
\else
\ifnum\pdfstrcmp{4,9}{1,5}=0
e_1
\else
\ifnum\pdfstrcmp{4,9}{2,6}=0
e_2
\else
\ifnum\pdfstrcmp{4,9}{3,7}=0
e_3
\else
\ifnum\pdfstrcmp{4,9}{4,8}=0
e_4
\else
\ifnum\pdfstrcmp{4,9}{5,9}=0
e_5
\else
\ifnum\pdfstrcmp{4,9}{9,5}=0
e_5
\else
e_{?4,9?}
\fi\fi\fi \fi\fi\fi \fi\fi\fi \fi\fi
}$ is the unique edge satisfying $|{
}| = 3$ and $|{
}| = 2$.
(4)
: There are precisely two tight paths of length five in $\mathcal{H}$, namely, $TP_1 := ({
}, {
}, {
}, {
}, {
})$ and $TP_2 := ({
}, {
}, {
}, {
}, {
})$.
(5)
: For every two vertices $u, v \in V(\mathcal{H})$, there are three edges $f_1, f_2, f_3 \in \mathcal{H}$ such that $|f_i \cap \{u,v\}| = 1$ for every $1 \leq i \leq 3$.
\[obs::monomorphisms\] Let $\mathcal{F}$ and $\mathcal{F}'$ be $k$-graphs, where $\mathcal{F}' \subseteq \mathcal{F}$, and let $\phi : V(\mathcal{F}') \rightarrow V(\mathcal{F})$ be a monomorphism. Then
(a)
: $d_{\mathcal F}(\phi(x)) \geq d_{\mathcal{F}'}(x)\geq d_{\mathcal F}(\phi(x))-|E(\mathcal F \setminus \mathcal F')|$ holds for every $x \in V(\mathcal{F}')$.
(b)
: If $P$ is a tight path of length $\ell$ in $\mathcal{F}'$, then $\phi(P)$ is a tight path of length $\ell$ in $\mathcal{F}$.
(c)
: Let $P = (f_1, f_2, \ldots, f_m)$ be a tight path in $\mathcal{F}'$, where $m \geq k$ and $f_i = \{p_i, \ldots, p_{i + k - 1}\}$ for every $1 \leq i \leq m$. If $\phi(P) = (e_1, e_2, \ldots, e_m)$, where $e_i = \{q_i, \ldots, q_{i + k - 1}\}$ for every $1 \leq i \leq m$, then either $\phi(p_i) = q_i$ for every $1 \leq i \leq m+k-1$ or $\phi(p_i) = q_{m+k-i}$ for every $1 \leq i \leq m+k-1$.
(d)
: For any pair of edges $x, y \in \mathcal F'$ we have $|\phi(x) \cap \phi(y)| = |x \cap y|$.
We prove that ${\mathcal H}$ satisfies Property (iv) in a sequence of lemmata.
\[3edges\] Let $e$ and $f$ be two arbitrary edges of ${\mathcal H}$ and let $\phi : V(\mathcal{H}_{ef}) \rightarrow V(\mathcal{H})$ be a monomorphism. If $\phi(e') = e'$ holds for every edge $e' \in \mathcal{H}_{ef}$, then $\phi$ is the identity.
Suppose for a contradiction that $\phi$ is not the identity. Then, there exist distinct vertices $u, v \in V(\mathcal{H}_{ef})$ such that $\phi(u) = v$. By Observation \[obs::H5properties\](5), there are three edges $f_1, f_2, f_3 \in {\mathcal H}$ such that $|f_i \cap \{u, v\}| = 1 $ for every $1 \leq i \leq 3$. Clearly, we may assume that $f_1 \notin \{e, f\}$ and thus $\phi(f_1) = f_1$ by the assumption of the lemma. Since $\phi(u) = v$, it follows that $\{u, v\} \subseteq f_1$ which is a contradiction.
\[zFixed\] Let $\phi : V(\mathcal{H}_{ef}) \rightarrow V(\mathcal{H})$ be a monomorphism. Then $\phi({
}) = {
}$.
Assume first that $\{e,f\} \cap \{{
}, {
}\} \neq \emptyset$. Then $d_{{\mathcal H_{ef}}}({
}) \leq 1$. Combined with Observation \[obs::monomorphisms\](a), this implies that $d_{\mathcal H}(\phi({
})) \leq 1 + |\{e, f\}| = 3$. Since $z$ is the only vertex of degree at most $3$ in $\mathcal H$, it follows that $\phi({
}) = {
}$.
Assume then that $\{e,f\} \cap \{{
}, {
}\} = \emptyset$. Since $\phi$ is a monomorphism, there exists a vertex $v \in V(\mathcal H_{ef})$ such that $\phi(v) = {
}$. Suppose for a contradiction that $v \neq {
}$. By Observation \[obs::monomorphisms\](a), we have $d_{{\mathcal H_{ef}}}(v) \leq 2$ and thus $d_{\mathcal H}(v) \leq 4$. Since ${
}$ is the only vertex of degree less than $4$ in $\mathcal H$, it follows that $d_{\mathcal H}(v) = 4$ and that both $e$ and $f$ contain $v$. Let $r' = \phi^{-1}({
})$ and $g' = \phi^{-1}({
})$ be the other two edges of $\mathcal{H}$ that contain $v$. By Observation \[obs::monomorphisms\](d), we have $|r' \cap g'| = |{
}| = 1$. Looking at Tables \[DegreeTable\] and \[IntersectionTable\], we see that the only choice of $r', g'$ and $v$ such that $d_{\mathcal H}(v) = 4$ and $r' \cap g' = \{v\}$ is $v = {
}$ and $\{r', g'\} = \{{
}, {
}\}$. Since both $e$ and $f$ contain $v$ as well, this implies that $\{e, f\} = \{{
}, {
}\}$, contrary to our assumption that $\{e,f\} \cap \{{
}, {
}\} = \emptyset$.
\[rg\] Let $\phi : V(\mathcal{H}_{ef}) \rightarrow V(\mathcal{H})$ be a monomorphism. If ${
}, {
} \in \mathcal {H}_{ef}$, $\phi({
}) = {
}$ and $\phi({
}) = {
}$, then $\phi$ is the identity.
Since $\phi$ is injective, $\phi({
}) = {
}) = {
}$, it follows by Observation \[obs::H5properties\] (1) that $\phi({
}) = {
}$.
By Observation \[obs::monomorphisms\](a), we have that $d_{\mathcal H}(\phi({
})) \geq d_{{\mathcal H_{ef}}}({
}) \geq 5$ which in turn implies that $\phi({
}) \in \{{
}, {
}, {
}, {
}\}$. Since, moreover, $\phi({
}) \in \phi({
}) = {
} = \{{
}, {
}, {
}, {
}, {
}\}$, it follows that $\phi({
}) = {
}$. Since ${
}$ is the unique edge in $\mathcal H$ containing ${
}$ but not ${
}$, we have that if ${
} \in {\mathcal H_{ef}}$, then $\phi({
}) = {
}$.
Since $\phi({
}) = {
}) = {
}$, it follows by Observation \[obs::monomorphisms\](d) and by Observation \[obs::H5properties\](2), that if ${
} \in {\mathcal H_{ef}}$, then $\phi({
})={
}$. Similarly, using Observation \[obs::H5properties\](3), it follows that if ${
} \in {\mathcal H_{ef}}$, then $\phi({
}) = {
}$. We distinguish between the following three cases.
**Case 1: ${
}, {
} \in {\mathcal H_{ef}}$**. As noted above $\phi({
}) = {
}$. Since $|{
} \cap {
}| = 2$, Observation \[obs::monomorphisms\](d) and Table \[IntersectionTable\] imply that $\phi({
}) \in \{{
}, {
}\}$. Since, moreover, $\phi({
}) = {
}$ by assumption, we conclude that $\phi({
}) = {
}$. Observation \[obs::monomorphisms\](d) then implies that $(|\phi(x) \cap {
}|, |\phi(x) \cap {
}|) = (|x \cap {
}|, |x \cap {
}|)$ for every edge $x \in {\mathcal H_{ef}}$. Looking at the rows corresponding to ${
}$ and ${
}$ in Table \[IntersectionTable\], we see that the pair $(|x \cap {
}|)$ is distinct for every $x \in \mathcal H \setminus \{{
}, {
}\}$. It follows that $\phi(x) = x$ for every $x \in {\mathcal H_{ef}}$. Hence, $\phi$ is the identity by Lemma \[3edges\].
**Case 2: ${
}, {
} \in {\mathcal H_{ef}}$**. As noted above $\phi({
}) = {
}$. Since $|{
} \cap {
}| = 2$, Observation \[obs::monomorphisms\](d) and Table \[IntersectionTable\] imply that $\phi({
}) \in \{{
}, {
}, {
}) = {
}) = {
}$ by assumption, we conclude that $\phi({
}) = {
}$. Observation \[obs::monomorphisms\](d) then implies that $(|\phi(x) \cap {
}|, |\phi(x) \cap {
}|) = (|x \cap {
}|, |x \cap {
}|)$ for every edge $x \in {\mathcal H_{ef}}$. Looking at the rows corresponding to ${
}$ and ${
}$ in Table \[IntersectionTable\], we see that the pair $(|x \cap {
}, {
**Case 3: $\{e, f\} \in \{{
}, {
}\} \times \{{
}, {
}\}$**. Observe that ${
} \in {\mathcal H_{ef}}$ and thus, as noted above, $\phi({
}) = {
}$. Looking at the row corresponding to ${
}$ in Table \[IntersectionTable\] and using Observation \[obs::monomorphisms\](d), we infer that $\phi({
}) = {
}$, $\phi({
}) = {
}$, $\{\phi({
}), \phi({
})\} = \{{
}, {
}\}$, and $\{\phi({
}), \phi({
})\} = \{{
}, {
}\}$. Since $\phi({
}) = {
}$, it then follows that $\phi({
}) = {
}$ and thus $\phi({
}) = {
}$. Let $x$ denote the unique edge of $\{{
}, {
}\} \cap {\mathcal H_{ef}}$. Looking at the row corresponding to $x$ in Table \[IntersectionTable\], we see that $|x \cap {
}| \neq |x \cap {
}|$. Using Observation \[obs::monomorphisms\](d), we conclude that $\phi({
}) = {
}$ and $\phi({
}) = {
}$. Hence, $\phi$ is the identity by Lemma \[3edges\].
Since, clearly, at least one of the above three cases must occur, this concludes the proof of the lemma.
\[FixV9\] Let $\phi : V(\mathcal{H}_{ef}) \rightarrow V(\mathcal{H})$ be a monomorphism. If ${
}, {
}) = {
}$.
Suppose for a contradiction that $\phi({
}) \neq {
}$. By Lemma \[zFixed\] we have $\phi({
})={
}$ which implies that $\phi({
}$. By Observation \[obs::monomorphisms\](a) we have $d_{\mathcal H}(\phi({
}))\leq d_{{\mathcal H_{ef}}}({
}) + 2 \leq 6$ which implies that $\phi({
}$.
Since $\phi$ is a monomorphism, we have $\{\phi({
}) \cap \phi({
})\}$ is the intersection of two edges, we must have $\phi({
}) \in \{{
}, {
}, {
}, {
}\}$. Combining this with the previous paragraph, we infer that $\phi({
}) = {
}$.
Note that $6 = d_{\mathcal H}({
}) = d_{\mathcal H}(\phi({
}) + 2$. Hence, $d_{{\mathcal H_{ef}}}({
}) = 4$ which implies that $\{{
}, {
}\} \cap \{e, f\} = \emptyset$. Since $\phi({
}) = {
}$ and $\phi({
}) = {
}$, we must have $\phi({
}) = {
}$.
Since ${
}, {
}$ is the unique pair of edges satisfying ${
} = \{{
}\}$, it follows that $\{\phi({
})\} = \{{
}, {
}\}$. Suppose for a contradiction that $\phi({
}) = {
}$. Then, by Observation \[obs::monomorphisms\](d) we have $3 = |{
})| = |{
}| = 2$. We conclude that $\phi({
}) = {
}$ and $\phi({
}) = {
}$. We can now determine the missing edges in ${\mathcal H_{ef}}$ and in $\phi({\mathcal H_{ef}})$.
\[Claime26e48\] ${
}, {
} \not \in {\mathcal H_{ef}}$ and ${
}, {
} \not \in \phi({\mathcal H_{ef}})$.
Suppose for a contradiction that ${
} \in {\mathcal H_{ef}}$. Since $|{
}| = 3$ and ${
} \notin {
}$, it follows by Observation \[obs::monomorphisms\](d) that $|\phi({
}) \cap \phi({
} \notin \phi({
})$. This is a contradiction since there is no edge $x \in \mathcal{H}$ such that $|x \cap {
} \notin x$.
Suppose for a contradiction that ${
} \in {\mathcal H_{ef}}$. It follows by Observation \[obs::monomorphisms\](d) that $4 = |{
}|$ and thus $\phi({
}) = {
}$. Since, moreover, ${
}) = {
}$, it follows that ${
}$, contrary to the definition of ${
}$.
} \in \phi({\mathcal H_{ef}})$. Let $x \in {\mathcal H_{ef}}$ be such that $\phi(x) = {
}$. Since $\phi({
}) = {
}$, it follows by Observation \[obs::monomorphisms\](d) that $4 = |{
}| = |{
} \cap x|$. Looking at the row corresponding to ${
}$ in Table \[IntersectionTable\], we infer that $x = {
}$. However, since ${
}$, we then deduce that ${
} = \phi({
}$ which is clearly a contradiction.
} \in \phi({\mathcal H_{ef}})$. Let $x \in {\mathcal H_{ef}}$ be such that $\phi(x) = {
}) = {
}$, it follows by Observation \[obs::monomorphisms\](d) that $4 = |{
}|$. Looking at the row corresponding to ${
}$ in Table \[IntersectionTable\], we infer that $x = {
}$. However, we already saw before that assuming ${
} \in {\mathcal H_{ef}}$ results in a contradiction.
We are now in a position to complete the proof of Lemma \[FixV9\]. Let $\mathcal{F} = \mathcal{H} \setminus \{{
}, {
}\}$. It follows from Claim \[Claime26e48\] that ${\mathcal H_{ef}}= \phi({\mathcal H_{ef}}) = \mathcal{F}$ and that $\phi$ is an automorphism of $\mathcal{F}$. Hence, in particular, $d_{\mathcal{F}}(\phi({
})) = d_{\mathcal{F}}({
}) = 5$. On the other hand, since $\phi({
}) = {
}$, it follows that $\phi(v_4) \in \{{
}, {
}, {
}, {
}\}$. Therefore $d_{\mathcal{F}}(\phi({
})) \leq 4$ which is clearly a contradiction.
\[TightPathPresent\] Let $\phi : V(\mathcal{H}_{ef}) \rightarrow V(\mathcal{H})$ be a monomorphism. Suppose that ${\mathcal H_{ef}}$ contains a tight path of length $5$. Then $\phi$ is either the identity or one of $({
}{
}{
}{
}{
}{
}{
}{
}{
})({
})$, $({
}{
}{
}{
}{
}{
}{
}{
}{
})({
})$, $({
}{
})({
}{
})({
}{
})({
}{
})({
})({
})$, $({
}{
})({
}{
})({
}{
})({
}{
})({
})({
})$, and $({
}{
})({
}{
})({
}{
})({
}{
})({
})({
})$.
By Lemma \[zFixed\] we know that $\phi({
}) = {
}$. Moreover, by Observation \[obs::H5properties\](4), we know that ${\mathcal H_{ef}}$ contains $TP1$ or $TP2$. Moreover, by Observation \[obs::monomorphisms\](b), if $TP1 \in {\mathcal H_{ef}}$, then $\phi(TP1) \in \{TP1, TP2\}$ and if $TP2 \in {\mathcal H_{ef}}$, then $\phi(TP2) \in \{TP1, TP2\}$. Accordingly, we distinguish between the following four cases.
**Case 1: $TP1 \in {\mathcal H_{ef}}$ and $\phi(TP1) = TP1$**. It follows by Observation \[obs::monomorphisms\](c) that either $\phi$ is the identity or $\phi = ({
}{
})({
}{
})({
}{
})({
}{
})({
})({
})$.
**Case 2: $TP1 \in {\mathcal H_{ef}}$ and $\phi(TP1) = TP2$**. It follows by Observation \[obs::monomorphisms\](c) that either $\phi = ({
}{
}{
}{
}{
}{
}{
}{
}{
})({
})$ or $\phi = ({
}{
})({
}{
})({
}{
})({
}{
})({
})({
})$.
**Case 3: $TP2 \in {\mathcal H_{ef}}$ and $\phi(TP2) = TP1$**. It follows by Observation \[obs::monomorphisms\](c) that either $\phi = ({
}{
}{
}{
}{
}{
}{
}{
}{
})({
}{
})({
}{
})({
}{
})({
}{
})({
})({
})$.
**Case 4: $TP2 \in {\mathcal H_{ef}}$ and $\phi(TP2) = TP2$**. It follows by Observation \[obs::monomorphisms\](c) that either $\phi$ is the identity or $\phi = ({
}{
})({
}{
})({
}{
})({
}{
})({
})({
})$.
\[94and59\] Let $\phi : V(\mathcal{H}_{ef}) \rightarrow V(\mathcal{H})$ be a monomorphism. If ${
}, {
}\in \mathcal H_{ef}$, then $\phi$ is the identity.
Suppose for a contradiction that $\phi$ is not the identity. By Lemma \[zFixed\] we know that $\phi({
}) = {
}$ and by Lemma \[FixV9\] we know that $\phi({
}) = {
}$. Assume first that $\phi({
}) = {
}) = {
}) = {
}$, and ${
}$ is the unique edge whose intersection with ${
}$ is $\{{
}\}$, we infer that $\phi({
}) = {
}$ is the unique edge containing both ${
} $ and $ {
}$, we infer that, if ${
} \in {\mathcal H_{ef}}$, then $\phi({
}) = {
}$ is the unique edge satisfying $|{
}| = 3$, $|{
}| = 2$, and $|{
}| = 0$, it follows by Observation \[obs::monomorphisms\](d) that, if ${
} \in {\mathcal H_{ef}}$, then $\phi({
})={
}$. Looking at the rows corresponding to ${
}$ in Table \[IntersectionTable\], we see that $(|x \cap {
}|)$ is distinct for every $x \in \mathcal{H} \setminus\{{
}, {
}\}$. This implies that $\phi(x) = x$ for every $x \in {\mathcal H_{ef}}$ and thus $\phi$ is the identity by Lemma \[3edges\] contrary to our assumption. Therefore, from now on we will assume that $\phi({
}) \neq {
}) = {
}) = {
}) = {
}$. We distinguish between the following three cases.
**Case 1: $\{e,f\} \subseteq \{{
}, {
}, {
}\}$**. Observe that ${\mathcal H_{ef}}$ contains $TP1$. Since, moreover, $\phi({
}) = {
}$ and $\phi$ is not the identity by assumption, it follows from Lemma \[TightPathPresent\] that $\phi = ({
}{
})({
}{
})({
}{
})({
}{
})({
})({
})$. Let $x \in \{{
}, {
}, {
}\} \setminus \{e,f\}$. Then $\phi(x)$ is not an edge of $\mathcal{H}$ contrary to $\phi$ being a monomorphism.
**Case 2: ${
} \in {\mathcal H_{ef}}$**. As noted above, $\phi({
}) = {
}$ is the unique edge intersecting ${
}$ in $3$ vertices, we have $\phi({
}) = {
}$, contrary to our assumption that $\phi({
}$.
**Case 3: ${
} \notin {\mathcal H_{ef}}$ and ${
}, {
} \in {\mathcal H_{ef}}$**. Since ${
}$ is the unique edge such that ${
} \in {
}$ and ${
}$, it follows that $\phi({
}) = {
}$. Similarly, since ${
} \notin {
}$, ${
}) = {
}) = {
}$, it follows that $\phi({
}) = {
}$. Then $$\begin{aligned}
\{\phi({
}) \cap \phi({
}) \cap \phi({
}) = {
} = \{{
}\},\\
\{\phi({
}) \setminus (\phi({
}) \cup \phi({
})) = {
} \setminus({
} \cup {
}) = \{{
}\},\\
\{\phi({
}) \setminus \phi({
}) = {
} \setminus {
} = \{{
}\}.\end{aligned}$$ Since, moreover, $\phi({
}) = {
}) = {
}$. Now, using $\phi({
}) = {
}) = {
}$, it is easy to see that $\phi({
}) = {
}$ and thus $\phi({
}) = {
}$, $\phi({
}) = {
}$ and $\phi({
}) = {
}$. However, then neither $\phi({
})$ nor $\phi({
})$ is an edge of $\mathcal{H}$. Since $\{{
}, {
}\} \setminus \{e,f\} \neq \emptyset$, this contradicts $\phi$ being a monomorphism.
\[94or59\] Let $\phi : V(\mathcal{H}_{ef}) \rightarrow V(\mathcal{H})$ be a monomorphism. If $\{e,f\} \in \{{
}, {
}\} \times \{{
}, {
}, {
}\}$, then $\phi$ is the identity.
Since $|\{e,f\} \cap \{{
}, {
}\}| = 1$ by assumption, ${\mathcal H_{ef}}$ contains either $TP1$ or $TP2$. Hence, $\phi$ must be one of the six permutations listed in Lemma \[TightPathPresent\].
Assume first that ${
}, {
} \in {\mathcal H_{ef}}$. By Lemma \[zFixed\] we know that $\phi({
}) = {
}$ and thus $\{\phi({
}), \phi({
})\} \subseteq \{{
}, {
}\}$. Therefore $\phi({
}) = {
}$ holds by Observation \[obs::H5properties\](1). This implies that $\phi$ is the identity since this is the only permutation listed in Lemma \[TightPathPresent\] which maps ${
}$ to itself.
Assume then that ${
} \in {\mathcal H_{ef}}$. This implies that $\phi$ is the identity since this is the only permutation listed in Lemma \[TightPathPresent\] which maps ${
}$ to an edge of $\mathcal{H}$.
\[Property4proof\] Let $\phi : V(\mathcal{H}_{ef}) \rightarrow V(\mathcal{H})$ be a monomorphism. Then $\phi$ is the identity.
Let $e'$ and $f'$ denote the two edges of $\mathcal H\setminus \phi ({\mathcal H_{ef}})$. Suppose for a contradiction that $\phi$ is not the identity. Observe that this implies that $\phi^{-1}$ is a monomorphism from $\phi({\mathcal H_{ef}})$ to $\mathcal{H}$ which is not the identity.
Since $\phi$ is not the identity, it follows from Lemma \[94and59\] that $\{{
}, {
}\} \cap \{e,f\} \neq \emptyset$. By Lemma \[94or59\] we then infer that $\{{
}, {
}, {
}\} \cap \{e,f\} = \emptyset$. Similarly, since $\phi^{-1}$ is a monomorphism which is not the identity, it follows from Lemma \[94and59\] that $\{{
}, {
}\} \cap \{e', f'\} \neq \emptyset$ and from Lemma \[94or59\] that $\{{
}, {
}, {
}\} \cap \{e', f'\} = \emptyset$.
}) = {
}, {
}) = {
}$ holds by Observation \[obs::H5properties\](1). By Lemma \[rg\] we know that $\phi({
}) = {
}) = {
}$, which implies that $\phi(x) \neq x$ for every $x \in V(\mathcal{H}) \setminus \{{
}, {
}\}$.
}, {
}$ are the only edges which do not contain ${
}$, and $\{e,f\} \setminus \{{
}, {
}, {
}\} \neq \emptyset$, it follows that $d_{{\mathcal H_{ef}}}({
}) \leq 5$. Since $\{e', f'\} \setminus \{{
}, {
}, {
}\} \neq \emptyset$, an analogous argument shows that $d_{\phi({\mathcal H_{ef}})}({
}) \leq 5$.
Suppose for a contradiction that ${
} \in \{e,f\}$. Then $d_{{\mathcal H_{ef}}}({
}) \geq 6$ and thus $d_{\phi({\mathcal H_{ef}})}(\phi({
})) \geq 6$ as well. Since, as noted above, $d_{\phi({\mathcal H_{ef}})}({
}) \leq 5$, it follows from Table \[DegreeTable\] that $\phi({
}) = {
}$. However, this contradicts the fact that $\phi$ does not fix any vertex of $V(\mathcal{H}) \setminus \{{
}, {
}\}$. It follows that ${
} \in \{e,f\}$. An analogous argument shows that ${
} \in \{e',f'\}$ as well.
Suppose for a contradiction that ${
} \notin \{e,f\}$. Then $|\phi({
})| = |{
}| = 3$ holds by Observation \[obs::monomorphisms\](d). Since ${
}$ is the only edge of $\mathcal{H}$ which intersects ${
}$ in $3$ vertices, it then follows that $\phi({
}) = {
}$. However, this contradicts the fact that ${
} \in \{e',f'\}$ as well.
We have thus shown that $\{e,f\} = \{e',f'\} = \{{
}, {
}\}$. Hence, $P = ({
}, {
}, {
}, {
})$ is the unique tight path of length $4$ in ${\mathcal H_{ef}}$ and in $\phi({\mathcal H_{ef}})$. Since $\phi({
}) = {
}$ and since $\phi(P) = P$ holds by Observation \[obs::monomorphisms\](b) it follows that $\phi({
}) = {
}$ contrary to $\phi$ not fixing any vertex of $V(\mathcal{H}) \setminus \{{
}, {
}\}$.
Concluding remarks and open problems {#sec::openprob}
====================================
As noted in the introduction, this paper originated from Beck’s open problem of deciding whether $\mathcal{R}(K_q, \aleph_0)$ is a draw or FP’s win. While it would be very interesting to solve this challenging problem, there are several natural intermediate steps one could take in order to improve one’s understanding of the problem. In this paper we constructed a $5$-uniform hypergraph ${\mathcal H}_5$ such that $\mathcal{R}^{(5)}(\mathcal{H}_5, \aleph_0)$ is a draw, thus refuting the intuition that, due to strategy stealing and Ramsey-type arguments, $\mathcal{R}^{(k)}(\mathcal{H}, \aleph_0)$ is FP’s win for every $k$ and every $k$-graph $\mathcal{H}$. It would be interesting to replace ${\mathcal H}_5$ with a graph.
\[q::graph\] Is there a graph $G$ such that $\mathcal{R}^{(2)}(G, \aleph_0)$ is a draw?
Our proof that $\mathcal{R}^{(5)}(\mathcal{H}_5, \aleph_0)$ is a draw relies heavily on the fact that $\mathcal{H}_5$ has a vertex of degree $2$. Since this is clearly not the case for $K_q$ with $q \geq 4$, it would be interesting to determine whether this condition is necessary.
\[q::minimumDegree\] Given an integer $d \geq 3$, is there a $k$-graph $\mathcal{H}$ such that $\delta(\mathcal{H}) \geq d$ and $\mathcal{R}^{(k)}(\mathcal{H}, \aleph_0)$ is a draw?
Another important ingredient in our proof that $\mathcal{R}^{(5)}(\mathcal{H}_5, \aleph_0)$ is a draw is the fact that SP can build ${\mathcal H}_5 \setminus \{z\}$ very quickly. A similar idea was used in [@FH] and in [@FHkcon] to devise explicit winning strategies for FP in various natural strong games. On the other hand, it was proved by Beck in [@BeckFast] that building a copy of $K_q$ takes time which is at least exponential in $q$. Intuitively, not being able to build a winning set quickly should not be beneficial to FP. This leads us to raise the following question.
\[q::slow\] Is there a $k$-graph $\mathcal{H}$ with minimum degree at least $3$ such that $\mathcal{R}^{(k)}(\mathcal{H}, \aleph_0)$ is a draw and, for every positive integer $n$, FP cannot win $\mathcal{R}^{(k)}(\mathcal{H}, n)$ in less than, say, $1000 |V(\mathcal{H})|$ moves?
Acknowledgment {#acknowledgment .unnumbered}
==============
Part of the research presented in this paper was conducted during the two joint Free University of Berlin–Tel Aviv University workshops on Positional Games and Extremal Combinatorics. The authors would like to thank Michael Krivelevich and Tibor Szabó for organizing these events.
[99]{}
J. Beck, Ramsey games, *Discrete Mathematics* 249 (2002), 3–30.
J. Beck, **Combinatorial games: Tic-tac-toe theory**, Encyclopedia of Mathematics and its Applications 114, Cambridge University Press, Cambridge, 2008.
N. Bowler, Winning an infinite combination of games, *Mathematika* 58 (2012), 419–431.
D. Clemens, A. Ferber, R. Glebov, D. Hefetz and A. Liebenau, Building spanning trees quickly in Maker-Breaker games, *SIAM Journal on Discrete Mathematics* 29 (3) (2015), 1683–1705.
D. Conlon, J. Fox and B. Sudakov, Hypergraph Ramsey numbers, *J. Amer. Math. Soc.* 23 (2010), 247–266.
D. Conlon, J. Fox and B. Sudakov, Recent developments in graph Ramsey theory, in: **Surveys in Combinatorics 2015**, Cambridge University Press (2015), 49–118.
P. Erdős and J. L. Selfridge, On a combinatorial game, *Journal of Combinatorial Theory Ser. A.* 14 (1973), 298–301.
A. Ferber and D. Hefetz, Winning strong games through fast strategies for weak games, *The Electronic Journal of Combinatorics* 18(1) (2011), P144.
A. Ferber and D. Hefetz, Weak and strong $k$-connectivity games, *European Journal of Combinatorics* 35 (2014), 169–183.
R. L. Graham, B. L. Rothschild and J. H. Spencer, **Ramsey theory**, 2nd edition, Wiley, 1990.
A. W. Hales and R. I. Jewett, Regularity and positional games, *Transactions of the American Mathematical Society* 106 (1963), 222–229.
D. Hefetz, M. Krivelevich, M. Stojaković and T. Szabó, Fast winning strategies in Maker-Breaker games, *Journal of Combinatorial Theory Ser. B.* 99 (2009), 39–47.
D. Hefetz, M. Krivelevich, M. Stojaković and T. Szabó, **Positional Games**, Oberwolfach Seminars 44, Birkhäuser, 2014.
M. Krivelevich, Positional games, *Proceedings of the International Congress of Mathematicians (ICM)* Vol. 4 (2014), 355–379.
I. B. Leader, Hypergraph games, Lecture notes, 2008. Available at http://tartarus.org/gareth/maths/notes/.
F. P. Ramsey, On a problem of formal logic, *Proc. London Math. Soc.* 30 (1930), 264–286.
[^1]: Department of Computer Science, Hebrew University, Jerusalem 9190401 and School of Mathematical Sciences, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, 6997801, Israel. Email: [email protected].
[^2]: Institut für Mathematik und Informatik, Freie Universität Berlin and Berlin Mathematical School, Germany. Email: [email protected]. Research supported by a Berlin Mathematical School Phase II scholarship.
[^3]: Institut für Mathematik und Informatik, Freie Universität Berlin, Germany. Email: [email protected]
[^4]: ETH Zurich, Switzerland. Email: [email protected]
[^5]: Institut für Mathematik und Informatik, Freie Universität Berlin and Berlin Mathematical School, Germany. Research supported by the FP7-PEOPLE-2013-CIG project CountGraph (ref. 630749). Email: [email protected]
[^6]: School of Mathematical Sciences, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, 6997801, Israel. Email: [email protected]
The natural world is a difficult place. Animals' lives can be tough as they face competition for resources and often hostile conditions.
However, in order to live and overcome the odds, some creatures evolved in some rather unique and surprising ways.
Here are some species that have evolved in bizarre ways to survive in their environments.
To survive the winter, up to 60% of the bodies of Alaskan Wood Frogs freeze solid. They also cease to breathe and their hearts cease to beat. They may live in temperatures as low as -80 degrees Fahrenheit because of this. They thaw out and “come back to life” in the spring.
To attain this semi-frozen condition, the organisms accumulate enormous amounts of glucose in their organs and tissues (up to ten times the usual amount). The sugar solutes act as “cryoprotectants,” preventing their cells from shrinking or dying.
Kangaroo rats have evolved to live in the desert without ever drinking water. Instead, they obtain all of their moisture from the seeds they ingest. These animals have excellent hearing and can leap up to nine feet, allowing them to dodge predators.
To live in the icy Southern Ocean that encircles Antarctica, five groups of notothenioid fish produce their own “antifreeze” proteins. The proteins in their blood bond to ice crystals, keeping the fish from freezing. This adaptation is so remarkable that it helps to explain why these fish account for 90 percent of the region’s fish biomass.
Cuttlefish have the incredible ability to alter color and texture to fit in with their environment. They can monitor how much light is absorbed in their surroundings and utilize that knowledge to imitate it with their own pigments.
They contain three skin layers (yellow, red, and brown) that may be stretched in various ways to create varied colors and patterns. Their skin also possesses papillae, which give cuttlefish the appearance of being stiff, like coral. These characteristics help cuttlefish to evade predators and sneak up on unsuspecting victims.
For a long time, scientists believed that life could not survive in hydrothermal vents deep in the ocean. However, in 1977, they discovered huge tubeworms dwelling 8,000 feet below the ocean’s surface along the Galapagos Rift. In their habitat, these tubeworms are completely dark, and they live in water containing toxic gas and acid.
These organisms lack a stomach, intestine, and eyes. They are instead “bags of bacteria” with heart-like structures and reproductive functions. The bacteria inside the worms use the toxic hydrogen sulfide in the water as an energy source to produce carbohydrates, which would kill most other animals.
Many animals have developed particular body parts tailored to life in a given habitat. Webbed feet, sharp claws, whiskers, sharp fangs, big beaks, wings, and hooves are among them.
Adaptation can protect animals from predators or harsh weather conditions. Many birds can hide among thick grass and weeds, while insects may alter their colors to fit in. This makes it tough for predators to find them and feed on them.
Adaptations are classified into three types:
- Behavioural: actions taken by an organism to help it survive and reproduce.
- Physiological: a bodily process that aids an organism's survival and reproduction.
- Structural: a feature of an organism's body that aids in survival and reproduction.
Natural selection causes this to happen. Natural selection progressively modifies the nature of the species to become more fitted to the niche. If a species becomes very well adapted to its surroundings and the environment does not change, it may persist for a very long period before becoming extinct.
Art and Performance has a powerful impact on the social, emotional, physical and cultural wellbeing of all Australians. Victoria has a vibrant Aboriginal Art and Performance sector. In addition to the positive wellbeing outcomes of a thriving Arts and Performance sector, there are significant economic opportunities for People who work in the sector.
Song and performance allow us to maintain memory and language. Being creative enables us to pay respect to our traditional practices, our Ancestors, our Culture in ways that may have been considered lost but that have always been present.
The Torch provides art, cultural and arts vocational support to Aboriginal offenders and ex-offenders in Victoria. Through a range of programs and initiatives it enables artists, both emerging and established, within the prison system to find pathways to wellness and future financial stability through art.
The Short Black Opera is a national Indigenous not-for-profit organisation, based in Melbourne. They provide training and performance opportunities for Aboriginal and Torres Strait Islander performing artists. One of their projects is the Ensemble Dutala, Australia’s first Aboriginal and Torres Strait Islander chamber ensemble.
Ilbijerri is a Woiwurrung word meaning ‘Coming Together for Ceremony’. The ILBIJERRI Theatre Company is one of Australia’s leading theatre companies, creating works by First Nations artists. Just recently, in July, ILBIJERRI put on a special online showing of Jack Charles v The Crown.
The Koorie Heritage Trust is currently showing several online exhibitions including an exhibition of Daen Sansbury-Smith’s work, called Black Crow, and a group show called Affirmation, a photographic exhibition that explores the concept of truth in the context of place, Ancestral identity and cultural pride. The group show includes photographs by Paola Balla, Deanne Dilson, Tashara Roberts and Pierra Van Sparkes.
VicHealth, an independent statutory body reporting to the Minister for Heath, has a health promotion framework for Aboriginal Victorians. The framework identified the Arts as a priority setting for action. VicHealth has supported a number of community driven arts initiatives including:
- The Black Arm Band: an ensemble of musicians drawn from Aboriginal communities across Australia.
- Songlines Aboriginal Music Corporation: Victoria’s peak Aboriginal music body.
- The Fitzroy/Collingwood Parkies DVD History Project: the project involves collating into a documentary, interviews with nine community elders, film footage and historical resources about the Aboriginal People known as the Fitzroy/ Collingwood Parkies group.
On the national level, the Indigenous Visual Arts Industry Support program, run by the Department of Infrastructure, Transport, Regional Development and Communications supports around 80 Indigenous arts centres and a number of art fairs, regional hubs and industry service organisations around Australia. In Victoria in this financial year, they are funding the Aboriginal Corporation for Frankston and Mornington Peninsula Indigenous Artists, the Gallery Kaiela Incorporated and the Koorie Heritage Trust.
Given the health, cultural and economic opportunities derived from a prosperous Arts and Performance sector, it is important to build the capacity of this sector. As evidenced above, there are many projects currently being actively pursued. Nevertheless, the sector relies too heavily on the enthusiasm and commitment of its talent to paper over the significant administrative and financial challenges of building a career in the arts. The casualisation of the workforce, for example, created financial uncertainty for arts workers even before the current pandemic.
In addition, Aboriginal and Torres Strait Islander creatives must contend with the shortcomings of Australia’s intellectual property laws in the protection of their Indigenous Cultural and Intellectual Property. The very purpose of IP law is to incentivise creativity. However, when cultural designs, stories and dances can be copied and used by People with no connection to the source community, and without permission of Traditional Owners, without any legal repercussions, this can make Aboriginal creatives wary of sharing their knowledge and creative output. This economic lens doesn’t even begin to address the significant risk of cultural harm that can occur when cultural knowledge is used without consent, or misappropriated.
Discussion question: How do communities benefit from a vibrant and sustainable arts and performance sector?
- What can be done to strengthen the Arts and Performance sector?
- What measures would empower Aboriginal creatives to share their creative output?
- How can we build culturally safe environments in the Arts and Performance Sector?
People expect me to be really into Halloween. They figure because I’m a medium, I’m into ghosts and hauntings and stuff like that. That’s not really the case. There’s nothing spooky about the work I do – it’s really the opposite. Most people will do anything to feel connected with their loved ones in Heaven, so I guess you could say they’re running toward those spirits, not away from them.
That doesn’t mean I don’t like Halloween. I think it’s fun!
Alexa and I love celebrating Halloween in the same way most people do. We get all dressed up every year and our families do too. I love the creativity that goes into the Halloween costumes and decorations, and it’s fun to see the kids running from door to door trick-or-treating.
Since I get so many questions on this topic, I thought I’d share some thoughts about Halloween and the days that immediately follow it.
My Halloween History.
When I was a kid, my father hated Halloween. He felt that when you celebrated it, you invited negativity and evil souls into the house. He still let us dress up and trick-or-treat, but he wasn’t happy about it.
I thought my dad’s attitude was unusual until other people started saying similar things and asking me questions. They wondered if celebrating Halloween, watching scary movies, or practicing mediumship invited ghosts into your house. The truth is no.
Why not? Well, it helps to understand the difference between a spirit and a ghost.
Spirits are deceased family members and friends that you loved, knew and cared for here in this world. They are familiar to you because you once shared a bond and connection with them.
A ghost is someone that you don’t know that has died. That’s what makes them a little bit scary. They too are spirits, just ones that you are not familiar with because you never knew them here in this world or had a bond with them.
Growing Up Psychic.
That’s the reason why I was so afraid of ghosts when I was a kid. When my family members in spirit used to visit me, it was fine. I loved connecting with my grandmother and other relatives in spirit. It brought me comfort, and when I was young I never even realized that they had died. The moment that strangers started to appear in spirit, it would freak me out!
The more I pushed them away or tried to ignore them, the more persistent they got. My mom explained that they recognized my gift, and just wanted to get a message through to their own loved ones. They didn’t mean any harm.
Remember that just like your loved ones in spirit, ghosts used to be someone’s family as well. Even though you didn’t know them in life, it’s possible to sense their presence under certain circumstances.
For example, years ago my neighbor died. She had loved her house, but after she passed away her family had to sell it. The new family told me they could still feel her presence there. It wasn’t spooky, but they knew that as long as that house was standing, she’d be watching over it, and the people who lived there. As strange as it was, they felt comforted by it.
Are There Evil Spirits?
There is a dark side to psychic ability, and there are such things as negative souls, just like there are negative people. My mom always taught me, “Don’t go looking for trouble.” You won’t find them unless you seek them out. Negative souls like to spend time alone. Participating in Halloween or going to a reputable medium doesn’t open up a portal or invite spirits into the house.
Have I ever sensed a malignant spirit? Yes, but they’re extremely rare. I’ve had thousands of connections with loving, well-meaning souls for every one that was even slightly negative.
I don’t spend a lot of time dwelling on negative spirits and I certainly don’t go looking for trouble. I’d never dream of touching a Ouija board, and I don’t make it a habit to seek out haunted places. Remember that negativity of any kind can only affect you if you invite it into your life.
Choose to do all things with your heart. If you don’t focus on darkness, you’ll be fine. Watch your intentions, keep your energy light, and to play it safe, don’t mess around with spells, the occult, or Ouija boards; they are not needed. The love in your heart is the strongest connection between you and your loved ones.
All Saints Day and the Day of the Dead.
While I don’t feel a special attachment to Halloween just because I’m a medium, I do feel very connected with the two days after Halloween when Mexican families honor their departed loved ones.
While we are getting dressed up and having fun being scared, in Mexico families are getting ready to honor those they love who are now in spirit. Their celebration starts late on October 31st and goes through November 2nd. November 1st is “el Día de los Inocentes” or the day of the children, and All Saints Day. November 2nd is All Souls Day, or the Day of the Dead.
In the Mexican culture, this is a special time that’s all about celebrating the lives of those who have passed over. Families visit the cemeteries where their ancestors and loved ones are buried, bring gifts, food, and pictures and spend the night with them thinking of the joyful times.
While this is not part of my culture, this celebration resonates with me and I deeply appreciate it. During this time, I like to remember my own lost loved ones in a very special way and send them prayers.
What is Heaven Like?
If you would like to learn more about Heaven and the Afterlife, you would really enjoy reading my new book ‘When Heaven Calls’. Each chapter is filled with secrets of the afterlife and divine wisdom that I have learned over my many years of working with spirit. If you are interested, you can order a copy right now on Amazon. I really hope you enjoy it.
Marine Researchers Studying Changes in Fish Life
Published: November 17, 2008
The new ocean inlet to Orange County’s Bolsa Chica wetlands that opened in 2006 now provides an opportunity for predators to enter the estuary, so a team of CSULB marine biology researchers has begun a study of the effects of these animals on marine life in the wetlands.
Biological Sciences’ Chris Lowe and graduate students Thomas Farrugia and Mario Espinoza are looking at shovelnose guitarfish, a species of ray, and gray smoothhounds, small sharks also known as sand sharks. Their research is supported by a grant from the USC Sea Grant program, administered by the National Oceanic and Atmospheric Administration.
Espinoza is from Costa Rica and is studying at CSULB with a Fulbright international education scholarship from the U.S. Department of State. He is focusing on the area’s smoothhound sharks, which typically only reach three to four feet in length and eat worms and crustaceans. Farrugia earned his bachelor of science at McGill University in Montreal, Canada, and is studying shovelnose guitarfish.
“This is only the second year since they opened the inlet by Huntington Beach so that open ocean water can now get into that estuary,” Lowe said. “The question is, how has this open inlet changed the marine organisms that live in there now that predators have open access to it?”
The shovelnoses and smoothhounds “are two coastal elasmobranchs that we know use estuaries, based on work down in Baja and other locations throughout Southern California,” Lowe explained. “It’s thought that those species go into estuaries, probably for reproduction, to mate and pup. In Southern California, we’ve lost over 90 percent of our wetland habitat, so the reopening of the Bolsa Chica estuary could be really important in terms of understanding the impacts that all that habitat loss has had on these coastal shark and ray species.
“We’re sampling throughout all of Bolsa Chica in the full tidal basin that’s closer to Huntington Beach. That project will run probably for two years. We’ll be tagging and tracking and doing beach seines to determine abundance—how many animals are in there throughout the year, what species are there, how much they move, do they stay there all summer long or do they commute periodically?”
The seines are large nets in which animals are caught and counted before being released. A number of animals also will be fitted with small acoustic tags which will enable the researchers to track their movements.
“I did my bachelor’s degree in biology at the University of Costa Rica,” Espinoza said. “As soon as I was done, I started looking for scholarship opportunities in a marine field outside my country,” which he said lacks a strong marine science graduate program with an emphasis on elasmobranch studies.
Espinoza, who wants to become a university marine biology professor, said, “My motivation to follow a marine science career and to dedicate my life to the understanding of the ecology and behavior of sharks and rays encouraged me to apply for different research opportunities that would strengthen my experience and knowledge before starting graduate school.”
After interning at the Center for Shark Research in Florida’s Mote Marine Laboratory, he learned about the CSULB marine biology program and applied for a Fulbright scholarship to study with Lowe.
“His contribution to our understanding of the coastal environment and interactions of California local fisheries has been absolutely impressive,” said Espinoza. “He is now advising me through my graduate program to investigate the seasonal residency, habitat use and foraging behavior of the gray smoothhound shark in a Southern California restoration project located in Bolsa Chica.”
The loss of Southern California wetlands “is particularly critical considering that connectivity of estuarine fish may be threatened due to the extensive habitat loss and degradation of natural systems,” Espinoza said. “Restoration of estuarine habitats is emerging as a popular mitigation approach and has been successfully implemented in several coastal areas to offset the loss of fish habitat.”
Understanding the animals’ movement over space and time is vital to understanding the biology and life history of a species and “makes it possible to assess the importance of coastal restoration projects as a viable ecological approach that will increase habitat for other coastal economic species.” This research can provide useful recommendations to marine ecosystem managers in understanding and designing protected areas along the coast.
Farrugia has similar interests. “I was accepted at several master’s and Ph.D. programs, but after meeting with Dr. Lowe and seeing the research he was involved in, I decided to do my master’s with him. It also helped that CSULB is renowned for its marine biology program,” he said.
“I’m studying shovelnose guitarfish inside the Bolsa Chica estuary. I want to understand the timing of their arrival into Bolsa Chica—they use estuaries primarily during the summer—and what factors influence their movements into and within estuaries. Through this research, I hope to understand more about the behavior and physiology of shovelnose guitarfish, which are a source of wonder in California and a source of food in Mexico.”
Farrugia said that his career goals include working in marine natural resources conservation and management through scientific biological research. “Particularly, I am interested in studying the behavior of large marine animals (fish and marine mammals) as a tool to help in marine conservation.”
For more information, visit www.csulb.edu/web/labs/sharklab.
The growing computing power, easy acquisition of large-scale data, and constantly improved algorithms have led to a new wave of artificial intelligence (AI) applications, which change the ways we live, manufacture, and do business. Along with this development, a rising concern is the relationship between AI and human intelligence, namely, whether AI systems may one day overtake, manipulate, or replace humans. In this paper, we introduce a novel concept named hybrid human-artificial intelligence (H-AI), which fuses human abilities and AI capabilities into a unified entity. It presents a challenging yet promising research direction that prompts secure and trusted AI innovations while keeping humans in the loop for effective control. We scientifically define the concept of H-AI and propose an evolution road map for the development of AI toward H-AI. We then examine the key underpinning techniques of H-AI, such as user profile modeling, cognitive computing, and human-in-the-loop machine learning. Afterward, we discuss H-AI's potential applications in the areas of smart homes, intelligent medicine, smart transportation, and smart manufacturing. Finally, we conduct a critical analysis of current challenges and open gaps in H-AI, upon which we elaborate on future research issues and directions.
Malaysia
The Malaysian Health Ministry implements a universal healthcare system that offers public health services to the entire population at heavily subsidised rates.
In accordance with the Fees (medical) Order 1982, the fee schedule set by the Ministry of Health (MOH) states that citizens only pay RM 1 (SGD$0.32) for a general outpatient consultation and RM 5 (SGD$1.62) for a specialist consultation at public health facilities. Inpatient and investigation charges are also subsidized, but vary depending on the ward class.
In addition to the subsidised charges, many groups are exempt from hospital charges, including senior citizens, people with disabilities, as well as organ and blood donors.
Non-citizens pay higher fees in accordance with the Fees Act (Medical) 1951 for Foreigners: RM 15 (US$4.85) and RM 60 (US$19.42) respectively for general and specialist outpatient consultations.
Public facilities are not allowed to refuse services to people who cannot afford to pay, and many have left without settling their bills. As a result, unpaid bills in one year (2006) totalled RM 26.1 million.
There are various public facilities available and strategically located nationwide, both in urban and rural areas. These include public hospitals, health clinics and 1Malaysia clinics (K1M), which are run by qualified nurses and medical assistants to provide basic medical care for the urban poor.
The public healthcare system co-exists with a private healthcare sector, where patients pay out-of-pocket or through private health insurance schemes for consultations, medicines and other health services. In October 2016, the implementation of a Full-Paying Patient (FPP) programme was announced in an MOH circular, and many postulated that the fee structures practiced by private sectors were being emulated due to budget cuts. This has since been clarified by MOH as false, and that the RM1 and RM5 fee structure will remain.
Singapore
Unlike the public healthcare system in Malaysia, the Singaporean Health Ministry implements a system of various schemes and compulsory savings to help citizens afford medical expenses.
The national medical savings scheme is known as Medisave, where working individuals set aside 8% to 10.5% of their monthly salary into a personal account, with which savings can be used to pay for medical expenses incurred at any local hospital for the individual or his immediate family members. However, there are withdrawal limits for inpatient cases, surgeries and certain treatments such as hospice care and psychiatric treatment.
There is also a national health insurance plan for citizens and Permanent Residents known as MediShield Life, which offers payouts so that patients pay less from out-of-pocket or Medisave for hospital bills. Premiums for the insurance are payable by Medisave, which is subsidised for those in the lower-to middle-income group. Medishield Life was launched in 2015, and reports show approximately 400,000 claims were made between November 2015 until September 2016, with a total payout of over S$600 million. The scheme is also open to Singaporeans living overseas.
Similarly, the government has also introduced ElderShield, a severe disability insurance scheme for those who require long-term care.
Other schemes include the Community Health Assist Scheme (CHAS) which was introduced in 2012 to enable citizens from lower-to-middle income households and Pioneers, who are elderly citizens born before 1950, to receive subsidies from participating GP clinics.
On top of subsidies and payouts through health insurance, Medifund, an endowment fund set up by the ministry, supplements the other schemes and serves as a safety net for needy Singaporeans who face financial difficulties with healthcare bills.
Philippines
The Philippine Health Insurance Corporation (PHIC), also known as PhilHealth, is a Government Corporation attached to the Department of Health that functions to administer the National Health Insurance Programme (NHIP). NHIP is the largest insurance programme in the country, established to provide coverage and benefit payments as a measure to ensure affordable, acceptable, available and accessible health care for citizens.
For inpatient services, PhilHealth provides basic coverage through reimbursements and is provided up to a ceiling, above which patients have to cover costs. The limits vary by hospital level, public and private, severity of case, and is also specified for the type of service such as room and board, drugs and medicines, supplies, radiology, laboratory and ancillary procedures, use of the operating room, professional fees and surgical procedures.
Certain outpatient services are also covered by PhilHealth, such as for day surgeries, dialysis, chemotherapy as well as radiotherapy, and for a tuberculosis-direct observed therapy (TB-DOTS) that was introduced in 2003.
The Universal Health Care (UHC) is a reform which aims to make essential and quality health services readily available and accessible to all Filipinos, particularly to those from the lower-income group. In order to address inequity in the health system, the UHC aims to improving the level of support provided by PhilHealth, introduce fixed payments for inpatient benefits, improve coverage on non-communicable diseases for outpatient services and improve facilities of Department of Health-retained hospitals, provincial hospitals, district hospitals and rural health units.
Hong Kong
The Hospital Authority (HA) provides citizens of Hong Kong with public hospitals and related health services such as general out-patient clinics, day hospitals and specialist clinics, which are charged as per the Gazette.
The charges are divided into public and private services, of which the former is further categorised into eligible persons and non-eligible persons.
Holders of the Hong Kong Identity Card, children under the age of 11 years who are residents in the country, as well as other persons approved by the HA’s Chief Executive are considered eligible persons, and pay HKD$45 (SGD$8.30) and HKD$100 (SGD$18.45) per attendance for general out-patient and specialist out-patient services respectively.
Those who do not fall under the category of eligible persons, however, pay up to HKD$385 (SGD$71.03) per attendance for general outpatient services and HKD$1,110 (SGD$204.80) for specialist outpatient services.
Charges for health services at private sectors do not differ between eligible or non eligible persons, and are more expensive, depending on the service. An outpatient consultation may cost up to HKD$2,160 (SGD$398.53) per initial consultation, of which subsequent follow-ups may amount up to HKD$1,420 (SGD$262) per session.
To discourage late or non-payment of medical fees, administrative charges are imposed, and this is applicable to all outstanding medical charges.
India
In India, there is a huge divide between rural and urban populations in access to healthcare. Approximately 70 percent of the population in rural areas have limited access to health facilities.
Most healthcare expenses are paid out of pocket, but many citizens from the lower income group are unable to afford medical bills, and are also unable to pay for coverage by health insurance. In order to address this, the government has implemented National Health Insurance Schemes to address health coverage for Below Poverty Line (BPL) families.
The National Urban Health Mission (NUHM) was started to meet the needs of the urban poor in healthcare, by making essential primary healthcare services available and reducing out of pocket payments for medical expenses.
The Rasthriya Arogya Nidhi (RAN) was also set up by the Ministry of Health & Family Welfare to provide financial assistance for patients who fall below the poverty line and are suffering from major life threatening diseases in order to help them receive medical treatment.
All elderly residents above the age of 60 are also eligible under the National Programme for Health Care of the Elderly (NPHCE), allowing them to receive free, specialised health care from facilities through the State health delivery system.
In order to provide citizens with comprehensive health security nationwide without exclusion or discrimination, India is aiming to achieve Universal Health Coverage, with a vision for a National Health Package to guarantee access to essential healthcare by the year 2020.
The Trossachs Collection is a personal exploration into the notion of home. Home involves an interaction between nature, community and warm human contact. To achieve this, the Trossachs Collection creates a conversation between Scotland's traditional past, and the landscape's contemporary present. Through an exploration and personal connection to her home environment, Orla aims to translate the environmental experience into textile designs. With a focus on colour, texture and tactility, this collection encourages similar experiences of emotions and connections for the user. Inspired by local folklore, history, music and landscape of the Trossachs National Park, these influences intertwine and result in this collection of printed and embroidered double sided blankets.
Trossachs blankets represent metaphorical nomadic homes: Encasing the user in their very own transportable textile home, which may be taken anywhere. When wrapped in a Trossachs blanket, you are immersed in your own sense of place, of home, wherever you may go.
Public Art Installation
Commissioned by Orkney Islands Council, and created in collaboration with poet Gabrielle Barnby to celebrate the centenary of George Mackay Brown.
Restored 100-year-old oars, hand-lettered with excerpts of poetry by George Mackay Brown and contemporary poetry by local pupils. Illustrated with icons related to the poet's life and work.
The artwork is now part of the GMB trail, and is sited in Stromness Memorial Garden.
Oars kindly donated to the project by Orkney Historic Boat Society.
Campus Events Mark 50th Anniversary of the Stonewall Riots
In recognition of the 50th anniversary of the Stonewall riots, LGBTQ+ Western and Western faculty are collaborating to present “50 Years Since Stonewall: the June 1969 Stonewall riots, memory, and terrains of LGBTQ+ liberation.” The series of events for students, staff, faculty, and the community takes place May 22 and 23, offering multidisciplinary explorations of the Stonewall riots and ongoing struggles toward queer liberation. The Stonewall riots marked one of the most galvanizing periods in the fight for sexual and gender liberation, and the six days of protest against transphobia, homophobia, and police repression offered powerful stories, movements, and acts of queer resistance, sexual and gender liberation, and racial, ethnic, and cultural solidarity. The riots inspired LGBTQ+ people throughout the country to organize in support of gay rights, and within two years after the riots, social movements for gender and sexual liberation were sparked in nearly every major city in the United States.
The events are as follows; additional details here. Faculty are welcomed to bring their classes.
Pride Postcards to LGBTQ+ Prisoners
A participatory event with Josh Cerretti, Assistant Professor of History
Wednesday, May 22, from 1-3:30 p.m. in the Miller Hall Collaborative Space
Many aspects of LGBTQ+ life were criminalized throughout US history and LGBTQ+ people remain disproportionately impacted by the criminal legal system. In memory of those arrested at Stonewall and in solidarity with those incarcerated today, we’ll be sending postcards celebrating Pride to incarcerated LGBTQ+ people. Stop by for ten minutes or stay the whole time.
Schooling After Stonewall
A panel with A Longoria, Instructor of Secondary Education, and community K-12 educators
Wednesday, May 22, from 4-6 p.m. in Miller Hall 152
What are the experiences of LGBTQ+ youth today? How can we best serve the evolving needs of Queer identities in schooling? This vision-setting panel conversation will highlight current youth work and perspectives on what schooling should look like after Stonewall. The panel conceives of schooling broadly, with a particular emphasis on K-12 schooling. It will consist of educators, community organizers, and activists. A brief overview of the state of schooling today, including legal and policy developments and implications, will precede a moderated panel.
Stones to the Wall: How to Remember a Riot
A talk by Chris E. Vargas, Assistant Professor of Art
Thursday, May 23 at 5 p.m. in Fraser 201
In this talk about his recent exhibition and residency at the New Museum in New York City entitled “Consciousness Razing: The Stonewall Re-memorialization Project,” Chris Vargas explores Stonewall as a geographically, demographically, and historically contested site. For the New Museum exhibition, Vargas’s Museum of Transgender Hirstory & Art (MOTHA) commissioned artists to propose new monuments to the 1969 Stonewall riots. In doing so, Vargas questions what we think we know about these riots, often cited as a formative event for gay liberation and the modern LGBTQI civil rights movement in the US. MOTHA’s “Consciousness Razing” finds new ways to uncover, recast, and recuperate elements of the past.
LGBTQ+ Western works to advance the holistic thriving of diverse LGBTQ+ students, faculty and staff at Western Washington University by collaboratively engaging the university community with transformational knowledge, resources, advocacy and celebration. You can sign up to receive periodic emails from LGBTQ+ Western at lgbtq.wwu.edu.
sensorless speed controller based on RF-MRAS; the third describes the model of the sensorless speed controller based on values measured directly from the induction motor, such as the voltage and current. The last presents simulation results in the Matlab-Simulink environment. These results indicate that the proposed method can determine the speed of the induction motor accurately and very quickly, and that it can be applied in practice with high reliability and low cost.
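The fragment above only names the rotor-flux MRAS (RF-MRAS) scheme, so a minimal sketch may help: a reference (voltage) model estimates the rotor flux from measured stator voltages and currents, an adaptive (current) model estimates it from the currents and the current speed estimate, and the cross product of the two flux vectors drives a PI adaptation law. The Python sketch below illustrates that standard structure; it is not the authors' implementation, and all motor parameters, gains, and demo signals are made-up placeholders.

```python
import numpy as np

class RotorFluxMRAS:
    """Minimal rotor-flux MRAS speed estimator in the stationary
    (alpha-beta) frame. Units are SI; omega is electrical rad/s."""

    def __init__(self, Rs, Rr, Ls, Lr, Lm, kp, ki, dt):
        self.Rs, self.Ls, self.Lm, self.Lr = Rs, Ls, Lm, Lr
        self.Tr = Lr / Rr                        # rotor time constant
        self.sigma = 1.0 - Lm * Lm / (Ls * Lr)   # total leakage factor
        self.kp, self.ki, self.dt = kp, ki, dt
        self.psi_v = np.zeros(2)   # flux from reference (voltage) model
        self.psi_i = np.zeros(2)   # flux from adaptive (current) model
        self.i_prev = np.zeros(2)
        self.integral = 0.0
        self.omega = 0.0           # estimated speed

    def step(self, v, i):
        di = (i - self.i_prev) / self.dt
        self.i_prev = i.copy()
        # Reference model: rotor flux from the stator voltage equation,
        # independent of the speed estimate.
        self.psi_v += self.dt * (self.Lr / self.Lm) * (
            v - self.Rs * i - self.sigma * self.Ls * di)
        # Adaptive model: rotor flux from stator currents and the
        # current speed estimate.
        a, b = self.psi_i
        self.psi_i += self.dt * np.array([
            (self.Lm * i[0] - a) / self.Tr - self.omega * b,
            (self.Lm * i[1] - b) / self.Tr + self.omega * a])
        # Adaptation law: the cross product of the two flux vectors
        # feeds a PI regulator; at convergence the fluxes align.
        err = self.psi_v[1] * self.psi_i[0] - self.psi_v[0] * self.psi_i[1]
        self.integral += self.ki * err * self.dt
        self.omega = self.kp * err + self.integral
        return self.omega

# Demo with placeholder parameters and synthetic 50 Hz signals; the
# signals merely exercise the code and are not consistent with a real
# machine, so the printed value is not a physically meaningful speed.
est = RotorFluxMRAS(Rs=1.5, Rr=1.2, Ls=0.17, Lr=0.17, Lm=0.16,
                    kp=5.0, ki=100.0, dt=1e-4)
for k in range(1000):
    t = k * est.dt
    v = 311.0 * np.array([np.cos(100 * np.pi * t), np.sin(100 * np.pi * t)])
    i = 8.0 * np.array([np.cos(100 * np.pi * t - 0.6),
                        np.sin(100 * np.pi * t - 0.6)])
    w = est.step(v, i)
print(f"estimated speed after 0.1 s: {w:.1f} rad/s (electrical)")
```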
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to a fuel cell system integrated to utilize hydrogen produced by steam reforming of methanol.
2. The Prior Art
Fuel cells generate electricity through galvanic combustion of fuel process gas with oxidant process gas. Typically oxidant gas can be obtained from the fuel cell environment with little, if any, processing. The fuel process gas, on the other hand, is usually hydrogen and its generation requires processing of other fuels such as methanol. Direct oxidation of fuels such as methanol in fuel cells at practical current densities with acceptable catalyst loadings is not as economically attractive as conversion of methanol fuel to a hydrogen-rich mixture of gases via steam reforming and subsequent electrochemical conversion of the hydrogen-rich fuel stream to direct current in the fuel cell.
A very attractive fuel cell system currently undergoing commercial consideration is the reformed methanol fuel-phosphoric acid electrolyte-air system. Primary advantages of phosphoric acid electrolyte (85 wt. %) include the ability to operate with fuel and ambient air containing CO2, the ability to operate with a thin matrix electrolyte (no liquid circulation) and the chemical stability of the electrolyte over the operating temperature of the cell, e.g. 180°-200° C.
The fuel cell itself is only part of the overall system, and other components of the system, e.g., generation of hydrogen fuel, are likewise important in terms of overall system size and cost effectiveness.
In one method used by the art to produce hydrogen by steam reforming, a methanol and steam feedstock is passed through catalyst filled tubes disposed within a reactor or reformer. Fuel and air are combusted outside of the tubes in the reformer to provide heat for the endothermic catalytic reaction taking place within the tubes at about 300° C. In this process, the mixture of methanol and steam is converted to a gaseous stream consisting primarily of hydrogen (about 68%), CO2 (about 21.7%), CO (about 1.5%) and H2O (about 8.8%). In order to improve the thermal efficiency of such apparatus, efforts have been directed to improve the uniformity of heat distribution in the tubes within the reactor while minimizing the amount of energy used to produce each unit of hydrogen containing gas.
For the most efficient operation of the steam reforming reaction, large surface areas are required to transfer the heat from the combusted gases to the tubes. In reformers presently used for steam reforming, small diameter reaction tubes are clustered closely together in the furnace so that heat transfer from the combusting gases in the reactor into the catalyst packed tubes is optimized.
The use of a plurality of tubes to accomplish heat transfer contributes to the large size and high cost of the reformer. A second drawback to such reformers is that the heat for the steam reforming process is provided indirectly by means of heat transfer through tube walls. This inefficient heat transfer has a detrimental effect in fuel cell systems in which the reformer and the fuel cell are fully integrated, i.e. the combustion gases for the reforming reaction are derived from the fuel cell exhaust. Thus, at the inlet of the reformer, it is impossible, because of the highly endothermic nature of the reaction, to supply enough heat to the surface area of the reformer tubes, so there tends to be a large decrease in reactant temperature in the area adjacent the inlet. A large portion of each reactor tube, as a result, operates at an undesirably low reaction temperature. The resultant effect on the fuel cell system is that, in order to effect complete methanol conversion, the reformer must necessarily be of a large size and concomitant high cost.
In copending patent application Ser. No. 743,204 filed of even date herewith, the production of hydrogen by the steam reforming of methanol is accomplished in a reformer of substantially reduced size by superheating a gaseous mixture of water and methanol at a temperature of about 800° to about 1100° F. and then feeding the superheated gaseous mixture to a reformer in contact with the catalyst bed contained therein, whereby at least a major portion of the heat for the endothermic steam reforming reaction is provided directly by the sensible heat in the superheated steam/methanol stream.
As a result of direct heating of the reformer feed gases augmenting indirect heat transfer through the wall of the reactor, a shell-and-multiple-tube reactor arrangement or other means of increasing the heat transfer surface is not always required and the complexity and overall volume of the reformer can be substantially reduced. This invention is based upon the realization that an efficient, practical integrated reformer-fuel cell system and process can be achieved by using a superheater, an essentially adiabatic methanol reformer and a fuel cell wherein the various exhaust streams from the components are utilized with other components of the system. In this system, the water to methanol ratio and temperature of the stream leaving the superheater are of such values that substantially all (at least about 75%, preferably 90%) of the heat required for the endothermic reforming reaction is contained within the reaction stream itself and, at most, only a small portion of the heat required for reforming is supplied through the wall of the reforming reactor. In addition to the reformer size reduction achieved by the use of the process disclosed in copending U.S. Ser. No. 743,204, a further reduction in reactor size is achieved by use of an essentially adiabatic reactor. This invention is also based on the realization that it is highly advantageous to integrate the steam reforming process disclosed in copending U.S. Ser. No. 743,204 with a fuel cell to form a fuel cell system whereby a continuous supply of hydrogen could be provided to the fuel cell from an essentially adiabatic steam reformer, the gases exhausted from the anode of the fuel cell providing thermal energy via combustion for superheating the water/methanol mixture.
It is therefore a primary object of the present invention to efficiently integrate a fuel cell with the steam reforming process to provide a thermochemical process for producing electrical energy in which the heat required for the endothermic reforming reaction is contained substantially completely within the stream fed to the reformer which produces hydrogen for a fuel cell so that a compact and efficient system may be obtained.
The above object of the invention is achieved in accordance with the fuel cell system of the present invention, comprised of a heat exchanger, a burner, an adiabatic steam reformer and a fuel cell, wherein a superheated mixture of water and methanol is first converted by an essentially adiabatic endothermic catalytic reforming reaction to hydrogen. The hydrogen, generated in the reformer, is directed to the fuel electrode of the fuel cell, and air is directed to the oxygen electrode to effect an electrochemical reaction to produce electricity and gaseous reaction products. A portion of the exhaust gases from the fuel electrode is combustible as it contains unreacted hydrogen. Furthermore, it is desirable to withdraw this portion of gas from the fuel cell to maintain a hydrogen-rich stream in the fuel cell, thus optimizing fuel cell operation in accordance with the present state of the fuel cell art. The combustible gas exhausted from the fuel electrode is burned in the burner, the exhaust of which is fed to the heat exchanger to supply heat for superheating the water/methanol mixture fed to the reformer. Even though large amounts of water are used in the system, and more heat is therefore required to vaporize and preheat such water-rich methanol/water mixtures, this heat can be effectively recovered and used in the system and process of the present invention. Further, parasitic power requirements are decreased, and the low concentration of carbon monoxide in the reformate should lead to improved fuel cell efficiency and extended fuel cell life, so the net methanol demand remains essentially constant or may decrease slightly (around 20%). Thus, the net energy production is at least substantially equivalent to that obtained using mixtures containing lesser amounts of water.
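As a rough check on the claim that the superheated feed can carry essentially all of the reaction heat, the sketch below estimates, for a given water/methanol mole ratio, the inlet temperature at which the feed's sensible heat alone balances the heat of reforming. It is a back-of-the-envelope illustration only: the heat of reaction and mean heat capacities are assumed textbook-level constants, not values from this disclosure, and the product stream's heat capacity is crudely approximated by the feed's.

```python
# Rough adiabatic heat balance for the superheated-feed reformer.
# Assumed constants (approximate mean values, for illustration only):
DH_RXN = 49.5e3   # J per mol CH3OH for CH3OH + H2O -> CO2 + 3 H2
CP_H2O = 37.0     # J/(mol*K), steam
CP_MEOH = 60.0    # J/(mol*K), methanol vapor

def f_to_k(t_f):
    return (t_f - 32.0) * 5.0 / 9.0 + 273.15

def k_to_f(t_k):
    return (t_k - 273.15) * 9.0 / 5.0 + 32.0

def required_inlet_f(water_to_methanol, outlet_f):
    """Inlet temperature (deg F) at which the sensible heat released by
    the stream in cooling from inlet to outlet equals the heat of
    reaction, per mole of methanol, at full conversion and constant Cp."""
    cp_feed = CP_MEOH + water_to_methanol * CP_H2O   # per mol CH3OH
    return k_to_f(f_to_k(outlet_f) + DH_RXN / cp_feed)

for ratio in (2.5, 3.0, 4.0):
    print(f"H2O/MeOH = {ratio}: inlet ~ {required_inlet_f(ratio, 520):.0f} F")
```

Under these crude assumptions the required inlet temperatures land in the neighborhood of the 850° to 1100° F. superheat range described in this disclosure.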
DETAILED DESCRIPTION OF THE INVENTION
Having set forth its general nature, the invention will best be understood from the subsequent more detailed description wherein reference will be made to the accompanying drawings which illustrate systems suitable for practicing the present invention.
Reference is now made to FIG. 1 of the drawings which schematically illustrates a flow scheme in accordance with this invention of the steam reforming of methanol for the production of hydrogen therefrom. As illustrated in FIG. 1, a water and methanol feedstock having a water/methanol mole ratio ranging from about 1.0 to about 10.0, preferably about 2.0 to about 9.0, and more preferably about 2.5 to about 4.0, is supplied via conduit 10 to vaporizer 11 wherein the water/methanol feed supplied thereto is heated to a temperature of about 200° to about 500° F. to convert the feedstock into a gaseous mixture. The hot gaseous steam/methanol stream then exits the vaporizer via line 12 and is supplied to superheater coil 13 contained in burner 14. The gaseous mixture contained in coil 13 is superheated to a temperature of about 700° to about 1100° F., and preferably about 850° F. to about 1000° F., the fuel for heating the mixture being supplied to burner 14 via conduit 16 together with an oxidizing gas such as air or another oxygen containing gas via conduit 15. When the reforming system is integrated with a fuel cell, the fuel burned in burner 14 includes unreacted hydrogen gas exhausted from the anode side of the fuel cell, which undergoes combustion with an oxidizing gas such as air or oxygen. The temperature and composition of the methanol/steam mixture leaving the superheater are such that at most only minimal additional heat will be required to obtain essentially complete conversion of the methanol contained therein. Table I sets forth the variation in weight hourly space velocity (and thereby reactor size) obtained by varying the water/methanol mole ratio from 4.5 to 9.0.

[Table I: weight hourly space velocity as a function of the water/methanol mole ratio; data not reproduced.]
For purposes of the present invention, H2O to MeOH mole ratios of from about 2.5 to about 4.5 are preferred, at temperatures of from about 900° to about 1100° F.
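Weight hourly space velocity is what links the throughput figures of Table I to reactor size: it is the mass of feed processed per hour per unit mass of catalyst, so for a fixed feed rate the required catalyst inventory scales inversely with it. A small illustration follows; the relationship is the standard definition, but the feed rate and WHSV values are hypothetical.

```python
def catalyst_mass(feed_mass_rate_kg_h, whsv_per_h):
    """WHSV = (mass of feed per hour) / (mass of catalyst), so the
    catalyst inventory is the feed rate divided by the WHSV."""
    return feed_mass_rate_kg_h / whsv_per_h

# Hypothetical example: 100 kg/h of steam/methanol feed.
for whsv in (1.0, 2.0, 5.0):
    print(f"WHSV = {whsv:>3} 1/h -> catalyst mass = "
          f"{catalyst_mass(100.0, whsv):.0f} kg")
```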
Gases resulting from the combustion reaction may exit burner 14 via line 14A to reformer 18 in contact with the outside of the catalyst bed. This would provide some additional heat to the reforming reaction and prevent heat loss from reactor 18, thereby reducing the size of reactor 18. In view of the fact that substantially all of the heat for the reforming reaction is supplied by preheating of the reformer feed gases, reformer 18 can be constructed in the form of a single tube having a length to diameter (aspect) ratio of less than 10:1, preferably less than 8:1, more preferably less than 6:1, most preferably from about 2:1 to about 6:1. The superheated steam/methanol gaseous mixture exits superheater coil 13 at a temperature of 850° to 1000° F. and a pressure of 14.7 to 150 psia via line 17 and is supplied to reformer 18 at the desired superheated temperature and pressure.
The superheated steam/methanol gaseous mixture is reformed as it passes through a tube packed with a suitable catalyst (not shown) contained in reformer 18. The steam reforming catalyst is typically a metal or metal oxide supported on an inert ceramic material. For example, a suitable steam reforming catalyst is zinc oxide (e.g. about 30 to 65% by weight zinc)/chromium oxide (about 5 to 35% by weight chromium) or a zinc oxide (about 5 to 20% by weight zinc)/copper oxide (about 15 to 40% by weight copper) combination supported on alumina (about 15 to 50% by weight).
It has been determined that steam reforming in accordance with the practice of the present invention is optimized, and heating is accomplished more readily, when the reformer tube is divided into two catalyst sections: a first section, from the inlet of the reactor tube to an intermediate position in the reactor tube, containing a catalyst which has relatively low activity but good resistance to high temperatures, such as zinc/chromium oxides, and a second section, extending from the end of the first section to the outlet area of the reactor tube, containing a high activity catalyst such as copper/zinc oxides. Alternatively, the low activity, high temperature resistant catalyst may be used by itself.
In order to accommodate the endothermicity of the reforming reaction, heat is provided to reformer 18 as the sensible heat contained in the superheated gases. Thus, when methanol vapors and steam contact a catalyst such as a combination of zinc oxide and copper oxide at 500° to 900° F. at atmospheric or higher pressure, methanol in effect decomposes to carbon monoxide and hydrogen while the carbon monoxide and steam correspondingly react according to the well known water gas shift reaction to form carbon dioxide and hydrogen as set forth below:

CH3OH → CO + 2 H2
CO + H2O → CO2 + H2

so that the overall reaction taking place in reformer 18 is:

CH3OH + H2O → CO2 + 3 H2
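Because the two reactions above fix the stoichiometry, the reformate composition follows directly from the feed ratio and the fraction of carbon leaving as CO. The sketch below assumes complete methanol conversion; the feed ratio and CO slip chosen for the example are illustrative, picked to land near the roughly 68% H2, 21.7% CO2, 1.5% CO, 8.8% H2O stream quoted earlier in this disclosure.

```python
def reformate_mole_fractions(water_to_methanol, co_slip):
    """Wet outlet composition per mole of methanol fed, assuming full
    conversion: decomposition makes 1 CO + 2 H2, then the shift converts
    a fraction (1 - co_slip) of the CO, each conversion consuming one
    H2O and producing one H2."""
    h2 = 3.0 - co_slip
    co = co_slip
    co2 = 1.0 - co_slip
    h2o = water_to_methanol - (1.0 - co_slip)
    total = h2 + co + co2 + h2o
    return {species: amount / total for species, amount in
            (("H2", h2), ("CO2", co2), ("CO", co), ("H2O", h2o))}

print(reformate_mole_fractions(water_to_methanol=1.3, co_slip=0.05))
```

With a water/methanol ratio of about 1.3 and a 5% CO slip, this approximately reproduces the composition quoted above.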
Thus, within reformer 18, methanol and steam react endothermically at high temperature to produce a gaseous product consisting primarily of steam, hydrogen and carbon dioxide, which is recovered from reactor 18 and supplied via conduit 19 either to condenser means 20, wherein most of the water is removed from the gaseous hydrogen/carbon dioxide mixture by cooling the gaseous mixture to condense the water, or directly via line 19a to the fuel cell. Where condensing means are used, water exits condenser 20 via line 21 and a gaseous mixture of hydrogen and carbon dioxide exits condenser 20 via line 22 and in this state may be supplied for direct utilization at the fuel side or anode of a fuel cell. If desired, the hydrogen/carbon dioxide mixture may be further fractionated, by means not shown, to recover separated quantities of hydrogen and carbon dioxide.
The introduction of superheated steam and methanol of the preferred temperatures and compositions into the catalyst bed in the reforming system illustrated in FIG. 1 permits the reforming apparatus to be made more compact, at least substantially narrower and with fewer reaction tubes than an apparatus relying on a standard feed of water and methanol, which would require a large number of reaction tubes for producing an equivalent reforming effect. For many applications, the reformer can, if desired, be constructed in the form of a single tube. In a typical hydrogen production process using the reforming system illustrated in FIG. 1, methanol is passed with steam over a catalyst at pressures typically ranging from 14.7 to 150 psia and temperatures in the range of about 850° to about 1000° F. Typical steam to methanol mole ratios (H₂O/carbon) are in the range of about 2.5:1 to about 4:1. The conversion of methanol may be effected in one pass over the catalyst bed contained in the reformer.
FIG. 1 as described above, schematically shows small scale equipment for carrying out the methanol steam reforming process of the present invention. The foregoing principles are readily applicable to the design of large scale equipment for the production of hydrogen in accordance with well known techniques.
The system shown in FIG. 1 for steam reforming methanol into hydrogen is particularly adapted for use in, and can be efficiently integrated with, a fuel cell system. FIG. 2 shows one embodiment of a method and means for efficiently integrating a fuel cell with the reforming system illustrated in FIG. 1. FIG. 2 is a schematic diagram of a system which includes a methanol reformer integrated with a phosphoric acid electrolyte fuel cell.
Referring now to FIG. 2, in operation, water from collection tank 20 at less than 200° F., e.g. about 135° F., is introduced by way of charge line 20A to pump 21 and pressurized to 1-10 atmospheres (14.7 to 150 psia). Methanol feed from methanol supply 80 is fed via line 79 to pump 76, which pressurizes the methanol to 1 to 10 atmospheres (14.7 to 150 psia) and delivers it via line 78 to mix with the water feed (line 24 and mixer 75). The pressurized mixture is then pumped through condenser 22 where it is used to cool hot (e.g. 520° F.) steam reformate exhausted from reformer 23. The temperature of the gaseous effluent from reformer 23 is adjusted to from about 135° to about 150° F. by heat exchange with the feedstock and, correspondingly, the temperature of the feedstock is raised to about 235° F. The preheated feedstock is passed to vaporizer 25 via line 74. At vaporizer 25 the water/methanol feedstock is vaporized completely and heated to about 350° F. The vaporized feedstock is directed through heat exchanger 26 via conduit 26A where it is superheated to about 900° F. The superheated feedstock is supplied to the inlet portion of reformer 23 via conduit 26B and into contact with a catalyst bed (not shown) contained within reformer 23. The superheated steam/methanol feedstock contains sufficient sensible heat to effect the endothermic hydrogen producing reforming reaction within reformer 23 without a substantial external heat source, and there is produced a raw gaseous effluent stream having the following approximate composition in mole %: H₂ 23.0 to 60.0, CO₂ 7.6 to 20.0, CO 0.0035 to 2.0 and H₂O 20.0 to 70.0.
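The overall stoichiometry set forth above fixes the ideal composition of this raw reformate. The following minimal Python sketch (my own illustration, not part of the specification; it assumes complete methanol conversion and complete shift to CO₂, so the trace CO reported above is ignored) reproduces the quoted mole-% ranges at representative steam/methanol ratios:

```python
# Ideal reformate composition for CH3OH + H2O -> CO2 + 3 H2,
# assuming complete conversion and complete water-gas shift.

def reformate_mole_fractions(h2o_per_meoh):
    """Mole fractions of the raw product, per mole of methanol fed,
    for a steam/methanol mole ratio h2o_per_meoh >= 1."""
    h2, co2 = 3.0, 1.0
    h2o = h2o_per_meoh - 1.0  # unreacted excess steam
    total = h2 + co2 + h2o
    return {"H2": h2 / total, "CO2": co2 / total, "H2O": h2o / total}

for ratio in (2.5, 4.0, 9.0):
    print(ratio, reformate_mole_fractions(ratio))
# 2.5 -> ~55% H2, ~18% CO2, ~27% H2O; 9.0 -> ~25% H2, ~8% CO2, ~67% H2O,
# consistent with the approximate mole-% ranges quoted above.
```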
The hot (e.g. 520° F.) product gas stream exiting reformer 23 is passed through line 23A and into condenser 22 where it is cooled to a temperature in the range of about 135° to about 150° F. to condense much of its water content by heat exchange with the as yet unreformed water/methanol feedstock being circulated through the condenser 22 on its way to the vaporizer 25, as previously described. The cooled reformate is passed from condenser 22 via valved conduit 27 to heat exchanger 31. Water separated out in condenser 22 is passed into collection tank 20 via conduit 29 to provide some of the water for the methanol/water reforming reaction. In heat exchanger 31, the hydrogen containing effluent is heated to about 315° F. by the fuel cell liquid coolant. The heated effluent is then directed via conduit 31A to anode compartment 32 of fuel cell 33. Ambient air is passed from a suitable source 34 through heat exchanger 36, heated to about 275° F. therein and passes into the cathode compartment 37 of the fuel cell 33 via conduit 36A.
The hot (e.g., 375° F.) exhaust gases from the cathode compartment 37 are routed via conduit 38 to heat exchanger 36 to preheat the incoming air to the cathode compartment 37 from the air source 34, and by passage through the heat exchanger are cooled to 209° F. The cathode exhaust includes oxygen from the air which was not consumed by the fuel cell as well as a considerable amount of water which was produced in the fuel cell at the cathode. The cooled cathode exhaust containing a large concentration of water vapor is routed via conduit 40 to condenser 42 to separate the gaseous cathode effluent constituents (e.g. O₂, N₂) from the water in the cathode exhaust, which water is condensed and routed via conduit 43 to be recycled for use in collection tank 20. The gaseous cathode effluents separated from the water condensed in condenser 42 are vented to the atmosphere through gas outlet 44 in the condenser 42.
The gaseous exhaust from anode compartment 32 of fuel cell 33 is routed via conduit 46 to burner 47. The anode exhaust, which is at a temperature of 350° to 400° F., contains some hydrogen which has not been consumed by the fuel cell, as well as carbon dioxide and water. Ambient air enters burner 47 via conduit 48A. The anode exhaust and air admitted to burner 47 mix and burn in burner 47. The burner combustion products, including the water (in the form of steam), leave burner 47 via conduit 49 and are passed through heat exchanger 26 to supply heat to superheat the water/methanol feedstock already vaporized in vaporizer 25. The cooled burner exhaust from heat exchanger 26 is passed to reformer 23 via conduit 51 to provide additional heat while retarding reformer heat loss, and is then routed via conduit 52 to heat exchanger 50. After further cooling in heat exchanger 50, the burner exhaust gases are routed via conduit 53 to condenser 42 for recovery of water, which is routed to collection tank 20 for use in the reformer feedstock.
Heat transfer fluid (e.g. a mineral oil type) is circulated through fuel cell 33 to maintain the desired operating temperature, namely 335°-400° F. Heat transfer fluid, e.g. at a temperature of about 345° F., exhausted from the fuel cell 33 is routed to vaporizer 25 via conduit 54. At vaporizer 25, the heat transfer fluid is cooled and the feedstock is vaporized and preheated therein to about 335° F. prior to being superheated to 900° F. in heat exchanger 26. The amount of heat required for vaporization and preheating of the feedstock is sufficient to produce about a 5° F. drop in the temperature of the heat transfer fluid from the fuel cell, i.e. from 345° F. (the temperature at the fuel cell exit) to 340° F. Typically, the heat transfer fluid circulated through vaporizer 25 is further cooled by being pumped through heat exchanger 56 by pump 55 prior to its passage to heat exchanger 31 via conduit 57 and its ultimate return to fuel cell 33 via conduit 58.
From the above description of suitable means for conducting the method of the present invention, it will be clear that various alternatives exist for maintaining the heat balance during the practice of the reforming process of the present invention when integrated with a phosphoric acid fuel cell system of the type hereinbefore described. The selection of a particular mode of operation will be dictated by overall process economics prevalent with any particular H₂-air fuel cell system and the desire to maximize the production of gaseous hydrogen while operating under the most beneficial conditions of temperature and pressure.
The following examples are offered for a better understanding of the reforming process of the present invention, but the invention is not to be construed as limited thereto.
EXAMPLE I
A mixture of air and hydrogen was separately fed to a burner equipped with a heating coil of the type shown in FIG. 1 of the drawings. In separate runs, a water/methanol feedstock at molar feed ratios of 4.5 and 9.0 was preheated in a mineral oil heated vaporizer 11. The preheated water/methanol feedstock was passed into heating coil 13 of burner 14 and superheated to 900° F. The water/methanol feedstock exited the burner at 900° F. and was passed into the inlet section of experimental subscale reformer 18, which consisted of a one-inch diameter pipe with a one foot long catalyst bed consisting of 206 grams of a ZnO/CuO combination catalyst on an alumina support. The catalyst, of the type conventionally used for the water gas shift reaction, comprised 11.6% by weight Zn, 27.5% by weight Cu, and 29.9% by weight alumina. Reforming of the methanol in reformer 18 was accomplished at 14.7 psia and a constant weight hourly space velocity of 1.5 grams (gm) methanol feed/gm catalyst/hour. Steam reforming took place within the catalyst bed with the heat being provided directly thereto by the superheated gases flowing through reformer 18.
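Given the reported space velocity and bed loading, the implied methanol feed rate follows directly (a sketch of my own arithmetic; the figure itself is not stated in the example):

```python
# Weight hourly space velocity (WHSV) ties feed rate to catalyst mass:
# WHSV [g feed / g catalyst / h] * catalyst mass [g] = feed rate [g/h].
catalyst_mass_g = 206.0  # ZnO/CuO catalyst charge in the subscale bed
whsv = 1.5               # g methanol feed per g catalyst per hour

print(whsv * catalyst_mass_g)  # -> 309.0 g methanol per hour
```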
In both runs, effluent samples collected from condenser 20 and analyzed for CO content using a gas chromatograph indicated that the carbon monoxide level was below the calibration range of the gas chromatograph (100-200 ppm).
The methanol conversion for the 4.5 and 9.0 water/methanol molar feed ratios in the two runs was 84.6 and 96.2%, respectively. The overall first order rate constants were 2.8 and 4.9 hr⁻¹, respectively. The projected weight hourly space velocities for the 4.5 and 9.0 water/methanol molar feed ratios in the two runs to yield 99.8% methanol conversion were calculated to be 0.45 and 0.789 gm methanol feed/gm catalyst/hour, respectively.
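These figures are mutually consistent under a pseudo-first-order, plug-flow treatment of the bed, k = WHSV × ln(1/(1 − X)). The relation is my inference from the quoted numbers rather than a formula stated in the example, but it reproduces all four values:

```python
import math

def rate_constant(whsv, conversion):
    """First-order rate constant (hr^-1) from WHSV and fractional conversion,
    assuming plug flow: k = WHSV * ln(1 / (1 - X))."""
    return whsv * math.log(1.0 / (1.0 - conversion))

def whsv_for(conversion, k):
    """WHSV needed to reach a target conversion for a given rate constant."""
    return k / math.log(1.0 / (1.0 - conversion))

print(rate_constant(1.5, 0.846))  # -> ~2.8 hr^-1  (4.5 ratio run)
print(rate_constant(1.5, 0.962))  # -> ~4.9 hr^-1  (9.0 ratio run)
print(whsv_for(0.998, 2.8))       # -> ~0.45 gm feed/gm catalyst/hour
print(whsv_for(0.998, 4.9))       # -> ~0.79 gm feed/gm catalyst/hour
```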
The temperature profile through the length of the reformer is shown in Table II below. ##TBL2##
The gaseous product exiting the reformer was cooled, collected and analyzed with a gas chromatograph (G.C.) during the course of the reforming reaction. The conversion results are summarized in Table III below. ##TBL3##
EXAMPLE II
An experiment was conducted with a shell and tube type reactor in which the catalyst was divided into two sections. The first section extended six inches from the inlet of the 13 reactor tubes. The second catalyst section extended the remaining 18 inches of the tubes to the outlet. The first section contained a low activity, high temperature resistant catalyst consisting of 978.0 grams of a zinc-chromium oxide catalyst composed of 55.0% by weight Zn and 22.0% by weight chromium oxide. The second catalyst section contained 5884.0 grams of a high activity, low temperature catalyst consisting of a ZnO/CuO combination with an alumina support; the catalyst comprised 11.6% by weight Zn, 27.5% by weight Cu, and 29.9% by weight alumina and was of the type conventionally used for the water-gas shift reaction.
The water/methanol feedstock molar ratio was 2.0. Effluent samples collected from condenser 20 and analyzed for CO level indicated a concentration varying from 0.5 to 2.0 weight percent. The methanol conversion was 99.5%. The first order rate constant was 2.44 hr⁻¹. The weight hourly space velocity was 0.46 gm methanol feed/gm catalyst/hour.
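The same assumed pseudo-first-order relation used for Example I also reproduces this constant (reusing rate_constant from the sketch above):

```python
print(rate_constant(0.46, 0.995))  # -> ~2.44 hr^-1, matching the reported value
```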
The temperature profile through the length of the reformer is shown in Table IV below. This illustrates the effect of the upper, high temperature catalyst section in protecting the lower, low temperature catalyst from the high inlet temperatures necessary to complete the reaction with an optimally sized reactor. ##TBL4##
The conversion results are summarized in Table V below. ##TBL5##
While specific components of the present system are defined in the working examples above, many other variables may be introduced which may in any way affect, enhance or otherwise improve the present invention. These are intended to be included herein.
Although variations are shown in the present application, many modifications and ramifications may occur to those skilled in the art upon reading the present disclosure. These, too, are intended to be included herein. | |
This invention relates to improved cooling arrangements for fuel cell systems.
Among the various types of fuel cell systems are those which include subassemblies of two bipolar plates between which is supported an electrolyte, such as an acid, in a matrix. The subassemblies, herein referred to as fuel cells, are oriented one atop another and electrically connected in series to form a fuel cell stack. Operation of the fuel cell, for example the reaction of hydrogen and oxygen to produce electrical energy as well as water and heat, is exothermic, and cooling of the cell components is necessary in order to maintain component integrity. For example, the bipolar plates or the electrolyte matrix may be made of carbonaceous material bonded by a resin which tends to degrade at high temperatures. Prolonged operation at high temperatures would tend to degrade many components of a typical fuel cell. Further, the exothermic reaction can result in uneven temperature distribution across a fuel cell, thus limiting cell-operating temperature and efficiency, and additionally raising concerns about catalyst poisoning, for example, by carbon monoxide.
Accordingly, fuel cell systems have in the past been proposed with closed liquid cooling loops. Typically proposed are systems comprising a plurality of stacked cells where every fourth cell or so includes small metallic tubing through which cooling water is recirculated. Circulatory power is accordingly required, detracting from overall cell efficiency. This is complicated by large pressure drops in small diameter tubing, and the susceptibility of the cooling tubes to attack by mediums within the cell stack, such as acids in certain designs.
Also proposed are systems wherein large amounts of an oxidant, such as air, in quantities which are multiples of the stoichiometric amount necessary to carry out the electrochemical reaction, are circulated through a stack of fuel cells to additionally function as a cooling medium. As with liquid-cooled systems, an associated penalty is the large amount of circulatory power required.
More recently proposed have been systems including a stack of fuel cells with a cooling module placed between every fourth or so fuel cell in the stack. Air is manifolded so as to flow in parallel through the process oxidant channels of the fuel cells, as well as through cooling passages of the cooling module. The cooling module passages are much larger than the fuel cell process channels so that approximately eighty percent of the air flows through the cooling cell passages and the balance through the process cell channels. While such systems represent an improvement in terms of mechanical power requirements, additional improvements can be made. For example, where the amount of airflow is reasonable, that is, where an amount which does not require excess circulatory power is utilized, the air flowing through the cooling channels absorbs substantial amounts of heat energy as the cooling passage is traversed, resulting in less cooling at the exit end of the channel. This condition results in an uneven temperature profile in the fuel cell stack and attendant unbalanced reaction rates, voltage and current distributions, and limits maximum operating temperatures.
It is, therefore, an object of this invention to provide improved cooling arrangements for stacked fuel cell systems which preferably do not suffer excessively high pressure drops and circulatory power requirements and which provide for better temperature distribution throughout the fuel cell stack.
The invention resides in an electrochemical cell system of the type wherein two process fuel cells connected electrically are separated by a cooling module and a fluid oxidant is fed in parallel through process channels in said fuel cells and through cooling passages in said cooling module, said cooling passages having an inlet and an outlet, characterised in that: said cooling passages are of a variable surface area per unit length which generally increases from said inlet to said outlet.
The invention also consists in an electrochemical cell system including (a) a plurality of stacked fuel cells electrically connected in series, each said cell having an electrolyte and electrodes disposed between a pair of bipolar plates; oxidant channels defined by said bipolar plates disposed on one side of said electrolyte for allowing the passage of an oxidant adjacent said electrolyte, and fuel channels defined by said bipolar plates disposed adjacent an opposite side of said electrolyte for allowing the passage of a fuel adjacent said electrolyte; and (b) a cooling module disposed between two of said fuel cells, said module including a plurality of cooling passages disposed to allow passage of a cooling medium therethrough, said cooling passages having an inlet and an outlet, said cooling passages having a surface area per unit length which increases in a predetermined manner from said inlet to said outlet.
Advantageously, said oxidant and cooling passages are substantially parallel, said system further comprising means for flowing an oxidant through said oxidant channels and a cooling fluid through said cooling channels in generally the same direction.
The cooling passages are substantially larger than the fuel cell process channels. The process channels are of generally constant cross-section throughout their length, or can include a variable cross section. The cooling passages, however, are provided with a surface area that varies in a predetermined fashion from inlet to outlet. More specifically, the cooling passages are provided with smaller cross-sectional areas and/or larger surface areas at the outlets. The cooling passage inlets are preferably arranged so that the inlets are along the side of the cell stack exposed to fresh oxidant, and the outlets are along the side of the cell stack exposed to depleted oxidant. The surface area of the cooling cell passages progressively increases from inlet to outlet. In this manner, as the cooling air traverses the cell passages absorbing heat energy and correspondingly lowering its cooling ability, it also contacts the larger surface area, correspondingly increasing its cooling ability. The net result is more evenly distributed cooling resulting in more uniform cell temperatures.
In addition to varying the cooling surface area of the passages with position along the flow path within the cell, the lateral distance among adjacent channels can also be advantageously varied to more closely match uneven reaction distribution which tends to be higher at the process fuel inlet end of the cell and lower at the process fuel outlet end of the cell.
There are a number of manners in which the surface area of the cooling passages can be varied, including branching the passages from a singular inlet to a plurality of preferably smaller outlets. The passage shape can also be modified, for example, from rectangular to cruciform. Substantially rectangular-based channel shapes are preferred in order to most advantageously match cooling requirements with distribution of heat generation.
The invention will become more apparent from the following description of exemplary embodiments thereof when taken in connection with the accompanying drawings, in which:

Figure 1 is an expanded perspective view of a fuel cell stack including a cooling arrangement in accordance with the invention;
Fig. 2 is a plan view, in cross section, of a portion of a cooling module;
Figs. 3, 4 and 5 are elevational section views taken respectively at III-III, IV-IV and V-V of Fig. 2;
Figs. 6, 7 and 8 are elevational section views, similar to Figs. 3, 4 and 5, for another embodiment of the invention, taken respectively at VI-VI, VII-VII and VIII-VIII of Fig. 18;
Figs. 9, 10, 11 and 12 are similarly elevational section views of yet another embodiment, taken respectively at IX-IX, X-X, XI-XI and XII-XII of Fig. 19;
Figs. 13 and 15 are plan views, in cross section, of additional embodiments of cooling modules;
Figs. 14 and 16 are views taken respectively at XIV-XIV and XVI-XVI of Figs. 13 and 15;
Fig. 17 is an expanded perspective schematic of selected portions of a fuel cell stack in accordance with another embodiment of the invention; and
Figs. 18 and 19 are cross-sectional plan views of additional cooling module passage configurations.
Referring now to Figure 1, there is shown an electrochemical fuel cell system 10. The system includes a plurality of repeating fuel cells 12 arranged in a stack such that the cells 12 are electrically connected in series. Cell stacks can also be arranged in parallel.
An individual cell, such as the cell 12', includes two bipolar plates 14 between which are sandwiched an electrolyte, for example, in the form of a porous graphite matrix 16 saturated with an acid such as phosphoric acid. Many other materials and structures which incorporate an electrically insulating matrix material can also be utilized. The plates 14 can comprise a material such as compression molded graphite-resin composite, disposed on opposite sides of the electrolyte matrix 16 and electrodes 18, such as the cathode 20 and anode 22. Each electrode 18 can also be of a porous graphite material provided with a porous graphite fiber backing 24 for added structural integrity.
The bipolar plates 14 are provided with a set of process channels, including fuel channels 26 and oxidant channels 28. The channels 26, 28 are generally rectangular with slightly slanted edges 30 to facilitate fabrication as necessary, for example, to remove a fabrication die. The bipolar plates 14 also include grooves 32 matingly configured to receive the electrodes 18. Thus, when held together by means well known, such as bonding materials and an external frame, each cell represents a substantially sealed unit.
An oxidant, such as a halogen, or air or other oxygen-containing material, flows through the oxidant channels 28, and a fuel, such as hydrogen, organics or metals, flows through the fuel channels 26. Manifolds 27 are typically utilized to, for example, provide oxidant to the oxidant inlet side 34 of the cell system stack and to receive the oxidant from the oxidant outlet side 36 of the stack. Similarly, manifolds are provided on the fuel inlet side 38 and fuel outlet side 40. Electrical power and heat are generated by the interaction of the fuel and oxidant through the electrodes and electrolyte matrix 16. An exemplary fuel cell 12 utilizes hydrogen fuel, air as the oxidant and phosphoric acid as the electrolyte.
A substantial amount of heat is generated by the electrochemical reaction, and accordingly, the system stack 10 includes cooling modules 42. Dependent upon the operating temperatures desired, the cooling modules 42 are placed between fuel cells 12 at selected positions within the stack 10. A cooling module 42 may, for example, be placed after approximately every third to every eighth cell.
Each cooling module 42 is preferably comprised of a material similar to that of the bipolar plates 14, compression molded graphite-resin composite in the exemplary system. The cooling module 42 includes a plurality of passages 44, described more fully hereinafter. The cooling module 42 can be formed of one piece, although, as shown, two sections 46 are preferably separately fabricated and subsequently sealed together. The cooling passages 44 are preferably substantially rectangular, although other geometric shapes are equally possible. Where the cooling module is formed in two sections 46, cooling passage edges 48 are preferably slanted slightly, as are the fuel cell channels 28, approximately seven degrees from vertical, to accommodate removal of a die during fabrication.
The cooling passages 44 are preferably oriented generally parallel to the oxidant channels 28, although they can also be oriented parallel to the fuel channels 26. The latter, however, requires more complex manifolding. A cooling fluid flows through the cooling passages 44. In preferred form the cooling fluid and oxidant are the same medium, such as air. Thus, with the configuration shown, air is brought from a singular manifold 27 to the oxidant inlet side 34 of the fuel cell system stack 10, and flows in parallel and in the same direction through the cooling passages 44 and oxidant process channels 28.
As the cooling air flows within the passages 44 heat generated by the electrochemical reaction is absorbed. In order to maintain a relatively constant temperature across the fuel cells 12 without excessively large cooling airflow rates, the surface area of the cooling passages 44 is varied with the distance from the inlet end of the passages. A small surface area is provided at the inlet end and a larger surface area is provided at the exit end. The increase in surface area can be provided in a number of manners, such as by changing the geometry or perimeter of a given passage as a function of distance, or by dividing the passages into additional branches along the direction of flow. The variation can either be continuous or can include step changes. The actual passage shape may be dictated by fabrication preference.
A branching arrangement is shown in Figs. 2 through 5. Here a cooling passage 44 includes a first segment having a singular branch 44', a second segment including two branches 44'', and a third segment including three branches 44''', respectively, from inlet toward outlet. Each segment is approximately one-third of the length of the entire passage. The rectangular shape shown is preferred since it is relatively simple to fabricate and match with desired cooling requirements. The surface area defined by the branches 44', 44'', 44''' progressively increases.
Figs. 6 through 8 represent a series of segments, respectively, from inlet to outlet, where the geometry is modified to provide an increasing surface area. A similar sequence is shown in Figs. 9 through 12. Figs. 13 and 14 show a passage 44, rectangular in cross-section, which gradually increases in surface area from inlet to outlet, and Figs. 15 and 16 show a passage 44, circular in cross-section, which gradually increases from inlet to outlet. Many additional configurations are equally possible.
In addition to varying the surface area along the length of a passage to achieve a more even temperature distribution across the cells 12, adjacent passages can also be spaced laterally in a predetermined manner. Particularly, as shown in Fig. 17, cooling passages 44a and 44b are spaced closer together than, for example, passages 44c and 44d. As fuel traverses the channels 26 from inlet 38 to outlet 40, the fuel is gradually depleted. Accordingly, the heat generated by the exothermic reaction is greater at the fuel inlet 38 and less toward the fuel outlet 40. By spacing the cooling passages 44 so that more cooling air flows adjacent the fuel inlet and less cooling air flows at the fuel outlet, a more even temperature distribution across the cell is achieved.
The actual sizing, shaping and spacing of the cooling passages will vary with heat generation for any given cell system and with other factors affecting heat transfer, such as the type and magnitude of cooling fluid flow. As an example of the change in surface area required across the cell, it is known that, where q(x) is the heat flux per unit area (Btu/ft²-hr) generated in the bipolar plates, the temperature rise of the fuel cell channel above the local cooling air temperature is

Tp(x) − Ta(x) = q(x) / [h(x) · A(x)]

where A(x) is the cooling surface area per unit of plate area (dimensionless), h(x) is the local heat transfer coefficient (Btu/hr-ft²-°F), Tp(x) is the bipolar plate temperature (°F) and Ta(x) is the local cooling air temperature (°F).

The cooling air temperature satisfies the equation

m · ε · (dTa/dx) = w · q(x)

and, integrating from the inlet,

Ta(x) = Ta(o) + w · x · q̄(x) / (m · ε)

where w is the plate width, Ta(o) is the temperature of the inlet cooling air (°F), q̄(x) is the average flux from x = o to x, m is the mass flow rate of cooling air per plate (lb/hr), and ε is the specific heat of the cooling air (Btu/lb-°F). Accordingly,

Tp(x) = Ta(o) + w · x · q̄(x) / (m · ε) + q(x) / [h(x) · A(x)].

The bipolar plates can be at constant temperature, Tp, by setting

h(x) · A(x) = q(x) / [Tp − Ta(x)].

As a specific case for illustration, if the heat generated per unit area in the bipolar plates, q(x), is constant, q, then the intensity of cooling factor is

U(x) = h(x) · A(x) = q / [Tp − Ta(o) − q · w · x / (m · ε)]

and

U(L) / U(O) = [Tp − Ta(o)] / [Tp − Ta(o) − ΔT]

where ΔT is the temperature rise of the cooling fluid.

Thus, if the temperature difference between the inlet air and the corresponding portion of the adjacent bipolar plate is 100°F, and the cooling air temperature rises 75°F while traversing the cooling passages, then the required ratio of surface area times heat transfer coefficient at the outlet, U(L), to surface area times heat transfer coefficient at the inlet, U(O), is 100/(100 − 75) = 4:1. This ratio is readily achieved by dividing a larger channel at the inlet into two or three channels at the exit, particularly as an intensity of cooling factor ratio of 4:1 does not require a 4:1 surface area ratio, since h will increase due to a smaller channel hydraulic diameter at the outlet.
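A quick numeric check of the constant-temperature relation above (a minimal sketch; the function and variable names are mine, not from the specification):

```python
# U(L)/U(O) = (Tp - Ta(o)) / (Tp - Ta(o) - dT): the ratio of
# (heat transfer coefficient x surface area) needed at the outlet
# versus the inlet to hold the bipolar plate temperature constant.

def required_intensity_ratio(inlet_difference_F, air_rise_F):
    """inlet_difference_F: plate-to-inlet-air temperature difference (°F);
    air_rise_F: cooling air temperature rise along the passage (°F)."""
    return inlet_difference_F / (inlet_difference_F - air_rise_F)

# The worked case in the text: 100 °F inlet difference, 75 °F air rise.
print(required_intensity_ratio(100.0, 75.0))  # -> 4.0, i.e. the 4:1 ratio
```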
It will be recognized by those skilled in the art that many cooling passage shapes are possible, and that hydraulic diameter is an important factor when selecting passage shape and surface area. Preferred shapes are those which reduce hydraulic diameter and simultaneously increase surface area so as to provide an increase in both the cooling area and the heat transfer coefficient, the intensity of cooling per unit area.
The cooling airflow which produces a 75°F rise would result in approximately a 75°F temperature variation across the bipolar plates where cooling channel surface area variation, as disclosed, is not utilized. This temperature variation is reduced to approximately 25°F with the disclosed surface area variation.
The advantages resulting from a more even temperature distribution are substantial, including the allowance of higher average bipolar plate temperature for a given maximum temperature. The disclosed system not only reduces the airflow required and accordingly the required circulator power, but further alleviates the effects of carbon monoxide catalyst poisoning which decreases with increasing operating temperature and, accordingly, is lessened by the higher average temperature. | |
It might feel a little too cold to venture into the garden in the depth of winter, but February is the perfect month to undertake some garden maintenance that will help prepare for spring.
So, wrap up warmly and head out into your green space and get busy with some crucial gardening jobs to get ahead of the game as gardens prepare to come to life again.
Sow your tomato seeds
Growing your own tomatoes from seed is a simple and extremely rewarding task and February is the ideal time to sow tomato seeds if you have a greenhouse or even a warm, sunny window ledge.
To get started, fill a small pot with compost and water the soil liberally, before sowing four or five seeds on the surface. Place in your greenhouse or propagator – or if growing indoors cover with clear plastic and leave on a warm windowsill.
Seedlings should appear within fourteen days, at which point you can uncover the plants and move them to individual pots, watering regularly.
Prepare your borders
February is the best time to cut away last season’s dead perennials and remove any weeds or debris from borders.
Treat your flower beds to a thick mulch which will help to suppress weeds, but don’t cover any protruding bulb shoots, as this will stop warming sunlight from reaching them and could cause them to rot.
Install a bird box
National Nestbox Week takes place between 14 and 21 February, so why not encourage birds into your garden with the installation of a bird box?
Garden birds will soon be building nests and looking for safe places to hatch their chicks. Half-term is approaching and this is a great gardening opportunity to get the kids involved and take part in building nest boxes for robins, wrens and other birds.
Place your bird box in a safe location, and if possible, shelter it with surrounding trees or point the entrance hole towards the northeast as this will help to protect the interior from harsh rain and wind.
Why not install a small camera inside or close to your bird box to enjoy a front-row view of nesting birds and baby chicks from the comfort of your armchair!
Tidy up the evergreens and ornamental grasses
Late February is the best time to cut back any evergreen foliage to encourage new growth. The same is true with ornamental grasses which can be trimmed right back to within six inches from the ground, ready for healthy regrowth in the spring months.
Evergreens are very low maintenance but will benefit from a little TLC in the late winter. Treat your evergreens to a fertilising treatment to supplement the soil for sturdy growth. Try Empathy After Plant Feed to stimulate root growth and deliver nutrients.
Clean, repair and service your lawnmower
While it’s not quite time to start cutting the grass, there is nothing more frustrating than digging out the petrol mower from the back of the shed only to find it’s dirty and in need of repair.
Now is a good time to give your mower a thorough clean and mini service in preparation for spring. Always remember to remove the spark plug from the machine before you begin any maintenance, to prevent it from starting up unexpectedly.
To perform a basic mower service, take out the spark plugs to look for signs of dirt and give them a wipe, before checking the oil levels (there will be a dipstick to help you). Make sure the oil level is reaching the required level and top up if necessary.
It’s also a good idea to change the air filter before using the mower regularly – consult the user manual to see which is the right one for your model. They are fairly cheap to replace and will ensure a spring and summer of maintenance-free mowing.
If you are not confident in stripping back your lawnmower, there are many reputable servicing companies across the country that will be able to perform a lawnmower service for you.
Get mulching!
Mulch does amazing things to a garden. A liberal helping of good quality mulch can help to conserve water and aid the penetration of rain deep into the soil in winter months. A covering of mulch in February will help to inhibit weed growth, deter pests and insects and protect delicate plant roots from cold temperatures.
To get the best from your mulch, apply in a layer of 5cm thickness over moist soil, but avoid laying mulch on frozen ground as it will be less effective.
Begin a lawn care routine
Now is the time to fix any divots or raised areas in your lawn. Using a spade, dig into the affected area using the blade to make an ‘H’ shape then carefully lift back the turf and remove any excess earth before placing the turf carefully back together. If it is a hollow that needs to be filled, then simply add more soil before replacing the turf.
If your lawn is prone to moss, rake it over to remove the worst and treat the area with a moss killer. Remove any leaves and debris that has built up over the winter months to prevent them from smothering and weakening delicate blades of grass.
Purchase your pots, seeds and tools
February always feels like a real turning point in the garden. It may still be chilly outdoors, but the nights are getting lighter and there are encouraging signs of spring starting to appear.
But, even if the weather is not kind enough to get out in the garden this February, you can still get your gardening fix by creating a gardening plan, listing all the plants you want to purchase in the better weather and taking a visit to the local garden centre to buy any pots, compost and tools you will need ready for springtime. | https://www.surf4hub.com/home-garden/what-to-do-in-the-garden-this-february/ |
Triple Layer Chocolate Bars
From the kitchen of Morgan Ruff
This recipe has been on northpole.com from the beginning… since 1996! We hope you enjoy this classic North Pole recipe. Let us know your opinion by submitting a review!
Recipe Details
Bake Time: 25 minutes
Yields: 24 bars
Tags: Contains nuts, Good for parties/potlucks
Ingredients
1 1/2 cups graham cracker crumbs
1/2 cup cocoa
1/4 cup sugar
1/2 cup butter
1 can condensed milk
1/4 cup plain flour
1 egg
1 teaspoon vanilla flavoring
1 1/2 teaspoons baking powder
3/4 cup chopped nuts
1 package (12 ounce) semi-sweet chocolate chips
Directions
Preheat oven to 350 degrees.
Combine crumbs, 1/4 cup cocoa, sugar, and butter.
Press firmly on bottom of a 13x9 pan.
Beat together milk, flour, egg, vanilla and baking powder.
Stir in nuts; spread over prepared crust; top with chips.
Bake for 25 minutes or until set.
Cool.
Store tightly covered.
| https://www.northpole.com/Kitchen/Cookbook/Triple-Layer-Chocolate-Bars-2749
I used to struggle with English grammar. I couldn’t understand why I didn’t get it but eventually found out that I wasn’t alone in this. For most people learning English as a second language (ESL), grammar is one of the most difficult areas to master. However, there are certain techniques you can use to help you get a better grasp on the subject and improve your understanding so that you can communicate more effectively in your new language.
Practice writing. Not all of us have the best handwriting, and this can impact our ability to learn English grammar.
- Write down what you learn. The practice of writing is essential to vocabulary and grammar acquisition, so don’t be afraid to use a pen and paper! Whenever you see something in English that confuses you, try writing it down on a piece of paper—either in your native language or English. This will help cement the meaning of these words and expressions in your head.
- Write in your own words. One common way for learners to get confused about English grammar is when they’re given a sentence that uses an unfamiliar sequence of tenses (such as “I am going” instead of “I went”). Writing these sentences out by hand allows you to quickly spot mistakes like this so that they can be corrected before they become ingrained habits.
- Write down your examples: When learning new vocabulary words or concepts such as prepositions (under vs. over), pronouns (he/she/it), or conjunctions (but vs. however), ask yourself questions about them until their meaning becomes clear before moving on with the lesson plan at hand! This strategy can also help you avoid confusion between similar sounding words such as then vs. than: then refers to a time ("at that time"), while than is used in comparisons.
- Write down your questions: Ask yourself how certain phrases might read differently if there weren't punctuation marks around them; for example: "They had plans last night but canceled them because she got sick." Here we could rewrite the sentence using commas instead: "They had plans last night, but canceled them, because she got sick." The commas change how the sentence reads, because they signal where one clause ends and the next begins and which part of the sentence is being emphasized.
Study one grammar concept at a time.
To get the most out of your study time, focus on one grammar concept at a time. It’s not effective to try to learn everything at once. Instead, break down what you’re learning into smaller chunks so that each part is easier to understand and remember.
Learn and practice new vocabulary.
If you want to improve your grammar, you need to learn and practice new vocabulary. There are many ways to do this:
- Read and listen. This is the best way to build up your vocabulary, so make reading a regular part of your routine. If you don’t like reading, try listening instead! You can use audiobooks or podcasts (we have some great suggestions here).
- Make flashcards. Flashcards are an easy way to memorize words often used in conversations with native speakers (or in written English). They're also great for testing yourself on what you've learned so far! Check out our guide on how to make them here.
- Use apps and websites like Duolingo or Memrise which provide language learning games or courses with interactive features like quizzes or challenges that help users practice their skills in context by translating sentences from English into another language (or vice versa) – perfect for building up confidence when speaking!
- Look up unfamiliar words using a dictionary app such as the Merriam-Webster Dictionary & Thesaurus app (available via the App Store), which has more than 100 million definitions covering everything from slang terms used today to historical etymology (the "ancient origin" of words), all cross-referenced by subject category, making it easy to find the word you need.
Apply your learning by reading and writing regularly.
Reading and writing are the most important ways to apply your learning. Reading will help you understand grammar better, as well as learn new vocabulary, while writing will help you practice grammar and improve your spelling and punctuation.
You can read all sorts of things: novels, newspapers or magazines; blogs that interest you; websites about topics that interest you; even something like a cookbook if it’s what interests you most! And when it comes to writing — whether it’s emails or letters in English or something else for school like an essay — ensure that what you write is correct according to standard written English (i.e., don’t use slang terms when they’re not necessary).
Find a learning partner who can support you with your studies.
One of the best ways to improve your English grammar skills is by finding a learning partner. It's important to find someone who is willing and able to help you with your studies, but it's also essential that they are at the same level as you and motivated to learn. If your potential partner isn't willing or able to share their new learning with you, then this might not be the right person for this type of partnership.
Use a whiteboard or flip chart when you are giving presentations.
- Use a whiteboard or flip chart when you are giving presentations.
- Write down key points on the board or flip chart for your audience to see.
- Then, write down additional key points on the board or flip chart that aren't as important, but still need to be covered.
Use color-coded post-it notes when writing.
Learning the grammar of English is a lot like learning a new language itself. It’s easy to get lost in it, especially when you’re trying to remember all the rules. The best way to make sure you understand the rules is to use color-coded post-it notes when writing.
When learning a new language, students usually begin by learning nouns and adjectives first because they are less complicated than verbs and tenses. In this same manner, when studying grammar you should start with nouns first before moving on to verbs and tenses so that your mind doesn’t become too overwhelmed with information at once.
Your post-it notes should be separated into three sections: nouns (for example: "boy"), verbs (for example: "run"), and adverbs (for example: "quickly"). You'll also want other colors, such as green or blue, so that these words can be used in different situations, such as conjunctions or prepositions, where they aren't really used as nouns but still play an important part within the sentence structure.
Anyone can learn English grammar if they know how to do it effectively
Here are some effective ways for English learners to learn about grammar:
Practice writing in English. The best way to learn any language is by using it regularly and consistently. By writing down your ideas, you will be able to see what works and what doesn’t work when it comes to communicating with others. Not only will this help you develop better-written skills, but it will also reinforce the rules of grammar that you’ve learned along the way.
Read books in English with a dictionary by your side. If there are words that don’t make sense or concepts that seem too difficult for you right now, look them up! Learning new vocabulary is an important part of learning any language (just think about how useful it would be if everyone spoke their native tongue perfectly).
Find someone who has already taken steps towards mastering these concepts so they can share their knowledge with others – whether through tutoring sessions or simply talking through different ideas together at home after school each day (this also helps kids connect with their peers during those awkward adolescent years).
Conclusion
If you follow these tips, you should be able to improve your understanding of English grammar. | https://www.tigercampus.com.my/how-to-learn-english-grammer-better-and-faster/ |
Overview:
The Biological Oceanography Program supports research in marine ecology broadly defined: relationships among aquatic organisms and their interactions with the environments of the oceans or Great Lakes. Projects submitted to the program for consideration are often interdisciplinary efforts that may include participation by other OCE Programs.
The Biological Oceanography Program supports marine ecological projects in environments ranging from estuarine and coastal systems to the deep sea, and in the Great Lakes. Proposals submitted to the Program should have a compelling ecological context and address topics that will contribute significantly to the understanding of marine and the Great Lakes ecosystems. The Biological Oceanography Program often co-reviews and supports projects with other programs in the Division of Ocean Sciences and in the Directorate of Biology (BIO). Proposals may be more appropriate for programs in BIO as the lead program if the primary focus is on organismal physiology, cell biology, biochemistry, molecular genetics, population biology, systematics, etc. Similarly, some ocean-focused, interdisciplinary studies may be more appropriately directed to one of the other programs in the Division of Ocean Sciences or programs in the Division of Polar Programs as the lead program. Investigators are encouraged to contact Program Officers by e-mail to determine the appropriate program for their proposal.
You can learn more about this opportunity by visiting the funder's website.
Eligibility:
- Who may submit proposals:
- Institutions of Higher Education (IHEs) - Two- and four-year IHEs (including community colleges) accredited in, and having a campus located in the US, acting on behalf of their faculty members. IHEs located outside the US fall under paragraph 6. below.
- Special Instructions for International Branch Campuses of US IHEs:
- If the proposal includes funding to be provided to an international branch campus of a US institution of higher education (including through use of subawards and consultant arrangements), the proposer must explain the benefit(s) to the project of performance at the international branch campus, and justify why the project activities cannot be performed at the US campus.
- Non-profit, Non-academic Organizations - Independent museums, observatories, research laboratories, professional societies and similar organizations located in the US that are directly associated with educational or research activities.
- For-profit Organizations - US commercial organizations, especially small businesses with strong capabilities in scientific or engineering research or education.
- An unsolicited proposal from a commercial organization may be funded when the project is of special concern from a national point of view, special resources are available for the work, or the proposed project is especially meritorious.
- NSF is interested in supporting projects that couple industrial research resources and perspectives with those of universities; therefore, it especially welcomes proposals for cooperative projects involving both universities and the private commercial sector.
- State and Local Governments - State educational offices or organizations and local school districts may submit proposals intended to broaden the impact, accelerate the pace, and increase the effectiveness of improvements in science, mathematics and engineering education in both K-12 and post-secondary levels.
- Unaffiliated Individuals - Unaffiliated individuals in the US and US citizens rarely receive direct funding support from NSF.
- Recipients of Federal funds must be able to demonstrate their ability to fully comply with the requirements specified in 2 CFR § 200, Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards.
- As such, unaffiliated individuals are strongly encouraged to affiliate with an organization that is able to meet the requirements specified in 2 CFR § 200.
- Unaffiliated individuals must contact the cognizant Program Officer prior to preparing and submitting a proposal to NSF.
- Foreign organizations - NSF rarely provides funding support to foreign organizations.
- NSF will consider proposals for cooperative projects involving US and foreign organizations, provided support is requested only for the US portion of the collaborative effort.
- In cases however, where the proposer considers the foreign organization’s involvement to be essential to the project (e.g., through subawards or consultant arrangements), the proposer must explain why local support is not feasible and why the foreign organization can carry out the activity more effectively.
- In addition, the proposed activity must demonstrate how one or more of the following conditions have been met:
- The foreign organization contributes a unique organization, facilities, geographic location and/or access to unique data resources not generally available to US investigators (or which would require significant effort or time to duplicate) or other resources that are essential to the success of the proposed project; and/or
- The foreign organization to be supported offers significant science and engineering education, training or research opportunities to the US.
- Other Federal Agencies - NSF does not normally support research or education activities by scientists, engineers or educators employed by Federal agencies or FFRDCs.
- Under unusual circumstances, other Federal agencies and FFRDCs may submit proposals directly to NSF.
- A proposed project is only eligible for support if it meets one or more of the following exceptions, as determined by a cognizant NSF Program Officer:
- Special Projects. Under exceptional circumstances, research or education projects at other Federal agencies or FFRDCs that can make unique contributions to the needs of researchers elsewhere or to other specific NSF objectives may receive NSF support.
- National and International Programs. The Foundation may fund research and logistical support activities of other Government agencies or FFRDCs directed at meeting the goals of special national and international research programs for which the Foundation bears special responsibility, such as the US Antarctic Research Program.
- International Travel Awards. In order to ensure appropriate representation or availability of a particular expertise at an international conference, staff researchers of other Federal agencies may receive NSF international travel awards.
- Proposers who think their project may meet one of the exceptions listed above must contact a cognizant NSF Program Officer before preparing a proposal for submission.
- In addition, a scientist, engineer or educator who has a joint appointment with a university and a Federal agency (such as a Veterans Administration Hospital, or with a university and a FFRDC) may submit proposals through the university and may receive support if he/she is a faculty member (or equivalent) of the university, although part of his/her salary may be provided by the Federal agency. Preliminary inquiry must be made to the appropriate program before preparing a proposal for submission.
| https://www.instrumentl.com/grants/national-science-foundation-biological-oceanography-funding-program
The Research and Applications Scientist works with CCDC’s Discovery Science team on both science and software that is widely used by both high-profile industrial partners in the pharmaceutical and agrochemical industries as well as academic researchers and educators.
Research and Applications Scientist
The successful applicant will join the team in Cambridge, UK and engage in a wide range of duties, from webinars and software testing through to primary research and life-science consultancy services. There will be some international travel.
Main Responsibilities include:
- You will be working with the Discovery Science team to provide scientific support for users as is necessary to build a strong scientific relationship;
- Obtain and maintain the required levels of scientific knowledge and expertise in CCDC’s products to be able to effectively perform the assigned scientific activities;
- Working with the Product Managers and the software development teams, carry out scientific software testing, as well as writing scientific documentation, marketing literature and other promotional material as required;
- Provide expert scientific user support externally, as well as internally collaborating with other CCDC staff as necessary to deliver positive outcomes for users;
- Provide scientific advice and guidance, from a user-perspective, as a part of Agile software development processes on relevant projects;
- Undertake strategic research projects in collaboration with the Discovery Science team and external partners to advance scientific understanding and to showcase the value of the CCDC software and science to our academic and industrial user communities;
- On occasion, provide professional services to external companies to maximise the value of using CSD software and data to address scientific questions.
Initially you will be based remotely. However, longer term you’ll be working in a close team of 6 (including you). We are a friendly and collaborative team so there is an expectation you will be able to spend significant time in the Cambridge office, however we are able to offer flexibility and home working.
The ideal candidate:
- PhD or equivalent in a chemistry related field
- Proven knowledge of computational chemistry in drug discovery. A demonstrable interest in structural chemistry and the role of intermolecular interactions in protein-ligand systems.
- Experience of using structure-based and ligand-based molecular modelling in the context of drug design
- Experience of using initiative to drive forward research ideas
- Experience of writing reports and summarizing scientific findings concisely and clearly
It will be essential that you can demonstrate ongoing scientific project delivery. It would be great if you could write scripts in Python; however, there are 5 other team members proficient enough to train you. For further details on this position, please see the full job description and person specification as detailed on the careers page of our website.
To apply for the position, please send your CV and a covering letter to Petra Hales. | https://www.cambridgenetwork.co.uk/job/permanent/607390 |
The notion of two separable components is captured in such songs as I’ll Fly Away:
Some bright morning when this life is over
I’ll fly away
To that home on God’s celestial shore
I’ll fly away
It echoes the message of Psalm 90:10: “The days of our years are threescore years and ten; and if by reason of strength they be fourscore years, yet is their strength labour and sorrow; for it is soon cut off, and we fly away.” In other words, like John Brown (in Pete Seeger’s words), our bodies may be “mouldering in the grave.” Nevertheless, “we” (minds) will fly away (from our bodies) to something much better.
As with virtually every claim in philosophy, options, rivals, and critics abound. Among those in play have been:
- Materialism, which claims that there’s only body, just the physical, and when the body rots, that’s all she wrote. Mental states are nothing more than brain states. (Sometimes it’s called the “Mind-Brain Identity Theory”.)
- Idealism, which claims that talk of physical bodies is translatable without loss of meaning into talk of experiences. For instance, once you describe the look, smell, taste, feel, and sound of an apple, you’ve exhausted the meaning of ‘apple.’ On this model, you have actors/experiencers and experiences, none of it physical in the materialists’ sense. Death is simply transition from one sort of experience to another.
- Epiphenomenalism, a form of dualism, which recognizes the distinctiveness of mind and body, but with mind/consciousness simply along for the ride. The brain does its thing, and the mind experiences it as an act of will, an episode of worry, etc.
- Neutral monism, which says there is only one thing of which we speak, but in two different ways—using “P-predicates” for physical properties (such as the average length of the adult brain at 6.3 inches) and “M-predicates” for mental properties (such as the feeling of satisfaction over vindication).
Of course, mind-brain dualists don’t have to sign off on Descartes’ pineal gland hypothesis. The aforementioned epiphenomenalists don’t, but neither do the occasionalists, who say that God manages all the interactions. For instance, when I decide to lift a pencil, he (on that occasion) prompts the body to lift it. And when someone jabs me with a pencil, he (on that occasion) gives rise to my pain experience. That’s a lot of work for God, but he’s up to it, being omnipotent and omniscient.
Furthermore, idealists can have their own sort of mind-body dualism. For them, the experience of excruciating pain is utterly different from the experience of seeing a thumb banged by a closing car door. The former is mental, the latter physical.
And so it goes, with tweaks and reformulations and “breakthroughs” of one sort or another.
One surprising development is the emergence of “Christian physicalists.” Papers on this approach have surfaced at meetings both of the American Philosophical Association and the Evangelical Theological Society. Pardon my skepticism, but the phenomenon is reflective of the difficulty involved in doing everything justice, in “saving the appearances,” as they used to say, for making sure that no phenomena were left hanging out there; and in wielding Ockham’s Razor (so as not to “multiply things beyond necessity”). | https://philosophipotamus.com/miscellany/pineal-gland/ |
A truth, intimately united with human aspiration and for centuries closely associated in the human heart with the festival whose modern symbol is the Christmas tree, is expressed in the words that have resounded ever since the time of the Mystery of Golgotha and that must be impressed still more deeply into the evolution of the earth. This truth, which has shone down through the ages, is associated with the words, “et incarnatus est de spiritu sancto ex Maria virgine” (“and is born of the Holy Spirit from the Virgin Mary”).
Most of the people of today seem to attach just as little significance to these words as they do to the Easter mystery of the Resurrection. We might even say that the central mystery of Christianity, the resurrection from the dead, appears to modern thought, which is no longer directed to the truths of the spiritual world, just as incredible as the Christmas mystery, the mystery of the Word becoming flesh, the mystery of the virgin birth. The greater part of modern humanity is much more in sympathy with the scientist who described the virgin birth as “an impertinent mockery of human reason” than with those who desire to take this mystery in a spiritual sense.
Nevertheless, my dear friends, the mystery of the incarnation by the Holy Spirit through the Virgin begins to exert its influence from the time of the Mystery of Golgotha; in another sense it had made itself felt before this event.
Those who brought the symbolic gifts of gold, frankincense, and myrrh to the babe lying in the manger knew of the Christmas mystery of the virgin birth through the ancient science of the stars. The magi who brought the gifts of gold, frankincense, and myrrh were, in the sense of the ancient wisdom, astrologers; they had knowledge of those spiritual processes that work in the cosmos when certain signs appear in the starry heavens. One such sign they recognized when, in the night between December 24 and 25, in the year that we today regard as that of the birth of Jesus, the sun, the cosmic symbol of the Redeemer, shone toward the earth from the constellation of Virgo. They said, “When the constellation of the heavens is such that the sun stands in Virgo in the night between December 24 and 25, then an important change will take place in the earth. Then the time will have come for us to bring gold, the symbol of our knowledge of divine guidance, which hitherto we have sought only in the stars, to that impulse which now becomes part of the earthly evolution of mankind. Then the time will have come for us to offer frankincense, the emblem of sacrifice, the symbol of the highest human virtue. This virtue must be offered in such a way that it is united with the power proceeding from the Christ Who is to be incarnated in that human being to whom we bring the frankincense.”
This was the belief held for thousands of years, and as the magi felt compelled to lay at the feet of the Holy Child the wisdom of the gods, the virtues of man, and the realization of human immortality, symbolically expressed in the gold, frankincense, and myrrh, something was repeated as a historical event that had been expressed symbolically in innumerable mysteries and in countless sacrificial rituals for thousands of years. There had been presented in these mysteries and rituals a prophetic indication of the event that would take place when the sun stood at midnight between December 24 and 25 in the sign of the Virgin, for gold, frankincense, and myrrh were also offered on this holy night, to the symbol of the divine child preserved in ancient temples as the representation of the sun.
Thus, my dear friends, for nearly two thousand years the Christian words, “incarnatus de spiritu sancto ex Maria virgine” have resounded in the world, and so it has been ever since human thought has existed on the earth. In our times we can now present the question, “Do human beings really know to what they should aspire when they celebrate Christmas?” Does there exist today a real consciousness of the fact that, out of cosmic heights, under a cosmic sign, a cosmic power appeared through a virgin birth — spiritually understood — and that the blazing candles on the Christmas tree should light up in our hearts an understanding of the fact that the human soul is most intimately and inwardly united with an event that is not merely an earthly but a cosmic earthly event? The times are grave, and it is necessary in such serious times to give serious answers to solemn questions, such as the one raised here. With this in mind we will take a glance at the thoughts of the leading people of the nineteenth century to see whether the idea of Christ Jesus has lived in modern humanity in such a way as to give rise to the thought: the Christmas mystery has its significance in the fact that man wills to celebrate something eternal in the light of the Christmas candles.
Ernst Renan never tires of describing this idyll of Galilee, so remote from the world's historic events, so as to make it seem natural that in this idyll, in this unpretentious landscape, with its turtle doves and storks, those things could happen that humanity for centuries has associated with the life of the Savior of the world.
So, my dear friends, that truth from which the earth received its meaning, the truth toward which humanity has looked for centuries, is attractive to a thinker of the nineteenth century only as an idyll with turtle doves and storks.
This, my dear friends, is one of the voices of the nineteenth century. Let us listen now to another, the voice of John Stuart Mill, who also desires to find his way from the consciousness of the nineteenth century to the being whom humanity for hundreds of years, and to the prophetic mind of man for thousands of years, has recognized as the Savior of the world.
“Only so long as religions have to struggle with each other in rivalry, and are more persecuted than followed, are they beautiful and worthy of veneration, only then do we see enthusiasm, sacrifice, martyrs, and palms. How beautiful, holy, and loveable, how heavenly sweet was the Christianity of the first centuries, as it sought to equal its divine founder in the heroism of His suffering — there still remained the beautiful legend of a heavenly God who in mild and youthful form wandered under the palms of Palestine preaching human love and revealing the teaching of freedom and equality — the sense of which was recognized by some of the greatest thinkers, and which has had its influence in our times through the French Gospel” (of Liberty, Equality, and Fraternity).
Here we have this Heine Creed which regarded Him, whom humanity for centuries has recognized as the Redeemer of the world, as worthy of praise because we ourselves would have chosen Him, in our democratic fashion, even if He had not already held that exalted position, and because He preached the same Gospel as was preached later, at the end of the eighteenth century. He was therefore good enough to be as great as those who understood this Gospel.
Let us take another thinker of the nineteenth century. You know that I think very highly of Edward von Hartmann. I mention only those whom I do admire in order to show the manner in which the thought of the nineteenth century about Christ Jesus expressed itself.
Yet another voice I wish to quote, the voice of one of the principal characters in a romance that exercised a wide and powerful influence during the latter third of the nineteenth century over the judgment of the so-called “educated” humanity. In Paul Heyse's book, Die Kinder der Welt, the diary of Lea, one of the characters in the book, is reproduced. It contains a criticism of Christ Jesus, and those who know the world well will recognize in this judgment of Lea's one which was common to large numbers of human beings in the nineteenth century. Paul Heyse has Lea write, “The day before yesterday I stopped writing because an impulse drove me to read the New Testament once again. I had not opened the New Testament for a long time; it had been a long time since its many threatening, damning, and incomprehensible speeches had estranged and repelled my heart. Now that I have lost that childish fear, and the voice of an infallible and all-knowing spirit can be heard, since I have seen therein the history of one of the noblest and most wonderful of human beings, I have found much that greatly refreshed and comforted me.”
Here you see the New Testament represented as it had to be if it was to provide satisfaction to such a typical person of the nineteenth century. Thus she says that everything great that she had formerly loved, even when shrouded in majesty, was yet happily and comfortably linked with her being by ties of human need. Because the New Testament contains a power that cannot be described in these terms, therefore, the Gospel failed to meet the needs of a person of the nineteenth century.
“When I read the letters of Goethe, of the narrow home life of Schiller, of Luther and his followers, of all the ancients back to Socrates and his scolding wife — I sense a breath of Mother Earth, from which the seed of their spirit grew, which also nourishes and uplifts mine own which is so much smaller.” Lea thus finds herself more drawn even to characters like Xanthippe than to the people of the New Testament, and this was the opinion of thousands and thousands of people in the nineteenth century.
It is fitting, my dear friends, to ask in these grave times what is really the attitude of soul of people today with regard to the candles they burn at Christmas? For this attitude of soul is a complex of such voices as we have just examined and that could be multiplied a hundred or thousand fold. But it is not fitting in serious times to ignore and disregard the things that have been said about the greatest mystery of earthly evolution. It is much more fitting today to ask what the official representatives of the many Christian sects are able to do to check a development that has led human beings right away from an inwardly true and genuine belief in that which stands behind the lights of Christmas time. For can humanity make of such a festival anything but a lie, when the opinions just quoted from its best representatives are imposed upon that which should be perceived through the Christmas mystery as an impulse coming from the cosmos to unite itself with earthly evolution? What did the magi from the East desire when they brought divine gifts of wisdom, virtue, and immortality to the manger, after the event whose sign had appeared to them in the skies during the night between December 24 and 25 in the first year of our era? What was it these wise men from the East wished to do? They wanted, by this act, to furnish direct historical proof that they had grasped the fact that, from this time onward, those powers who had hitherto radiated their forces down to earth from the cosmos were no longer accessible to man in the old way — that is, by gazing into the skies, by study of the starry constellations. They wished to show that man must now begin to give attention to the events of historical evolution, to social development, to the manners and customs of humanity itself. They wished to show that Christ had descended from heavenly regions where the sun shines in the constellation of Virgo, a region from which all the varied powers of the starry constellations proceed that enable the microcosm to appear as a copy of the macrocosm. They wished to show that this spirit now enters directly into earthly evolution, that earthly evolution can henceforth be understood only by inner wisdom, in the same way as the starry constellations were formerly understood. This was what the magi wished to show, and of this fact the humanity of today must ever be aware.
People of today tend to regard history as though the earlier were invariably the cause of the latter, as though in order to understand the events of the years 1914 to 1917 we need simply go back to 1913, 1912, 1911, and so on; historical development is regarded in the same way as evolution in nature, in which we can proceed from effect to impulse and in the impulse find the cause. From this method of thinking, that fable convenue which we call history has arisen, with which the youth of today are being inoculated to their detriment.
True Christianity, especially a reverent and sincere insight into the mysteries of Christmas and Easter, provides a sharp protest against this natural scientific caricature of world history. Christianity has brought cosmic mysteries into association with the course of the year; on December 24 and 25 it celebrates a memory of the original constellation of the year 1, the appearance of the sun in the constellation of Virgo; this date in every year is celebrated as the Christmas festival. This is the point in time that the Christian concept has fixed for the Christmas festival. The Easter festival is also established each year by taking a certain celestial arrangement, for we know that the Sunday that follows the first full moon after the vernal equinox is the chosen day, though the materialistic outlook of the present time is responsible for recent objections to this arrangement.
Can the starry constellations be perceived in human affairs? My dear friends, this perception is now demanded of us, the ability to read what is revealed through the wonderful key that is given us in the mysteries of the Christian year, which are the epitome of all the mysteries of the year of other peoples and times. The time interval between Christmas and Easter is to be understood as consisting of thirty-three years. This is the key. What does this mean? That the Christmas festival celebrated this year belongs to the Easter festival that follows thirty-three years later, while the Easter festival we celebrate this year belongs to the Christmas of 1884. In 1884 humanity celebrated a Christmas festival that really belongs to the Easter of this year (1917), and the Christmas festival we celebrate this year belongs, not to the Easter of next spring but to the one thirty-three years hence (1950). According to our reckoning, this period — thirty-three years — is the period of a human generation, thus a complete generation of humanity must elapse between Christmas festivals and the Easter festivals that are connected with them. This is the key, my dear friends, for reading the new astrology, in which attention is directed to the stars that shine within the historical evolution of humanity itself.
How can this be fulfilled? It can be fulfilled by human beings using the Christmas festival in order to realize that events happening at approximately the present time (we can only say approximately in such matters) refer back in their historical connections in such a way that we are able to perceive their birthdays or beginnings in the events of thirty-three years ago, and that events of today also provide a birthday or beginning for events that will ripen to fruition in the course of the next thirty-three years. Personal karma rules in our individual lives. In this field each one is responsible for himself; here he must endure whatever lies in his karma and must expect a direct karmic connection between past events and their subsequent consequences.
How do things stand, however, with regard to historical associations? Historical connections at the present time are of such a nature that we can neither perceive nor understand the real significance of any event that is taking place today unless we refer back to the time of its corresponding Christmas year, that is 1884 in this case. For the year 1914 we must therefore look back to 1881. All the actions of earlier generations, all the impulses with their combined activity, poured into the stream of historic evolution, have a life cycle of thirty-three years. Then comes its Easter time, the time of resurrection. When was the seed planted whose Easter time was experienced by man in 1914 and after? It was planted thirty-three years before.
When, at the beginning of the 1880's, the insurrection of the Mohammedan prophet, the Mahdi, resulted in the extension of English rule in Egypt, when at about the same time a war arose through French influence between greater India and China over European spheres of control, when the Congo Conference was being held, and other events of a like nature were taking place — study everything, my dear friends, that has now reached its thirty-three years fulfillment. It was then that the seeds were sown that have ripened into the events of today. At that time the question should have been asked: what do the Christmas events of this year promise for the Easter fulfillment thirty-three years hence? For, my dear friends, all things in historic evolution arise transfigured after thirty-three years, as from a grave, by virtue of a power connected with the holiest of all redemptions: the Mystery of Golgotha.
It does not suffice, however, to sentimentalize about the Mystery of Golgotha. An understanding of the Mystery of Golgotha demands the highest powers of wisdom of which the human being is capable. It must be experienced by the deepest forces that can stir the soul of man. When he searches its depths for the light kindled by wisdom, when he does not merely speak of love but is enflamed by it through the union of his soul with the cosmic soul that streams and pulses through this turning point of time, only then does he acquire insight and understanding into the mysteries of existence. In days of old the wise men who sought for guidance in the conduct of affairs of human beings asked knowledge of the stars, and the stars gave an answer; so, today, those who wish to act wisely in guiding the social life of humanity must give heed to the stars that rise and set in the course of historic evolution. Just as we calculate the cyclic rotations of celestial bodies, so must we learn to calculate the cyclic rotations of historic events by means of a true science of history. The time-cycles of history can be measured by the interval that extends from Christmas to the Easter thirty-three years ahead, and the spirits of these time-cycles regulate that element in which the human soul lives and weaves in so far as it is not a mere personal being but is part of the warp and woof of historic evolution.
When we meditate on the mystery of Christmas, we do so most effectively if we acquire a knowledge of those secrets of life that ought to be revealed in this age in order to enrich the stream of Christian tradition concerning the Mystery of Golgotha and the inner meaning of the Christmas mystery. Christ spoke to humanity in these words, “Lo! I am with you always even to the end of the world.” Those, however, who today call themselves His disciples often say that, though the revelations from spiritual worlds were certainly there when Jesus Christ was living on earth, they have now ceased, and they regard as blasphemous anyone who declares that wonderful revelations can still come to us from the spiritual world. Thus official Christianity has become, in many respects, an actual hindrance to the further development of Christianity.
What has remained, however? The holy symbols, one of the holiest of which is portrayed in the Christmas mystery — these constitute in themselves a living protest against that suppression of true Christianity that is too often practiced by the official churches.
The spiritual science we seek to express through anthroposophy desires, among other things, to proclaim the great significance of the Mystery of Golgotha and the mystery of Christmas. It is also its task to bear witness to that which gives to earth its meaning, and to human life its significance. Since the Christmas tree, which is but a few centuries old, has now become the symbol of the Christmas festival, then, my dear friends, those who stand under the Christmas tree should ask themselves this question, “Is the saying true for us that is written by the testimony of history above the Christmas tree: Et incarnatus est de spiritu sancto ex Maria virgine? Is this saying true for us?” To realize its truth requires spiritual knowledge. No physical scientist can give answer to the questions of the virgin birth and the resurrection; on the contrary, every scientist must needs deny both events. Such events can only be understood when viewed from a plane of existence in which neither birth nor death plays the important part they do in the physical world. Just as Christ Jesus passed through death in such a way as to make death an illusion and resurrection the reality — this is the content of the Easter mystery — so did Christ Jesus pass through birth in such a way as to render birth an illusion and “transformation of being” within the spiritual world the reality, for in the spiritual world there is neither birth nor death, only changes of condition, only metamorphoses. Not until humanity is prepared to look up to that world in which birth and death both lose their physical meaning will the Christmas and Easter festivals regain their true import and sanctity.
To love another is to understand him; love does not mean filling one's heart with egotistical warmth that overflows in sentimental speeches; to love means to comprehend the being for whom we should do things, to understand not merely with the intellect but through our innermost being, to understand with the full nature and essence of our human being. | https://wn.rsarchive.org/Lectures/GA180/English/MP1983/19171223p01.html |
_________ is the study of the chemistry of living systems.
| https://www.bartleby.com/solution-answer/chapter-21-problem-1qap-introductory-chemistry-a-foundation-9th-edition/9781337399425/_________-is-the-study-of-the-chemistry-of-living-systems/475d7f3d-2634-11e9-8385-02ee952b546e |
⋅ High serial number yet nonetheless Mark VI?
As per the Henri SELMER “official” serial number list, the Mark VI was manufactured from serial number 55201 (year of manufacture 1954) up to serial number 220800 (until 1973).
Occasionally you see a SELMER Saxophone labeled Mark VI with a serial number higher than 220800. According to the list, it would not be a Mark VI.
There are 2 reasons for this:
- Overlap with the updated version of a Saxophone model: e.g. a Tenor Saxophone with serial number 241xxx (!!), well within the range of the Mark 7 (“successor” of the Mark VI), which doubtlessly exhibits all of the features of a Mark VI. Alto and Tenor Saxophones continued to be manufactured individually as Mark VI up to 1975.
- Soprano and Baritone Saxophones were not manufactured as Mark 7 (although there could very well be a few Mark 7 Baritone Saxophone prototypes in existence).
Manufacture and labeling of both of these sizes as Mark VI continued up to ca. 1984. Hence, it is quite possible, for example, that a Saxophone labeled Mark VI 310xxx is indeed an authentic Mark VI – namely, as a Baritone or Soprano Saxophone.
Because SELMER produced the Mark 7 only as Alto and Tenor Saxophones, and these sizes made up by far the greatest share of production, no corresponding entry was included in the official serial number table.
Who is Eliud Kipchoge?
Eliud Kipchoge is all over the news right now for absolutely demolishing the marathon world record.
The 33-year-old Kenyan completed the Berlin marathon in an amazing two hours, one minute and 39 seconds. He beat Dennis Kimetto’s world record, set in 2014, by one minute and 18 seconds. Whilst that doesn’t sound like very much, when you’ve been running straight for over two hours, that’s a big jump.
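For readers who like to check the arithmetic, here is a minimal Python sketch; Kimetto's 2014 Berlin time of 2:02:57 is supplied from general knowledge and is not stated in this article:

def to_seconds(hours, minutes, seconds):
    return hours * 3600 + minutes * 60 + seconds

kipchoge = to_seconds(2, 1, 39)   # Berlin 2018: 7299 seconds
kimetto = to_seconds(2, 2, 57)    # Berlin 2014 record (assumed figure): 7377 seconds

print(kimetto - kipchoge)         # 78 seconds, i.e. one minute and 18 seconds
print(round(kipchoge / 42.195))   # ~173 seconds per km, roughly 2:53/km pace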
What’s more, Eliud found himself having to run the last 17km (10.5 miles) of the marathon by himself as his pacemakers dropped out early. Pacemakers are there to set the pace, guide the runners along the route and encourage those at the front to set records. Not long after the 15km mark, two pacemakers were unable to continue, and the third had dropped out by 25km.
What is the Berlin Marathon?
The BMW Berlin Marathon is classed as one of six World Marathon Majors. It attracted 61,390 participants from 133 nations this year. It wasn’t just runners who entered the marathon though; there were also inline skaters, kids skating, wheelchairs and handbikers, among others. Of those, 40,775 finished the race.
A second world record was set at the 2018 Berlin Marathon; Manuela Schaer from Switzerland won the women’s wheelchair race. She held the previous world record, set in 2013, and beat her own record this year by over one minute, finishing in 1:36:53 – her third consecutive victory in Berlin.
Athletics is not so much about the legs, it’s about the heart and mind.
What are the World Marathon Majors?
Alongside the Berlin Marathon, the other World Marathon Majors are:
- Tokyo Marathon
- Boston Marathon
- London Marathon
- Chicago Marathon
- New York City Marathon
What else has Eliud Kipchoge accomplished over the years?
Eliud first burst onto the scene in 2003, taking gold at the world championships over 5,000m at just 18 years old. He then went on to win a silver medal at the 2004 Olympics and a bronze at the 2008 Olympics.
In 2012, he discovered that marathon running was where he really excelled. He’s since gone on to win 10 out of 11 races over 26.2 miles that he’s taken part in, including winning the London marathon three times and taking home a gold medal at the 2016 Olympics (Rio de Janeiro).
I’ll be the first to say it: if you train well, concentrate well, stay healthy, then you can run a world record and win medals without anything bad.
Just last year, Eliud wanted to smash the two-hour barrier for marathons and he took part in Nike’s Breaking2 project. Unfortunately, his final time of 2:00:25 wasn’t considered an official mark: firstly, because it was 26 seconds over the target; secondly, because he had a team of 30 elite pacers that helped him along the way. But it’s believed that this is what gave him the mindset that he could, and would, smash the current marathon world record.
Where did Eliud Kipchoge come from?
He comes from a simple, humble background in Kenya. Growing up on a farm, he’s not one to shy away from hard work and long hours. He could often be found cycling to Kapsabet as a kid in order to sell his family’s milk at the town market.
What’s Eliud Kipchoge’s diet, training and lifestyle like?
Living in a camp with the rest of his team whilst training (instead of in his home with his wife and children), Eliud likes to get up before the sun each day to go for a run. Once he gets back to camp, he showers and eats before completing his daily chores. This might involve a little gardening, chopping vegetables for the communal dinner or even cleaning the toilets. Whilst he’s a self-made millionaire, he likes to stick to his roots. He goes for a second run of the day at 4pm and then the team are all in bed by 9pm at the latest.
Even to this day, Kipchoge’s diet is a simple one. He drinks milk from cows that roam the fields close to his training camp and his meals are largely built around rice, ugali (a Kenyan staple) and sometimes beef.
The road to Berlin. @NikeRunning @berlinmarathon @NNRunningTeam #TeamGSC pic.twitter.com/9OSOoyMWrU
— Eliud Kipchoge (@EliudKipchoge) 11 August 2018
You can follow Eliud Kipchoge on Facebook, Twitter and Instagram to stay up-to-date with all of his latest runs, achievements and work. We’re sure there is a lot more to come from Eliud and will be following him closely ourselves. | https://jogger.co.uk/who-is-eliud-kipchoge-jogger-co-uk-runner-profile/ |
Not that there is one.
There are so many techniques and chef-y secrets out there; it’s a minefield.
From salting to resting, there is an opinion on each step on what makes the perfect steak.
But is there actually the perfect way to prep and cook a steak?
Well, it depends.
You have to put into play your own personal preferences, such as:
- What cut of steak are you cooking?
- Is it dry-aged?
- Do you like a nice crust on the outside or is an even cook throughout more important to you?
- Do you prefer a buttery flavour or a smokey flavour?
- How many people are you cooking for?
These can all impact your preparation and cooking technique.
But below are some quick tips to cooking a steak (based on something like a thick ribeye steak).
Quick tips to cooking steak
- Thickish cut steak – no more than 2.5cm/1″ thick, if you want to cook entirely on the stove (*thicker cuts need to be finished in the oven – see Restaurant Method, below).
- Bring to room temp: At least an hour before cooking. This makes an amazing difference to cooking through evenly.
- Pat dry and season: I personally adhere to Michelin-starred chef Marcus Wareing’s method, which is to season the meat on one side and place that side down onto the hot pan.
- Get a heavy-based pan smoking hot (with a tiny amount of oil) before putting the steak in – it should sizzle when you place it in.
- Don’t be tempted to add butter (yet).
- After a couple of minutes of cooking, season the top of the meat and flip it over to cook for six to eight minutes on the other side (depending on thickness).
- Add flavour: Now, you can add butter, along with a garlic clove, fresh rosemary or thyme sprigs.
- Baste – baste the steak with the butter and herbs – it will come out deliciously buttery with a golden caramelization and char.
- Rest your steak for 10 minutes so it sucks its own juices back in and the fibres relax.
Additional tips
- If you’re using a meat thermometer, take the steak off the stove just before it reaches your preferred doneness, i.e. rare, medium-rare or well done, or the correct internal temperature (see below), as it will continue to cook as it rests.
- For the perfect home-cooked steak you’ll need a quality, heavy-based pan – cast iron is best. (A light, flimsy pan just won’t hold the heat.)
- Buy your meat from a butcher instead of a grocery store.
- Don’t be afraid of the fat; fat equals flavour
Internal temperature for your desired steak (see the conversion check after this list)
- 55°C (131°F) for rare
- 60°C (140°F) for medium-rare
- 65°C (149°F) for medium
- 75°C (167°F) for well done
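For anyone who thinks in Fahrenheit, here is a minimal Python sketch that double-checks the table above using the standard conversion formula:

def c_to_f(celsius):
    # Standard Celsius-to-Fahrenheit conversion.
    return celsius * 9 / 5 + 32

for c in (55, 60, 65, 75):
    print(f"{c}C = {c_to_f(c):.0f}F")  # 131, 140, 149, 167
# Note: 120C comes out at roughly 250F, which is the oven setting used
# in the restaurant method below.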
The restaurant method
It’s not anything tricky.
And is great for thicker, larger pieces of meat.
Simply:
- Sear the outside of the steak in a pan to get a nice colouring, i.e. the Maillard reaction
- Finish off in the oven at 120°C / 250°F for approx. 8 to 10 minutes or until it’s reached the desired internal temp (see above)
- Rest and serve by cutting (against the grain) into thick slices. | https://louskitchencorner.freybors.com/2021/02/07/how-to-prep-the-perfect-steak/ |
CEED initiated programmes to spread awareness about air pollution in Patna. On 18th November, 2017, at the crack of dawn, CEED...
Archive for tag: Help Patna Breathe
Children demand their right to breathe clean air in Patna
CEED driven initiative to create awareness against rising air pollution On the occasion of Children’s Day, hundreds of students from...
CEED: States along the Indo-Gangetic plain must collaborate to tackle air pollution
Experts at national conference chalked out a consolidated clean air action plan involving Uttar Pradesh, Bihar & Jharkhand On 31st...
CEED organises Stakeholders’ Consultation on clean and sustainable cooking solutions for rural India
Indian villages must be made ‘Smokeless’ to curb the rising indoor air pollution On 6th July, 2017 CEED organised a stakeholders’...
CEED urges Bihar Government to formulate a clean air action plan
Environmental experts from all over the country gathered at the National Conference on Clean Air Bihar organised by CEED in...
CEED’s continuous monitoring of air quality data reveals alarming air pollution levels in the schools of Patna
CEED demands from the Government to issue health advisory for bad air quality days. CEED released the findings of the level...
Students demand Government to issue health advisory for bad air quality days.
CEED organised a human banner formation at B. D. Public School in Patna. Students of the school participated in large...
Patnaites Talk About Alarming Air Pollution Levels in the City
Survey shows 9 Out of 10 people do not mind higher taxes levied if government takes strict actions to curb... | http://ceedindia.org/tag/help-patna-breathe/ |
Fotios, S., Yang, B., & Uttley, J. (2014). Observing other pedestrians: Investigating the typical distance and duration of fixation. Lighting Research and Technology, 47(5), 548–564.
Abstract: After dark, road lighting should enhance the visual component of pedestrians’ interpersonal judgements such as evaluating the intent of others. Investigation of lighting effects requires better understanding of the nature of this task as expressed by the typical distance at which the judgement is made (and hence visual size) and the duration of observation, which in past studies have been arbitrary. Better understanding will help with interpretation of the significance of lighting characteristics such as illuminance and light spectrum. Conclusions of comfort distance in past studies are not consistent and hence this article presents new data determined using eye-tracking. We propose that further work on interpersonal judgements should examine the effects of lighting at a distance of 15m with an observation duration of 500ms.
Keywords: traffic safety; pedestrians; roadway lighting; visibility; light at night
Liu, X. Y., Luo, M. R., & Li, H. (2014). A study of atmosphere perceptions in a living room. Lighting Research and Technology, 47(5), 581–594.
Abstract: An experiment has been carried out to investigate the effect of lighting on the perception of atmosphere in a living room, using three types of light sources: halogen, fluorescent and LED lamps. In a psychophysical experiment, 29 native Chinese observers assessed eight lighting conditions having different luminances and correlated colour temperatures. For each condition, 71 scales were employed using the categorical judgment method. Factor analysis identified two underlying dimensions: liveliness and cosiness. This agrees with those found by Vogels who used Dutch observers to assess atmosphere perception. Both observer groups also agreed that an increase of luminance would make the room more lively. However, there were also some disagreements such as a higher CCT source would make the room more lively for Chinese observers but less lively for Dutch observers.
Keywords: lighting; indoor lighting; perception; Chinese; Dutch; aesthetics
Gaston, K. J., & Bennie, J. (2014). Demographic effects of artificial nighttime lighting on animal populations. Environmental Reviews, 22, 323–330.
Abstract: Artificial lighting, especially but not exclusively through street lights, has transformed the nighttime environment in much of the world. Impacts have been identified across multiple levels of biological organization and process. The influences, however, on population dynamics, particularly through the combined effects on the key demographic rates (immigration, births, deaths, emigration) that determine where individual species occur and in what numbers, have not previously been well characterized. The majority of attention explicitly on demographic parameters to date has been placed on the attraction of organisms to lights, and thus effectively local immigration, the large numbers of individuals that can be involved, and then to some extent the mortality that can often result. Some of the most important influences of nighttime lighting, however, are likely more subtle and less immediately apparent to the human observer. Particularly significant are effects of nighttime lighting on demography that act through (i) circadian clocks and photoperiodism and thence on birth rates; (ii) time partitioning and thence on death rates; and (iii) immigration/emigration through constraining the movements of individuals amongst habitat networks, especially as a consequence of continuously lit linear features such as roads and footpaths. Good model organisms are required to enable the relative consequences of such effects to be effectively determined, and a wider consideration of the effects of artificial light at night is needed in demographic studies across a range of species.
Keywords: diurnal; lighting; night; nocturnal; light pollution; light at night; Photoperiodism; demography; demographics; population dynamics
De Almeida, A., Santos, B., Paolo, B., & Quicheron, M. (2014). Solid state lighting review – Potential and challenges in Europe. Renewable and Sustainable Energy Reviews, 34, 30–48.
Abstract: According to IEA estimates, about 19% of the electricity used in the world is for lighting loads with a slightly smaller fraction used in the European Union (14%). Lighting was the first service offered by electric utilities and still continues to be one of the largest electrical end-uses. Most current lighting technologies can be vastly improved, and therefore lighting loads present a huge potential for electricity savings.
Solid State Lighting (SSL) is amongst the most energy-efficient and environmentally friendly lighting technology. SSL has already reached a high efficiency level (over 276 lm/W) at ever-decreasing costs. Additionally the lifetime of LED lamps is several times longer than discharge lamps. This paper presents an overview of the state of the art SSL technology trends.
SSL technology is evolving fast, which can bring many advantages to the lighting marketplace. However, there are still some market barriers that are hindering the high cost-effective potential of energy-efficient lighting from being achieved. This paper presents several strategies and recommendations in order to overcome existing barriers and promote a faster penetration of SSL. The estimated savings potential through the application of SSL lighting systems in the European Union (EU) is around 209 TWh, which translates into 77 million tonnes of CO2. The economic benefits translate into the equivalent annual electrical output of about 26 large power plants (1000 MW electric). Similar impacts, in terms of percentage savings, can be expected in other parts of the World.
Keywords: Lighting; solid-state lighting; LED; lighting technology; review; Europe
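As a rough plausibility check of the plant-equivalence figure in the abstract above, here is a minimal Python sketch; the capacity factor is an assumed value, not a figure from the paper:

annual_savings_twh = 209      # estimated EU savings from the abstract
plant_mw = 1000               # one "large power plant"
capacity_factor = 0.92        # assumed average utilisation (not from the paper)

# Annual output of one plant, converted from MWh to TWh.
twh_per_plant = plant_mw * 8760 * capacity_factor / 1e6
print(annual_savings_twh / twh_per_plant)   # ~25.9, i.e. "about 26 plants"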
Fuller, G. (2013). The Night Shift: Lighting and Nocturnal Strepsirrhine Care in Zoos. Ph.D. thesis.
Abstract: Over billions of years of evolution, light from the sun, moon, and stars has provided organisms with reliable information about the passage of time. Photic cues entrain the circadian system, allowing animals to perform behaviors critical for survival and reproduction at optimal times. Modern artificial lighting has drastically altered environmental light cues. Evidence is accumulating that exposure to light at night (particularly blue wavelengths) from computer screens, urban light pollution, or as an occupational hazard of night-shift work has major implications for human health. Nocturnal animals are the shift workers of zoos; they are generally housed on reversed light cycles so that daytime visitors can observe their active behaviors. As a result, they are exposed to artificial light throughout their subjective night. The goal of this investigation was to examine critically the care of nocturnal strepsirrhine primates in North American zoos, focusing on lorises (Loris and Nycticebus spp.) and pottos (Perodicticus potto). The general hypothesis was that exhibit lighting design affects activity patterns and circadian physiology in nocturnal strepsirrhines. The first specific aim was to assess the status of these populations. A multi-institutional husbandry survey revealed little consensus among zoos in lighting design, with both red and blue light commonly used for nocturnal illumination. A review of medical records also revealed high rates of neonate mortality. The second aim was to develop methods for measuring the effects of exhibit lighting on behavior and health. The use of actigraphy for automated activity monitoring was explored. Methods were also developed for measuring salivary melatonin and cortisol as indicators of circadian disruption. Finally, a multi-institutional study was conducted comparing behavioral and endocrine responses to red and blue dark phase lighting. These results showed greater activity levels in strepsirrhines housed under red light than blue. Salivary melatonin concentrations in pottos suggested that blue light suppressed nocturnal melatonin production at higher intensities, but evidence for circadian disruption was equivocal. These results add to the growing body of evidence on the detrimental effects of blue light at night and are a step towards empirical recommendations for nocturnal lighting design in zoos. | http://alandb.darksky.org/search.php?sqlQuery=SELECT%20author%2C%20title%2C%20type%2C%20year%2C%20publication%2C%20abbrev_journal%2C%20volume%2C%20issue%2C%20pages%2C%20keywords%2C%20abstract%2C%20thesis%2C%20editor%2C%20publisher%2C%20place%2C%20abbrev_series_title%2C%20series_title%2C%20series_editor%2C%20series_volume%2C%20series_issue%2C%20edition%2C%20language%2C%20author_count%2C%20online_publication%2C%20online_citation%2C%20doi%2C%20serial%2C%20area%20FROM%20refs%20WHERE%20keywords%20RLIKE%20%22Lighting%22%20ORDER%20BY%20notes&submit=Cite&citeStyle=APA&citeOrder=&orderBy=notes&headerMsg=&showQuery=0&showLinks=0&formType=sqlSearch&showRows=5&rowOffset=20&client=&viewType=Print |
Recently, LEDs serving as a light source have been used for various purposes because of their longer life and energy savings. In particular, the luminous efficiency of high-output LEDs has been improving in recent years, and thus LEDs are increasingly used for lighting purposes.
In the case of a white LED used for lighting, the light quantity can be increased by applying a larger current to the LED. However, the performance of the LED can degrade under the severe condition of a large applied current. Therefore, there is concern that the LED package and LED module cannot achieve a long life and high reliability. For example, when the electric current flowing through the LED is increased, the heat generated by the LED increases. Accordingly, the temperature tends to rise in the LED module for lighting and in its system, which can cause deterioration of the LED module and the system. In this regard, only about 25% of the electric power consumed in the white LED is converted into visible light; the rest is converted directly into heat. Therefore, the heat must be released from the LED package and the LED module. For example, various types of heat sinks are used to release the heat; the heat sink may be mounted to a bottom surface of a package substrate in order to improve heat release.
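To make the thermal point concrete, the following minimal Python sketch estimates the junction temperature rise; the 25% light-conversion figure comes from the paragraph above, while the input power and thermal resistances are hypothetical example values, not taken from the patent:

p_electrical = 3.0                   # W, assumed electrical input to the white LED
p_heat = p_electrical * (1 - 0.25)   # ~75% of the input becomes heat (per the text)

t_ambient = 25.0                     # degrees C
r_junction_to_sink = 8.0             # K/W, package plus interface material (assumed)
r_sink_to_air = 15.0                 # K/W, heat sink to ambient air (assumed)

# A lower total thermal resistance (e.g. a heat sink mounted on the bottom
# surface of the package substrate) keeps the junction temperature down.
t_junction = t_ambient + p_heat * (r_junction_to_sink + r_sink_to_air)
print(t_junction)                    # 25 + 2.25 * 23 = 76.75 degrees C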
In general, the LED does not have high resistance to static electricity, and thus designs or measures for protecting the LED from the stress attributed to static electricity may be employed (see Patent Literature 1). For example, a Zener diode may be provided in electrically parallel connection with the LED. This can reduce the stress on the LED when an overvoltage or overcurrent is applied to it. However, in the case of a surface-mounted LED package 200 as illustrated in FIG. 26, the Zener diode element 270 is disposed on a package substrate 210 such that it is in reverse-parallel connection with the LED element 220. The placement of the Zener diode element 270 on the substrate makes the entire package larger. That is, further downsizing of the LED package cannot be achieved.
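The protective behaviour described above can be sketched with an idealized model; in this minimal Python sketch, the 5.1 V Zener breakdown and 0.7 V forward drop are hypothetical example values, not values from the patent:

def voltage_across_led(v_applied, v_zener=5.1, v_forward=0.7):
    # Idealized clamp: Zener element in reverse-parallel with the LED element.
    if v_applied >= 0:
        # Forward surge: the Zener breaks down and clamps the voltage.
        return min(v_applied, v_zener)
    # Reverse surge: the Zener conducts in its forward direction, so the LED
    # sees at most about one diode drop in reverse.
    return max(v_applied, -v_forward)

print(voltage_across_led(30.0))    # 5.1  -> an overvoltage spike is clamped
print(voltage_across_led(-30.0))   # -0.7 -> reverse stress is limited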
From time to time the NAS invites our members and other guests to write articles for NAS.org. The opinions expressed therein do not necessarily reflect the official position of the National Association of Scholars.
The National Association of Scholars has been concerned with the conflict between equality and diversity in higher education for much of its existence; among the first 400 articles at the NAS website, 11 were specifically on diversity and 8 specifically on racial preferences. Social justice, also a frequent topic at NAS, is directly dependent on how you define equality, and is embedded throughout higher education. On May 21, 2009, addressing these concerns among others, Peter Wood wrote an essay entitled "Where do we start? Reforming American Education." In it he stated that "[the NAS] opposes...racial preferences," but it also "favors...scholarly inquiry founded on reason and civil debate." He expanded on this:
It doesn't hurt to have a debate over whether America should stick with its Jeffersonian ideal of 'All men are created equal', or switch to the new concept of 'diversity', in which the conception that 'All groups are inherently different', takes precedence…We might benefit as well from a good debate over the essential characteristics of our civilization. Has it on the whole provided a successful path for human flourishing or is it mainly a legacy of various kinds of oppression?0
This essay is intended to contribute to that debate, a debate between classical liberalism and postmodernism.
The above conflict over equality set out by Dr. Wood is part of the greater paradox between the policies and legislation that came out of the post-1965 civil rights movement and the equality and liberty concepts developed during the 1760 to 1776 period that became the U.S. Constitution. This paradox is frequently revealed in the conflicting opinions of cultural critics and observers when they comment on civil rights next to Constitutional concepts. A few examples will serve to bring this out.
Robert Putnam is professor of public policy at Harvard's Kennedy School of Government, and has conducted painstaking research over many years on the importance of social bonds in community. His influential 2000 book, Bowling Alone, describes the consequences to society of the decline of social capital, specifically in civic, religious and other private, voluntary activities. He states that "the bonds of our communities have withered, and...this transformation has very real costs"1. These "bonds" are what de Tocqueville found to be so effective in his 1835 analysis Democracy in America about classical liberalism, American style. Professor Putnam also states unequivocally that in the first half of the 20th century "American society was...more segregated and racist than in the 1960's and 1970's" and this period "marginalized Americans because of race, gender, social class or sexual orientation"2. This interpretation of civil rights, which is properly termed social civil rights, suggests intervention by government into interpersonal relations and conflicts, in contradistinction to Madisonian civil rights, as we will see.
A second example comes from the writings of the noted historian and member of the Kennedy Administration, Arthur Schlesinger, Jr. In 1992 he wrote The Disuniting of America in which he laments that the recent "ethnic upsurge began as a protest against the Anglocentric culture...and threatens the original theory of America"3, the "American Creed" as he and Myrdal term it. Professor Schlesinger also states that "I have been a life-long advocate of civil rights"4 and he called for "shamefully overdue recognition to...minorities...spurned during the high noon of Anglo dominance"5. The "history of women, of immigration, of blacks, Indians, Hispanics and other minorities" explain why "voices long silent ring out of the darkness of history"6, and finally, "The result has been a reconstruction of American history7". Professor Schlesinger believes that civil rights require the intervention of government, but not to the point where it undermines "The Creed". This is the paradox.
The third example comes from the work of Gunnar Myrdal. Professor Myrdal was a Swedish social scientist, professor of International Economics at the University of Stockholm and author in 1942 of a major work entitled An American Dilemma: the Negro Problem and Modern Democracy. Professor Myrdal began his voluminous work by stating his acceptance of the overriding effectiveness of American classical liberalism in its Anglo-protestant form, what he terms the 'American Creed':
The unanimity around this Creed is the great wonder of America. The 'old' Americans adhere to the Creed as the faith of their ancestors. The others -Negroes, the new immigrants, the Jews and other disadvantaged and unpopular groups - could not possibly have invented a system of political ideals which better corresponds to their interests. So, it has developed that the rich and secure, and the poor and insecure have come to profess the identical social ideals8.
Yet nearly the entire two volumes of his book, some 1500 pages, is focused on what he sees as gross violations of this Creed regarding civil rights, first of Negroes, but also of Mexicans, Jews, poor whites, women, children and all the "disadvantaged". Professor Myrdal states that in addition to the Negro, "The masses of whites were also kept from political participation"9 in the South, and "women and children, their present status, reveal striking similarities to those of Negroes"10. He further states that "Mexicans are kept in a status similar to the Negro's...Italians, Poles, Finns are distrusted in some communities; Germans, Scandinavians and the Irish are disliked in others..."11. He concludes that "American civilization is permeated by animosities and prejudices attached to ethnic origin...or race...which keep the minority groups in a disadvantaged economic or social status. They are contrary to the American Creed"12. The paradox is that these ethnic and racial factions in America support and benefit from this American Creed which, it is claimed, discriminates against them. This conflict is commonplace among observers of the American Creed who otherwise believe in it. For example, The U.S. Supreme Court endorsed this government intervention between factions with its "disparate impact" ruling of 1971 and its "diversity" ruling three decades later. Perhaps the "American Creed" is not properly understood.
The way out of this paradox, between James Bryce's "amazing solvent power which American institutions, habits and ideas exercise upon newcomers of all races"13 and Myrdal's rank "animosities and prejudices that are contrary to the American Creed" is found in a proper understanding of Constitutional civil rights. The Founders of the Constitution were greatly concerned with the problem of civil rights, with what they termed the "control of the violence of factions"14, finally summarized in Federalist Papers Nos. 9, 10 and 51. This was debated during the 1760-1776 period leading up to the Revolutionary War but, most importantly, by the Federalists and Anti-Federalists during the post-War (1777-1788) debate and adoption of the U.S. Constitution. The pamphlets of that period and their summation in the Federalist Papers show how the solution to these ancient problems of republics was reasoned through and incorporated into the Constitution; it provides the answer to the above paradox.
One of the four major concepts debated during that post-war period, which established the Federalist and Anti-Federalist alignment, concerned the creation of a republic form of government extending over a large population and geographic land area. The results of this debate, its codification, became the primary reason why American classical liberalism became known as American exceptionalism. This concept provided the means of guaranteeing, on balance, what Myrdal termed the social rights of minorities, what Madison and Montesquieu called civil rights for factions and interests. American exceptionalism, until recently, was unique in the world in its ability to form a consensus across multiplicitous racial and ethnic factions and a continent size area.
The received wisdom in 1760, when the complaints about British rule first began to arise, was that a republic had to be of limited population and land area because of the instabilities created by factions. This came from the monumental work of Montesquieu in 1748, entitled The Spirit of the Laws. This book was a central text in eighteenth century thought and provided an analysis of the political systems of antiquity (Athens, Sparta and Rome), as well as of the contemporary republics of Switzerland and Holland. Montesquieu was referred to throughout the Constitutional period; his analysis that only a small, homogeneous people could succeed as a republic was vigorously advanced by the Anti-Federalists, particularly after the Constitutional Convention and during the ratification debates in the state conventions. They alerted their fellow citizens to "those dangerous elements - factions, interests, and parties" and asked how could "a republican constitution...cope with the fact that the larger the unit of government, the greater the number of factions and the smaller the chance...to control them"15.
John Adams had set the Federalist stage in his 1776 pamphlet: "You and I, my dear friend, have been sent into life at a time when the greatest lawgivers of antiquity would have wished to live. How few of the human race have enjoyed an opportunity of making an election of government...to form and establish the wisest and happiest government that human wisdom can contrive"16. In 1788 the Federalists, Timothy Pickering, John Stevens, James Wilson, Alexander Hamilton and James Madison for example, responded to the Anti-Federalists, first by exhorting their fellow citizens to "rise to the extraordinary occasion before them by thinking freshly and fearlessly"17. Then, in response to Montesquieu and his Anti-Federalist adherents, Hamilton showed that those "tiny republics of classical antiquity were in fact scenes of constant and often fatal squabbling; only the larger confederacies had any stability", and that "the dimensions that Montesquieu must have had in mind were far shorter of those of the present states"18.
Both Federalists and Anti-Federalists contributed to the final constitutional result, and both abhorred the idea of the national government intervening in the individual lives of citizens. The pseudonymous "Brutus" expressed these fears in 1788:
the national government will introduce itself into every corner of the city and country; it will wait upon the ladies in their carriages...[and] at church; it will enter the homes of every gentleman; it will take cognizance of the professional man in his office; it will watch the merchant in his store; it will follow the mechanic to his shop; it will be a constant companion of the farmer; and finally, it will light upon the head of every person in the United States. To all, the language in which it will address them will be, GIVE! GIVE!19
This concern about the intrusion by the national government was incorporated by James Madison in his Federalist mechanism to guarantee social civil rights, as we will see.
The Federalists advanced another innovation and a corollary to the 'large republic' concept, to accommodate republican ideas to the reality of the United States. The received wisdom from Hobbes, Rousseau and Blackstone was that sovereignty was indivisible, residing entirely with the king or Parliament20,21. George Mason in 1788 reasserted that "two concurrent powers cannot exist long together; the one will destroy the other"22. The Federalists responded that, for the conditions relevant to the United States, sovereignty could be divided between the several states and the national government, much like the coexistence of powers between cities and their states23.
With these innovative concepts in place, the United States would become a federal republic encompassing a continent, with multiplicitous factions and interests, divided sovereignty and a resistance to federal intervention, which would become the elements of the guarantee for civil rights. It was left to James Madison to bring all this together and give the federal republic its ability to control factions and assure civil rights; his Federalist Papers Numbers 10 and 51 gave this argument "its ultimate range, depth and intellectual elegance"24. In Federalist 51 he states clearly what Constitutional civil rights are:
In a free government the security for civil rights must be the same as that for religious rights. It consists in the one case in the multiplicity of interests, and in the other in the multiplicity of sects. The degree of security in both cases will depend on the number of interests and sects; and this may be presumed to depend on the extent of the country and number of people comprehended under the same government25.
Madison discusses the alternative of national government intervention to achieve civil rights in Federalist 10: "The latent causes of faction are sown in the nature of man...Removing the cause of faction by destroying the liberty which is essential to its existence...was worse than the disease. Liberty is to faction what air is to fire. Liberty...is essential to political life" and, finally, the curing of "the mischief of factions" is not by "removing its causes" but "by controlling its effects"26. Madison's configuration was not entirely new; Voltaire in his English Letters of 1734, which, incidentally, caused his exile from France, states that "If one religion only were allowed in England, the Government would very possibly become arbitrary; if there were but two, the people would cut one another's throats; but as there are such a multitude, they all live happy and in peace"27.
This condition of short-term discomfort of minorities was inadvertently recognized by Professor Schlesinger when he noted that "nearly all minorities succumb" to "mutual suspicion and hostility...[which] are bound to emerge in a society bent on defining itself in terms of jostling and competing groups"28. Professor Myrdal termed this "bewildering impression...of chaotic unrest as paradoxical", but when "the American Creed is detected, the cacophony becomes a melody"29. Further insight into this came in 1962 from the Preface to the 20th anniversary edition of The American Dilemma. The authors stated that "changes [in the Negro problem] have occurred at a considerably more rapid rate than was anticipated;...the changes have occurred in harmony with the traditions of the American Creed"30. For civil rights at the federal level after 1954 and the "Supreme Court's historic decision wiping out separate but equal", they observed, "it could go no further in principle, for it was now operating in full accord with the Constitutional provisions for full equality"31. In their 20th-year Summary they concluded "the change of the preceding 20 years appeared as one of the most rapid in the history of human relations"32. These trends were confirmed in greater detail and updated by Stephen and Abigail Thernstrom in their 1997 book, America in Black and White. They found that during the two decades between 1940 and 1960 Black Americans improved their position faster than any other group in American history33, just as Myrdal had found. Extending Myrdal's analysis, they found that between 1940 and 1970, Black white collar jobs, annual income, poverty rate and high school education numbers improved by more than 50%, and, of great significance, black voter registration went from 3% to 51%33.
The early 1960s were a period of great good will, of what Montesquieu and Madison would term "republican virtue," towards the civil rights movement; but then with the subsequent 40 years of very active governmental intervention, the attitude changed markedly, to one of manipulation, personal gain, and resentment on all sides. There were also significant losses in key areas to Black Americans, as reported by the Thernstroms33. This has spread to many other factions, as is suggested by the books cited in the first three paragraphs of this essay. A 2009 analysis of this paradox with regard to gender is presented by University of Pennsylvania (Wharton School) Professors Betsey Stevenson and Justin Wolfers in their 48-page research paper, "The paradox of declining female happiness." They conclude that, "the robust evidence [is] in favor of a rather puzzling paradox: women's relative subjective well-being has fallen over a period in which most objective measures point to robust improvement in their opportunities"34.
The Constitutional civil rights approach involving the competition of factions, parties and interests under an observed Constitution unravels the opening and closing paradox and has, after all, emerged as the subtler, more effective, if imperfect, choice. In 18th century terms, we should dis-establish race and gender today just as religion was dis-established in 1789.
George Seaver is a former Teaching Fellow and postdoctoral Fellow at Harvard University and at the Massachusetts Institute of Technology.
References
0. Wood, P., 2009: Where Do We Start? Reforming American Education. National Association of Scholars, May 21, 2009. <www.nas.org>
1. Putnam, R., 2000: Bowling Alone. Simon and Schuster, NY. pp. 402, 17, 201.
2. Ibid., pp. 195, 202.
3. Schlesinger, A. Jr., 1992: The Disuniting of America. W.W. Norton, NY. p. 43.
4. Ibid., p. 75.
5. Ibid., p. 15.
6. Ibid., pp. 65, 66.
7. Ibid., p. 66.
8. Myrdal, G., 1942: The American Dilemma: The Negro Problem and Modern Democracy. Random House, NY. p. 573.
9. Ibid., p. 999.
10. Ibid., p. 1073.
11. Ibid., p. 53.
12. Ibid., p. 52.
13. Bryce, J., 1888: The American Commonwealth. Vol. II. London. p. 328.
14. Madison, J. (Publius), 1788: The Utility of the Union as a Safeguard Against Domestic Faction and Insurrection. The Federalist Papers, No. 10.
15. Bailyn, B., 1992: The Ideological Origins of the American Revolution. Harvard University Press, Cambridge, MA (1967). p. 300.
16. Ibid., p. 272.
17. Ibid., p. 352.
18. Ibid., p. 361.
19. Ibid., p. 337.
20. Ibid., p. 199.
21. Rousseau, J., 1755: The Social Contract. p. 422.
22. Bailyn, B., ibid., p. 336.
23. Ibid., pp. 360-361.
24. Ibid., p. 366.
25. Madison, J. (Publius), 1788: Separation of the Departments of Power. The Federalist Papers, No. 51.
26. Madison, J. (Publius), 1788: The Federalist Papers, No. 10.
27. Voltaire, 1734: Religious Toleration. English Letters.
28. Schlesinger, A. Jr., ibid., p. 112.
29. Myrdal, G., ibid., p. 3.
30. Myrdal, G. and A. Rose, 1962: An American Dilemma: The Negro Problem and Modern Democracy. Pantheon Books, NY. p. xxvii.
31. Ibid., p. xxxiii.
32. Ibid., p. xliii.
33. Thernstrom, S. and A. Thernstrom, 1997: America in Black and White. Simon and Schuster, NY. pp. 83, 187, 236, 240, 265, 355.
34. Stevenson, B. and J. Wolfers, 2009: The Paradox of Declining Female Happiness. University of Pennsylvania (Wharton School) research paper. | https://www.nas.org/blogs/article/the_paradox_of_constitutional_and_post-1965_civil_rights |
An organization called the National Motor Freight Traffic Association (NMFTA) publishes a list of freight class designations, codes, and subclasses for many frequently shipped items (https://classit.nmfta.org/). To view this list, you must pay a subscription fee. Learn more about freight classes from the experts at Koho for free on our freight classes pages.
Freight Class 500 is the most expensive to ship. This classification is reserved for items of very high value or for items that use lots of space but weigh very little.
Multiply the length, width, and height of your shipment to get its volume, then divide the total weight of your package by that number to get its density. If your shipment is 4 feet long, 5 feet wide, and 4 feet tall, you would multiply 4 x 5 x 4 to get 80 cubic feet. If it weighs 800 pounds, you would divide 800 / 80 to get 10 pounds per cubic foot.
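The calculation is easy to script. Below is a minimal sketch in Python; the function name and the density-only output are our own illustration, not an NMFTA or Koho tool.

```python
def pounds_per_cubic_foot(length_ft, width_ft, height_ft, weight_lb):
    """Shipment density, the main input for looking up a freight class."""
    volume_cu_ft = length_ft * width_ft * height_ft  # 4 x 5 x 4 = 80
    return weight_lb / volume_cu_ft

# The example from the text: a 4 ft x 5 ft x 4 ft shipment weighing 800 lb.
print(pounds_per_cubic_foot(4, 5, 4, 800))  # 800 / 80 = 10.0 lb per cu ft
```

Higher densities generally map to lower, cheaper freight classes, which is why Class 500 covers bulky, lightweight goods. | https://www.gokoho.com/nmfc-codes/gall-58840 |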
Nitrogen cycle: the series of processes by which nitrogen and its compounds are interconverted in the environment and in living organisms, including nitrogen fixation and decomposition.
Nitrogen fixation: the chemical processes by which atmospheric nitrogen is assimilated into organic compounds, especially by certain microorganisms as part of the nitrogen cycle.
Nitrogen-fixing bacteria: bacteria that convert nitrogen in the air into forms that can be used by plants and animals.
Ammonification: the formation of ammonia compounds in the soil by the action of bacteria on decaying matter.
Nitrification: the process by which nitrites and nitrates are produced by bacteria in the soil.
Denitrification: the process in which fixed nitrogen compounds are converted back into nitrogen gas and returned to the atmosphere.
Phosphorus cycle: the cyclic movement of phosphorus in different chemical forms from the environment to organisms and then back to the environment.
| https://quizlet.com/387224527/ecology-chapter-18-vocabulary-flash-cards/ |
Rees, Aaron, and Yoav are developing novel imaging techniques for tracking viruses as they diffuse, assemble, and disassemble.
Victoria, Annie and Ming are developing robust pigments by studying the nanostructure of materials.
Mohammad is using capillary forces to self-assemble high aspect-ratio nanopillars into chiral structures.
Anna is using holographic microscopy to study the mechanics of flagella-driven bacterial motion.
Anna is studying the effects of particle shape on the adsorption trajectories of particles that are breaching interfaces.
We are a research group in the School of Engineering and Applied Sciences and the Department of Physics at Harvard University. We do experiments to understand how complex systems such as interacting nanoparticles or proteins spontaneously order themselves — a process called self-assembly or self-organization. We use optical techniques that we develop in our lab to observe both natural systems (such as viruses) and synthetic ones (such as colloidal particles, perhaps dressed up with some interesting biomolecules) in three dimensions and on short time scales. We use the results of these studies to make useful materials and to gain a deeper understanding of the physics of assembly, organization, and life.
LaNell gave a talk at TEDxBeaconStreetSalon about her research! Check it out on YouTube.
Emily has successfully defended her thesis "Dynamic and Responsive Systems from DNA-Mediated Colloidal Interactions." Congratulations!
Aaron has successfully defended his thesis "Using Interferometric Scattering Microscopy to Study the Dynamics of Viruses". Congratulations!
A fundamental unsolved problem is to understand the differences between inanimate matter and living matter. Although this question might be framed as philosophical, there are many fundamental and practical reasons to pursue the development of synthetic materials with the properties of living ones. There are three fundamental properties of living materials that we seek to reproduce: The ability to spontaneously assemble complex structures, the ability to self-replicate, and the ability to perform complex and coordinated reactions that enable transformations impossible to realize if a single structure acted alone. The conditions that are required for a synthetic material to have these properties are currently unknown. This Colloquium examines whether these phenomena could emerge by programming interactions between colloidal particles, an approach that bootstraps off of recent advances in DNA nanotechnology and in the mathematics of sphere packings. The argument is made that the essential properties of living matter could emerge from colloidal interactions that are specific—so that each particle can be programmed to bind or not bind to any other particle—and also time dependent—so that the binding strength between two particles could increase or decrease in time at a controlled rate. There is a small regime of interaction parameters that gives rise to colloidal particles with lifelike properties, including self-assembly, self-replication, and metabolism. The parameter range for these phenomena can be identified using a combinatorial search over the set of known sphere packings.
The effects of contact-line pinning are well known in macroscopic systems but are only just beginning to be explored at the microscale in colloidal suspensions. We use digital holography to capture the fast three-dimensional dynamics of micrometer-sized ellipsoids breaching an oil-water interface. We find that the particle angle varies approximately linearly with the height, in contrast to results from simulations based on the minimization of the interfacial energy. Using a simple model of the motion of the contact line, we show that the observed coupling between translational and rotational degrees of freedom is likely due to contact-line pinning. We conclude that the dynamics of colloidal particles adsorbing to a liquid interface are not determined by the minimization of interfacial energy and viscous dissipation alone; contact-line pinning dictates both the time scale and pathway to equilibrium.
A major fabrication challenge is producing disordered photonic materials with an angle-independent structural red color. Theoretical work has shown that such a color can be produced by fabricating inverse photonic glasses with monodisperse, nontouching voids in a silica matrix. Here, we demonstrate a route toward such materials and show that they have an angle-independent red color. We first synthesize monodisperse hollow silica particles with precisely controlled shell thickness and then make glassy colloidal structures by mixing two types of hollow particles with the same core size and different shell thicknesses. We then infiltrate the interstices with index-matched polymers, producing disordered porous materials with uniform, nontouching air voids. This procedure allows us to control the light-scattering form factor and structure factor of these porous materials independently, which is not possible to do in photonic glasses consisting of packed solid particles. The structure factor can be controlled by the shell thickness, which sets the distance between pores, whereas the pore size determines the peak wave vector of the form factor, which can be set below the visible range to keep the main structural color pure. By using a binary mixture of 246 and 268 nm hollow silica particles with 180 nm cores in an index-matched polymer matrix, we achieve angle-independent red color that can be tuned by controlling the shell thickness. Importantly, the width of the reflection peak can be kept constant, even for larger interparticle distances. | https://manoharan.seas.harvard.edu/ |
This menu is related to processes requiring a numerical calculation:
- Form finding
- Analysis
Iterative process
This opens the iterative process window for the nonlinear calculation of the structure.
Surface loads
The tensile structures analyzed with WinTess3 accept loads on the nodes and on the surface of the membrane. To enter loads on the surface of the membrane, elements must exist. If there are no elements, the program will warn us when we select surface loads, and we can generate them through the menu Elements | Automatic generation. There are different types of surface loads:
- wind loads
- snow loads
- internal pressure (only pneumatic structures).
- prestress loads
Safety factors
From version 3.117, WinTess introduces a new way to apply safety factors to structures.
NOTE: Although this method is much more versatile and better suited to modern codes, the user can ignore it if preferred and continue to calculate as before version 3.117.
The window of safety factors is shown above. The coefficients are grouped in two columns:
LOADS:
It is a scaling coefficient; that is to say, the loads are multiplied by this value. By default the value is 1, so if you do not change it, the calculation is done as always.
On the other hand, if this value is changed, it must be borne in mind that the results (reactions, displacements, etc.) will be affected by it.
If you are analyzing what standards or codes define as SLS (Serviceability Limit State), these coefficients tend to be all equal to 1.
MATERIAL:
It is a reduction coefficient, i.e. the resistances of the materials are divided by this value. By default, these values are the same ones that WinTess3, until version 3.117, regarded as typical (membrane = 5, tube = 1.65, cables = 3).
Apply default values
This button fills all the boxes of the safety factors with the default values entered in the options window.
If you are analyzing what the standards or codes define as ULS (Ultimate Limit State), the program multiplies the safety factors of the loads and the materials, obtaining a global safety factor for each object, and uses this value to calculate the ratio of all the objects in the structure: membrane, cables, tubes, ...
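As an illustration of how such a global factor enters a strength check, here is a small Python sketch. The function and variable names, the default factor values and the example numbers are our own assumptions for illustration; this is not WinTess code.

```python
def utilization_ratio(stress, strength, load_factor=1.0, material_factor=5.0):
    """ULS check: the two partial factors multiply into one global factor.

    A ratio above 1.0 means the element fails the check.
    """
    global_factor = load_factor * material_factor  # as the manual describes
    return global_factor * stress / strength

# Illustrative membrane element: working stress 8 against a strength of 50
# (same units), with the default membrane factor 5 and load factor 1.
print(utilization_ratio(8.0, 50.0))  # 0.8, i.e. 80% utilization
```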
Update unbalanced forces
The program always evaluates the unbalanced forces of the structure during the calculation process. In fact it does not stop the iterative process until the imbalance is practically non-existent. Now, at a given moment, if we want to know the state of balance of the structure, especially if we have made some kind of edit, we can use this menu to force an update of these unbalanced forces.
Cp manual (Wind)
The program prepares itself to assign wind coefficient Cp to different elements.
If we select an element or group of elements (through a window) and then click the right button of the mouse, we get:
In this window, we assign Cp value to the selected elements. We must be careful to give a positive value for suction and a negative value for pressure loads.
Cn (Snow)
The program prepares itself to assign the snow coefficient Cn to different elements. The default value of Cn is 1.
This value is used to multiply the snow load in certain areas of the membrane, where it may differ:
1) In areas that are highly exposed to wind, Cn may be < 1
2) In low areas where it is possible to accumulate snow, Cn can be > 1
We select an element or a group of elements (through a window) and then press the right mouse button, we get:
In this window, we assign the Cn value to the selected elements.
We can visualize Cn values of each element at any time by using the Cn button on the right column.
Form finding (global)
This menu initiates the process of form finding by the Force Density Method. This method is a linear process of solving a system of equations in which the coordinates of the nodes are the unknowns. For equilibrium it is assumed that each bar carries a tension equal to its length multiplied by a factor, or "density".
If we touch the button in the Form Finding state, this menu is run.
If the number of nodes is not very high, this process is very fast. However, if the number of nodes exceeds a certain value, the process slows down quite a bit, and if the number of nodes is too high, it could even crash the program. For this reason, there is another method to find the form, which we call "step by step", discussed below.
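To make the method concrete, here is a minimal NumPy sketch of the classical force-density step: one linear solve for the free-node coordinates. All names and the toy four-bar net are illustrative assumptions, not WinTess's internal code.

```python
import numpy as np

def force_density(bars, q, fixed, n_nodes):
    """bars: list of (i, j) node pairs; q: one force density per bar;
    fixed: dict {node: (x, y, z)} of supports; returns all coordinates."""
    C = np.zeros((len(bars), n_nodes))  # branch-node connectivity matrix
    for k, (i, j) in enumerate(bars):
        C[k, i], C[k, j] = 1.0, -1.0
    Q = np.diag(q)
    free = [n for n in range(n_nodes) if n not in fixed]
    fix = sorted(fixed)
    Cf, Cx = C[:, free], C[:, fix]
    xfix = np.array([fixed[n] for n in fix], dtype=float)
    # Equilibrium without external loads: (Cf' Q Cf) x_free = -Cf' Q Cx x_fix
    x_free = np.linalg.solve(Cf.T @ Q @ Cf, -Cf.T @ Q @ Cx @ xfix)
    coords = np.zeros((n_nodes, 3))
    coords[fix], coords[free] = xfix, x_free
    return coords

# Toy net: four anchored corners, one free node pulled by four equal bars.
bars = [(0, 4), (1, 4), (2, 4), (3, 4)]
fixed = {0: (0, 0, 0), 1: (2, 0, 0), 2: (2, 2, 0), 3: (0, 2, 1)}
print(force_density(bars, [1.0] * 4, fixed, 5)[4])  # -> [1.  1.  0.25]
```

Because the system is linear, one solve gives the equilibrium shape directly, which is why the method is fast for moderate node counts.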
Form finding (step by step)
This menu initiates the process of form finding by the Step by Step method. This method is an iterative process in which one node is resolved at a time. Its coordinates take the value obtained by the Force Density method applied only to this node.
Once all the nodes are modified, it returns to the start of the process. This repetition is done as many times as needed until the maximum displacement of any node in the iteration is very small.
This process is much slower than the global Force Density method, but it has the advantage that the program does not lock up even if the number of nodes is very high.
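The per-node update can be sketched as a Gauss-Seidel style relaxation of the same equations: solving the force-density equilibrium for a single node places it at the density-weighted average of its neighbours. Again, the names below are our own illustration, not WinTess code.

```python
import numpy as np

def form_find_step_by_step(bars, q, fixed, coords, tol=1e-9):
    """coords: dict {node: np.array([x, y, z])}, updated in place.
    Assumes every free node is connected to at least one bar."""
    while True:
        worst = 0.0  # largest node displacement seen in this pass
        for n in coords:
            if n in fixed:
                continue
            num, den = np.zeros(3), 0.0
            for (i, j), qk in zip(bars, q):
                if i == n:
                    num, den = num + qk * coords[j], den + qk
                elif j == n:
                    num, den = num + qk * coords[i], den + qk
            new = num / den  # density-weighted average of the neighbours
            worst = max(worst, float(np.linalg.norm(new - coords[n])))
            coords[n] = new
        if worst < tol:  # stop when the maximum displacement is very small
            return coords
```

Each pass touches one node at a time, so the program never has to factor a large matrix; the price is many iterations instead of one linear solve.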
Form finding (automatic)
This menu does not execute any action but it activates or deactivates a change sensor. During the state of Form finding, if this change sensor is activated, whenever you modify the data of a node or a bar, the program automatically finds the new form.
This is very convenient for structures with few nodes, since we no longer have to click a menu button to find new forms. However, for structures with many nodes this can be annoying, since the form-finding process could take a long time, and we do not always want to find a new form after having modified data. Usually we modify various data and then find the form.
If the sensor is activated, a check mark is shown next to the menu item.
Output
Especially in the state of Analysis, it is good to know the results obtained. WinTess3 can display a large number of tables with the results in nodes, bars, cables, tubes, etc. However these tables can be very long and of little interest.
This is why the most interesting results are grouped in a combined table of results. We can get this table using this menu, and also using the button on the left called “Output”. | http://www.wintess.com/wintess-manual/using-wintess/menus/menu-calculate/ |
At the beginning of the semester, the Office for Religious, Spiritual and Ethical Life introduced open hours at Graham Chapel. These are essentially times when the Chapel is available for general student use. The Chapel, which before was only unlocked for reserved events, will now be unlocked from 9 a.m. to 5 p.m. on weekdays.
The new open hours are advertised as an opportunity for students to “stop by to reflect, pray, pause or catch your breath,” emphasizing the importance of taking a moment for yourself. Though praying is one way to do this, you can also pause in the Chapel for non-religious reasons.
The Chapel’s open hours are an opportunity for students to sit down and de-stress for a moment. However, not everyone may be comfortable with using the Chapel this way—the University was founded as a seminary, and the Chapel is filled with specifically religious objects and decorations. This may be off-putting for some, but the Office wishes the change to be welcoming and to make the Chapel “open to all.”
That means making the space open to both non-religious students and to students of all religions. Though the Chapel is decorated with primarily Christian imagery, the opening of the Chapel as well as the upcoming Interfaith Week (Feb. 14-21) aims to welcome students of all spiritual or religious beliefs.
Additionally, Graham Chapel is only one of the spaces on campus dedicated to prayer or reflection. Reflection rooms can be found in the Danforth University Center, Hillman and Lee Halls, Lopata House and Olin Library. If the Chapel is not a space you feel comfortable in, these rooms are available for student use.
However, the Chapel’s open hours are not only for religious or spiritual students. The more people use the Chapel as a space of reflection and claim it as a student space, the less it will feel tied to specific religious beliefs. The Chapel is now a student space open for uses unaffiliated with academics, and we should take advantage of this. It is a centrally located space on campus and a beautiful building for students to appreciate and use.
Graham Chapel is one of Wash. U.’s most recognizable buildings. Until now, most of the opportunities for students to be in the Chapel have been for events like The Date. During these, there is not much opportunity to take in the beauty of the space. Even if you don’t use the Chapel as a religious space, take a moment to look around and enjoy the building, both aesthetically and as a place to put University life on pause for a moment. | https://www.studlife.com/forum/2020/02/10/staff-editorial-graham-chapel-open-hours-offers-new-space-to-reflect-destress/ |
The construction of religious buildings represents an extraordinary opportunity for the architect to concentrate on the creation of volume, space, and form. Sacred architecture is much less determined than other building tasks by practical requirements, norms, and standards. As a rule, it is free to unfold as pure architecture.
Thus in design terms this building task offers enormous freedoms to the architect. At the same time, however, the special atmospherics of sacred spaces call, on the part of the architect, for a highly sensitive treatment of religion and the relevant cultural and architectural traditions.
In a systematic section, this volume introduces the design, technical, and planning fundamentals of building churches, synagogues, and mosques. In its project section, it also presents approximately seventy realized buildings from the last three decades. Drawing upon his in-depth knowledge of the subject and his decades of publishing experience, the author offers a valuable analysis of the conceptual and formal aspects that combine to create the religious impact of spaces (e.g., the floor plan, the shapes of the spaces, the incidence of light, and materiality).
The Building of England: How the History of England Has Shaped Our Buildings
From awe-inspiring Norman castles to the houses we live in, Simon Thurley explores how the architecture of this small island influenced the world.
The Building of England puts into context the significance of a country’s architectural heritage and reveals how it is inextricably linked to the cultural past – and present.
Saxon, Tudor, Georgian, Regency, even Victorian and Edwardian are all well-recognised architectural styles, showing the influence of the events that marked each period. Thurley looks at how the architecture of England evolved over a thousand years, uncovering the beliefs, ideas and aspirations of the people who commissioned the buildings, built them and lived in them. He tells the fascinating story of the development of architecture and the advances in both structural performance and aesthetic effect.
Richly illustrated with over 500 drawings, photographs and maps, Simon Thurley traces the history and contemplates the future of the buildings that have made England.
Building Structures Illustrated: Patterns, Systems, and Design (2nd Edition)
A new edition of Francis D. K. Ching's illustrated guide to structural design
Structures are an essential part of the building process, yet among the most difficult concepts for architects to grasp. While structural engineers do the detailed consulting work for a project, architects should know enough structural theory and analysis to design a building. Building Structures Illustrated takes a new approach to structural design, showing how the structural system of a building (an integrated assembly of elements with pattern, proportions, and scale) relates to the fundamental aspects of architectural design. The book features a one-stop guide to structural design in practice, a thorough treatment of structural design as part of the entire building process, and an overview of the historical development of architectural materials and structure. Illustrated throughout with Ching's signature line drawings, this new second edition is an ideal guide to structures for designers, builders, and students.
•Updated to include new information on building code compliance, additional learning resources, and a new glossary of terms
•Offers thorough coverage of formal and spatial composition, program fit, coordination with other building systems, code compliance, and much more
•Beautifully illustrated by the renowned Francis D. K. Ching
Building Structures Illustrated, Second Edition is the ideal resource for students and professionals who want to make informed decisions on architectural design.
- Iconic Australian Houses 70/80/90
- Classical Architecture: The Poetics of Order
- Architectural Detailing: Function, Constructibility, Aesthetics (3rd Edition)
- Le Corbusier. The Villa Savoye
Additional resources for Architecture Now! Vol. 1
Example text
Organic small molecules, on the other hand, are generally not very soluble in common solvents and are therefore typically deposited using organic molecular beam deposition (OMBD) or by organic vapor deposition.
Table 1. Chemical description of the organic compounds shown in Fig. 1:
Alq3 - tris(8-hydroxyquinoline) aluminum
Btp2Ir(acac) - bis(2-(2’benzothienyl)pyridinato-N,C3’)(acetylacetonate)iridium(III)
CNPPP - 2-[(6-Cyano-6-methylheptyloxy)-1,4-phenylene] copolymer
COT - 1,3,5,7-cyclooctatetraene
DCM2 - 4-(Dicyanomethylene)-2-methyl-6-(julolindin-4-yl-vinyl)-4H-pyran
DFH-4T - α,ω-diperfluorohexyl-quaterthiophene
DFHCO-4T - 5,5”’-diperfluorohexylcarbonyl-2,2’:5’,2”:5”,2”’-quaterthiophene
F5Ph - bis((2,4-difluoro)phenylpyridine)-(2-(1,2,4-triazol-3-pentafluorophenyl)pyridine)iridium(III)
GDP16b - 2,3-Bis(4-fluorophenyl)quinoxaline(3-tert-butyl-5-(2-pyridyl)pyrazole)iridium(III)
MeLPPP - methyl-substituted ladder-type poly(para-phenylene)
PαMs - poly-α-methylstyrene
PF2/6 - poly(9,9-di(ethylhexyl)fluorene)
PTAA - Poly(triarylamine)
PTCDI-C13H27 - N,N’-ditridecylperylene-3,4,9,10-tetracarboxylic diimide
Upon revolving the sample, centrifugal force spins off the fluid and a uniform film is created. Afterwards, the residual solvent can be removed by baking the sample on a hotplate at elevated temperatures. The resulting film thickness depends on the molecular weight of the organic material, the concentration of the solution and the spin speed. For the organic compounds of Fig. 2, the layer thickness was determined using a Veeco Dektak V200-Si stage profiler or by spectroscopic ellipsometry (Sopra GESP-5).
The first organic lasers used liquid solutions of organic dye molecules. Due to their broad gain spectrum and wide tuning range, commercial dye lasers have existed for many years and are used for various applications. However, the complex and bulky laser design, requiring regular maintenance, as well as the need to employ large volumes of organic solvents, are inherent drawbacks of this technology. The first solid-state lasers employing organic materials were demonstrated in 1967 by Soffer and McFarland using dye-doped polymers and were followed by the realization of lasing in doped single crystals in 1972 and in pure anthracene crystals in 1974. | http://mypotluckkitchen.com/index.php/books/architecture-now-vol-1 |
Types of fat include vegetable oils, animal products such as butter and lard, as well as fats from grains, including maize and flax oils. Fats are used in a number of ways in cooking and baking. To prepare stir fries, grilled cheese or pancakes, the pan or griddle is often coated with fat or oil. Fats are also used as an ingredient in baked goods such as cookies, cakes and pies. Fats can reach temperatures higher than the boiling point of water, and are often used to conduct high heat to other ingredients, such as in frying, deep frying or sautéing. Fats are used to add flavor to food (e.g., butter or bacon fat), prevent food from sticking to pans and create a desirable texture.
Ultimate Indo-European origin of the word is the subject of continued debate. Some scholars have noted the similarities between the words for wine in Indo-European languages (e.g. Armenian gini, Latin vinum, Ancient Greek οἶνος, Russian вино [vʲɪˈno]), Kartvelian (e.g. Georgian ღვინო [ɣvinɔ]), and Semitic (*wayn; Hebrew יין [jaiin]), pointing to the possibility of a common origin of the word denoting "wine" in these language families. The Georgian word goes back to Proto-Kartvelian *ɣwino-, which is either a borrowing from Proto-Indo-European or the lexeme was specifically borrowed from Proto-Armenian *ɣʷeinyo-, whence Armenian gini. An alternate hypothesis by Fähnrich supposes *ɣwino- a native Kartvelian word derived from the verbal root *ɣun- ('to bend'). See *ɣwino- for more. All these theories place the origin of the word in the same geographical location, Trans-Caucasia, that has been established based on archeological and biomolecular studies as the origin of viticulture.
Vitamins and minerals are required for normal metabolism but cannot be manufactured by the body itself and must therefore come from external sources. Vitamins come from several sources including fresh fruit and vegetables (Vitamin C), carrots, liver (Vitamin A), cereal bran, bread, liver (B vitamins), fish liver oil (Vitamin D) and fresh green vegetables (Vitamin K). Many minerals are also essential in small quantities including iron, calcium, magnesium, sodium chloride and sulfur; and in very small quantities copper, zinc and selenium. The micronutrients, minerals, and vitamins in fruit and vegetables may be destroyed or eluted by cooking. Vitamin C is especially prone to oxidation during cooking and may be completely destroyed by protracted cooking. The bioavailability of some vitamins such as thiamin, vitamin B6, niacin, folate, and carotenoids is increased with cooking by being freed from the food microstructure. Blanching or steaming vegetables is a way of minimizing vitamin and mineral loss in cooking.
Texture plays a crucial role in the enjoyment of eating foods. Contrasts in textures, such as something crunchy in an otherwise smooth dish, may increase the appeal of eating it. Common examples include adding granola to yogurt, adding croutons to a salad or soup, and toasting bread to enhance its crunchiness for a smooth topping, such as jam or butter.
Camping food includes ingredients used to prepare food suitable for backcountry camping and backpacking. The foods differ substantially from the ingredients found in a typical home kitchen. The primary differences relate to campers' and backpackers' special needs for foods that have appropriate cooking time, perishability, weight, and nutritional content.
Many cultures hold some food preferences and some food taboos. Dietary choices can also define cultures and play a role in religion. For example, only kosher foods are permitted by Judaism, halal foods by Islam, and in Hinduism beef is restricted. In addition, the dietary choices of different countries or regions have different characteristics. This is highly related to a culture's cuisine. | http://mecookbook.com/wine-country-drinking-game.html |
The Patient Access Partnership (PACT) actively contributes to the discussion on access to healthcare and organised a meeting of the MEP Interest Group on Access to Healthcare on 29 June. This event was jointly chaired by MEPs Karin Kadenbach, Andrey Kovatchev, European Patients’ Forum Secretary General Nicola Bedlington and PACT Secretary General Stanimir Hasardzhiev, and provided an opportunity for all interested stakeholders to discuss the future needs of healthcare systems.
The Dutch Presidency has been focusing on the area of pharmaceuticals during their Presidency, which have led to a set of Council Conclusions entitled ‘Strengthening the balance in the pharmaceutical systems in the EU and its Member States’. These Conclusions, adopted at the EPSCO Council held on 17 June, demonstrate the growing recognition of the need to address unequal access to medicines across the EU, with an emphasis on innovation, access and sustainability.
Jointly chaired by MEPs Karin Kadenbach, Andrey Kovatchev, European Patients’ Forum Secretary General Nicola Bedlington and PACT Secretary General Stanimir Hasardzhiev, the 29 June event provided an opportunity for all interested stakeholders to be informed of and discuss the efforts of the Dutch Presidency and the content of the Council Conclusions, as well as of the views and activities of the European Parliament and Commission.
Speaking at the event, DG Santé’s Andrzej Rys stated that the debate on access to medicines is taking place at national, EU and international level, addressing the proportionality of prices with health benefits, the affordability of and access to effective treatment and the efficiency of pharmaceutical spending. The Commission is committed to providing the methodology for an evidence-based analysis of the impact of the incentives in the current EU legislative instrument on innovation as well as on availability.
Christian Siebert (DG GROW) underlined the Commission’s interest in this topic as well as its complexity. A triangle of interrelated concepts, i.e. sustainability, innovation and access, each with its own value and each impacting on the others, makes it difficult to determine the precise impact of specific actions and incentives in this area. Multi-stakeholder involvement is a must if progress is to be made.
Representing the Dutch Presidency, Marcel van Raaij described the Council Conclusions as a political statement which serve to address the pharmaceutical system as a whole, with the aim of ‘rebalancing’ the system to make it work as intended – and this entails legislation, innovation, incentives and national policies. The Conclusions are also intended to initiate a longer-term strategic cooperation to ensure consistency and continuity, owned by all member states.
MEP Soledad Cabezon Ruiz (ES-S&D) informed the audience of the own Initiative report on improving access to medicines which is currently being drafted with a view to discussing this in Committee in early October. This will address issues related to quality, safety and innovation, as well as therapeutic value and also look at how industry sets the pricing, taking the value of the new products into account.
Speaking on behalf of the European Patients’ Forum, Nicola Bedlington referred to an EPF consensus statement which makes the case for a patients and human rights based perspective in the debate on access to medicines and advocates for systems that address patients’ needs irrespective of their means, stating that ‘now is the time for shared responsibility and strong leadership to move forward in this area’.
PACT’s Secretary General Stanimir Hasardzhiev stated that ‘medicines are a product like no other as patients ‘lives depend on them. The current inequalities in access need to be addressed. Once products are registered they should be accessible to all patients’. Furthermore, he emphasised the need to take a holistic approach in addressing access to healthcare which recognises the importance of all five elements of access, as defined by PACT (availability, adequacy, accessibility, affordability, and appropriateness).
| https://eupatientaccess.eu/archives/174 |
Successful digital transformation can create sustained growth and competitive advantage, but it’s often a highly complex undertaking that requires coordination across people, processes, organizational cultures and, of course, technology. In addition, each journey is shaped by the industries in which a particular enterprise is undergoing transformation. And with unprecedented challenges currently facing hospitals and healthcare facilities, the need for digital transformation to help address increased workloads and mission critical activity is even greater.
Navigating digital transformation alone can mean steep learning curves and delays, but with the support of established open source communities and effective collaboration with a global systems integrator (GSI), healthcare organizations can improve digital transformation timelines and outcomes.
Creating a connected ecosystem
The healthcare industry continues to undergo waves of increased digitalization, including the adoption of data-intensive technologies like electronic health records (EHR), wearable medical and other IoT devices, telehealth, augmented reality (AR), and artificial intelligence (AI). Despite recent world economic conditions as a result of COVID-19, research indicates that the adoption rate of technologies to help streamline healthcare workstreams and scale up systems will continue to grow in the coming years.
Important gains in foundational IT capabilities play an essential role in this trend. For example, big data and analytics solutions aggregate inputs from myriad systems to generate meaningful patient and operational actionable insights. 5G networks offer higher internet speeds that can accelerate EHR transfers and data interchanges while improving IoT reliability and remote monitoring abilities.
These IT-driven capabilities provide numerous opportunities to improve the patient experience, including:
- Maintaining a 360-degree view of each patient;
- Catching and addressing health issues earlier;
- Better and faster identification of effective treatment plans;
- Fewer diagnostic failures and related rework;
- Downward cost pressure on positive healthcare outcomes.
Staying informed about the most effective use cases for each of these new technologies and developing related skills among in-house teams can be a challenge. That’s where working with an experienced GSI can help realize opportunity quickly, cost effectively, and without sacrificing security.
Mitigating risk with the right partner
The current wave of digitalization means data simultaneously resides in multiple places – such as medical facilities and patient devices – and routinely travels between them via a combination of private and public networks. This reality creates new vulnerabilities for possible cyberattacks and data security risks.
Securing data at rest, on wire, and on cloud is essential for compliance with federal HIPAA regulations, protecting personal health information (PHI), and overall business viability. However, while healthcare organizations are bound by federal regulations, patients and end users are often not properly educated on how to protect their data.
Protecting organizations and patients requires a skillful blend of expertise and technology. A knowledgeable GSI is equipped to help solve complex challenges and can build effective security capabilities into every aspect of transformation projects, so the focus can stay on improving operations and healthcare outcomes.
Thinking for the future
As healthcare industry leaders continue to embrace new technologies – including the latest supported, open source innovations – to address emerging healthcare IT needs while safeguarding data and complying with evolving regulations, it’s important to establish a roadmap of priorities and goals in order to make data-driven decisions. Aligning with overarching goals of optimizing system agility and resilience, enhancing the patient experience, and improving patient health outcomes, here are some suggested considerations to keep in mind:
- Establish a multicloud strategy that promotes secure data management, flexibility, and interoperability.
- Safeguard EHRs and PHI by complying with the standards set by HIPAA and HITECH and utilizing NIST-certified data security standards and platforms.
- Embed AI in diagnostic processes to improve prevention, early detection, treatment plan development, and effective drug discovery.
- Optimize compute costs and portability using containerization and cloud technologies, which provide an easier way to port workloads from on-premises to virtualization and on private and public clouds.
Adopting the open source way for healthcare
These ambitious enterprise healthcare IT goals are achievable by leveraging open source innovations, particularly those using containers to increase the efficiency and speed of application deployment at scale, in conjunction with automation tools for added management capabilities. Open source technologies are driving rapid innovation in the fields of AI, predictive analytics, data integration, information exchange and cybersecurity standards. In fact, many of Red Hat’s offerings are FIPS 140-2 certified by NIST, providing secure platforms for healthcare IT workloads, both on-premises and in the cloud.
Working with the right GSI helps organizations get the most value from Red Hat’s open source solutions by applying industry and technology-specific expertise, experience, and intellectual property. For example, Red Hat collaborates with DXC Technology to deliver solutions such as the DXC Managed Container Platform as a Service (MCPaaS) offering at scale, freeing healthcare organizations from the overhead of managing a cloud platform while allowing them to realize important cloud milestones from day one.
See it in action
Open source technology is a proven path for extending the capabilities and performance of legacy infrastructure while improving privacy, agility and patient outcomes. See what Intermountain Healthcare has been able to achieve since upgrading its aging IT environment with help from Red Hat.
To learn more about how open source technologies can support IT modernization and the journey to cloud strategies, please visit redhat.com/health.
About the author
Banu Bhandaru is a senior solutions architect in Red Hat’s Channel Alliances organization with a special focus on emerging cloud technologies. He has over 20 years of experience in the IT industry, helping customers adopt and implement CRM, Business Automation, Middleware and NoSQL solutions on-premises and in the cloud. | https://www.redhat.com/en/blog/driving-healthcare-it-transformation-global-systems-integrators |
Why do leaves fall off trees?
Have you ever noticed that when you cut down a tree — or a tree dies suddenly — the dead leaves stay on the tree for a long time? The leaves go from green to brown, but stay on the dead tree branches.
Every autumn the leaves die too, but they first turn colors and then fall off the tree. Why do leaves stay on the tree when the tree dies suddenly, but fall off the tree when a tree goes into fall?
First, it is important to understand why leaves fall off the trees every year. The tree uses a lot of resources to manufacture each leaf, so it seems a waste to go through that same process every year. It turns out that, just like everything else in nature, there is a very good reason for this process.
It’s on purpose
Some believe that as the leaves start to receive less sunlight, the health of the leaf declines and it will become brittle and fall off. A good wind will easily knock off the colorful dying leaves. It is not so simple. The trees actually do this on purpose.
The daylight in fall gets shorter and shorter, as the sun’s path in the sky becomes shorter and shorter due to the direction of the Earth’s tilt at that time of year. The shorter days, combined with cooler temperatures, trigger a chemical change in the tree. This chemical change is basically the tree saying to the leaves, “It’s been a great year — now get off of me.”
This chemical signal activates a layer of specialized “abscission” cells between the leaf and the stem. The word “abscission” has a similar meaning to the word “scissors,” and the cells make the cut. The leaf falls to the ground.
But why?
We all know that the main purpose of a leaf is to gather sunlight and convert that sunlight to energy for the tree. (You better know this by now!) However, this process could damage the tree in the winter for a couple of reasons.
— A tree transports the energy the leaves make using large amounts of water. Water freezes in the winter and expands. This expansion of freezing water would severely harm the tree’s cells.
— In areas where there is snow and ice, leaves catch all that extra weight and would pull many of the branches off the tree. This would also severely harm the tree.
It’s not just the cold
During a prolonged dry spell, the tree might not have enough water to properly function. If the leaves start to die on the tree, the connected branches might also die. Instead, some trees have evolved so that the chemicals kick in and cut the leaf off. The branch goes dormant and survives to produce leaves next spring.
You might wonder about those evergreen plants — the ones that keep their leaves all winter long. These plants evolved to survive the winter in a different but just as effective way. Instead of the leaves falling off, these plants developed a type of anti-freeze liquid inside them that resists freezing.
Just like all life, plants have had millions of years to perfect themselves through many new adaptations. Next time you see a leaf floating to the ground, know that it was time for the tree to say goodbye and cut the leaf loose.
Mike Szydlowski is science coordinator for Columbia Public Schools.
TIME FOR A POP QUIZ
1. If you see dead leaves stuck to a single tree branch in winter, what might be true about the tree?
2. What makes leaves fall off of trees?
3. Why do trees want to lose their leaves in the fall?
4. Is fall the only time cells cut leaves loose? Why or why not?
5. Evergreen plants do not lose their leaves in winter. How are they not damaged?
LAST WEEK'S POP QUIZ ANSWERS
1. The story uses the term metamorphosis in the first paragraph. What does that word mean?
Metamorphosis means to change from one form to another.
2. What percent of pumpkins are used for food?
2%
3. Name one benefit and one negative in using pumpkins to clean polluted soil.
Benefit: Using pumpkins is cleaner and cheaper than traditional methods.
Negative: The process of cleaning the soil takes longer using pumpkins.
4. Might there be a danger to wildlife by using the pumpkin clean up method? How could you fix it?
The pumpkins grown in polluted soils would be contaminated. One solution would be to block off the pumpkin crops from animals using fences and screen.
5. Why is the protein in pumpkin skin an important discovery for humans?
Some disease causing microbes are becoming resistant to our medicines so scientists have to find new medicines they will react to. | https://www.columbiatribune.com/story/news/local/2020/11/04/why-do-leaves-fall-off-trees-science-has-answers/6121764002/ |
To play a hand of poker, players first place an “ante” (for example, a nickel) into the pot. In draw poker, each player then receives five cards. Hands are ranked by category, from a simple high card, through one pair, two pair and three of a kind, up to straights, flushes and better hands. When two players hold hands of the same category, the tie is broken by the highest card involved.
The best possible hand at any given moment is called “the nuts”; exactly which hand that is depends on the cards on the board. A hand completed by hitting the necessary cards on both the turn and the river is called a “backdoor” flush. Holding the nuts gives the player a significant statistical advantage. However, it’s important to remember that the player who lands the best hand isn’t necessarily the best player.
There are many types of poker, but the three main types are stud, draw, and community card games. Most friendly poker tables let the dealer decide which type of game to play, whereas more formal tournaments specify the rules for each one. When playing poker, it is important to know when to release a hand and when to bet, so you can maximize your chances of winning. If you don’t know the rules, including how wild cards work, you’ll be at a disadvantage. | http://gardencourtretirement.com/?p=6 |
The Trustees selected Martin Meyerson to succeed Gaylord Harnwell when they unanimously elected him president at their meeting on 28 January 1970. Meyerson had enormous shoes to fill when he took office on the first of September. The Educational Survey and the Integrated Development Plan of the Harnwell era set the pace for ambitious and aggressive strategic planning and fundraising for the new administration. Luckily, Meyerson could boast enormous experience in the field of urban planning. In fact, he had even served as Professor of City and Regional Planning at the University’s Graduate School of Fine Arts from 1956 to 1957. He also boasted years of administrative experience. Meyerson successfully picked up where Harnwell left off by designing and implementing an extensive educational plan at the beginning of his tenure and a wildly successful fundraising initiative towards the end of his tenure. Minorities and women also made important strides towards achieving equality at Penn during the Meyerson administration, as he designed and continuously revised a detailed affirmative action plan throughout his term.
At the onset of his presidency Meyerson outlined a number of educational goals he hoped to accomplish. In January 1972 he released Directions for the University of Pennsylvania in the Mid Seventies to the Trustees. In this progress report Meyerson revealed plans to attract new sources of funding in order to increase the University’s endowment. He made other suggestions for: extending and reinforcing professional training; correlating educational programs; strengthening undergraduate professional programs; allowing undergraduates to pursue graduate options; expanding undergraduate opportunities; focusing on good programs; and creating new interdisciplinary programs. The most important revelation Meyerson made in this document was his plan to create a University Development Commission, comprised of faculty and students who would evaluate and make suggestions for educational improvement.
While none of the changes proposed by the various workshops in Reinforcement and Change in Undergraduate Education were considered final, their work strongly influenced the development of undergraduate education at Penn during the 1970s.
The most famed educational planning initiative of the Meyerson era began its work in February 1972. At their meeting on 11 February the Trustees determined that representatives from the faculty, from the student body, and certain administrative officers would meet, discuss, and formulate recommendations for general policies regarding the educational objectives of the University. In May 1972 the work teams released their first progress reports to the Trustees. University Development Commission: Summary of Progress Reports of the Work Teams described the Commission’s work in the areas of: reallocation; undergraduate education; educational living patterns; endowed scholarships and fellowships; endowed professorships; libraries; intra-university cooperative programs; inter-institutional cooperative programs; continuing education; audiovisual resources; graduate education, and professional schools. The Commission highlighted the basic values of education at Penn, which included a dedication to teaching and scholarship, a new emphasis on excellence in teaching, sustained attention to undergraduate education, and matching the strength of the key professional schools in all other schools.
In February 1973 the culmination of the year-long Development Commission's study appeared in the form of the Commission's final report, Pennsylvania: One University. The Development Commission ultimately made 94 recommendations for improving educational programs centered around five themes: educational directions, academic planning, tools for scholarship and learning, the changing membership of the University, and the arts and environment at the University. Meyerson published a follow-up document to the Development Commission, titled The Implementation of the Development Commission Recommendations, in February 1973. This report described action being taken towards making Penn “One University,” a phrase coined and popularized by Meyerson. The areas considered in the report included: reallocation of funds, undergraduate education, graduate and professional education, black presence, intrauniversity cooperation, continuing education, interinstitutional cooperation, endowed professorships, the library, and future planning. The University administration planned to act on the Commission's recommendations as soon as possible. Two years later, in January 1975, the administration released another report describing the progress of the Development Commission, Second Report on the Implementation of the Development Commission Recommendations. By this time the University had successfully merged the College, the College for Women, the Graduate School of Arts and Sciences, and the social science programs previously housed in the Wharton School to create the new Faculty of Arts and Sciences. This second report described further improvements as related to the five key areas determined by the Commission's final report from February 1973.
Not only did Meyerson work to improve education at Penn, but he also pushed to diversify the campus. Building on the national policy established by US President Lyndon Johnson in 1965, Meyerson introduced a comprehensive affirmative action plan for Penn in June 1971. At the 11 June 1971 meeting of the Trustees Meyerson requested that each school and department submit its own proposal for increasing the number of women and minority faculty, revising the nepotism rule, establishing a daycare center, and appointing an ombudsman. In February 1976 Meyerson issued a summary of Penn’s affirmative action plan as it had developed since 1971. The Affirmative Action Program of the University of Pennsylvania intended to hire more women and minority faculty and staff, but it also worked in earnest to maintain a standard of fairness. The 1976 publication reported on general principles for affirmative action in addition to principles for an academic setting and the specific affirmative action program for the University. The University’s affirmative action plan described responsibility; policies and procedures for non-academic personnel; policies and procedures for academic personnel; utilization analyses, goals, and timetables; and also included an internal audit and hiring analyses.
From the inception of the Affirmative Action plan it became clear that it specifically targeted the visibility of women and African Americans. In April 1971 a Committee on the Status of Women reported on “Women Faculty in the University of Pennsylvania,” which appeared in three installments in the Almanac. The Committee was charged with: collecting information on female representation in the various departments and schools; soliciting views from members of the Penn community on female presence; studying the percentage of women in academic positions, including their rank and discipline; and making recommendations to ensure an equal representation of women. The Committee generally found that women were not fairly represented in the higher ranks. They also found that the University had a tendency to prefer male candidates over their female counterparts. In Part III they urged the University to increase the number of women faculty.
The African American presence at Penn also concerned the Meyerson administration. Although the Development Commission made recommendations for increasing the black presence at the University, the administration did not feel it did enough. As a result, Provost Stellar established a Task Force on Black Presence in August 1976. The Task Force submitted “Report of the Task Force on Black Presence” to the Trustees on 10 June 1977. In the report the Task Force proposed methods for supporting the black presence through the affirmative action program, undergraduate and graduate admissions, the curriculum, and through improving university life for African American students, faculty, and staff. After reviewing the report, even Trustee Kaysen admitted that the University had not done enough to solidify a strong African American presence on the campus. The Report of the Task Force on Black Presence, however, did reveal the Meyerson administration’s desire to both increase the number of African Americans on campus and improve the quality of life for Penn’s existing African American community.
While Martin Meyerson worked to improve the social and learning environments at Penn he also developed plans for a major fundraising campaign; he understood that he would never see his visions for the University realized if he didn’t have a sufficient amount of money at his disposal. The Trustees first resolved to launch a campaign to begin in 1975 at their meeting on 11 January 1974. The administration worked in secret over the next year. In October 1975, having already built a nucleus fund of $45.8 million, the Trustees announced a five year goal to raise $255 million. “Program for the Eighties” was motivated by Meyerson’s “One University” vision and therefore called for support from every single member of the Penn community. As the 7 October issue of the Almanac informed, “Trustee John Eckman outlined a comprehensive campaign plan in which no stone is to be unturned locally, nor across the nation.” The administration’s assertive effort made the campaign an enormous success. By the end of June 1980 the Program for the Eighties reached its goal. The campaign also ranked among the top three fundraising efforts by American universities in history.
While the administration focused its attention and energies on the Program for the Eighties, it also was forced to manage an extended period of hardship in University finances. Then, in March 1978, more than 1,200 students participated in a three-day student sit-in in College Hall in order to protest proposed budget cuts that would drop hockey and other sports, as well as the professional theater program at the Annenberg Center. The protest ended only when the Trustees, administrators, faculty, and students reached a settlement that promised to give students a stronger voice in University governance. A 31-point settlement agreement went as far as to give seats on the Board of Trustees to one student and one faculty member, a first for major private institutions. The administration also created a new “1978 Task Force on University Governance” in response to the sit-in. In its final report, the second Task Force on University Governance reviewed the 1970 Task Force’s Report (see Harnwell), analyzed the problems created by shrinking resources, and proposed changes, accordingly. The Task Force reaffirmed the experimental plan to offer seats on the Board to one student and one faculty member.
Together they lifted the intellectual and societal aspirations of this community of scholars and increased the vitality of the University as an educational institution of international stature.
Meyerson molded “One University” characterized by excellence. | https://archives.upenn.edu/exhibits/penn-history/institutional-planning/meyerson |
Q:
A Few Area Related Questions...
I need a little help with a few simple geometry questions that I need to resolve:
Given a point (X, Y), how do I know whether that point is inside or outside each of the following shapes, using the information I have about each shape?
1- A Rectangle (I have the Length and Width and the (X, Y) of one of the corners of the rectangle)
2- A Square (I have the size of the side and the (X, Y) of one of the corners of the square)
3- A Circle (I have the radius and the (X, Y) of the center of the circle)
4- A Triangle (I have the three (X, Y) for each vertices of the triangle)
5- A Donut (I have the (X, Y) of the center of the donut and both the small and big radius)
Thanks for any help that can be provided...
A:
Both $1$ and $2$ are similar cases; let's focus on the rectangle, since a square is simply a special case of a rectangle. As mentioned in the comments you need to know the orientation of the shape. Let's assume that you're given the bottom left corner $(x_0,y_0)$ of the rectangle. We can characterize the points in this rectangle as
$$
R = \{ (x,y) \mid x = x_0 + u \cdot w, y = y_0 + v \cdot h, u,v \in [0,1] \}
$$
where $w = \mathrm{width}, h = \mathrm{height}$. Then a point $(x,y)$ is in $R$ (and thus inside the rectangle) iff $x = x_0 + u \cdot w, y = y_0 + v \cdot h$ for some $u,v \in [0,1]$. Note now that a square is simply a rectangle with $w=h$.
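For an axis-aligned rectangle this parametrization reduces to two range checks. Here is a minimal Python sketch of that test; the function names are my own, and it assumes $(x_0, y_0)$ is the bottom-left corner with $w, h > 0$:

```python
def in_rectangle(x, y, x0, y0, w, h):
    # Solve x = x0 + u*w and y = y0 + v*h for u and v,
    # then check that both parameters lie in [0, 1].
    u = (x - x0) / w
    v = (y - y0) / h
    return 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0

def in_square(x, y, x0, y0, side):
    # A square is just a rectangle with w == h.
    return in_rectangle(x, y, x0, y0, side, side)
```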
Next a circle and a donut are similar shapes, we can consider a circle to be a donut with inner radius of $0$. Let the donut be centered at $(x_0,y_0)$ with inner radius $r_i$ and outer radius $r_o$. Then this shape can be described by the set
$$
D = \{ (x,y) \mid r_i^2 \le (x-x_0)^2 + (y-y_0)^2 \le r_o^2\}
$$
so to test whether a given point satisfies $(x,y) \in D$ (and thus whether the point is inside the donut) we need to check whether or not $ r_i^2 \le (x-x_0)^2 + (y-y_0)^2 \le r_o^2$.
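In code this is a single comparison of squared distances; squaring both radii avoids taking a square root. A small Python sketch along the same lines (function names are mine):

```python
def in_donut(x, y, x0, y0, r_inner, r_outer):
    # Compare the squared distance from the center against the
    # squared inner and outer radii (no square root needed).
    d2 = (x - x0) ** 2 + (y - y0) ** 2
    return r_inner ** 2 <= d2 <= r_outer ** 2

def in_circle(x, y, x0, y0, r):
    # A circle (disk) is a donut whose inner radius is 0.
    return in_donut(x, y, x0, y0, 0.0, r)
```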
EDIT: Realized I made the wrong lines for the triangle, doh!
Finally, for a triangle with corners $(x_0,y_0),(x_1,y_1),(x_2,y_2)$, consider the lines from the point we're testing, $(x,y)$, to each of the corners:
$$
y - y_0 = m_{0}(x - x_0) \\
y - y_1 = m_{1}(x - x_1) \\
y - y_2 = m_{2}(x - x_2)
$$
where $m_{i} = \frac{y_i - y}{x_i - x}$. To test whether the point lies in the interior of the triangle, compute the angle between each pair of these lines, taking the smaller angle (the one in $[0,\pi]$) at each pair: the three angles sum to $2 \pi$ exactly when the point is inside, and to less than $2 \pi$ when it is outside. Calculating these angles takes a little linear algebra: construct a vector from the point to each corner, take dot products, and recover each angle from the normalized dot product. Although this is intuitive it clearly isn't the fastest way; other, faster algorithms are detailed on many programming websites.
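A minimal Python sketch of the angle-sum test described above (the function name and the floating-point tolerance are my own choices):

```python
import math

def in_triangle(x, y, corners, tol=1e-9):
    # corners: [(x0, y0), (x1, y1), (x2, y2)]
    # Sum the angles at (x, y) between vectors pointing to consecutive
    # corners; the sum equals 2*pi iff the point is inside (or on the
    # boundary of) the triangle.
    total = 0.0
    for i in range(3):
        ax, ay = corners[i][0] - x, corners[i][1] - y
        bx, by = corners[(i + 1) % 3][0] - x, corners[(i + 1) % 3][1] - y
        na, nb = math.hypot(ax, ay), math.hypot(bx, by)
        if na == 0.0 or nb == 0.0:
            return True  # the point coincides with a corner
        # Clamp to [-1, 1] to guard against floating-point overshoot.
        cos_t = max(-1.0, min(1.0, (ax * bx + ay * by) / (na * nb)))
        total += math.acos(cos_t)  # acos gives the smaller angle, in [0, pi]
    return abs(total - 2.0 * math.pi) < tol
```

For example, with corners [(0, 0), (4, 0), (0, 4)], the point (1, 1) yields an angle sum of $2\pi$ and tests inside, while (10, 10) sums to well under $2\pi$ and tests outside.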
| |
Chemical engineering is the direction and design of chemical reactions on an industrial scale, for the purpose of energy production as well as human development in general. Chemical engineering jobs involve using scientific and engineering principles to research, develop, and manufacture chemicals, drugs, and a wide range of other products.
Typically, chemical engineers have to design experiments, create safety procedures for working with dangerous chemicals, conduct tests as well as monitor results throughout production.
Role & Duties
Chemical engineers apply the principles of chemistry, physics, and mathematics to solve problems that arise in production processes. They design equipment for manufacturing, and they plan and test methods of production.
On a typical day, chemical engineers perform the following functions:
- Research to generate new and enhanced manufacturing processes
- Develop safety procedures for those working with potentially dangerous chemicals
- Develop methods to separate components of liquids and gases, or to generate electrical currents, using controlled chemical processes
- Outline and plan the layout of equipment
- Conduct experiments and observe the performance of processes throughout production
- Troubleshoot problems with manufacturing processes
- Assess equipment and processes to ensure compliance with safety and environmental regulations
- Estimate production costs for management
Skills
Analytical skills: Chemical engineers must possess strong analytical skills. This is necessary to figure out why a particular design does not work as planned.
Creativity: Chemical engineers must be able to discover new ways of applying engineering principles. They have to invent new materials, advanced manufacturing techniques, as well as new applications in chemical engineering.
Math skills: Chemical engineers use calculus and other advanced topics in mathematics for analysis, design, and troubleshooting in their work.
Problem-solving skills: Chemical engineers have to troubleshoot problems related to workers’ safety as well as problems related to manufacturing and environmental protection. They must also be able to anticipate and identify problems to prevent losses for their employers and stop environmental damage. | https://newsd.co/chemical-engineering-jobs/
Four Alternatives for Resolving Family Matters During COVID-19
Family life cannot be put on hold, and family conflicts continue to arise despite the closure of our primary dispute-resolution mechanism, the courts. The most common causes of separation and divorce include financial stress and unemployment, illness or death of a family member, differences in conflict resolution styles and lack of communication, all of which families may be experiencing more acutely during the COVID-19 pandemic.
Issues which arise on marriage breakdown include property and pension division, parenting time, financial support and a wide range of other topics which are pivotal to family life. What conflict resolution options are available?
The spectrum of conflict resolution options available to families in British Columbia range from classic negotiation, where the parties have the most control over the resolution of their dispute, to court, where judges make decisions which are imposed on the parties. This article will explain four conflict resolution options which remain available and the pros and cons of each.
- Negotiation
Classic negotiation is a one-on-one form of dispute resolution where parties negotiate directly with one another. The benefit of direct negotiation is that it may be cost- and time-effective; however, there are many downsides to direct negotiation including power imbalances, lack of legal information available to the parties, regret over the outcome, lack of reality-testing in the proposed outcome and the resulting inability of the parties to document and implement the agreed upon result.
- Mediation
In mediation, parties work with a neutral third-party professional to guide them in identifying the issues, facilitating effective communication and assisting them in reaching flexible, balanced, personalized solutions. In mediation, resolution comes as a result of the parties’ mutual agreement. Mediation takes an integrative approach as opposed to a distributive approach to family conflict; rather than dividing benefits among participants, an integrative mediator will work collaboratively with the parties to allow them each to maximize the benefits available to them. The advantages of mediation are extensive and include control over the outcome, an emphasis on privacy and confidentiality, and the ability to engage a legal professional cost-effectively to reach and document a mutually-beneficial resolution without hostile and timely litigation.
- Mediation-Arbitration (Med-Arb)
Med-Arb is a hybrid of mediation and arbitration, and combines the benefits of mediation with the added benefit of arbitration, including finality. In a Med-Arb, the parties agree at the outset that they will use their best efforts to resolve the conflict by mutual agreement but, failing that, that the mediator will become an arbitrator and render a final and binding decision on unresolved matters. The advantage of Med-Arb is the assurance of an outcome. The disadvantage of Med-Arb is that, ultimately, the outcome may not be one that either party would have agreed to.
- Arbitration
Arbitration operates much like a court hearing, but the decision-maker is an industry professional as opposed to a judge. Arbitration results in a decision being imposed by an impartial decision maker after hearing the evidence. An arbitrator’s decision is binding. The disadvantage of arbitration is that, as with court, the parties do not have control over the outcome; the advantage of arbitration is that the decision-maker is an expert in the field of family law and arbitration can often be scheduled a lot more quickly than a court hearing.
Use of Courts for “Urgent” Family Matters
Court remains an option for “urgent” family law applications and we have received some guidance from the Ontario Superior Court as to what will constitute urgency (Thomas v. Wohleber, 2020 ONSC 1965):
- The concern must be immediate; that is, one that cannot await resolution at a later date;
- The concern must be serious in the sense that it significantly affects the health or safety or economic well-being of parties and/or their children;
- The concern must be a definite and material rather than a speculative one. It must relate to something tangible (a spouse or child’s health, welfare, or dire financial circumstances) rather than theoretical;
- It must be one that has been clearly particularized in evidence and examples that describes the manner in which the concern reaches the level of urgency.
Despite the ongoing ability of the court to hear urgent matters, judges are highlighting the importance of alternative dispute resolution during these uncertain times (Ribeiro v. Wright, 2020 ONSC 1829): “Right now, families need more cooperation. And less litigation.”
Tips for Remote Video Conferencing
Mediators and arbitrators across the province are working with videoconference providers to ensure that families can access mediation and arbitration services during the COVID-19 pandemic. If you plan to mediate or arbitrate remotely, please keep the following guidelines in mind to protect your privacy and confidentiality:
- Ensure that your dispute resolution professional creates a private meeting for your videoconference that requires a password for you to enter;
- Ensure that all recording abilities are disabled (mediations are without prejudice communications);
- Create a quiet space where you can participate in the videoconference fully, without the distraction of work and without the presence of children, who may be the subject of discussion; and
- Commit to and seek a commitment from the other party to maintain the confidentiality of the mediation by not allowing other participants within hearing distance, unless their presence has been previously agreed upon.
Achieving Conflict Resolution: Never Cut What You Can Untie
18th Century French moralist Joseph Joubert is credited with saying that you should “never cut what you can untie.” The goal of preserving and improving family relationships, while reaching mutually-beneficial solutions, is at the heart of mediation.
Mediation empowers families to resolve disputes themselves, using a wide array of tools, in a safe, supported, professional environment; mediation allows families to resolve their disputes practically, efficiently and creatively; mediation ensures that families’ affairs are kept private and confidential; and mediation provides people with the opportunity to work with a family law professional to move beyond conflict to resolution and lay the groundwork for healthy ongoing relationships.
If you would like support and assistance in resolving any of these matters, please contact Emily Anderson at Linley Welwood LLP, who is a Family Law Dispute Resolution Professional accredited by the Law Society of BC.
© Emily Anderson, Linley Welwood LLP
The contents of this article do not constitute legal advice. Readers should seek legal advice in relation to their own specific circumstances. | https://www.linleywelwood.com/blog/four-alternatives-for-resolving-family-matters-during-covid-19/ |
Competent Perspectives and the New Evil Demon Problem Lisa Miracchi University of Pennsylvania December 20, 2015 Forthcoming in The New Evil Demon: New Essays on Knowledge, Justification and Rationality, Oxford University Press, eds. Fabian Dorsch and Julien Dutant. The New Evil Demon problem is a problem for externalist theories of justification, and has been a subject of ongoing debate since it was introduced in 1983 by Keith Lehrer and Stewart Cohen.1 Lehrer and Cohen ask us to: Imagine that, unknown to us, our cognitive processes, those involved in perception, memory and inference, are rendered unreliable by the actions of a powerful demon or malevolent scientist. It would follow on reliabilist views that under such conditions the beliefs generated by those processes would not be justified. This result is unacceptable. The truth of the demon hypothesis also entails that our experiences and our reasonings are just what they would be if our cognitive processes were reliable, and, therefore, that we would be just as well justified in believing what we do if the demon hypothesis were true as if it were false. -Lehrer & Cohen (1983), p. 192 In other words, Lehrer and Cohen ask us to imagine that the subjective, first-personal character of our mental lives is the same as it normally is, but that we are being radically deceived, so that (nearly?) none of our beliefs are connected to the world in the way they normally are, in the way that provides us with knowledge of the world.2 Nevertheless, Lehrer and Cohen maintain, such a deceived subject would still be justified. Let us call duplicates of the sort Lehrer and Cohen are imagining perspectival duplicates. 1Thanks to Nic Bommarito, Cameron Boult, Brian Cutter, Julien Dutant, John Greco, Christoph Kelp, Rachel McKinney, Alan Millar, Ernest Sosa, Kurt Sylvan, and Alex Worsnip. 2See also Cohen (1984). 1 Competent Perspectives and the New Evil Demon Problem The internalism/ externalism distinction is made in many different ways, but one way to make it is to distinguish those who think that disconnection from the world in the evil demon scenario precludes justification (externalists) from those who do not (internalists). Externalists hold that one's reliable connection to the world is all that matters for justification; internalists hold that what matters is the subject's perspective. Many externalists reject the internalist's claim that subjects in the evil demon scenario have a positive epistemic standing in common with normal subjects, maintaining that positive epistemic standing is entirely a matter of how one is connected to the world. This, however, has well-known problems. For example, let us start with someone who is clearly epistemically virtuous-someone who seeks out proper evidence on questions of interest, reasons thoroughly and effectively about these questions, and persistently develops and hones her intellectual abilities in the service of acquiring and maintaining knowledge of herself and the world. Let us also take someone who is clearly epistemically vicious-someone who is intellectually lazy, who indulges in wishful thinking and bad reasoning that supports whatever she wants to believe, who refuses to acquire new ways of thinking or reasoning despite having good evidence that her current methods are misleading, and so on. Now, we imagine that perspectival duplicates of our two subjects are in evil demon scenarios. Are these duplicates epistemically on a par with each other? 
Intuitively not: unfortunate circumstances make both of their belief-forming methods equally completely unreliable, but that does not not erase all epistemic differences.3 The duplicate of our virtuous epistemic agent is still more virtuous than the duplicate of our vicious epistemic agent, for she is still reasoning better. Internalists take this kind of consequence to be a reductio of the externalist position-epistemic standing must be the sort of thing that differentiates between these two perspectival duplicates.4 Internalists are in large part concerned with trying to capture the difference that our perspective on the world and on our own abilities makes 3Here and throughout I will ask whether the beliefs of an epistemic subject are justified/rational or not and why. This is not meant to be read as a question about all the beliefs of a subject. I assume that virtuous duplicates (and agents) are capable of failing to believe from epistemic competence from time to time, and these cases are not of interest here. Likewise, mutatis mutandis for other agents. We are considering here the epistemic status of beliefs that are exercises of epistemic competences or other propensities to believe, and it is only for the sake of concision that I talk about "the beliefs" of certain kinds of epistemic agents. 4See Cohen (1984), p. 283 for discussion of this kind of case. 2 Competent Perspectives and the New Evil Demon Problem to our epistemic standing. Without it, they worry, our epistemic standing becomes too divorced from what makes us human, too much a matter of a mere machine functioning properly, rather than a person grappling with questions. I agree. An adequate theory of our epistemic standing must take into account our subjective perspectives on the world, how things seem to us, what considerations we are bearing in mind, how we are trying to reason, and whether we are properly committed to knowing how the world is, even if it is not always in our favor. However, the traditional internalist account of justification has its serious shortfalls too. It fails to capture the sense in which our virtuous epistemic agent is epistemically better off than her perspectival counterpart- for one, she is actually competent. She doesn't just mean well, she reliably and effectively acquires and maintains knowledge about her environment. Even when she goes wrong and falsely believes, her errors are of an entirely different magnitude than the errors her perspectival duplicate makes. This difference should be reflected in our account of epistemic standing. One strategy for solving this problem which has seemed attractive to many is to allow for more than just one kind of positive epistemic standing that falls short of knowledge. Perhaps we should separate the externalist notion of justification (what I'll henceforth call "justification") from the internalist notion of justification ("rationality"), letting everyone agree to disagree. As things stand, however, this move has serious costs. First, many internalists and externalists hold that knowledge can be analyzed in terms of justification, truth, and some anti-Gettier feature.5 For such views the question arises: which kind of justification should serve in such an analysis? If it is the externalist kind, why does the internalist kind also seem necessary for knowledge? What work is it doing in our epistemology then? (And vice versa.) What do these two kinds of epistemic standing have to do with each other? 
Why do justification and rationality both count as positive epistemic statuses? Are there more positive epistemic statuses? Why stop at these two? Why not let a thousand flowers bloom? One might think these issues are avoided by those adopting a knowledgefirst approach, but at least initially the problem becomes even worse. Knowledgefirsters claim that knowledge is not analyzable in terms of justification (or rationality) but is rather epistemically fundamental, and all epistemic statuses are derivative from the epistemic status of knowledge.6 But now we 5This project clearly motivates both Lehrer and Cohen. 6Knowledge-firsters differ on whether they claim knowledge is conceptually or metaphysically unanalyzable in terms of other epistemic and mental features. Here and else3 Competent Perspectives and the New Evil Demon Problem have not one but two kinds of positive epistemic status that need accounting for in terms of knowledge. Knowledge-firsters either have to settle for a less explanatory epistemic theory than belief-firsters or they have their work cut out for them. Being a knowlege-firster myself, this much mystery makes me pretty uneasy. Whew. This is where the debate is right now, and it is no wonder that many externalists (both knowledge-firsters and belief-firsters) have turned to the idea of an excuse to try to get out of the trouble.7 On this view, only beliefs that are knowledge have positive epistemic status at all, but there are other cases where a subject may be blameless, or excusable, for having a belief. Evil demon scenarios, they claim, are cases of this sort. However, this strategy doesn't fix the problem. First, it doesn't explain why the subject is epistemically excusable in virtue of being a perspectival duplicate. What is it about our perspectives that makes a difference to blameworthiness?8 Often this is assumed, but it cannot be in this context. Here we return to the original internalist call to make our humanity relevant to epistemology-to make our interests and goals have a role to play in our epistemic lives. What is it, then, about our perspectives that provides us with excuses? Moreover, why do our firstpersonal perspectives make a difference to excusability despite not making a difference to genuine epistemic standing? More needs to be said here than has been done to date.9 where I am primarily interested in metaphysical questions. See Ichikawa & Jenkins (manuscript) for a helpful discussion of the diversity of knowledge-first views. 7Williamson (this volume), Littlejohn (2012). 8E.g. Littlejohn (2012) argues that evil demon subjects still "pursue their epistemic ends rationally and responsibly" (59) and that this provides them with an excuse for believing as they do. Why is it, however, that evil demon subjects have these virtues despite being wholly unreliable, and why should these provide one with an excuse for not having genuine justification? As , Littlejohn himself himself agrees (58), the strategy treating our world as the normal world for any subject in any world (as Williamson (this volume), Comesaña (2002), and others do), is an undue privileging of our own situation rather than a genuine explanation of the norms applicable in the evil demon world. 9As my main aim in this paper is to offer my own solution, I cannot defend this claim in detail. However, to better see the kind of worries at issue, I will briefly consider Williamson (this volume)'s proposed solution to the new evil demon problem by appealing to derivative norms. 
He argues that although evil demon subjects violate the primary epistemic norm- believe only what you know-they can satisfy the secondary and tertiary norms of having a general disposition to believe only what one knows and doing what a person who had such a disposition would do in such a situation. Satisfying these norms, he claims, is sufficient for having an excuse for violating the primary norm. He does not, however, explain (i) why certain secondary and tertiary norms are generated by a primary norm, (ii) why the virtuous duplicate complies with these derivative norms, given that her world is thoroughly unlike 4 Competent Perspectives and the New Evil Demon Problem Second, proponents of this strategy also use it to account for cases in which we have a justified false belief in normal environments.10 They thus fail to account for the epistemic difference between our virtuous agent who exercises her epistemic competence and yet believes falsely on a particular occasion, and her perspectival duplicate. Lastly and most importantly, this kind of strategy fails to explain the way in which the perspectival duplicate of our virtuous epistemic agent is in some sense also virtuous. It is not merely that she is less in the wrong than the vicious duplicate; she is doing something epistemically right, at least, and we should be able to give an account of what it is.11 It's time for a new strategy. In what follows, I will extend my direct virtue epistemology (2015a; 2015b) to explain how a knowledge-first framework can account for two kinds of positive epistemic standing, one tracked by externalists, who claim that the virtuous duplicate lacks justification, the other tracked by internalists, who claim that the virtuous duplicate has justification, and moreover that such justification is not enjoyed by the vicious duplicate. It also explains what these kinds of epistemic standing have to do with each other. In short, I will argue that all justified beliefs are good candidates for knowledge, and are such because they are exercises of competences to know. However, there are two importantly different senses in which a belief may be a good candidate for knowledge, one corresponding to an externalist kind of justification and the other corresponding to an internalist one. In section 1, I discuss the New Evil Demon problem in more depth, and argue that externalists cannot easily dismiss it. In section 2, I review some core features of my direct virtue epistemology and explain how it already delivers an externalist kind of justification. In section 3, I explain what kind of positive epistemic standing perspectival duplicates have, and why this epistemic standing is dependent on the normative status of knowledge. In section 4, I show how this normative status may be explained using the ours (he just claims that our world is the world of evaluation for normal scenarios (p. 14)), and (iii) why compliance with the derivative norms should provide one with an excuse for violating a primary norm. Such a task is necessary for a full account of epistemic excuse and seems just as difficult as that of explaining how there can be two kinds of positive epistemic status. 10See, e.g. Williamson (this volume). 11Some sophisticated versions of the view might have something to say here. E.g., Littlejohn (2012) admits that there is a sense of justification that corresponds to the internalist sense of justification (personal justification) and claims that evil demon subjects can have it. 
However, we want a unified account of epistemic standing, and Littlejohn does not provide one. 5 Competent Perspectives and the New Evil Demon Problem tools of virtue epistemology. In section 5, I show how the account solves the new evil demon problem in a more satisfactory way than existing accounts. We end up with a view of knowledge, justification, and rationality that is plausible, motivated, and theoretically unified. 1 What Exactly Is the New Evil Demon Problem? The New Evil Demon Problem asks us to consider a scenario in which everything seems to be the same to us as it normally does, but in which we are radically deceived by some evil creature with the power to make us undergo such a persistent illusion. For simplicity's sake, I will consider the way of filling out the case on which we are, and always have been, brains in vats which an evil demon has made to have the thoughts, experiences, etc., we would have if things were normal.12 A normal question for someone with externalist leanings to have at the very outset is whether such a scenario is even possible. Sure, I can imagine being a brain in a vat and having all the same experiences and thoughts as I do now, but could I actually be one? An influential line of argument says No. If our experiences, thoughts, and so on are in part determined by our relations to our environment, then we couldn't have the same experiences, thoughts, and so on if we were brains in vats. Such considerations are typically brought up to counter skeptical worries about our knowledge of the world.13 We appeal to the idea that what our mental states are about is at least in part determined by our relations to the world in order to rule out the possibility that we could be radically deceived.14 Adopting inspiration from this approach, the hardline externalist about justification might then push the point here. If I couldn't experience or think about the same things in the evil demon scenario as I do now, then I wouldn't have the same experiences or thoughts. Things wouldn't seem to be the same to me. Thus the brain in a vat scenario is not one in which I form the same beliefs on the basis of the same experiences in the same ways but now these ways are highly unreliable. The new evil demon scenario is 12This construal ignores complications that arise from recent envatment. These complications won't change the moral of the story. 13See Putnam (1981). 14There are of course questions to ask about this strategy (e.g. see Brueckner (1986) for plausible worries), but the point here is not to show that the externalist has a convincing response to skepticism. Rather, I just wish to point out that it is a move someone with externalist leanings (both in semantics and in epistemology) might plausibly make. 6 Competent Perspectives and the New Evil Demon Problem thus metaphysically impossible, and so not a counterexample to externalism. We can happily disregard troubling intuitions about such cases. I think that this response to the internalist's challenge misses the point. All that matters, in order for the evil demon scenario to pose a problem for externalism about justification, is for it to be conceivable, not possible. The point of raising the scenario is not that externalism fails to be extensionally adequate, but that it wholly credits reliability with being responsible for epistemic standing. 
The scenario is being used to teach us that beliefs have certain kinds of epistemic standing in virtue of certain properties of our perspectival lives-if our perspectival lives could be preserved while our reliable connection to the world were severed, some things would still be going epistemically right with us. The virtuous perspectival duplicate would still be reasoning in the right sort of way-she would believe properly on the basis of her experience, she would engage in proper inferences, ask the right questions, and so on. She would have a certain epistemic standing that is preserved because it is determined by certain perspectival features of her mental life. It is irrelevant whether one could, as a matter of metaphysical possibility, have this aspect of one's mental life preserved in the absence of reliable connections to the environment. What matters for the epistemic internalist is that any contributions that reliable connections to the environment make to epistemic standing are made via their giving rise to the perspectival aspects of our mental lives. This is a point that is often overlooked and so it is worth making again: sometimes, all you need in order for a case to make a point is for it to conceptually separate two properties. This can show that intuitively certain properties A are responsible for epistemic standing N (or whatever philosophically interesting feature you're interested in) and other properties B are not. It doesn't matter whether the A properties could exist without the B properties. Perhaps the B properties are metaphysically necessary for the A properties. But even if that is so, if our intuitions are on the right track, the B properties contribute to N only via grounding the A properties. Accordingly, the internalist can claim that content externalism is beside the point. Perhaps there couldn't be radical deception scenarios where I form the belief that I'm sitting under a tree on the basis of my experience of doing so. Nevertheless, when we imagine the case, we judge that the subject is doing something right, whereas someone who judges that there are pink elephants in the room on the basis of the same experience is not doing something right. That is all we need to suppose in order for the evil demon scenario to generate a problem for externalism. A solution to the new evil demon problem, then, will explain how there 7 Competent Perspectives and the New Evil Demon Problem is a kind of epistemic standing that is not directly determined by our reliable hook-up to the world, but rather by our mental, first-personal, perspectives: how things seem to us, how we are reasoning, what we are aiming at when we are reasoning in certain ways or asking certain questions. However, a solution need not appeal to only features that are present in the evil demon scenario. If the scenario is, as seems highly plausible to me, metaphysically impossible, then our account of which epistemic agents have the perspectival features responsible for rationality may indeed appeal to how the subject is related to the world in normal cases. We should accept the internalist point that mental features make a direct difference to epistemic standing without conceding that such mental features are soleley determined by what is inside the head. That the mental features responsible for internalist justification do depend on what is in the world can be illustrated by introducing a third perspectival duplicate: a merely well-meaning one. 
A merely well-meaning agent values knowledge and tries to form and maintain beliefs in knowledgeable ways, but systematically and widely fails. She doesn't have a good sense of what considerations bear on questions of interest; her reasoning is not logical, or in accordance with proper induction or abduction; she thinks that complex explanations (other things being equal) are more likely to be correct than simpler ones, and so on. Nevertheless, she has no idea of the extent of her shortcomings. (Too many of us are often in the position of meaning well with respect to some aim, being nevertheless incompetent at it, and having little or no idea that this is the case.) To make the comparison between the virtuous and merely well-meaning agents more concrete, consider a merely well-meaning moral and epistemic agent with respect to racial justice. This person values racial equality, but ignores evidence that police statistically treat black and white citizens differently, instead focusing on statistics such as those suggesting that black people are more likely to commit crimes. She has a friend who discusses with her worries that her black son might have a dangerous encounter with police when he is out with his friends at night. Our agent, in trying to console her, says, "Don't worry; as long as he doesn't do anything wrong he'll be fine".15 Although our agent means well, and values racial justice and knowledge, she doesn't properly value either of them. In believing and acting as 15See Dotson (2011) for an excellent virtue-theoretic critique of this kind of practice. This is a case where the audience is testimonially incompetent with respect to race (in Dotson's sense). See esp. pp. 246-249 for discussion of a similar example. 8 Competent Perspectives and the New Evil Demon Problem she does, she fails to manifest proper respect for what it takes to get onto the facts in this domain. Meaning well just isn't good enough. Of course, our merely well-meaning epistemic agent is highly unreliable. But her epistemic shortcomings do not stop there. Her perspectival duplicate in the evil demon scenario is intuitively worse off than the virtuous duplicate. This is so despite the fact that from her perspective, she cannot tell the difference between her situation and a virtuous one. Her position is subjectively indiscriminable, and yet she fails to believe rationally. Meaning well does not make it so: merely meaning to believe rationally does not thereby make one believe rationally.16 If our intuitions about the evil demon scenario are to be taken seriously, we are now confronted with the challenge of explaining the difference in epistemic standing between the virtuous and merely well-meaning duplicates in mental terms, not just in terms of reliability. This issue faces all theories of justification, and it is more difficult to solve than is often acknowledged.17 For example, consider this passage by Cohen: Beliefs produced by good reasoning are paradigm cases of justified belief and beliefs arrived at through fallacious or arbitrary reasoning are paradigm cases of unjustified belief. Whether or not reasoning results in false belief, even if this happens more often than not, is irrelevant to the question of whether the reasoning is good. To maintain otherwise would be on a par with confusing truth and validity. -Cohen (1984), p. 283. Here Cohen suggests that the question of whether an agent reasons rationally is orthogonal to the question of whether she reasons reliably. But that is not so. 
Deductive reasoning, after all, is plausibly epistemically valuable precisely because it is conditionally (perfectly) reliable. Thus truth-connectedness, perplexingly, does seem to matter for internalist justification. Moreover and more importantly, the majority of our belief formation and retention does not rely on deductive reasoning, but on the basis of heuristics, induction, and abduction. These kinds of reasoning are not plausibly reduced to logical reasoning; instead, what makes these ways of 16I here put aside views that entail the opposite conclusion, such as plausibly Foley (1987). I think we can respect the core internalist insight without giving it up. 17Cohen (1984) is clear that we need a theory of what makes good reasoning good on the first order. 9 Competent Perspectives and the New Evil Demon Problem forming beliefs rational seems to depend on whether or not they are ways of reliably getting onto the facts.18 What originally seemed like a clear-cut distinction between externalist kinds of epistemic standing-which have to do with being appropriately hooked up to the world-and internalist kinds of epistemic standing- which have to do with having the appropriate subjective mental life-is starting to look much less clear. How might we articulate what features of the subject's mental life determine internalist justification without collapsing into a form of externalism, or ending up with the unpalatable consequence that merely meaning to believe rationally makes it so? If we can answer this question, we can solve the new evil demon problem in a truly satisfying way-in a way that does justice to both internalist and externalist insights. In the rest of the paper I will show how my direct virtue epistemology can be extended to do just this. 2 Externalist Justification for Direct Virtue Epistemology The epistemic theory I defend is a knowledge-first virtue epistemology. It shares with other kinds of reliabilist virtue epistemology the idea that knowledge is an achievement that is due to our epistemic competence, and that epistemic competences are by nature reliable at accomplishing what they are competences to do. However, it is knowledge-first in holding that epistemic competences are competences to know, rather than to believe truly, and so the theory is direct in the sense that it claims that the competences responsible for knowledge are competences to do that very thing, not to do something that falls short of knowledge. Competences to know must therefore be reliable with respect to knowledge, not just true belief. Competences to know are reliable but typically fallible; they not only have exercises that are cases of knowledge (manifestations), but they typically also have exercises that are constitutively failures to know (degenerate exercises). This feature of competences is central to the view: instead of supposing, as belief-firsters do, that epistemic competences are exercised in a way that is neutral with respect to whether or not they accomplish their aim, I argue that exercises of epistemic competence always entail either success or failure.19 The success cases (cases of knowledge) are metaphysically and explanatorily more fundamental than the failure cases, however. 18Williamson (this volume) also makes this point. 19I hold this to be true for competences more generally. See Miracchi (manuscript ) for further discussion. 10 Competent Perspectives and the New Evil Demon Problem First, it is essential to epistemic competences that they manifest in cases of knowledge. 
It is not essential or necessary for them to be able to have degenerate exercises: reliability with respect to knowledge might be perfect, e.g., such as some claim is the case with the Cogito.20 Moreover, degenerate exercises are only exercises of competences because their conditions deviate from manifestation conditions; thus degenerate exercises of competence depend on manifestations for their status as epistemic states at all. Being an exercise of competence, rather than being the most epistemically fundamental case, is instead a disjunctive kind- that of either manifesting or degenerately exercising one's competence- and it is thus metaphysically and explanatorily dependent on manifestations and degenerate exercises.21 Nevertheless, the category of exercise of epistemic competence does interesting theoretical work. Beliefs have a certain kind of positive epistemic standing in virtue of being members of that category: exercises of competence are as a matter of their nature likely to be cases of knowledge. If knowledge is the fundamental epistemic good-qua the achievement of the epistemic domain-and reliability with respect to a good is therefore derivatively a good of that kind, then reliability with respect to knowledge is an epistemic good. A belief is justified in the externalist sense, then, just in case it is an exercise of a competence to know. According to direct virtue epistemology, not only is justification metaphysically and explanatorily dependent on knowledge, so is belief. Beliefs constitutively aim at knowledge.22 That is, beliefs are just the kind of mental state that aim at knowledge as a matter of their nature. We may now put the point as follows: As the performances that aim at knowledge, beliefs are the candidates for knowledge.23 However, rather than being a unified kind, beliefs admit of importantly different varieties, in accordance with the facts in virtue of which they have knowledge as their aim. In cases of justified belief, it is because the performance is an exercise of competence (a manifestation or a degenerate exercise) that it aims at knowledge, and so 20Sosa (2007), pp. 16-17 also makes the suggestion that the Cogito should be thought of as a case of a manifestation of a perfectly reliable competence. 21Why do I call exercises of competence a disjunctive kind? Isn't that an oxymoron? As we'll see, some beliefs can have normative statuses in virtue of being exercises of competence. Thus although the exercise of competence is a disjunctive notion, instances of it have properties in virtue of being a member of that kind. Thus kind-talk is warranted, at least in my view. Thanks to Neil Mehta for pressing me on this question. 22Bird (2007) and Sutton (2007) also hold this view. 23In the sense I am using the term "candidate" here, cases of knowledge are also candidates for knowledge. 11 Competent Perspectives and the New Evil Demon Problem is a belief. This is just a special case of the idea that in exercising a competence to A, the agent aims to A.) But there are other ways for a performance to aim at knowing. These are unjustified beliefs.24 We may now put the view about justification slightly differently (though I think equivalently) to what I propose in Miracchi (2015a). According to the theory on offer, an agent's belief that p is externalist-justified just in case it is, as a matter of its nature, a good candidate for knowledge in the probabilistic sense. 
Exercises of competence as such (a) aim at knowledge, and so are beliefs, and (b) are likely to be cases of knowledge. Exercise of competence are thus as a matter of their nature good candidates for knowledge in the probabilistic sense. I now wish to expand this conception of justification and hold that a belief is epistemically justified-in either the externalist or the internalist sense-just in case it is a good candidate for knowledge as a matter of its nature. Moreover, a belief is a good candidate for knowledge if and only if it is an exercise of epistemic competence. Miracchi (2015a) shows how exercises of epistemic competence are good candidates for knowledge in an externalist sense. I will now argue that they are good candidates for knowledge in an internalist sense: all exercises of epistemic competence thereby meet a mental requirement for being knowledge. In the next section I will explain what that mental requirement is, and in section 4 I will explain why all and only exercises of epistemic competence meet that requirement. 3 Rational Believing Is A Kind of Properly Valuing Knowledge A promising place to start is by looking at some insights from recent work on derivative value, and those in epistemology who are already applying it to the epistemic domain. Several people have argued recently, perhaps most notably Thomas Hurka (2001), that some acts and attitudes are valuable because they instantiate or manifest proper ways of valuing something valuable.25 For example, it is not only good to provide food and shelter to the homeless, it is also good to value the acts of providing food and shel24See Miracchi (2015a) pp. 50-51 for further discussion. 25This has been of particular interest in the literature on fitting attitude theories of value, for those theories try to analyze (certain) values in terms of being worthy of certain attitudes. Regardless of whether this project is on the right track, it has reminded us that certain acts are valuable because they manifest proper ways of valuing the valuable. The articulation of the view in terms of manifestation of a proper way of valuing is due to Kurt Sylvan (manuscript), but I think it is a useful way of clarifying explaining Hurka's original view, rather than a development of the view. 12 Competent Perspectives and the New Evil Demon Problem ter to the homeless-perhaps by writing a journalistic piece about an organization that does so effectively. Perhaps such writing will increase the number of donations, and so increase the number of homeless people given food and shelter, and so be instrumentally good. However, even if the article were to fail in this regard, it would nevertheless be good merely for the reason that it manifests the author's valuing the providing of food and shelter to the homeless. Kurt Sylvan (manuscript) recently pursues this line of thought in providing an account of epistemic value, where truth is the fundamental epistemic value, and cases of believing that properly value the truth are thereby derivatively (non-instrumentally) valuable. He claims that "beliefs are epistemically valuable because they manifest certain ways to place value on accuracy in thought".26 Sylvan then claims that he can analyze certain epistemic normative properties such as rationality, coherence, and knowledge in terms of different ways of valuing the truth. 
While I am less optimistic about being able to account for these epistemic normative properties in the way that Sylvan does, I think he is on the right track in investigating the kind of epistemic standing that the internalist is getting at when she claims that the subject in the evil demon scenario is still justified in believing as she does.27 Properly valuing an epistemic good is clearly something that is inherently first-personal, that has to do with how we mentally, perspectivally, proceed in our epistemic inquiries. By placing attention on whether or not the subject properly values the truth (or knowledge!) in believing as she does, we are placing our attention on something that is clearly a feature of her mental life. As this stands however, it won't quite do, for two reasons. First, I need to explain why the kind of proper valuing I am claiming is constitutive of epistemic rationality is plausibly something that all beliefs have, and is not overly intellectualized.28 Second, it is important to distinguish the kind of derivative value that beliefs can have from the kind that performances which are not beliefs can have. For example, one way of valuing knowledge is to create schools. But the act of creating schools, if epistemically valuable, is valuable in a very different sense from the epistemic value of believing rationally. It is certainly not epistemically rational in the same sense that beliefs are epistemically rational. Hurka's and Sylvan's accounts, 26Sylvan (manuscript) p. 4. 27Sylvan is clear that he does not mean for such an account to be a contender to a moderate reliabilism, which claims that reliability is an epistemic good; rather he aims to be augmenting such a view-accounting for a more internalist kind of positive epistemic standing. 28Hurka and others face an analogous challenge with respect to the moral domain. 13 Competent Perspectives and the New Evil Demon Problem however, do not make this distinction.29 Note that, in this case, analogously to the journalism case above, the claim is not that writing journal articles or opening schools is epistemically valuable because it is a way of promoting knowledge, but that it is epistemically valuable because it embodies the proper valuing of knowledge. This kind of proper valuing, however, is different and more removed from the kind of proper valuing our beliefs have because we value what it takes to know in believing as we do. How shall we understand the difference? 4 Competent Perspectives Virtue epistemology, and in particular my direct virtue epistemology, can help with both of these issues. First I will take up the question of distinguishing the kind of derivative value beliefs can have from other kinds of derivative value. Then, I will answer the charge of over-intellectualizing rationality. According to virtue epistemology, epistemology is a performance domain. This means its normativity is structured in terms of certain aims that are fundamental to the domain, and the agency involved in attaining those aims. The primary bearers of epistemic value are the performances that are candidates for being attainments of the fundamental epistemic aim(s) of the domain. As discussed in the previous section, for direct virtue epistemology knowledge is the fundamental aim, and beliefs are the performances that are candidates for knowledge. 
This is why beliefs are the immediate bearers of epistemic properties.[30] Other performances may bear epistemic properties only as they relate to the performances that are candidates for knowledge. Opening a school is an example of a performance that is not a candidate for knowledge, and therefore has epistemic status only at a remove. Opening a school does not aim at knowledge in virtue of its nature. Only beliefs do that.[31] One who opens a school with the right motives both increases the amount of knowledge in the world and manifests proper valuing of knowledge, but not by performing in a way that is itself a candidate for knowledge. Thus, although it is both instrumentally valuable and manifests proper valuing of knowledge, it is epistemically valuable only in a derivative sense. Within the virtue-theoretic framework I have offered here, we can now provide a motivated restriction on the kind of proper valuing of knowledge that is constitutive of epistemic rationality: it is properly valuing knowledge in aiming to know. In other words, the kind of proper valuing we are after is a kind of practical valuing: a belief is rational just in case the epistemic agent properly values what it takes to achieve her aim of knowledge in believing as she does. Now we can address the other worry for the account, namely that it over-intellectualizes epistemic rationality. Do we really properly value knowledge as the aim of our performance every time that we know? Of course, sometimes we know things we would rather not know. We might even wish that we could allow other more practical considerations to sway us. However, I think that even in such cases there is a sense in which we properly value knowledge as the aim of our doxastic performance. The epistemically virtuous agent does not experience a blind attraction to believing in a way that is, as a matter of how things turn out, knowledge; rather, the fact that this is so guides her in her reasoning. In manifesting her competence, the virtuous epistemic agent is attracted to certain patterns of reasoning precisely because they are ways of acquiring and maintaining knowledge. It is precisely because the evidence unequivocally points to p that one believes p, even when one would rather not do so. The "because" as I am using it here is not merely causal. It entails a kind of sensitivity, from the subject's own perspective, to the fact that to perform in a certain way is to provide oneself with, or maintain, knowledge. This is exactly the kind of feature we have been looking for: believing and reasoning in certain ways because they are ways of knowing suffices for the agent to properly value knowledge as the aim of belief in believing as she does. After all, to perform in a certain way because doing so would be an achievement of one's aim is plausibly the best way to value what it takes to achieve one's aim in performing as one does.

Footnote 29: Sylvan does not discuss this problem for his view.
Footnote 30: Epistemic agents too are immediate bearers of epistemic properties, because they are the ones who achieve knowledge.
Footnote 31: This avoids Berker (2013a,b) style worries. When beliefs are formed or maintained in the aim of creating further knowledge, they are not aiming at knowledge qua candidates for knowledge.
To say that the virtuous epistemic agent is attracted to certain ways of reasoning because they are ways of acquiring and maintaining knowledge does not commit one to the claim that the agent believes, or can articulate, this attraction.[32] Nor does it commit one to the claim that the agent has a desire, or other pro-attitude, to believe in a way that is a way of knowing distinct from her coming to believe or her maintaining her belief. This would over-intellectualize rationality, as has been widely noted.[33] Rather, I am describing what it is to come to believe or maintain one's belief in a competent way on the first order. It has an ineliminably perspectival aspect to it.[34] Although this was not the focus of Miracchi (2015a), the sense in which the subject has an aim to know in exercising her epistemic competence was always supposed to be perspectival. As opposed to some other virtue epistemologists such as Ernest Sosa, who assimilate the aiming of everyday belief to biological functioning, for me it is very important that the aim that is constitutive of belief is mental in a way that the function of a heart to pump blood is not.[35] Rather than thinking of aims to know as biological or evolutionary, we should think of them as a distinctively mental kind of directedness. If we do not, then it is by no means clear that we are dealing with performances of the epistemic agent in any important sense. Virtue epistemology, which was designed to center the agent in our epistemic theorizing, thus falls prey to the same problems that reliabilism does: turning us (qua epistemic beings at least) into mere machines that are reliably hooked up to the world. We are clearly much more than that, and it is the task of the naturalistically-minded epistemologist to articulate how this might be the case.[36] Once we have gotten this far, however, the idea that the way in which we aim at knowing when we manifest our epistemic competence entails proper valuing of that aim is not far off. In such cases, having the aim of knowledge in believing as one does is competently aiming for knowledge. It is being drawn to take certain objections seriously, to revise one's commitment in light of (seeming) counter-evidence or to undermine that counter-evidence, and so on. It is being drawn to certain ways of believing because they are ways of knowing, and that is just what it is to properly value knowledge as the aim of one's performance.[37]

Footnote 32: Note also that if we required knowledge, this would immediately lead to a vicious regress.
Footnote 33: E.g. see Cohen (1984)'s criticisms of Lehrer. Sylvan (manuscript) is also clear that understanding rationality to be proper valuing does not commit one to such a view.
Footnote 34: It is plausible that an analogous phenomenon occurs in the moral domain. The virtuous moral agent, in exercising her competence, is just motivated to do what is right. She is drawn, first-personally, perspectivally, towards an action and in doing so she manifests properly valuing doing what is right.
Footnote 35: See esp. Sosa (2015), p. 20. I think there are many problems with a biological or evolutionary account of epistemic aims, but this is not the place to discuss them.
Footnote 36: To be clear, the claim is not that we need a non-naturalist or anti-physicalist account to explain these mental properties. However, acknowledging these mental properties and their centrality to epistemology is crucial for developing an adequate theory.
This should allay our fears that the account over-intellectualizes rationality. Rather, it appeals to the most basic perspectival-motivational features that are present in any manifestations of competence that are properly called performances by the agent. However, I haven't yet explained what happens in a case of rationality that falls short of knowledge. One can only believe in a certain way because it is a way of knowing if that way of believing is indeed a way of knowing. What, then, about rational beliefs that fall short of knowledge? In such cases, we typically imagine a subject situated so that her belief falls short of knowledge, and yet her case is indiscriminable from one where she knows; it is for her as if she were acquiring or maintaining knowledge, even though she is not. But again we must remember that it can't be indiscriminability as such that does the trick. Recall our merely well-meaning agent. Although she believes in a way that is indiscriminable from a way of knowing, she doesn't believe rationally. We have to explain what is going on with the merely rational believer in a way that goes beyond reference to the indiscriminability of her situation. Here direct virtue epistemology is again well-poised to provide an answer. As noted above, according to most kinds of virtue epistemology, epistemic competences are typically fallible. Certain conditions might preclude agents from achieving their aims, while nevertheless they perform competently. In such cases the exercise, though degenerate, is still fully competent. The agent was merely unlucky.[38] Moreover, it is plausible that the reason why the rational agent's situation is indiscriminable from a case of knowledge is that she exercised her competence.[39] We can think of what happens here as analogous with perceptual illusions: in cases of illusion, a thing seems to be a certain way even though it is not. What explains why things look that way are the very same competences that explain why things look the way they do to you in the good case. Much work in vision science presupposes that this is true.

Footnote 37: As an example, consider an epistemic case where an agent competently deduces q from p and If p then q, thereby coming to know that q. When the agent is motivated to infer q, she is properly aiming at knowledge, aiming at knowledge in a way that properly values what it takes to know. This agent may not have a concept of modus ponens. She may not be able to tell you that the form of inference is valid. However, as long as it is this property of the inference that is perspectivally guiding her in believing as she does, she properly values what it takes to know.
Footnote 38: Of course it behooves an agent to continually try to hone her competences to reduce the probability that she will be undone by bad luck, but that does not mean that in such cases the fault for her failure lies with her.
Footnote 39: Presumably, a fault would be with you if you had a sense that you were failing to get onto the facts.
This is why illusions are so empirically interesting: they reveal something about how the competences responsible for veridical perception work.[40] Similarly, we might think of the rational bad case as a case of epistemic illusion: a certain way of believing seems to be a way of knowing, and this is explained by appeal to the same competence that provides you with knowledge on other occasions. So, although you don't believe as you do because so believing is a way of knowing (you can't, because it's not), you still competently, perspectivally, aim at knowledge. In cases of failure, the virtuous epistemic agent competently, albeit mistakenly, believes as she does because she competently takes so believing to be a way of knowing. Her failure does not reflect badly on how she was proceeding; she was merely unlucky.[41] As such, she still properly values knowledge as the aim of her performance. We now have what are plausibly necessary and sufficient conditions for properly valuing knowledge in believing as one does, and so for rational belief: in cases of rational belief one believes in a certain way because one competently takes that way to be a way of knowing. In so doing, one properly values knowledge as the aim of one's belief. Just as in the case of externalist justification, the mental performance that satisfies this requirement differs in the cases of knowledge and rational false or Gettiered belief. When the exercise of competence is a manifestation, the agent believes in a certain way because it is a way of knowing. When the exercise of competence is degenerate, the agent believes in a certain way because she competently (albeit mistakenly) takes a way of believing to be a way of knowing. In both cases, the agent properly practically values knowledge as the aim of her performance because she competently takes the way in which she believes to be a way of knowing. This account of proper valuing knowledge in believing as one does allows us to distinguish the virtuous epistemic agent from the merely well-meaning and vicious agents in mental terms. The merely well-meaning agent is not epistemically competent: she is not disposed to believe in certain ways because those are ways of knowing. As such, even though she cannot tell that she is failing to know, she does not competently aim, and so does not properly value her aim in the sense at issue.[42]

The view I am advocating does claim that there is a kind of faultlessness in cases of rational false or Gettiered belief, and so one might wonder whether it falls prey to the objections I made against appeal to excuses in the introduction. However, this new approach is importantly different. First, it explains why certain cases are faultless by appeal to what is going well in the situation, namely that the agent exercises her competence. Because the agent exercises her epistemic competence (albeit degenerately), she properly values knowledge as the aim of her performance in believing as she does. Even in the degenerate case her belief is by nature a good candidate for knowledge because it manifests proper valuing of knowledge as the aim of her belief.

Footnote 40: See e.g. Palmer (1999).
Footnote 41: This discussion is closely related to interesting issues of direction of fit which I cannot get into here.
Footnote 42: Of course, sometimes one might be faultless for failing to have a competence, perhaps because of a developmental situation. But we can distinguish these two kinds of faults just like we can distinguish two kinds of luck: there's (bad) luck you have for failing to have certain competences, and then there's (bad) luck you have for failing to manifest the competences you do have. Only the latter are directly relevant to epistemic standing.
The account also explains why indiscriminability from cases of knowledge seems to matter, and why (only) certain cases of mere indiscriminability are faultless. When the indiscriminability is due to an epistemic competence being exercised, the agent competently takes a certain way of believing to be a way of knowing. These are the only cases of indiscriminability that are cases of rational belief. We can thus distinguish the merely well-meaning epistemic agent from the truly rational one. Lastly, the account of rationality on offer here presents it as importantly the same in kind as externalist justification. Exercises of epistemic competence are the good candidates for knowledge, both in an externalist and an internalist sense. When an agent exercises her epistemic competence, she believes in a way that is thereby likely to be knowledge (externalist sense), and she also believes in a way that properly values knowledge as her aim (internalist sense). Moreover, these two features are not independent of one another. It is no accident that the beliefs that are externalist-justified are also internalist-justified, because the very facts that are constitutive of competence possession determine both the reliability of one's exercises and the features of one's epistemic perspective. As such, it avoids the problems set out in the introduction for theories on which rational false or Gettiered belief is excusable failure.

5 Conclusion: Diagnosing the New Evil Demon Problem

So far, I have explained the difference between epistemically virtuous, vicious, and merely well-meaning agents in the actual world. But how does this relate to our original question, namely solving the new evil demon problem? The perspectival duplicates are brains in vats, and so don't have any epistemic competences that satisfy the requirements of direct virtue epistemology (they are completely unreliable). However, recall section 1. There I claimed that in considering the evil demon scenario, we are conceiving of a case where the subject's epistemic perspective remains the same even though her reliable connection to the world is severed. This teaches us that epistemic agents have a certain kind of epistemic standing in virtue of their mental lives, and not directly in virtue of their reliable connections to the world. A solution to the new evil demon problem requires explaining the epistemic differences between cases of rational and irrational belief in terms of the subject's mental perspective. If we have good reason to think that the evil demon scenario is metaphysically impossible, we do not need to appeal to features of the subjects' mental lives that (metaphysically) could obtain in the evil demon scenario. As discussed in that section, I doubt that any account of internalist justification will be able to explain the difference between the virtuous and merely well-meaning duplicates in a way that avoids appeal to mental features that cannot be had in the evil demon scenario.
This is for the simple reason noted above, that what counts as good reasoning (except for logical cases) depends on what it takes to get onto the facts in the agent's world. Inductive, abductive, and heuristic kinds of reasoning are all good or bad in large part because of how the world outside one's head is. On a knowledge-first virtue-theoretic approach, this is to be expected. Competences to know are fundamental to epistemic evaluation. We cannot explain what makes a belief rational except in terms of what is required for knowledge. Although properly valuing the valuable is a distinctively first-personal phenomenon, then, and part of one's mental life, it is not independent from one's relation to the facts. There are mental differences between the virtuous, vicious, and merely well-meaning duplicates, but in order to explain them we need to appeal to how these agents are situated in their worlds. If this is correct, then we may reject the metaphysical possibility of the evil demon scenario in a motivated way. The mental feature constitutive of epistemic rationality, proper valuing of knowledge as the aim of one's performance, could not be had in the evil demon scenario. However, as long as we are allowing that the evil demon scenario has perspectival duplicates, we allow ourselves the metaphysical impossibility that the subject believes in a way that properly values knowledge, even though she is not reliably hooked up to the world. It is important, then, that the evil demon scenario is metaphysically impossible, but not in the way we originally might have supposed. We do not get to reject the case as irrelevant to our epistemic theorizing. We do, however, allow ourselves to appeal to the agent's connection to the world when explaining the differences between various first-personal perspectives. By accepting the world-dependence of our perspectival lives, we arrive at a unified virtue epistemology. We can explain what justification and rationality have to do with one another, and with knowledge. We can also answer a question that has long been plaguing internalists, namely why certain well-meaning agents are rational, and others not. This problem seemed intractable, I will now suggest, because we were supposing epistemology to be independent of philosophy of mind. We were assuming that we could put aside discussion of how the world contributes to our perspective on it in discussing epistemic standing, but we cannot do so. Our connection to the world does not merely reliably hook us up to it, so that we "produce" beliefs that are likely to be true; it provides us with the kind of grip on the world that can guide us, from our own perspectives, towards knowledge.

References

Berker, S. (2013a). Epistemic teleology and the separateness of propositions. Philosophical Review, 122(3), 337–393.
Berker, S. (2013b). The rejection of epistemic consequentialism. Philosophical Issues, 23, 363–387.
Bird, A. (2007). Justified judging. Philosophy and Phenomenological Research, 74(1), 81–110.
Brueckner, A. (1986). Brains in a vat. Journal of Philosophy, 83(3), 148–167.
Cohen, S. (1984). Justification and truth. Philosophical Studies, 46(3), 279–295.
Comesaña, J. (2002). The diagonal and the demon. Philosophical Studies, 100, 249–266.
Dotson, K. (2011). Tracking epistemic violence, tracking practices of silencing. Hypatia, 26(2), 236–257.
Foley, R. (1987). The Theory of Epistemic Rationality. Harvard University Press.
Hurka, T. (2001). Virtue, Vice, and Value. Oxford University Press.
Ichikawa, J. J. & Jenkins, C. I. (manuscript). On putting knowledge 'first'.
Lehrer, K. & Cohen, S. (1983). Justification, truth, and coherence. Synthese, 55(2), 191–207.
Littlejohn, C. (2012). Justification and the Truth Connection. Cambridge University Press.
Miracchi, L. (2015a). Competence to know. Philosophical Studies, 172(1), 29–56.
Miracchi, L. (2015b). Knowledge is all you need. Philosophical Issues, 25(1), 353–378.
Miracchi, L. (manuscript). Achievements and exercises: A theory of competence.
Palmer, S. E. (1999). Vision Science: From Photons to Phenomenology. MIT Press.
Putnam, H. (1981). Brains in a vat. In Reason, Truth, and History. Cambridge University Press.
Sosa, E. (2007). A Virtue Epistemology: Apt Belief and Reflective Knowledge, volume 1. Oxford University Press.
Sosa, E. (2015). Judgment and Agency. Oxford University Press.
Sutton, J. (2007). Without Justification. Cambridge, MA: MIT/Bradford.
Sylvan, K. (manuscript). What isn't the truth-connection?
Williamson, T. (this volume). Justifications, excuses, and skeptical scenarios. In F. Dorsch & J. Dutant (Eds.), The New Evil Demon: New Essays on Knowledge, Justification and Rationality. Oxford University Press.
| |
GUEST CARD
I found AHEAP 53 - I Apartments on AffordableSearch.com
AHEAP 53 - I Apartments
(314) 487-5553
3601 Lemay Ferry Rd
Saint Louis, MO 63125
Equal Housing Opportunity
| https://affordablesearch.com/PrintableBrochure.aspx?id=6590
This project will educate local residents about the importance of groundwater protection and provide financial assistance to those who need to properly abandon an unused well. It will also support the upgrade of nonconforming sewage treatment systems to reduce nutrient contributions to groundwater and surface water through groundwater permeation.
This project will address stormwater runoff concerns and erosion issues along the Sauk River.
This project will continue the restoration of Osakis Lake and protect the water quality of the Sauk River by addressing stormwater runoff from urban and rural areas. Activities include assisting eight landowners in designing and funding their shoreland restoration and rain garden projects.
The Discovery Farms program is a farmer-led effort to gather information on soil and nutrient loss on farms in different settings across Minnesota. The mission of Discovery Farms Minnesota is to gather water quality information under real-world conditions.
This Sauk River Watershed District project will conduct the Whitney Park river clean-up, the adopt-a-river program, and other community events as part of its healthy living programs; collaborate with the city of St. Cloud to install a rain garden demonstration site at Whitney Park; and use local radio and public television stations to promote the District’s “neighborhood rain garden initiative” and other incentive programs.
The Minnesota Ag Water Quality Certification Program (MAWQCP) is a voluntary opportunity for farmers and agricultural landowners to take the lead on implementing conservation practices that protect water quality. Those who implement and maintain approved conservation practices will be certified and in turn obtain regulatory certainty for a period of ten years. This program will help address concerns about changing regulatory requirements from multiple state and federal agencies. | https://www.legacy.mn.gov/projects?search_api_fulltext=&%3Bamp%3Bf%5B0%5D=fiscal_year%3A2007&%3Bamp%3Bf%5B1%5D=fiscal_year%3A2010&%3Bamp%3Bf%5B2%5D=project_facet_administered_by%3A165&%3Bamp%3Bf%5B3%5D=project_facets_counties_affected%3A433&%3Bf%5B0%5D=project_facet_administered_by%3A10004617&f%5B0%5D=project_facet_watershed%3A70&f%5B1%5D=project_facet_watershed%3A57&f%5B2%5D=project_facet_watershed%3A42&f%5B3%5D=project_facets_counties_affected%3A429&f%5B4%5D=project_facets_counties_affected%3A413&f%5B5%5D=project_facet_activity_type%3A144 |
Members of Congress are marking Startup Day Across America on Tuesday to support and promote startups in their districts, and Rep. Jared Polis has a busy schedule.
The Colorado Democrat, who is vacating his seat to run for governor, was an internet entrepreneur and venture capitalist before entering Congress, and he’s now committed to supporting those with aspirations similar to his.
In his Boulder-based 2nd District, he has visits planned to Stuffn’ Mallows, a marshmallow and s’mores company; Food Corridor, the first online marketplace for food businesses to connect with available commercial kitchen space; Herbal Heart Apothecary, a women-owned skin care company that infuses herbs they grow themselves in their products; and Samples World Bistro, an internationally focused lunch and dinner spot.
While Polis’ pre-congressional background is in tech (he was a co-founder of BlueMountain.com, an online greeting card company, and founded ProFlowers.com, an online florist), he said he tries to highlight nontech startups “to show the startup world isn’t just about software and e-commerce.”
He also is hosting a roundtable discussion Tuesday with panelists from four other local startups.
“At each startup, we try to learn: ‘What do you do? Are there any barriers that you’re facing from policy? What are you biggest challenges?’” Polis said.
“Startups are a very important engine of economic growth, and another important thing to note is that most startups don’t work out. If there are 10 startups, we hope that one or two of them are tomorrow’s great company,” he said. “That’s the nature of the risk-taking that entrepreneurs are engaged in when they start a new company. We, as policymakers, should be supportive of risk-taking because that’s what leads to great job creation and success.”
Five years ago, Polis teamed up with California Republican Rep. Darrell Issa, a successful car-alarm manufacturer before coming to Congress, to talk about getting colleagues on board with Startup Day.
“We were talking about how many members of Congress just don’t have experience or awareness about startups and entrepreneurial activity because they come from different backgrounds,” he said. “Both Darrell and I are entrepreneurs and we said, ‘Let’s find a way to connect members of Congress with startups in their communities.’”
About 80 lawmakers from both sides of the aisle and both chambers are scheduled to participate Tuesday in events in their districts.
Polis’ and Issa’s districts both have large tech industries, but they don’t want it to stop there.
“[There’s] this misconception that somehow startups only occur in places like Silicon Valley, Boulder, or New York,” Polis said. “In reality, there’s people with ideas that are transforming them into reality in every congressional district in this country and every ZIP code in this country.”
Polis and Issa help their colleagues find at least one startup to visit on Startup Day, and that choice is often related to an issue that the member is working on.
“Members can kind of choose how they want to do this, but the important part is that tomorrow’s successful business with thousands of employees is today’s garage startup,” Polis said. “We want members of Congress to be aware of how early-stage companies can grow — how they raise capital, issues around employment and resources, barriers to their growth, all those real-life issues that entrepreneurs and small companies face.” | |
A comprehensive source of information on state, local, utility, and selected federal incentives that promote renewable energy. DSIRE now includes incentives for energy efficiency.
This website has a great deal of information related to the national and international pricing of solar panels, economics of solar panels, and headline news within the solar community.
This website provides information into the basic functioning of solar panels.
This website provides information about the Department of Energy's (DOE) campaign to encourage the installation of solar panels in the US.
This webpage details one of the most powerful incentive programs for homeowners to install solar panels in Ohio. | http://teams.eas.muohio.edu/solarpower/Links.html |
The Intel Corporation is one of the most religiously inclusive companies in America, according to the 2020 Corporate Religious Equity, Diversity and Inclusion (REDI) Index.
This is due in no small part to the significant investment made by the company to incorporate religious diversity into its overall diversity and inclusion framework. The organization’s commitment to religious inclusivity is seen in its willingness to incorporate new employee resource groups (ERGs). For example, Intel Corporation’s inclusion of a resource group dedicated to Bahá’í believers is notable. Across the entire Fortune 200, no other company provides a dedicated community for practitioners of the Bahá’í faith, which globally has 8.5 million followers according to the World Religion Database.
Religious inclusivity has many benefits. One in particular stands out to Hadi Sharifi, the head of Intel’s Muslim ERG: It makes the company feel like his home and he feels like his coworkers are family, regardless of their faith.
Hear his words below. And join us Feb. 9-11 at the national Faith@Work conference for the whole Intel panel and more!
Hadi Sharifi – What Impact Does Intel’s Faith ERGs Have? from Religious Freedom & Business Fnd on Vimeo. | https://religiousfreedomandbusiness.org/2/post/2021/01/a-benefit-of-workplace-religious-inclusion.html |
REGRESSION: (compared to previous public release: Yes, No, ?): ?
DESCRIPTION:
When you take a picture with the device rotated 90 degrees clockwise, the picture is displayed as a square in the Gallery app (as well as in the Camera roll), with the sides cut off. If you then swipe to the previous or next picture, the previously missing sides of the rotated picture become visible. This happens regardless of picture resolution or front/back camera, and also happens for pictures taken or created on non-Sailfish devices. Additionally, because of the cut-off edges, it’s hard to zoom in and out properly.
STEPS TO REPRODUCE:
1a) Take picture with device rotated 90 degrees clockwise or
1b) Move any picture with 90 degrees clockwise rotation to folder indexed by Gallery on Sailfish device
2) Open picture in Camera roll or Gallery app
EXPECTED RESULT:
Full picture is displayed.
ACTUAL RESULT:
Picture is cut off.
ADDITIONAL INFORMATION:
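A minimal way to check whether an affected picture stores its rotation only in metadata rather than in the pixel data (a sketch, assuming Python 3 with the Pillow library on a desktop machine; "photo.jpg" is a placeholder file name, and the guess that Gallery mishandles the EXIF orientation tag is my assumption, not a confirmed diagnosis):

# Check whether a photo carries an EXIF orientation flag instead of
# physically rotated pixels. Requires Pillow (pip install Pillow).
from PIL import Image

ORIENTATION_TAG = 0x0112  # standard EXIF orientation tag
# Common values: 1 = upright, 6 = rotate 90 degrees clockwise to view,
# 8 = rotate 90 degrees counter-clockwise, 3 = rotate 180 degrees.

img = Image.open("photo.jpg")
orientation = img.getexif().get(ORIENTATION_TAG)
print(f"pixel size: {img.size}, EXIF orientation: {orientation}")

If the affected pictures report orientation 6 or 8 while unaffected ones report 1 (or no tag at all), that would support the metadata-handling theory.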
I’ve uploaded a picture with inverted rotation over here: | https://forum.sailfishos.org/t/photos-with-inverted-rotation-are-displayed-incorrectly-in-gallery/7341 |
Sequence of Events:
07/04/97 17:05:09 Nominal Time of Parachute Deploy
07/04/97 17:05:29 Heat Shield Separation
07/04/97 17:05:59 Nominal Time of Lander Separation
07/04/97 17:06:56 Approximate Time of Radar Altimeter Ground Acquisition (1 mile above ground)
At approximately 17:05:09 the parachute will deploy at an altitude of 8.6 kilometers or 5.3 miles. When a parachute is deployed in the Earth's thick atmosphere, there is a short period of rapid deceleration. However, because the Martian atmosphere is so thin, the rate of deceleration is gradual. When the chute deploys, Pathfinder is traveling at a speed of 900 miles per hour, or 392 meters per second. Approximately 83 seconds later, Pathfinder has reached a terminal, or steady-state, velocity of 134 miles per hour, or 64 meters per second. At this point more pyrotechnics fire to release the heat shield from the back shell, which forms the upper cone-shaped surface of the entry vehicle. At approximately 17:05:59 the lander separates from the back shell and descends on a tether. Approximately one minute later the radar altimeter in the lander will determine the height of Pathfinder above the surface of Mars. As Pathfinder descends through the atmosphere, the radar altimeter updates the flight computer to establish the exact times of the remaining sequences.
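As a quick sanity check of the figures above (a back-of-the-envelope sketch; all inputs are the numbers quoted in this paragraph, not official mission data), note that the two unit conversions do not quite agree: 392 m/s works out to roughly 877 mph, consistent with the rounded 900 mph, but 64 m/s converts to about 143 mph rather than 134 mph, so one of those two figures is presumably rounded from a slightly different value.

# Sanity check of the quoted speeds; inputs are the figures quoted above.
MPH_PER_MS = 2.23694            # miles per hour per metre per second

v_deploy = 392.0                # m/s, speed at parachute deploy
v_terminal = 64.0               # m/s, quoted steady-state descent speed
t_slowdown = 83.0               # s, time from deploy to terminal velocity

print(f"deploy speed     : {v_deploy * MPH_PER_MS:.0f} mph")    # ~877 mph
print(f"terminal speed   : {v_terminal * MPH_PER_MS:.0f} mph")  # ~143 mph
decel = (v_deploy - v_terminal) / t_slowdown
print(f"mean deceleration: {decel:.1f} m/s^2 (~{decel / 9.81:.2f} g)")

The mean deceleration of roughly 4 m/s^2 (about 0.4 g) illustrates how gentle the slowdown is compared to a parachute opening in Earth's much denser atmosphere.
| https://mars.nasa.gov/MPF/mpf/realtime/para.html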
Wandle Valley Park is rich in history and has its beginnings during the Industrial Revolution. The Wandle Valley Regional Park includes a variety of open spaces that include Mitcham Common, Beddington Park, and Farmlands. The combination of all these open spaces tallies up to about 900 hectares of South London.
The park offers a variety of recreational activities and opportunities. Hiking and jogging trails, play parks for kids, picnic spots and more can be found here. The wildlife itself is an attraction: over 250 bird species have been recorded in the different areas included in the park. Wandle Valley is a great place for birdwatching enthusiasts.
Besides all this, the park hosts a range of habitats for flora and fauna, including wetlands, grasslands, river banks, and the river itself. There is lots to see and lots to experience at the Wandle Valley Regional Park. It is one of the biggest and most beautiful parks in the world. With all the planned improvements, it will become the best place to find some peace and spend time with nature.
There were no inhabitants and no sun—just the sovereign Framer and Shaper.
Then came the twin heroes Hunahpu and Xbalanque who, armed with blowguns, began a journey to hell and back confronting the folly of false deities and death. They played a momentous ball game with the death lords of the Underworld. And they prepared the way for the planting of corn, from which deities created humans.
This is the story of the Mayan “Popol Vuh,” also known as “The Book of the People” or “The Book of the Woven Mat.” It’s the creation story of the Quiche Maya of what today is the Guatemalan Highlands northwest of Guatemala City.
It’s the creation story that poet and translator Michael Bazzett has translated. And he will read poems from it at 6 p.m. Thursday, March 28, at Ketchum’s Community Library.
Bazzett, who translated it from the K’iche’ language, has received many awards for his new translation of the ancient Mayan creation story. It’s been hailed as the New York Times Best Poetry Book of 2018 and a World Literature Today Notable Translation.
It’s a story that asks not only “Where did you come from?” but also “How might you live again?”
Bazzett, a mythology teacher at The Blake School, a college prep school in Minneapolis, learned of the Popol Vuh while visiting Mexico. He spent years working on the translation in Guatemala.
Thursday’s presentation is made possible by Dr. Mac Test and the Boise State University English Department. | http://karenbossick.com/Story_Reader/5961/Poet-Translator-to-Recite-Guatemalan-Creation-Myths/ |
The following is from “The Military Leader”, a military blog. It’s really quite good, especially for CERT Team Leaders and others.
==========================================================================
I’ve had a lot of conversations lately about organizational culture and vision. [To me, vision is where the team is going and culture is the behavior, beliefs, and norms that get it there.] One point of dispute deals with when the new leader of an organization (say, an incoming commander) should begin shaping the culture and setting the vision.
Some feel that culture-setting is a ‘Day 1 activity’ that centers on the leader’s influence…“I’m the new leader and here’s how I want things to run.” Others feel it is haphazard and potentially disastrous to join a team and immediately set it off on a new course…“I need to understand the culture before I know what to change.”
Regardless of your personal preference, it’s tough to argue that leaders should ignore culture and vision. Even a leader who immediately drives vision and culture will have to assess whether or not the team is meeting the intent. Identifying and understanding culture, for all leaders, is a critical task.
Identifying the Culture
What you’ll find below is a list of questions to ask yourself and your team to identify the culture of your organization. Maybe you’re the leader who is more comfortable getting to know the team before making changes: “What are its strengths? What motivates people? Where are we weakest?”
Or maybe you’ve already issued guidance about vision and culture and you want to see how it has resonated with the formation: “Did they really understand what I told them? Is the lowest level leader following the vision? Has the culture drifted from what it was a year ago?”
These questions will help you get to the bottom line. And although it’s tempting to dismiss them as innocuous, I encourage you not to. Ask around your organization and listen to the insight you get from a question like, “What are we for?” I think the answers will surprise you and bring clarity to culture.
- What is our core purpose? What are we for? (This points to The Why.)
- What is our mission? What are we purposed to do? (This points to The What.)
- What are we NOT purposed to do? Are we engaged in areas we shouldn’t be?
- What priorities affect our mission? Our culture? Is there “guidance from higher” that drives our behavior on a routine basis? Do we engage that guidance frequently enough for it to matter?
- What is this team’s most important asset? How do we protect it?
- Where do we want the team to be in one year? Or five? Is that vision consistent among leaders and followers at every echelon?
- What behavior do we espouse?
- What does good performance look like? How do we reward it?
- What behavior is unacceptable? What is a “fireable offense?”
- What personal behavior traits make us successful?
- What interpersonal behaviors make us successful?
- What behavior compromises our effectiveness?
- How do we respond to failure?
- How do we respond to success?
- What premium do we place on trust?
- How do we cultivate trust inside and outside the organization?
- How do we define our leadership environment?
- What is the “leadership DNA” of this team?
- How do we grow as individuals? As an organization? Is there a spirit of growth?
- How do we disagree with superiors? Is dissent encouraged, or even allowed?
- How do we leverage the creative and intellectual capital of the organization’s members?
- How do we encourage critical thinking from every member of the team?
- How do we connect with our people?
- How do we show compassion and empathy for one another?
- How do we respond when a person’s personal and professional obligations come into conflict?
- Does the lowest level in the organization know the vision? Do we live and lead by it?
- How aligned is the vision with the culture we’re trying to create? How do we know when they’re not aligned? | http://www.ho-ho-kuscert.org/27-questions-to-identify-culture-and-define-vision/ |
Cost-effectiveness of surfactant replacement therapy in a developing country.
A comparison study was conducted to evaluate the cost-effectiveness of surfactant replacement therapy in the treatment of hyaline membrane disease (HMD). The study population included neonates admitted because of HMD severe enough to require assisted ventilation with FiO2 greater than 0.4. This group (n = 44) was compared with neonates treated in the same centre 1 year before surfactant became available (n = 39). Comparison between the two groups was made in relation to cost of care as a function of the duration of hospitalization. The duration of hospitalization among survivors in the treated group was shorter (P = 0.06); accordingly, the cost of care was lower. A savings of US $11,880 per surviving patient in the treated group was expected; the nationwide financial impact of this treatment modality is discussed.
| |
The results of three recent trials, the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial, the Action in Diabetes and Vascular Disease (ADVANCE) trial, and the Veterans Affairs Diabetes Trial (VADT), indicate that lower blood glucose levels are not always better. The major aim of all three randomized controlled trials was to determine whether lowering the blood glucose (hemoglobin A1c [A1C]) to normal or near-normal would reduce the occurrence of cardiovascular events. In the ACCORD trial, unexpected deaths in the intensive therapy (IT) group, over and above the number of deaths in the conventional therapy (CT) group, resulted in the early discontinuation of the trial.1 There were no findings of increased mortality related to IT in the ADVANCE trial; however, preliminary analyses have found no reduction in rates of macrovascular events in the IT group.2 The VADT also found no difference in macrovascular event rates between IT and CT groups.3 Taken together, data from all three of these trials suggest there is no benefit of aggressive glycemic control on macrovascular complications in diabetic patients.
These outcomes leave many of us questioning what we thought we knew. Glycemic control as close to the normal range as possible purportedly simulates the nondiabetic state. It is well established that tight glycemic control reduces rates of microvascular complications in diabetic patients; why wouldn't the same hold true for macrovascular events and cardiovascular mortality? A closer look at these important clinical trials may clarify some points but may also leave us with further questions.
All three trials shared the aim of determining the relationship between glycemic control and macrovascular events (myocardial infarction and stroke); however, the study designs, populations, and outcomes were somewhat different. ACCORD randomized 10,251 patients with type 2 diabetes (mean age 62 years) and a baseline median A1C of 8.1%. IT was individualized by the investigators with the goal of rapidly and safely lowering A1C to < 6%. Combinations of glucose-lowering drugs were used in each group and included metformin, sulfonylureas, thiazolidinediones (TZDs), and insulin. Subjects in the IT group attended more frequent follow-up visits than the CT subjects. At 1 year, the CT group had achieved a median A1C of 7.5%, compared to the IT group, which achieved a median A1C of 6.4%. Subjects were subsequently followed for a total of 3.5 years, during which the A1C levels remained relatively stable.1
The ADVANCE trial randomized 11,140 patients with type 2 diabetes (mean age 67 years) and a baseline mean A1C of 7.5% to one of two groups (IT or CT). The goal of the IT arm was to lower the A1C to 6.5%. Subjects in the IT group were given modified-release gliclazide (all other sulfonylureas were discontinued), as well as additional treatments suggested by the study protocol at the discretion of the treating provider. The combination of medications used in both groups included metformin, sulfonylureas, TZDs, acarbose, and insulin. As in ACCORD, subjects in the IT group attended more frequent visits than those in the CT group. After 5 years of follow-up, the mean A1C was 7.3% in the CT group and 6.5% in the IT group.2
The most recent of the three trials is the VADT, completed only days before its findings were released at the American Diabetes Association (ADA) 68th Annual Meeting and Scientific Sessions in San Francisco in June of this year. The VADT randomized 1,791 U.S. veterans (97% male, mean age 60 years) to IT versus CT with a baseline mean A1C of 9.5%. The goal of the IT group was to decrease A1C to < 7%. The majority of subjects received several drugs in combination, including metformin, rosiglitazone, glimepiride, and insulin. After 6.5 years of follow-up, the mean A1C was 8.4% in the CT group and 6.9% in the IT group.4
Of the three trials, only ACCORD found an increase in rates of cardiovascular events and death associated with intensive therapy. The trial was stopped prematurely because of this finding.1 In ADVANCE and VADT, investigators found no difference in mortality or cardiovascular outcomes between groups.2,3 Hypoglycemia was problematic, to some extent, in all three trials. In ACCORD, 16% of subjects in the IT group experienced hypoglycemia requiring assistance versus 5.1% in the CT group, although only one death in each group was identified as “probably” related to hypoglycemia.1 In ADVANCE, 2.7% of subjects in the IT group had at least one severe episode of hypoglycemia compared to 1.5% in the CT group. These findings included one fatal episode in the IT group and one episode in each group causing permanent disability.2 In the VADT, severe hypoglycemia occurred in 21% of subjects in the IT group versus 10% in the CT group. In addition, the VADT found that an episode of severe hypoglycemia within 3 months before a cardiovascular event was a strong predictor (second only to a previous event) of the event.3
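To put the severe-hypoglycemia figures above side by side, here is a small arithmetic sketch (illustrative only, using the percentages quoted in this article; it is not a statistical analysis and ignores differences in follow-up duration and event definitions between the trials):

# Severe hypoglycemia rates quoted above: (intensive %, conventional %).
rates = {
    "ACCORD":  (16.0, 5.1),
    "ADVANCE": (2.7, 1.5),
    "VADT":    (21.0, 10.0),
}

for trial, (it_pct, ct_pct) in rates.items():
    diff = it_pct - ct_pct            # absolute risk increase, percentage points
    per_extra_case = 100 / diff       # IT patients per additional affected patient
    print(f"{trial:7s}: +{diff:.1f} points, "
          f"~1 extra affected patient per {per_extra_case:.0f} treated")

Even on this crude comparison, intensive therapy roughly doubled or tripled the proportion of patients experiencing severe hypoglycemia in all three trials.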
Depending on the severity of the episode and pre-existing comorbidities, hypoglycemia may trigger myocardial infarction, stroke, and ventricular arrhythmias.5,6 A low level of glucose in the blood stimulates sympathetic neural activation as well as catecholamine secretion, resulting in an increased heart rate, blood pressure, and overall workload on the heart.7 Additionally, as a result of these hemodynamic changes, shear stress in the arterial wall may contribute to destabilization of atherosclerotic plaque8 and possibly precipitate an atherothrombotic event. Severe episodes of hypoglycemia usually occur in patients who lack early warning signs. Hypoglycemia unawareness, usually ascribed to patients with type 1 diabetes, may similarly affect patients with type 2 diabetes. In a sub-study of the Treating to Target in Type 2 Diabetes study,9 112 subjects taking one of three different insulin regimens were given continuous glucose monitoring systems. The investigators found that asymptomatic low blood glucose levels (< 56 mg/dl) occurred 40 times more frequently than self-reported hypoglycemia episodes. Surprisingly, most episodes occurred in the daytime.10
Duration of type 2 diabetes seems to be predictive of hypoglycemia awareness and the severity of hypoglycemic episodes. In a small clinical study, the counterregulatory responses to hypoglycemia were examined in nondiabetic subjects and compared to two groups of subjects with type 2 diabetes. One group of diabetic subjects was controlled with sulfonylureas, suggesting the presence of endogenous insulin secretion associated with a shorter duration of disease. The second group of diabetic subjects was insulin deficient as evidenced by low serum C-peptide levels. The counterregulatory response to hypoglycemia was intact in the nondiabetic subjects and those controlled with sulfonylureas but almost absent in the insulin-deficient type 2 diabetic patients.11 Other investigators have found that treatment with insulin for > 10 years is a predictor of increased risk of severe hypoglycemia in type 2 diabetes.12 In addition, when patients with type 2 diabetes become insulin deficient, they experience severe hypoglycemia at a frequency approaching that of patients with type 1 diabetes.13 Along with the findings from recent clinical trials, these data suggest that, at least in patients with longstanding type 2 diabetes, hypoglycemia may be a more serious problem than previously appreciated.
Although no cardiovascular benefit was found when IT groups were compared to CT groups, the diversity of baseline glycemic control, pre-existing comorbidities, and duration of diabetes in these subjects merits exploration. In the ACCORD trial, data suggested that patients with a lower baseline A1C and those without cardiovascular disease may derive benefit from intensive glucose lowering, although the study was not designed to test this hypothesis.1 In the VADT, there was a significant relationship between longer duration of diabetes and risk of cardiovascular events. Subjects who had diabetes for < 7 years appeared to gain cardiovascular benefit from IT, whereas those with the longest duration of diabetes (up to 24 years) were found to have excess cardiovascular risk and did not benefit from intensive therapy.3
There are several key messages to take away from these important clinical trials. We now have even more evidence that aggressive treatment early in the course of type 2 diabetes is warranted. The negative outcomes in these studies occurred in subjects with the longest duration of diabetes, the highest baseline A1Cs, and the strongest history of pre-existing cardiovascular disease.
Even from clinical experience, it is apparent that type 2 diabetes of long duration is, in many ways, a different disease than early type 2 diabetes. A 45-year-old man with new-onset type 2 diabetes, an A1C of 8%, and mild dyslipidemia and hypertension may benefit greatly from a very aggressive approach targeting lipids, blood pressure, and blood glucose, as well as providing lifestyle counseling and anti-platelet therapy. Lowering his A1C to a normal or near-normal level with a combination of support with weight loss and exercise in addition to pharmacological therapy may greatly reduce his risk of a future cardiovascular event.
Because he is early in his disease process, he could probably achieve all of these goals without needing a regimen that would put him at risk for hypoglycemia. If he were, by virtue of his medication regimen, at risk for hypoglycemia, he would have an intact counterregulatory system that would make hypoglycemia unawareness and severe hypoglycemia highly unlikely.
In contrast, consider a different patient with an A1C of 8%. This is a 75-year-old man with a 20-year duration of type 2 diabetes, on insulin > 10 years, a history of coronary artery disease with prior stent placement, and suboptimally treated dyslipidemia and hypertension. In this patient, lowering A1C to 7% or even 7.5% may be acceptable and less risky than attempting a near-normal glycemic target.
Ten years ago, the U.K. Prospective Diabetes Study provided evidence that well-controlled blood pressure was more effective than tight glycemic control in the prevention of macrovascular complications in patients with type 2 diabetes.14,15 Other pivotal clinical trials of the past decade demonstrated coronary heart disease risk reduction associated with lipid lowering16,17 and anti-platelet therapy18 in diabetic patients. Because of the widespread dissemination of these data and successful translation into clinical practice, our patients with type 2 diabetes have benefited from reduced cardiovascular risk.
Indeed, the good news from these recent clinical trials is that the incidence of cardiovascular events in all of the groups studied was much lower than predicted from previous epidemiological data. In ACCORD, subjects in both groups had lower mortality than reported in studies of similar patients.1 The annual rate of macrovascular events in ADVANCE was lower than anticipated based on previous studies of patients with type 2 diabetes, and the authors suggest the greater use of statins, anti-hypertensive medications, and anti-platelet agents as the likely reason. In the VADT, the event rate was less than one-third of what was predicted.3 These lower overall event rates indicate that most subjects had already achieved optimal cardiovascular risk reduction.
A review of the current ADA standards of medical care19 will confirm that these recommendations are still relevant and, in fact, may be even more relevant given recent findings. In particular, recommendations for A1C lowering appear to be right on target. The 2008 recommendations state a goal of < 7% for nonpregnant adults in general and as close to normal (< 6%) as possible without significant hypoglycemia in selected patients. There are less stringent A1C goals in high-risk patients (those with a history of severe hypoglycemia, comorbid conditions, longstanding diabetes, and cardiovascular complications). Table 1 lists current recommendations for blood pressure and lipids as well.15
Future clinical trials will focus on questions that arose from the seemingly paradoxical results of ACCORD, ADVANCE, and VADT. For now, despite the apparent controversy, clinicians can be confident that the treatment we have been prescribing and recommending for our patients is safe and effective.
Footnotes
Betsy B. Dokken, PhD, NP, CDE, is an assistant professor of medicine in the Section of Endocrinology, Diabetes, and Hypertension at the University of Arizona in Tucson. She is an associate editor of Diabetes Spectrum. | https://spectrum.diabetesjournals.org/content/21/3/150 |
Novel 1,3-dioxanes from apple juice and cider.
Extracts obtained by XAD solid-phase extraction of apple juice and cider were separated by liquid chromatography on silica gel. Several new 1,3-dioxanes, including the known 2-methyl-4-pentyl-1,3-dioxane and 2-methyl-4-[2'(Z)-pentenyl]-1,3-dioxane, were identified in the nonpolar fractions by GC/MS analysis and confirmed by chemical synthesis. The enantioselective synthesis of the stereoisomers of the 1,3-dioxanes was performed using (R)- and (R,S)-octane-1,3-diol and (R)- and (R,S)-5(Z)-octene-1,3-diol as starting material. Comparison with the isolated products indicated that the natural products consisted of a mixture of (2S,4R) and (2R,4R) stereoisomers in the ratio of approximately 10:1, except for 1,3-dioxanes generated from acetone and 2-butanone. It is assumed that the 1,3-dioxanes are chemically formed in the apples and cider from the natural apple ingredients (R)-octane-1,3-diol, (R)-5(Z)-octene-1,3-diol, (3R,7R)- and (3R,7S)-octane-1,3,7-triol, and the appropriate aldehydes and ketones, which are produced either by the apples or by yeast during fermentation of the apple juice.
| |
New Solar Panels at Nazareth
While staff and students enjoy their summer break, work has begun on installing 300+ solar panels on our Wheeler Auditorium.
Nazareth College seeks to improve our environmental footprint and continue to look for new ways to run our school more efficiently.
We look forward to helping put clean energy back in the grid and help lower our energy costs over the coming years.
By installing these solar panels, the College anticipates an energy savings of $228K over the next 10 years.
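For a rough sense of scale, here is a back-of-the-envelope sketch using only the figures quoted above, taking the article's "300+" panels at its minimum of 300:

# Back-of-the-envelope scale check from the quoted figures.
total_savings = 228_000   # dollars over ten years
years = 10
panels = 300              # "300+" taken at its minimum

per_year = total_savings / years
print(f"savings per year  : ${per_year:,.0f}")            # $22,800
print(f"per panel per year: ${per_year / panels:,.0f}")   # ~$76

| https://www.nazareth.vic.edu.au/solar-panels-at-naz/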
An arbitrator this week revoked a law that strengthened Spokane’s police ombudsman powers to investigate allegations of officer misconduct independently of the police. Spokane’s Police Guild had challenged the new powers as a change in working conditions that must be negotiated with the Guild as part of its contract with the city, and the arbitrator agreed.
As the Spokesman-Review reported, the original law, passed in 2008, allowed the ombudsman (currently Tim Burns) to sit in on police investigations into officer misconduct and label whether those investigations were thorough, timely and fair. He can order additional information and interviews from internal affairs officers. If the chief doesn’t oblige, he can appeal to the mayor, who has the final say. He also is allowed to make broad policy recommendations. Last year, the City Council unanimously voted to give the ombudsman the power to conduct his own investigations, including interviewing witnesses, and make those reports public.
The Spokesman-Review noted that the guild’s president said his members objected to the revised rules because they believe allowing Burns to conduct his own investigation could jeopardize his ability to remain objective when analyzing internal affairs probes. | https://aclu-wa.org/blog/liberty-link-police-guild-has-spokane-ombudsman-where-it-wants-him |
As technology solution professionals, the threat of disaster – whether natural or man-made – is always top of mind, and tech companies understand that a solid plan for protection is key to business survival. Still, few of us were completely prepared for the unprecedented devastation brought on by the weather-related disasters of 2017, particularly the catastrophic Atlantic hurricane season.
Our partners in the paths of those record-breaking storms were faced with the overwhelming responsibility of protecting more than equipment and data. Livelihoods – and lives – were at stake. We’ve been inspired by the stories of courage and resilience, and the spirit of our IT community coming together with a helping hand in the aftermath.
Headquartered in Tampa and expecting a hard hit from Hurricane Irma, ConnectWise experienced firsthand the vulnerability of impending disaster. While we’re happy to say our strategic disaster plan was executed with successful results, we learned lessons and gained insight that we’d like to share.
Our free guide has been created with the unique needs of technology businesses in mind, and includes disaster readiness checklists based on industry expertise and experience.
| https://www.connectwise.com/platform-integrations/disaster-recovery |
How coordinating work across different parts of your organization with DevOps depends on team size and architectural coupling.
Why transformations that start by addressing the biggest inefficiencies in development processes are more successful.
Why people need a common understanding of how their current approaches are causing inefficiencies for the overall software development and delivery system to change their way of working.
How mapping your current deployment pipeline, including metrics, together with your teams helps you understand the biggest inefficiencies in your software development processes.
How increasing the frequency of deployment while maintaining or improving quality and security using DevOps forces out inefficiencies that have existed in your processes for years.
The book Starting and Scaling DevOps in the Enterprise by Gary Gruver provides a DevOps based approach for continuously improving development and delivery processes in large organizations. It contains suggestions that can be used to optimize the deployment pipeline, release code frequently, and deliver to customers.
InfoQ readers can download an excerpt from Starting and Scaling DevOps in the Enterprise.
InfoQ interviewed Gary Gruver about what makes DevOps fundamentally different from other agile approaches, how DevOps can help to optimize requirements and planning activities, metrics for continuous integration, the difference between scaling with tightly coupled architectures and with loosely coupled architectures, types of waste in large organizations and how to deal with them, and why executives should lead continuous improvement and how they can do that.
Gary Gruver: As I started working with more and more different large organizations that wanted to transform their software delivery processes, I discovered one of the biggest challenges was getting the continuous improvement journey started and aligning everyone on implementing the changes. For this to work, I feel pretty strongly that you should start the continuous improvement process with the changes that will have the most significant impact on the organization, so you build positive momentum. I found I was spending most of my time analyzing the processes in different organizations and helping them identify those areas. I saw a lot of common problems, but I also found each organization had some unique challenges that resulted in different priorities for improvement. Over time, I started refining the process I used for analyzing different businesses and felt it was important to document the approach to share as pre-work for my workshops. I would send this out ahead of time to a few key leaders in the organization to start mapping out the deployment pipeline so we would have a good straw man for the workshops. The workshops would then really help get everyone aligned on the changes that would have the biggest impact and get everyone excited about starting the journey. The reason I decided to publish the book is that I realized I can't do workshops for every company that needs to improve its processes, and I thought others might find the approach helpful.
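To make that pre-work concrete, here is a minimal Python sketch of how a team might record per-stage metrics while mapping a deployment pipeline and then rank the stages by waste. The stage names, durations, and failure rates below are illustrative assumptions, not figures from the book or this interview.

```python
# Hypothetical deployment-pipeline map with per-stage metrics.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    avg_wait_days: float   # time work sits idle before this stage
    avg_work_days: float   # time actively spent in this stage
    failure_rate: float    # fraction of runs that fail and loop back

PIPELINE = [
    Stage("requirements backlog", avg_wait_days=45.0, avg_work_days=2.0,  failure_rate=0.10),
    Stage("development",          avg_wait_days=1.0,  avg_work_days=10.0, failure_rate=0.20),
    Stage("integration test env", avg_wait_days=7.0,  avg_work_days=3.0,  failure_rate=0.50),
    Stage("release",              avg_wait_days=14.0, avg_work_days=1.0,  failure_rate=0.15),
]

def biggest_inefficiency(stages):
    # Crude waste proxy: idle time plus the expected cost of rework loops.
    def waste(s):
        return s.avg_wait_days + s.failure_rate * (s.avg_wait_days + s.avg_work_days)
    return max(stages, key=waste)

print("Start the improvement effort at:", biggest_inefficiency(PIPELINE).name)
```

A map like this is only a straw man for the workshop; the point is to get the teams arguing about the numbers until everyone agrees where the biggest inefficiency actually is.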
Gruver: The book is intended for large organizations that have tightly coupled architectures. Small organizations or large organizations like Amazon that have architected to enable small teams to work independently won’t learn much by reading this book. They would be better served by reading the DevOps cookbook to identify some best practices that they have not implemented yet. This book is not intended for them. It is instead for larger organizations that have to coordinate the development, qualification, and release of software across lots of people. This book provides them with a systematic approach for analyzing their processes and identifying changes that will help them improve the effectiveness of their organizations.
InfoQ: What makes DevOps fundamentally different from other agile approaches?
Gruver: I try not to get too caught up in the names. As long as the changes are helping you improve your software development and delivery processes then who cares what they are called. To me it is more important to understand the inefficiencies you are trying to address and then identify the practice that will help the most. In a lot of respects DevOps is just the agile principle of releasing code on a more frequent basis that got left behind when agile scaled to the Enterprise. Releasing code in large organizations with tightly coupled architectures is hard. It requires coordinating different code, environment definitions, and deployment processes across lots of different teams. These are improvements that small agile teams in large organizations were not well equipped to address. Therefore, this basic agile principle of releasing code to the customer on a frequent basis got dropped in most Enterprise agile implementations. These agile teams tended to focus on problems they could solve like getting signoff by the product owner in a dedicated environment that was isolated from the complexity of the larger organization.
The problem with that approach is that, in my experience, the biggest opportunity for improvement in most large organizations is not in how the individual teams work but more in how all the different teams come together to deliver value to the customer. This is where I believe the DevOps principle of releasing code on a more frequent basis while maintaining or improving quality really helps. You can hide a lot of inefficiencies with dedicated environments and branches, but once you move to everyone working on a common trunk and more frequent releases, those problems will have to be addressed. When you are building and releasing the Enterprise systems at a low frequency, your teams can brute-force their way through similar problems every release. Increasing the frequency will require people to address inefficiencies that have existed in your organization for years.
InfoQ: How can DevOps help to optimize requirements and planning activities?
Gruver: My view of DevOps is optimizing the flow of value through organizations all the way from a business idea to a solution in the hands of the customer. From this perspective, it requires analyzing waste and inefficiencies in your planning and requirements process. In fact, I see this as one of the biggest sources of waste in many large organizations because they build up way too much requirements inventory in front of developers. As lean manufacturing has taught us, this excess inventory leads to waste in terms of rework and expediting, so it should be minimized as much as possible. There are others in the DevOps community who tend to look at DevOps as starting at the developer and moving outward, because that is where a lot of the technical solutions of automation and infrastructure as code are implemented. Again, I would not get too caught up in the naming. This is not about doing DevOps. It is about addressing the biggest sources of waste and inefficiency in your organization. If you can develop and release code in a day but it takes months for a new requirement to make it through your requirements backlog, you probably need to take a broader end-to-end view of your deployment pipeline that includes your planning process and move to a more just-in-time process for requirements and planning.
InfoQ: Which metrics do you recommend for continuous integration?
Gruver: The first step is understanding the types of issues you are finding with continuous integration. It is designed to provide quick feedback to developers on code issues. The problem that I frequently see though is that the continuous integration builds are failing for lots of other reasons. The tests may be flaky. The environments may not be configured correctly. The data for running the test may not be available. These issues will have to be addressed before you can expect the developers to be responding to the feedback from continuous integration. Therefore, I tend to start by analyzing why the builds are failing. This is one of the first steps you use to prioritize improvements in your process. Next it depends on what you are integrating. If you are integrating code from a small team, you probably want to measure how quickly the team is addressing and fixing build issues. If you have a complex integration of a large system, I am much more interested in keeping the build green and making sure the code base is stable by using quality gates to catch issues upstream because failures here impact the productivity of large groups of people. There is a lot more detail and metrics in the book because it really depends on what you are integrating at which stage in the deployment pipeline.
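As a rough illustration of that first step, the sketch below tallies why recent builds failed so a team can see which category to attack first. The build records and cause labels are hypothetical; in practice they would be pulled from your CI server rather than hard-coded.

```python
# Classify continuous integration build failures by cause.
from collections import Counter

builds = [
    {"id": 101, "status": "failed", "cause": "flaky test"},
    {"id": 102, "status": "failed", "cause": "environment config"},
    {"id": 103, "status": "passed", "cause": None},
    {"id": 104, "status": "failed", "cause": "test data missing"},
    {"id": 105, "status": "failed", "cause": "code defect"},
]

failures = Counter(b["cause"] for b in builds if b["status"] == "failed")
total = sum(failures.values())
for cause, count in failures.most_common():
    print(f"{cause}: {count} ({count / total:.0%} of failures)")
```

If most failures turn out to be flaky tests or broken environments rather than code defects, those issues have to be fixed before developer-facing metrics mean anything.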
InfoQ: In the book you distinguish between scaling with tightly coupled architectures and with loosely coupled architectures. What makes it different, and how does that impact scaling?
Gruver: From my perspective, DevOps is a lot about coordinating work across different people in the organization, and the number of people you have to coordinate depends on the size of your organization and the coupling of your architecture. If you have a small organization or a large organization with a loosely coupled architecture, then you are working to coordinate the work across 5-10 people. This takes one type of approach. If, on the other hand, you are in a large organization with a tightly coupled architecture that requires hundreds of people to work together to develop, qualify, and release a system, it takes a different approach. It is important to understand which problem you are trying to solve. If it is a small team environment, then DevOps is more about giving them the resources they need, removing barriers, and empowering the team because they can own the system end to end. If it is a large complex environment, it is more about designing and optimizing a large complex deployment pipeline. These are not the types of challenges that a small empowered team can or will address. It takes a more structured approach, with people looking across the entire deployment pipeline and optimizing the system.
InfoQ: Which types of waste do you often see in large organizations?
Gruver: Most large organizations with tightly coupled systems spend more time and energy creating, debugging, and fixing defects in their complex Enterprise test environments than they spend writing the new code. A lot of times they don't even really want to do DevOps; they just need to fix their environments. They are hearing from all their agile teams that they are making progress but are limited in their ability to release new capabilities due to all the environment issues. I usually start there. How many environments do they have between development and production? What are the issues they are seeing in each new environment? Are they using the same processes and tools for defining the environments, configuring the databases, and deploying the code in all the different environments? Is it a matter of automating these processes with common tools to gain consistency, or are the environment problems really code issues by other teams that impact the ability of other groups to effectively use the environments for validating their new code? These are frequently some of the biggest sources of waste that need to be addressed.
InfoQ: What can be done to remove the waste?
Gruver: It depends a lot on the source of waste. A lot of the waste is driven by the time it takes to do repetitive tasks and by the manual errors that occur when different people implement the process in different ways. This waste is addressed through automation, which requires moving to infrastructure as code and automating deployment and testing processes. The problem is that this effort takes some time, so you should start where the improvements will provide the most value for your organization. This is why we do the mapping to determine where to start.
There is waste that is associated with having developers working on code that won’t work with other code in development, won’t work in production, or doesn’t meet the need of the customer. Reducing this waste requires improving the quality and frequency of feedback to developers. The developers want to write good code that is secure, performs well, and meets the need of the business but if it takes a long time for them to get that feedback you can’t really expect them to improve. Therefore, the key to removing this waste is improving the quality and frequency of the feedback.
Lastly, a lot of organizations waste a lot of time triaging issues to find the source of the problem. Moving to smaller batch sizes and creating quality gates that capture issues at their source before they can impact the broader organization is designed to address this waste.
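A minimal sketch of that quality-gate idea follows, assuming each stage of the pipeline publishes a few metrics; the gate names and thresholds are invented for illustration and are not taken from the book.

```python
# Quality gates that hold a small batch at its source stage until checks pass.
GATES = {
    "unit tests":      lambda m: m["unit_pass_rate"] >= 1.00,
    "static analysis": lambda m: m["new_warnings"] == 0,
    "component tests": lambda m: m["component_pass_rate"] >= 0.98,
}

def may_promote(metrics: dict) -> bool:
    """Return True only if every gate passes; otherwise name the failures."""
    failed = [name for name, check in GATES.items() if not check(metrics)]
    if failed:
        print("Held at source; failed gates:", ", ".join(failed))
        return False
    return True

may_promote({"unit_pass_rate": 1.0, "new_warnings": 2, "component_pass_rate": 0.99})
```

Catching the issue at the gate keeps one team's defect from consuming triage time across the whole organization.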
InfoQ: Why should executives lead continuous improvement? How can they do that?
Gruver: In large tightly coupled systems, somebody needs to be looking across the teams to optimize the flow through the deployment pipeline. As we discussed above, this is just not something that small empowered teams are in a position to drive. It requires getting all the different teams to agree that they are going to work differently. I frequently see people start with grassroots efforts, but most of these initiatives start to lose momentum as they get frustrated trying to convince peers and superiors to support the changes. If you are going to release code on a more frequent basis with a tightly coupled system, then all the teams need to be committed to keeping their code base more stable on a day-to-day basis. If 9 of the 10 teams focus on stability, it won't work. All the teams need to be committed to working differently. This is where the executives can help. They can pull the teams together, analyze the process, and get everyone to agree on the changes they will be making. They can then hold people accountable for following through on their commitments. This can't be management by directive, because before mapping out the process with the teams the executives typically don't have a good feel for all the inefficiencies in their processes. It needs to be executive-led, where the executive is responsible for pulling everyone together, getting them to agree on the changes, and then leading the continuous improvement process.
This advice about executive-led change is for large tightly coupled systems. For organizations that have small teams that can work independently, this is not as important.
InfoQ: Any final advice for organizations that want to adopt DevOps?
Gruver: Don't just go off and do DevOps, agile, lean, or any other of the latest fads you are hearing about. Focus on the principles, analyze the unique characteristics of your organization, and start your own continuous improvement journey. Judgment is required. Understand what you are trying to fix by changing your process, and then hold regular checkpoints to evaluate whether your changes are having the desired effect. Bring the team along with you on the journey. At each checkpoint, review what got done, what didn't get done, and what you learned during that iteration, and agree on priorities for the next iteration. The important part is starting the continuous improvement journey and taking everyone with you. You will make some mistakes along the way and discover that what you thought was the issue was just the first layer of the onion, but if you come together as a team on the journey you will achieve breakthroughs you never thought were possible.
Gary Gruver is an experienced executive with a proven track record of transforming software development and delivery processes in large organizations, first as the R&D director of the LaserJet FW group, which completely transformed how it developed embedded firmware, and then as VP of QA, Release, and Operations at Macy's.com, leading the journey toward Continuous Delivery. He now consults with large organizations and runs workshops to help them transform their software development and delivery processes. He is the author of Starting and Scaling DevOps in the Enterprise and co-author of Leading the Transformation: Applying Agile and DevOps Principles at Scale and A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware. | https://www.infoq.com/articles/book-review-scaling-DevOps?utm_source=articles_about_deployment&utm_medium=link&utm_campaign=deployment |
Nations 'need to work together' to save wildlife
Countries will have to improve their co-operation if they are to protect endangered wildlife in an age of climate change, according to an international study.
A team of scientists has come up with a conservation index designed to help policy-makers deal with the effects of climate change on birds in Africa, an approach that could help governments across the world as climate change forces species to move to new areas.
An international research team, led by Professor Brian Huntley and Dr Stephen Willis of the School of Biological and Biomedical Sciences at Durham University in North East England, looked at how native African bird species will fare in 803 Important Bird Areas (IBAs) across the continent if climate change continues as predicted.
Birds are seen as a key indicator for conservationists because they respond quickly to change and the research, funded by the Royal Society for the Protection of Birds (RSPB), and published in the journal Conservation Biology, suggests that hundreds of bird species in Africa will become emigrants, leaving one part of the continent for another in search of food and suitable habitat.
We need to improve monitoring, communication and co-operation to make protected areas work across borders. Conservationists and policy makers will have to work together in new ways as networks become increasingly important in protecting species.
Dr Stuart Butchart, Global Research and Indicators Coordinator at research partner BirdLife International, said: "Many areas that are likely to become increasingly important are currently under-protected. Co-operation across borders to preserve and adapt areas so that birds and other wildlife can survive as their habitats change and shift will be essential to conserve biodiversity and maintain the ecosystem services that will help people and communities adapt to climate change."
Dr David Hole, Climate Change Researcher with research partner, Conservation International, said: "Policy action to encourage practices that will make it easier for species to move through the wider landscape will be critical, such as conservation-friendly farming and agro-forestry, to ensure species can reach newly climatically suitable areas as climate changes. It's about trying to find those win-win situations." | http://www.earthtimes.org/nature/nations-need-work-together-save-wildlife/265/ |
The accomplishment of anything worthwhile in education and other circles depends on the accomplishment of its objectives and aims (Glenn 1). The business economics curriculum provides the blueprint through which students and teachers accomplish their educational goals. In simpler terms, the business economics curriculum provides an educational structure to propel students, administrators and teachers towards a sense of business academic progression.
When analyzed comprehensively, the business economics curriculum provides administrators with a dynamic and comprehensive program to guide the learning activities of existing and prospective students (Glenn 1). On a much broader platform, administrators in colleges and universities have consistently used different educational curricula to attract students (Glenn 1).
Teachers also use business economics curricula to assess their students' performance and progress in their educational activities. In fact, for student progression, the curriculum has often been used to evaluate student performance to determine whether a student is fit to progress to a higher educational level or not.
This therefore means that in the absence of the business economic curriculum, teachers can never be sure whether they have really imparted the right knowledge for students to move from one level of education to another.
The importance of a business economics curriculum, however, cuts both ways (teachers and students), because students also need the curriculum to determine the academic requirements of a given course. Without the guidance of the curriculum, students can be lost in a maze of academic courses which would not lead them in any given direction.
In a deeper sense, in the absence of a business economics curriculum, students cannot even be sure that they are undertaking the right subjects to attain a business degree or diploma. Comprehensively, the business economics curriculum gives a sense of order and structure to the way things are done in the educational setting. The importance of a well-articulated business economics curriculum can therefore not be overemphasized.
However, in developing the right business economics curriculum, a number of considerations ought to be factored in. This is important because the development of a holistic curriculum normally goes beyond pooling together a number of academically required business subjects and teaching them to the students (Glenn 1). Factors such as student learning needs, teacher needs, administrator needs, progression in the educational field and community perception ought to be factored into the development of the curriculum.
The development of a comprehensive business curriculum therefore implies factoring in the input of educational stakeholders so that business students are able to meet the needs of the general community. Glenn affirms that "A curriculum prepares an individual with the knowledge to be successful, confident and responsible citizens" (5).
In light of these considerations, this study seeks to critically analyze the business economics curriculum with the aim of identifying a controversy in the development of the curriculum. However, before this is done, an analysis of who sets the standards of the curriculum, how the curriculum is enforced and the penalties awarded for breaking curriculum rules will be established. Lastly, this study will provide an ethical dilemma relating to the given curriculum as the last part of the study.
Business economics has also been fronted as appropriate for students who want to develop a comprehensive approach to various economic, corporate and legal issues affecting society. This paves the way for many students who want to become economic or legal consultants to have an easy time in doing so. However, the ability of the business economics curriculum to hone the skills of such future professionals is in question because of its failure to impart analytical skills to students.
From this observation, it is essential to note that the curriculum is rather sloppy in encompassing the important analytical skills that students need to apply in the practical business environment. Most of the courses or subjects undertaken under the business economics curriculum are normally taught in the most basic ways, meaning that not much detail or knowledge is imparted to the students.
Rather, students gain an abstract understanding of the concepts to be studied. This creates more problems in real-life application of practical business and economic concepts because many students are forced to develop a shallow understanding of core courses so that they can only meet the course’s academic requirements.
The dynamism of the subjects studied under the business economics major is one area of concern in the development of student analytical skills because only basic and fundamental ideas regarding the various subjects are communicated to the students.
For example, a number of subjects such as finance, marketing, and management are comprehensively taught, but few of them go deeper into explaining the underlying issues behind the basic skills students need to learn in the course. Consequently, many students experience a lot of challenges applying analytical skills in the various subject areas, but more so, many experience insurmountable challenges in gaining the skills in the first place.
The business economics curriculum can therefore be equated to surface learning which is based on nothing more than the memorization of concepts to achieve academic requirements. Arbor (10) notes that this learning method is normally based on replication or reproduction among students because students often replicate what they have been taught or have read from books, without reflecting on the meaning and purpose of their learning.
Arbor (10) also identifies that students who continuously get exposed to an abstract learning concept are unable to relate their learning experiences with real life experiences and in the same manner, such students will be unable to develop a critical approach to handling various real-life issues that pertains to their area of study.
Since much of surface learning is inappropriate for many students, Arbor (11) recommends that learning curricula be designed to let students understand the meaning of the concepts to be studied, and to help students relate new ideas developed from reading studied materials to past concepts.
He also recommends that curriculums should be centered on helping students to organize the content of their study to help them develop a stronger sense of responsibility for the research they undertake (within the framework of the curriculum).
Normally, business economics curricula provide a number of academic options for students to pursue under the Business Economics major. More comprehensively, the business economics major has been developed by building on existing economic theories and quantitative skills which can be used in comprehending a number of economic issues plaguing the world in the 21st century (Brearley 1).
In the curriculum, students are required to take one communication focus course then undertake the upper division communication proficiency course, which contributes 3 points (in its entirety) to the overall course grading system. The communication focus and writing proficiency courses can be undertaken as part of the CBE or as part of the elective courses to be done under the curriculum (the elective courses are to be deliberated upon between the students and one faculty member).
The business economics curriculum standards are important in ensuring the standards of education are maintained. Basically, the state maintains such standards through a state-developed assessment criterion which ensures high standards of education are maintained throughout the country. Most states across the country ensure schools and universities uphold high curriculum standards (Tienken & Wilson 1).
The standards set in the educational curriculum specifically define what students ought to have learnt by the end of the study and what they ought to be able to do by the time they are through with the curriculum. However, state authorities do not mandate specific strategies or pedagogies to be used by universities because they do not have such a mandate at the local district level (Tienken & Wilson 2).
Nevertheless, the standards set do not include an assessment criterion for evaluating instructors' ability to carry out an effective assessment of students' performance. This poses a problem because instructors often do not know how to use assessment tools or how to design assessment criteria to evaluate how students fare in the course of curriculum implementation (Tienken & Wilson 2).
More importantly, there is an inherent danger that different instructors use different assessment criteria within their locality, such that the evaluation criteria used by instructors in one educational setting differ from those used in another. This creates a lot of inconsistencies in the manner in which curriculum assessment is done.
A good remedy to this problem could be obtained from adopting state test standards, curriculum framework, sample questions, educational research and other educational assessment criteria used in the development of the curriculum, so that an unbiased and comprehensive educational curriculum is developed (as opposed to an unregulated and possibly irregular curriculum evaluation criterion).
This is important because a limited understanding of the above educational curriculum parameters may cause problems for the curriculum design and implementation criteria. This type of recommendation is in tandem with recommendations advanced by Black and Wiliam (139), who note that educational standards can be extensively improved if the curriculum content communicated in the classroom setting is altered for the better.
Students are expected to be present throughout the implementation of the curriculum. However, no general attendance guidelines are in place to control or monitor student attendance, though instructors generally require all students to be in class when classes are in session.
This requirement is especially emphasized when there is a seminar or discussion group engagement as part of the curriculum study. However, students who continuously fail to show up in class (to an excessive degree) are bound to face a number of academic penalties including cut restrictions.
This penalty may be applied in a given course or in a number of them. If a student still persists in absenting himself or herself from classes (without the knowledge of the dean or any other administrative officer), then he or she is liable to be dismissed from the course without any credits. Such recommendations may be made by the instructors, departmental head, or the residential college dean because these officers have the powers to do so (Yale College 3).
If the curriculum is not followed to the letter, academic warnings may be issued to students to persuade them to produce satisfactory work that meets the expectations of the curriculum and academic requirements in general. Students who fail to pass all the courses about which they have been warned may face expulsion from the academic program (strictly on academic grounds).
These types of warnings may be issued regardless of the number of credits a student may have accumulated in the given semester. Academic warnings are normally issued if a student fails to earn at least two credits in each course within the entire curriculum; if a student gets at least two fails in any given semester (or in two successive semesters) and if a student has failed at least two courses.
Normally, the academic registrar is supposed to issue students with the necessary warnings when any of the above situations occur. This ought to be in written form, as the written notice constitutes the academic warning. However, students who fail to meet the above academic requirements are still regarded as warned, even if such a written warning is not communicated.
From the above observations, it is not strange to note that the business economics curriculum is based on mathematical concepts and existing business theories as opposed to imparting critical, practical and analytical skills that would give students a holistic learning experience.
This sort of learning design has created many problems for students who want to cut a mark above the rest in the corporate world because they essentially lack the corporate analytical skills needed in the practical business environment. This fact arises out of the premise that many students have only been exposed to theories and abstract concepts about business and economics. A lot of information about how such theories and concepts need to be applied in the real world is therefore lacking among many of the students.
This kind of learning approach is potentially dangerous and career-threatening to most students because they are denied the opportunity to develop their thinking skills.
“But if, as I believe, this new knowledge could significantly affect many people’s ability to resolve their disputes in better ways, relying primarily on formal teaching of enrolled students is grossly inefficient. This implies that some twenty years will elapse before those students arrive at positions of authority. By then, they will be operating on theories that have since been significantly developed, or become entirely obsolete”.
Research studies done by the Hewlett Theory centre (cited in Honeyman 7) have also reinforced the above assertions by noting that, when analyzed in the context of dispute resolution, many solutions developed using theoretical means amounted to impressive results on paper, but in practical terms seemed rather disconnected from the realities of practice.
This observation exposes the fragility of students in tackling practical issues in the real business and economic context. It therefore comes as no surprise that many students are motivated to cheat in their academic papers just so they attain the required academic grades, because they are driven to reproduce studied materials to pass given subjects.
Unfortunately, many of them do get away with such academic offences because the business economics curriculum does not have any assessment criteria which evaluate the development of the students' analytical skills. Many of the students who complete their degrees through the business economics curriculum are therefore released into the job market when they are still raw with regard to their analytical skills.
Employers are primarily motivated by the intrigues of the business environment. This ultimately defines their employee expectations because they need personnel who can be of value to the underpinnings of the business environment.
When deeply analyzed, the business environment has been more complex and volatile than in previous times, and in this regard, many employers nowadays look for people who can work best in this type of environment, and more importantly, those who can provide real life solutions to problems evidenced in the turbulent business environment. Analytical skills become one of the most desired skills among employers.
The failure of the business economics curriculum to impart proper analytical skills among students therefore does a lot of injustice to the students because a majority of them fail to meet the employee threshold expected of professionals by today’s employers.
This potentially reduces their marketability in the job market, and therefore many students are likely to experience high levels of unemployment. In fact, in today's global environment, many employers would rather source employees from more appropriate places around the globe than employ people who are not as competitive as they should be.
In relation to this observation, Electric Club (95) notes that many employers today are after people who have a high attention to detail. In other words, it exposes the fact that many employers are after people who can effectively handle logistics and those who can effectively carry out and plan an event (among other duties that require a strong command of analytical skills). The business economics curriculum therefore fails to prepare students for such job market underpinnings.
The business economics curriculum has done a lot to empower students to develop a comprehensive business foresight to deal with most global corporate and economic problems. This comes from the fact that the curriculum relies a lot on economic theories and mathematical skills which to a great extent represent the theoretical part of business economics studies.
The biggest problem observed in the curriculum is that through the great reliance on economic theories and mathematical approaches, students have not developed comprehensive analytical skills required in the business environment.
A lack of analytical skills has consistently caused students to miss important corporate competencies such as visualization, analysis, articulation and simple problem-solving skills that can be used to remedy problems which require the use of simple pieces of information regarding a given issue (Hancock 8). Moreover, students have been hindered from developing the logical thought needed to design and test solutions that can be applied to a given business or economic problem.
The importance of analytical skills in the real business environment cannot therefore be overemphasized, because the world has become very demanding and more firms nowadays require professionals who can formulate and design various solutions to given business problems. For example, a business requiring a marketing manager needs a candidate who can keenly scrutinize a given advertisement and identify any existing inconsistencies requiring correction.
The failure of the business economics curriculum to accommodate these analytical skills, to a large extent, explains the immense condemnation surrounding the curriculum. However, the biggest losers in the use of an ineffective curriculum are the students themselves, because they fail to achieve high levels of marketability, which consequently causes a number of them to be unemployed or receive poor pay in the long run.
This fact can be evidenced through the growing trend among most businesses to include an analytical section in interviews that expects candidates to identify a problem within a given business context and provide possible solutions to solve it (Noddings 54).
Candidates who fail to meet this evaluation criterion are normally disqualified from further evaluation because their value to a given company is diminished by the lack of analytical skills. The use of an ineffective business curriculum also affects businesses because they face a lack of analytical professionals who can come up with viable solutions to keep up with the changing nature of business and economic problems.
In coming up with a long-term solution to solve the failures of the business economics curriculum, it is important for the university to include certain important learning areas in the educational curriculum that will ensure students develop the right analytical skills in the business environment.
Firstly, it is important for some of the courses to be designed around enabling students to develop the ability to question. The ability to question specifically revolves around developing the student's ability to ask the right type of questions, and more deeply, how the questions ought to be asked. This will be the first step towards enabling students to develop analytical skills.
Secondly, the curriculum should enable critiquing and reviewing exercises where students critique the work of other students and ask questions that will help improve their work. For example, the curriculum can include areas where students read a given text, ask questions and later develop their own independent opinions about the given study. In developing a more articulate critiquing exercise, the curriculum should provide a breakdown of the critique exercise; possibly into a number of steps.
“Reflective thought involves the ability to acquire facts, understand ideas and arguments, apply methodological principles, analyze and evaluate information and ultimately produce conclusions. This includes the ability to question and solve problems by linking previous ideas, knowledge and experiences with present knowledge ideas and experiences” (12).
The educational curriculum should therefore be designed along these lines but How To (15) notes that its results should be expected over time. These recommendations are likely to improve the business economics curriculum.
This study identifies that the business economics curriculum has a number of strengths and weaknesses. Its strengths come from the fact that it observes strict adherence to guidelines because there are stiff penalties and implementation standards that are upheld by the departmental officers and the state educational department.
The fact that the curriculum was developed in consideration of the state's curriculum development guidelines also speaks to the appropriateness of the curriculum in imparting knowledge to students. However, in terms of enabling students to develop the right analytical skills needed in today's business world, the business economics curriculum leaves a lot to be desired.
In this regard, the business economics curriculum does not effectively prepare students to deal with real-life practicalities in the business and economics environment. Attention is hereby drawn to the curriculum's failure to effectively impart analytical skills to students. Accordingly, this study proposes that the curriculum should include courses which are designed to impart analytical skills to students.
Most importantly, the curriculum fails to include programs or courses that empower students to develop strong analytical skills including the ability to question; the ability to become a reflective learner; the ability to solve problems and the ability to critique and review existing problems.
With these issues in mind, this study proposes that these factors be included in the curriculum because, collectively, they define the proper analytical skills needed in the business world. Honing these skills will also enable students to deal with existing dynamics in the business and social context. Comprehensively, these dynamics define the business economics curriculum.
Arbor. Definitions, Theories and Measurement of Learning and Teaching Styles. 1 September 2006. 21 February 2011. Web.
Black, Paul, and Wiliam, Daniel. "Inside the black box: Raising standards through classroom assessment." Phi Delta Kappan 1998: 139-148.
Brearley, Diane. Overview: Economics Major. 2011. 18 February 2011. Web.
Electric Club. The Electric Journal. Harvard University, 2007. Print.
Glenn, Steve. Importance of Curriculum to Teaching. 2 April 2010. 18 February 2011. Web.
Hancock, John. Advanced General Studies for OCR. London: Heinemann, 2001. Print.
Honeyman, Christopher. Theory vs. Practice in Dispute Resolution. 9 August 1997. 21 February 2011. Web.
How To. Developing your Analytical Skills. 2009. 18 February 2011. Web.
Noddings, Nel. Educating Citizens for Global Awareness. New York: Teachers College Press, 2005. Print.
Tienken, Christopher, and Wilson, Michael. Using State Standards and Tests to Improve Instruction. 2011. 18 February 2011. Web.
Yale College. Academic Penalties and Restrictions. 2011. 18 February 2011. Web.
| https://annesanders.net/research-project-on-education/ |
Thursday, 8 January 2015
Happy New Year! Hope your 2015 has started off well. The COTD blog has been idle for a bit over the holidays, but I've started posting new characters again over the various social network places. Thought I'd save the entire first week of posts for one super-post here once they were all completed.
If you'd like to see more characters as they're completed next week, please join me on Facebook or Twitter sometime. I'm going to try to keep the 'daily' posts on those places for the next little while and see how that goes. And I'll aim for another super-post here at the end of next week.
Welcome to Character Of The Day!
Lots of characters and stuff!
I'm Niall, a cartoonist from Prince Edward County, Ontario, Canada, and 'Character Of The Day' is my labour of love. Thanks for stopping by to see what's new.
'Character Of The Day' is really just my name for a lot of little ongoing cartooning projects under one banner.
I don't always post art every day, but I try to work on something every single day, and I strive to post a new cartoon, or a new character design, or a doodle, or a sketch, or part of a comics story, as often as I can.
COTD art is currently being re-organized into various albums, for stories and themes that are in progress. For now, the best place to see them is here.
Please see the links below to follow Character Of The Day on various social media platforms, or to find products like prints, t-shirts and more!
If there's anything you'd like to see more of (character appearances, products), I welcome your input!
Stay cool, characters!
Niall [email protected]
| |
Featured Stock Overview: Noble Corporation (NE)
Noble Corporation (NE), belonging to the Energy sector, declined 0.68% and closed its last trading session at $5.82.
The company reported its EPS on 12/30/2017. Currently, the stock has a 1 Year Price Target of $4.57.
The consensus recommendation, according to Zacks Investment Research, is 2.82. The scale runs from 1 to 5, with 1 recommending Strong Buy and 5 recommending Strong Sell.
The Stock had a 2.83 Consensus Analyst Recommendation 30 Days Ago, whereas 60 days ago and 90 days ago the analyst recommendations were 2.84 and 2.95 respectively.
Noble Corporation on 12/30/2017 reported its EPS as $-0.17, with analysts projecting an EPS of $-0.19. The company beat the analyst EPS estimate by $0.02. This shows a surprise factor of about 10.5%.
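The arithmetic behind that surprise figure can be checked directly; the snippet below is a minimal sketch assuming the conventional definition of an earnings surprise (actual minus estimate, divided by the absolute value of the estimate):

```python
# Earnings surprise check: (actual EPS - estimated EPS) / |estimated EPS|.
actual_eps = -0.17
estimated_eps = -0.19

surprise = (actual_eps - estimated_eps) / abs(estimated_eps)
print(f"Beat of ${actual_eps - estimated_eps:.2f} -> surprise of {surprise:.1%}")
# -> Beat of $0.02 -> surprise of 10.5%
```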
Many analysts have provided their estimated foresights on Noble Corporation Earnings, with 27 analysts believing the company would generate an Average Estimate of $-0.57.
They predicted High and Low Earnings Estimates of $-0.45 and $-0.69 respectively, while in the same quarter of the previous year the Actual EPS was $-0.17.
Analysts are also projecting an Average Revenue Estimate for Noble Corporation as $229590 in the Current Quarter. This estimate is provided by 21 analysts.
The High Revenue estimate is predicted as 250000, while the Low Revenue Estimate prediction stands at 219000. The company’s last year sales total was 362980.
Noble Corporation (NE) has the market capitalization of $1.43 Billion. The company rocked its 52-Week High of $6.33 and touched its 52-Week Low of $3.14.
The stock has Return on Assets (ROA) of -3.3 percent. Return on Equity (ROE) stands at -6.8% and Return on Investment (ROI) of -3.4 percent.
The stock is currently showing YTD performance of 28.76 Percent. The company has Beta Value of 2.19 and ATR value of 0.27. The Weekly and Monthly Volatility stands at 5.33% and 4.88%. | https://factsreporter.com/2018/05/24/featured-stock-overview-noble-corporation-ne/ |
Researchers report that measuring levels of certain fats in the bloodstream might one day help spot women at high risk for migraines.
“While more research is needed to confirm these initial findings, the possibility of discovering a new biomarker for migraine is exciting,” wrote study author Dr. B. Lee Peterlin, associate professor of neurology at Johns Hopkins University School of Medicine in Baltimore.
In the study, the researchers assessed 52 women with episodic migraine (average of nearly six migraines a month) and 36 women who did not have the debilitating headaches. Blood samples from the women were checked for fats called ceramides, which help regulate energy and brain inflammation.
Women with episodic migraines had lower levels of ceramides than those who did not have headaches. Every standard deviation increase in ceramide levels was associated with about a 92 percent lower risk of migraine.
Conversely, the researchers found that two other types of fats were associated with a 2.5 times greater risk of migraine with every standard deviation increase in their levels.
The researchers also tested the blood of a random sample of 14 of the participants and, based on these blood fat levels, correctly identified which women had migraines and which women did not.
The findings were published online Sept. 9 in the journal Neurology.
In an editorial accompanying the study, Dr. Karl Ekbom, of the Karolinska Institute in Stockholm, Sweden, wrote, “This study is a very important contribution to our understanding of the underpinnings of migraine and may have wide-ranging effects in diagnosing and treating migraine if the results are replicated in further studies.”
More information
The U.S. National Institute of Neurological Disorders and Stroke has more about migraine. | https://iamtotallysick.com/womens-health/blood-test-might-one-day-help-spot-migraines-2/ |
The human brain is the most complex organ, and many mysteries remain unrevealed even after years of research. Researchers have spent decades trying to find the real cause of common brain disorders including depression, anxiety, ADHD, schizophrenia and so on. According to a postulated theory, supported by many clinical case studies and practical observations, individuals suffering from symptoms related to certain psychiatric and brain disorders have been found to show a chemical imbalance. Though the subject remains controversial, psychiatrists and clinicians propose an imbalance in brain chemistry as one of the reasons for mood disorders, substance abuse and learning problems.
Most of us know that messages in the brain are transmitted through neurotransmitters. Between two brain cells there is a junction known as a synapse. Chemicals in the brain facilitate easy signal transmission between two brain cells. Depletion or excess of these brain chemicals affects whether the neurotransmitters function as stimulants or inhibitors in the nervous system.
Some of the chemical imbalances that may be responsible for depression, anxiety and other psychiatric disorders include:
- Low levels of serotonin, dopamine, GABA and acetylcholine, which are all neurotransmitters.
- High levels of homocysteine, a neurotoxic chemical.
- Low levels of certain minerals such as potassium, zinc, magnesium and manganese.
- Deficiency of vitamins, especially vitamins B6 and B12, folic acid and vitamin C.
- Excess of stress hormones such as cortisol.
Causes of Chemical Imbalance in Body
Though the exact cause of brain disorders is still elusive, certain factors are widely believed to be responsible for changes and imbalances in brain chemistry.
- Genetics: clinicians have observed that people are at greater risk of developing psychiatric illnesses when a close family member suffers from a psychiatric illness.
- A developmental defect in the brain is also perceived as one of the reasons for chemical imbalance in the brain.
- Our thoughts and actions have been associated with changes in brain chemistry.
Symptoms of Chemical Imbalance in Brain
- Depression: a person suffering from a chemical imbalance in the brain may feel very emotional. He may feel depressed and sad. This is the result of low levels of serotonin, a chemical which gives a feeling of well-being, or of other neurotransmitters such as GABA and dopamine.
- Anxiety: this is also a symptom of imbalanced brain chemistry. During an anxious situation there are low levels of serotonin and GABA, while there are increased levels of stress hormones such as adrenaline and cortisol.
- Physical pain: a person suffering from a chemical imbalance often complains of pain in different parts of the body.
- Insomnia: sleeplessness is one of the symptoms of a chemical imbalance in the brain, as constant thoughts keep the person awake.
- People with a chemical imbalance in the brain easily fall prey to alcohol and other addictions.
- Lack of concentration: there is a loss of concentration, and the person is unable to recollect some past happenings. | http://www.healthiro.com/mental-health/causes-of-chemical-imbalance-in-brain.html |
Wed 03.09.22 / Aditi Peyush
There are, however, limitations to AI. Issues—such as bias and exclusion—arise as a result of the implementation and use of these systems.
Founded in 2016, the Cybersecurity and Privacy Institute at Northeastern University (CPI) is leading the charge to understand the impacts of AI and other emerging technologies. The institute is made up of researchers from Khoury College of Computer Sciences and the School of Law who collaborate with leading universities, tech companies, and defense contractors.
Recently, the CPI welcomed a new member to their team. Meet Amba Kak, senior research fellow at the CPI, who is also currently senior advisor on AI at the Federal Trade Commission (FTC).
Kak joined the CPI from her role as the director of global policy at AI Now, a research institute affiliated with New York University. At AI Now, Kak designed, developed, and executed the institute’s globally-oriented policy research agenda that focused on algorithmic accountability. And at the FTC, as senior advisor on AI, Kak is working with the agency’s chief technology officer and technologists as part of an informal AI strategy group. Kak also partners with policy experts across the agency to provide insight and advice on emerging technology issues.
Kak’s passion to translate scholarly research to policy action led her to the CPI. She called the CPI “an interdisciplinary community of researchers who are motivated to find policy windows for their research.” Intentionally using the word “windows,” Kak continued, “I really like that metaphor because it gets at those pivotal opportunities for translating bold ideas into action that may not have been visible—or at all possible—before.”
How did Kak find herself in tech policy? Back in law school, Kak was drawn to a course around internet regulation. Reminiscing, Kak said, “In a sea of decade-old statute and settled precedent, it was so motivating to learn about a field where the legal questions were mostly open. In fact, there was no consensus on the pre-legal normative question of ‘what type of futures do we want in the first place?’”
Kak didn’t stop there. She went on to study at the University of Oxford as a Rhodes Scholar, where she pursued advanced degrees in both law and the social science of the internet. On the latter degree, Kak explained, “I think legal training gives you a specific set of skills […], but I also think it can limit your lens and imagination in some ways. At the Oxford Internet Institute, I got to expand the tools of analysis I applied to any particular issue. That kind of interdisciplinary lens is especially valuable in this field.”
At the Oxford Internet Institute, Kak wrote her master's thesis on zero-rated plans. These plans, like Meta Connectivity's Free Basics, offer access to a restricted selection of websites for users without a data plan or at reduced rates. These plans sparked a policy debate about net neutrality and gave Kak a policy window for her research, she explained, "On one hand, people said, 'Everyone's going to get stuck in the walled garden, they're going to think that Facebook is the internet.' On the other hand, people said, 'Some internet is better than none.'" The ethnographic research that Kak conducted served as a reminder that "policy debates can often result in pitting abstract theoretical propositions against each other. I learned the value of centering research around the communities that are directly impacted by these developments—and to always leave room to develop and adjust our arguments to those learnings."
AI is the future, or so we’ve been told. Kak challenges this platitude by drawing attention to the bigger picture. “The futuristic rhetoric on AI can be a bit of a distraction. AI systems are intertwined with systemic and historical inequities, and using these systems in social contexts can disguise or obscure these larger issues.” Continuing, she said, “There’s also a fair amount of tech-solutionism in the field, as if there’s no problem that AI can’t fix—whether it’s poverty or mental health. It makes you wonder if AI is a solution in search of a problem.”
As a policy researcher focusing on technology regulation, one of her goals is to remind different audiences that these technologies are not immune to scrutiny. Kak said, “Many interested parties will project that technology trajectories are inevitable, so as an antidote we need to emphasize and remind people that technology must work for people and not the other way around.”
Why does accountability take the stage in the discussion of AI? Kak argues that it’s because the stakes are high. “AI systems—whether they’re used in private or public contexts—are having real, material impacts on people. They’re affecting their access and the quality of basic opportunities, services, and benefits,” she argued.
At the CPI, Kak finds herself among distinguished computer scientists who design and analyze complex systems. She’s excited to use this research to inform policy and understand what systems need to be put in place to prevent abuse of these technologies.
As she joins the team, Kak is “personally excited to learn from and grow with this community.” Between the FTC and the CPI, she’s got her hands full—but Kak continues to roll up her sleeves.
The unanswered questions and potential policy solutions drive her research and advocacy efforts. She explained, “I think we’ve moved in the last decade from abstract questions about AI ethics to ‘the moment for action is now.’ How can we practically hold companies and other actors accountable for their use of technology?”
Her advice to technologists? “To have a healthy amount of skepticism and humility about what tech can change on its own—divorced from broader social context and histories.” This means making room for other kinds of expertise and knowledge. She concluded, “I think we need more technologists to cede space for broader forms of expertise and deliberate over the impacts of technologies before—not after—they are developed.”
| https://www.khoury.northeastern.edu/accountable-ai-senior-policy-researcher-amba-kak-brings-new-expertise-to-the-cybersecurity-and-privacy-institute/
The Humane Society of Indianapolis was formed in 1905 by a group of concerned citizens who felt the need for an organized society to prevent cruelty to animals, children, and others who could not speak for themselves in the Indianapolis community. During its early years, the society functioned with limited visibility, low funding, and limited support from Indianapolis society. Its future remained uncertain until a generous bequest from the estate of Mary Powell Crume provided a solid financial foundation for the future of the Humane Society of Indianapolis.
In 1967, the generous donation from the Crume estate provided the means to purchase the grounds of the Indiana Society for the Prevention of Cruelty to Animals, located on North Michigan Road in Indianapolis. The purchase of these grounds allowed the society to construct its first shelter facility to serve the Indianapolis community. At this point, the society decided to focus its efforts solely on animal welfare and began developing kennel operations, animal adoption and education programs, as well as rescue and investigation services.
This Indianapolis non-profit organization continued to serve the Indianapolis area in the same manner until 1990, when the shelter’s overall mission was redirected alongside the building of a new shelter facility. The new general mission called for providing shelter to lost and homeless animals, developing community education programs on the humane care and treatment of all animals, and advocating for animal welfare.
Today, the Humane Society of Indianapolis continues to run under the same principles, focusing on the humane treatment of all animals. On average, over 13,000 animals arrive at the shelter every year, with approximately 57 percent of the animals finding new and loving homes in 2006. The shelter has continued to expand and enhance its services, including 24-hour animal rescue, dog obedience classes, animal behavior seminars, and a wide variety of resources for animal-related problems. A cooperative was also formed with Indianapolis veterinarians to provide complete health care, and the organization has made great strides in the fight against overpopulation through several spay and neuter programs.
Due to generous gifts from the community, the Humane Society of Indianapolis operates on annual revenue of nearly 2 million dollars. This support is made possible through donations, fees for services provided, trusts, and an endowment. The shelter receives no government support and is truly dependent on the generosity and support of the local community.
The shelter works in as many ways as possible to find loving homes or alternative situations for the animals in its care. Be it adoption, rescue center placement, or temporary behavior rehabilitation in a foster home, a great deal of effort is put forth to ensure that every possible option for care is explored.
Comprehensive and long-term solutions for the animals of Indianapolis are the primary goals of the Humane Society of Indianapolis. The group pursues these goals through vaccinations, spaying and neutering, microchip ID placement to aid in reuniting animals with their owners, behavior assessment and enrichment, essential medical care, and the basics: food, shelter, exercise, and love. Care and support continue once an animal is placed in its new home; pet counselors follow up with families regarding animal behavior and training problems, and the counselors ensure the pet is adjusting properly to the family and their lifestyle.
Informational video about the Humane Society of Indianapolis
At the Humane Society of Indianapolis, the various workers, veterinarians, trainers, and dedicated volunteers are available to ensure you find the best animal companion for your family, and ensure everyone can successfully experience the joy animal ownership can bring.
For more information on the Humane Society of Indianapolis, please visit the organization’s homepage. | http://indianapolis-indiana.funcityfinder.com/2008/09/11/indianapolis-humane-society/ |
I’ve been seeing a lot of posts that jump to the conclusion that everyone would be safe from ransomware if they had patched all of their systems. Does this sound too good to be true? Yes. Does it sound realistic? No. There’s a lot more to it than that.
Firstly, not all organisations have the budget or resources to spend the entire day patching their systems, making sure their AV is up to date, checking their IDS, responding to every single incident, and remediating every single vulnerability as it is discovered. These things take time, money and resources, and must be addressed in such a way that the rest of the business can continue to function with minimal impact. It’s not a perfect choice for some; however, the security function is a risk function and must be seen as an enabler, dynamic enough to work flexibly within differing budget and resource constraints.
Secondly, there are no absolutes, and there is no one-size-fits-all, silver-bullet solution to the problem of ransomware.
A risk is best mitigated when multiple controls are implemented which affect the different variables of the risk, such as the vulnerability, the threat, the severity of impact, and the frequency of a risk event.
Let’s look at these risk variables in the context of the recent ransomware outbreaks.
1. The Vulnerability – The operating system. Obviously, patching is the best mitigation. But let’s face it: when was the last time you walked into an organisation and found every single system patched up to date? I don’t think this is possible, especially in larger organisations. Furthermore, patching only addresses known bugs and not the unknown ones, so this approach, whilst effective, can be somewhat limited.
2. Impact / Severity – Loss of critical data, and of the business processes which rely on that data. Regular backups and well-documented, up-to-date business continuity processes which are properly understood and followed by staff are the best way to mitigate the impact.
3. The Frequency – How often outbreaks occur commonly depends on users’ awareness of how to identify the various forms of threats and respond in accordance with known processes. A user armed with the knowledge of how to respond is a good way to mitigate the frequency, and this requires ongoing training programs in cyber security awareness.
But what I want to talk about in this article is The Threat.
If we imagine the threat as some malicious party with a malicious intent or other motivation, as I described in Building Hacker Personas, then we are able to better understand the source of the threat. These threats, which are best dealt with in the physical world by law enforcement, can also be addressed in the virtual world by anyone armed with a little bit of technical knowledge. The further we can mitigate these threats and their source, the less expensive all other strategies become.
Almost all ransomware and malware can be traced back to some source IP address or address range of C&C servers or email servers. And there are a number of open-source IP lists which you can apply to your firewall to block them.
1. Known Compromised Hosts: This ruleset is compiled from a number of sources. Its contents are hosts that are known to be compromised by bots, phishing sites, etc., or known to be spewing hostile traffic. These are not your everyday infected-and-sending-a-bit-of-spam hosts; these are significantly infected and hostile hosts. Sources include: Brute Force Blocker.
2. Firewall Block Rules: This dynamic IP list contains block rules from Spam nets identified by Spamhaus (www.spamhaus.org), Top Attackers listed by DShield (www.dshield.org), and also known ransomware C&C servers at Abuse.ch.
How do I create a Dynamic Firewall Rule in pfSense?
1. In the pfSense web interface, navigate to Firewall > Aliases.
2. Click Add to create an alias. Give the alias a name and a description (optional), and select the Type as URL Table (IPs). Paste in the URL of the IP blocklist mentioned above, and select the update interval as every 1 day in the pull-down menu.
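For firewalls without pfSense’s built-in URL Table aliases, the same daily-refresh idea can be scripted. The sketch below is a minimal illustration in Python, assuming a hypothetical blocklist URL (substitute one of the lists above); the function names are our own, and the output is a set of iptables commands rather than a drop-in pfSense configuration.

```python
import ipaddress
import urllib.request

# Hypothetical blocklist URL -- substitute a plain-text IP list you trust.
BLOCKLIST_URL = "https://example.com/compromised-ips.txt"

def fetch_blocklist(url):
    """Download a plain-text blocklist, one IP or CIDR range per line."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read().decode("utf-8", errors="replace").splitlines()

def valid_networks(lines):
    """Yield only the lines that parse as an IPv4/IPv6 address or network."""
    for line in lines:
        entry = line.split("#")[0].strip()  # drop comments and whitespace
        if not entry:
            continue
        try:
            yield ipaddress.ip_network(entry, strict=False)
        except ValueError:
            continue  # skip malformed entries rather than break the rule set

def iptables_rules(networks, chain="INPUT"):
    """Render one DROP rule per blocked network for the given chain."""
    return [f"iptables -A {chain} -s {net} -j DROP" for net in networks]

if __name__ == "__main__":
    nets = list(valid_networks(fetch_blocklist(BLOCKLIST_URL)))
    for rule in iptables_rules(nets):
        print(rule)
```

Scheduled once a day from cron, this mirrors the 1-day update interval configured in the alias above.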
There you have it. Dynamic firewall rules which update automatically on a daily basis to help mitigate the threat of ransomware and other malicious actors by blocking their source. For other solutions which help with the fight against ransomware, Core Sentinel can help. For over 15 years we have been successfully working to improve the security posture of various organisations. Call today for a free quote. | https://www.coresentinel.com/ransomware-mitigating-threat/ |
Refractory materials are materials that resist decomposition due to heat, pressure, or chemical attack. They have high strength and durability at high temperatures. They can be inorganic, crystalline, porous, or heterogeneous. Read on to learn about the benefits of refractory materials for various applications.
refractory ceramics
Refractory ceramics are used in a variety of industrial applications. They are often used for building kilns and ovens, and they are also produced as building blocks, sheets, and blankets. They make good insulation materials at high temperatures. Typically, refractory materials come in brick and block form and are available in a wide range of shapes.
Most refractory ceramics are made of a base material such as alumina, magnesia, or aluminosilicates. However, other compositions are available that provide superior properties in certain applications. Most refractory ceramics are highly-densified structural components and containers, but some materials are porous, making them good for insulation and filtration.
In addition to being highly-resistant to heat and pressure, refractory ceramics also exhibit excellent resistance to oxidation. These materials are used in a variety of industries including glass manufacturing, solid-oxide fuel cells, nuclear reactors, and automotive components. They are also used for protective coatings and industrial tooling.
aluminum nitride ceramic
Aluminum nitride ceramic refractories are made from a composite material of aluminum nitride and other filler materials. The filler materials may be reactive or thermodynamically stable. They also may not act as nucleation sites for the aluminum nitride oxidation reaction. The size of the filler materials and the amount of aluminum nitride in a composite body are both dependent on the desired use.
Aluminum nitride matrix ceramic composite bodies are especially well-suited for refractory environments. These ceramic composites embed the filler material in an aluminum nitride matrix to form a synergistic material. Moreover, these materials can be tailored for a specific application. They can be used in glass manufacturing processes and in the continuous manufacture of metals and glasses.
The process of forming aluminum nitride ceramic refractories is based on the directed oxidation of an aluminum parent metal. In this process, the parent metal undergoes an oxidation reaction in the presence of nitrogen and forms a polycrystalline aluminum nitride ceramic material. The growing aluminum nitride matrix advances toward the filler material and encases it.
refractory material
A refractory material is a solid substance that resists decomposition under high temperatures and pressure. The material is resistant to chemical attack and can be organic, inorganic, porous, or heterogeneous. In addition to its ability to resist heat and pressure, refractory materials retain their strength even under extremely high temperatures.
Refractory material is a key component of high-temperature furnaces and boilers. These furnaces are subjected to high temperatures, high pressure, chemicals, and physical wear. Using a refractory in a boiler will prevent damage and premature failure of the boiler. Furthermore, it will ensure the efficiency of the boiler by preventing unnecessary interruptions in steam generation.
Refractory materials are generally classified based on their temperature resistance. Higher-temperature materials are suitable for high-temperature applications, while lower-temperature materials are suitable for lower-temperature environments. Lower-temperature materials are also suitable for use as backup linings for steel and glass.
high temperature resistant material
High temperature resistant refractory materials are needed in a variety of industries, including iron and steel production. They must have high thermal insulation properties and low density. They must also be impermeable and easy to install. In some cases, ceramic fibers are used in this application.
Refractory insulating materials are usually composed of two parts: a solid granular component and an initially wet component. The dry component is a standard high-temperature-resistant castable refractory material with a density of 150-200 lb/ft3. The wet component is typically a mixture of water and silica. The two components are combined during the casting process, and the refractory insulating composition is subsequently formed.
Refractory materials are used in furnaces and other high temperature applications. They provide structural integrity and strength and resist cracking and explosion during heating. They also resist corrosion and oxidation during use. The dry components typically comprise 30-60% alumina and 40% silica. Other refractory materials can contain 0.5-5% magnesia or alkali metal oxides. | https://www.eticeramics.com/benefits-of-refractory-materials/ |
FLEMINGTON, N.J. -- Results of a new national survey of physicians confirm that an overwhelming majority of U.S. physicians believe that the current flu vaccine shortage is a crisis situation.
The national e-survey was conducted by Muhlenberg College Institute of Public Opinion (MCIPO) and HCD Research from October 19-20 among a nationally representative sample of 600 primary care physicians.
"We conducted the national survey to obtain physicians' assessment of the current flu vaccine shortage and gauge the seriousness of the situation," noted Glenn Kessler, co-founder and managing partner, HCD Research. "Our results indicate that not only do nearly all physicians view this as a crisis situation, almost half believe that it is serious or significant in nature," explained Kessler.
Among the findings:
-- Nearly all physicians (95 percent) consider the current flu vaccine shortage a crisis situation, with 40 percent describing it as "serious" or "significant."
-- Approximately 60 percent report that patients who normally do not ask for the flu shot are requesting the flu vaccination at this time.
-- A vast majority of physicians (74 percent) reported that it was difficult to obtain adequate quantities of vaccine for patients who meet the recommended guidelines for administration of the vaccine. | https://www.infectioncontroltoday.com/view/us-physicians-believe-flu-vaccine-shortage-crisis-situation-vast-majority-report |
Young adults are seen as agents of social change. They are pioneers in the development of a lifestyle that responds to the latest cultural, economic and social changes in society. With the “Young Adult Survey Switzerland” (YASS) of the Federal Youth Surveys ch-x, an instrument was developed to record change and stability in the attitudes and values of this generation, on the threshold between adolescence and adulthood, through repeated surveys with the same questionnaire.
The focus is on the following topics:
The aim of the “Young Adult Survey Switzerland” of the Federal Youth Surveys ch-x is to obtain an empirically and interdisciplinarily supported insight into the educational biographies, living conditions, and social and political orientations of young adults in Switzerland, to record possible changes, and thus to show trends and tendencies among 19-year-old Swiss.
YASS has its own reporting platform. Through a special YASS publication series, the current findings of the surveys are published in several languages at regular intervals. The first report volume presents the goals and methods of the new project along with some exemplary results of the first survey. Volume 2, published in 2019, allows concrete comparisons between two survey points and offers explanations for them. Following tradition, the third volume, published in 2022, presents selected results from the comparison of three surveys on the above-mentioned topics and describes changes in the lives of young adults, with particular attention to developments during the Corona pandemic. Volumes 1, 2, and 3 can be accessed here and on the ch-x website (www.chx.ch).
Press release: Young Adult Survey Switzerland (YASS), Volume 3
Press conference of the Federal Youth Surveys ch-x, 3 March 2022, at the media centre of the Federal Palace in Bern – recording: YouTube
Media reports on the Young Adult Survey Switzerland (YASS) Volume 3 can be found here.
For more information, please visit: www.chx.ch/YASS
Until the middle of the 20th century, the Pedagogical Recruit Examinations (PRP) were the instrument for obtaining information about the nation’s youth by means of a few school performance measures and for forming a picture of elementary school education levels in the cantons. As social science research in Switzerland gained breadth beginning in the 1960s, ideas emerged about how the PRP could be used anew as a tool for broad-based youth research. Finally, at the beginning of the new millennium, with the transition to surveys of all male conscripts in army recruitment centers and the introduction of an additional sample of young women representative of Switzerland, the PRPs became the Young Adult Survey Switzerland.
Since 2010, young adults aged 19 have been surveyed on the same topics. The surveys are always conducted over two calendar years. This rhythm enables youth monitoring, which was previously lacking in Switzerland in this form.
International comparison
YASS is unique in its design. A comparison with other countries shows that there are hardly any comparable multi-themed youth surveys that are broadly based and designed for regular repetition. Also well known in Switzerland are the “German Shell Youth Studies”, which have been conducted regularly since 1953 and publish a status report every 4-6 years. Moreover, most youth studies cover the 10- to 18-year-old generation, whereas YASS covers the threshold-age generation of 19- and 20-year-olds (Huber/Hurrelmann, YASS volume 1, 2016, p. 25ff).
Survey rhythm
The cyclically repeated representative surveys of the Federal Youth Surveys ch-x prove to be an ideal instrument for such “measurements” on the pulse of young adults of both genders. The surveys are always conducted over two calendar years. The first YASS survey took place in 2010/2011. A first repeat occurred in 2014/2015. The third survey cycle occurred in 2018/2019. Due to the Covid-19 pandemic, there will be a delay, so the next data will be collected in 2024/2025.
Respondent populations
Surveys are conducted at the time of enlistment at each of the six Swiss recruitment centers, thus capturing the bulk of Swiss men of draft age. By means of a nationally representative supplementary sample, approximately 3,000 randomly selected women aged 19 are also surveyed at their place of residence or, since 2018, via the Internet, which corresponds to about 5 percent of the 19-year-old female population in Switzerland. The response rates of around 60-90 percent in each of the previous surveys can be attributed to the quality of the questionnaire and to personal contact by the roughly 100 ch-x staff members.
Survey instruments
Since the survey instruments are used repeatedly in the four-year cycle, special consideration was given to ensuring that the questions selected are both durable over the longer term and sensitive to possible changes. The selection of questions was guided by existing international studies in order to achieve possible comparability of results. However, new questions were also constructed based on the theoretical model. In constructing the questions, the current state of research in the survey literature was taken into account. In addition, the questionnaire was repeatedly modified based on the results of various pretests.
Further information on the various pretests and the translation process can be found in the final report on instrument development (Huber et al., 2011; cf. www.chx.ch/YASS).
Expected added value
By conducting largely identical surveys every 4 years, detailed and comparable data on the situation and development of young adults will be collected, which has been lacking in this form in Switzerland until now. This rhythm enables a permanent monitoring, which offers several advantages: With the instrument for the permanent observation of living conditions as well as social and political orientations of young adults, changes can be described retrospectively on the one hand and emerging trends can be pointed out on the other hand. It is obvious that the results of this long-term survey will only gain in significance cumulatively, i.e. with each subsequent survey. The identification of trends and tendencies is therefore the primary goal of the project. A major advantage lies in the large sample of young Swiss adults, which covers almost all educational and income levels. Analyses and statements are possible down to the level of individual cantons and political districts. In addition, the large sample size makes it possible to specifically analyze unusual groups (so-called problem groups with regard to drugs, propensity to violence, lack of education, etc.) and the typical transition problems from adolescent to adult (transition or rite de passage research). In this way, the results contribute to policy-making and to the improvement of services for young adults.
YASS Research Team
After a public call for tenders, a team of scientists from the University of Teacher Education Zug (leadership) and the Universities of Bern, Zurich and Geneva was recruited for the long-term project. | https://bildungsmanagement.net/en/forschung/yass/ |
State by State: Rainwater Harvesting Legislation
Rainwater harvesting is the accumulation and deposition of rainwater for reuse before it reaches the aquifer. Uses include water for gardens, water for livestock, water for irrigation, and indoor heating for houses, etc.
In many places, the collected water is simply redirected to a deep pit for percolation. The harvested water can be used as drinking water as well as for storage and other purposes like irrigation. Opponents of collecting rainwater argue that if rain is captured, less water will flow to the streams where it is needed for wells and springs. See the state-by-state laws on rainwater collecting. | https://homegardendiy.com/state-by-state-rainwater-harvesting-legislations/
Good crisis management calls for open, honest communication with various target audiences.
During a crisis, however, this is most difficult to accomplish. As human beings, we usually seek ways to avoid or soften painful experiences. It is helpful to recognize some specific reasons people use to discourage open communication. These reasons are all logical, reasonable, and probably valid to some degree. Nevertheless, unless you deal with them effectively, they will become obstacles, making it extremely difficult to resolve the crisis.
- We need to assemble all the facts – We do need all the facts; that must be a priority. However, we may need to release some information initially and be honest about the fact that we still are gathering information.
- We must avoid panic – One of the best ways to avoid panic is to control the flow of information. We can establish and maintain our credibility as an information source only when we communicate openly and honestly.
- We have no spokesperson who can respond – Crisis communication planning will identify spokespersons. The head of the organization is an appropriate general spokesperson for most crises.
- There are legal issues involved – Legal issues often are involved in crises. Management must be willing to balance legal and public relations issues. The long-term health of an organization depends not only on a legal resolution of a specific issue, but also on the effective resolution of a crisis in the “court of public opinion.”
- We need to protect our organization’s image – Open and prompt communication is essential to protect our image with the media and the general public.
- We don’t know yet how to respond to the crisis – It may in fact take some time to develop a solution to the crisis. Part of the challenge and opportunity of the crisis is to show those affected that the organization is using a reasonable, caring process to resolve the crisis. We can show this process best when we are willing to communicate openly.
- There is proprietary information involved that we cannot divulge – There may be information we cannot divulge, especially if there are consequences for a particular member of the organization. We need to weigh our decisions carefully, point by point, to determine if such a situation really exists, or whether we simply are making excuses. We need to remember that public safety must be a paramount concern. | https://clarkcommunication.com/have-honest-communication-with-target-audiences-in-a-crisis/ |
The purpose of this research critique is to show how a quantitative study is effectively critiqued. The critiqued study examines the increased use of catheters in health care organizations and the resulting infections. In the US, for instance, CAUTIs account for around 34% of all health care-associated infections. These infections are associated with high health care costs and excess morbidity. Even though there are CAUTI prevention practices in place, adherence to them has not been successful. This essay will explain the protection of human participants, data collection, data management and analysis, findings and interpretation of findings including implications for practice and future research, and conclusion.
Protection of Human Participants
As noted by Emanuel (2008), research studies that involve human participants are required to adhere to certain norms to make them ethical and legal. The major expected aspects are obtaining informed consent, ensuring voluntary participation, and securing institutional review board approval before the sample is selected. In this case, the researchers adhered to these norms, with several benefits. To start with, by obtaining informed consent, it was ensured that the participants knew what they were engaging in. The study participants did not agree to a study they did not understand; they understood the process and the part they were to play. Further, by ensuring the anonymity of the participants (the research was conducted on nurses), the researchers were able to get more detailed and honest responses. Participation was also voluntary, which encouraged responding. Lastly, the Colorado Multiple Institutional Review Board gave approval by exempting the study from human subjects’ research oversight (Fink et al, 2012).
Data Collection
The dependent and independent variables in the study are very clear. The researchers wanted to determine the care practices in place for CAUTI prevention. As a result, the dependent variable was the CAUTI prevention. The independent variables were the care practices in three areas namely equipment and alternatives and insertion and maintenance techniques; personnel, policies, training, and education; and documentation, surveillance, and removal reminders. Data was collected through electronic surveying of NICHE hospital nurse coordinators on issues regarding IUC practices. Data was collected in December 2009 with 20 NICHE member hospital coordinators invited to complete the survey with 233 more invited to complete the survey in June 2010. The respondents had to gather the information required in completing the surveys from various sources including nurses from the units focusing on NICHE activities, purchasing staff, infection preventionists, local educators, and clinical informatics staff. The researchers used the Survey Monkey methodology to collect data. This was agreed to because it was considered the most appropriate in the current study and had been used in previous studies satisfactorily. The data was collected from December 2009 to June 2010. The respondents started by filling the survey forms after gathering adequate and relevant data from the identified personnel. The survey was completed in ten minutes. The coordinators were also to send copies of their hospital’s IUC placement, management, and CAUTI prevention procedure and policy (Fink et al, 2012).
Data Management and Analysis
After the data was collected, it was managed and analyzed using SPSS version 19. The demographic data and survey items were summarized using descriptive statistics and tests of difference and association, with α set at 0.05. The authors describe maintaining a paper trail of critical decisions made during the analysis of the data (Abbott, 2014). Further, they used statistical software to ensure the accuracy of the analysis. The researchers also had measures in place to prevent research bias: they analyzed the data independently and later compared their analyses, with the help of experts in data analysis.
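The study itself was analyzed in SPSS. Purely as an illustration of the same two steps, descriptive statistics followed by a test of association with α set at 0.05, here is a minimal Python sketch; the data frame and variable names are hypothetical, not the study’s actual data.

```python
import pandas as pd
from scipy.stats import chi2_contingency

ALPHA = 0.05  # significance level, matching the study

# Hypothetical survey extract: hospital size vs. use of removal reminders.
df = pd.DataFrame({
    "hospital_size": ["small", "small", "large", "large", "large", "small"],
    "uses_reminders": ["yes", "no", "yes", "yes", "no", "no"],
})

# Descriptive statistics summarizing the survey items.
print(df.describe(include="all"))

# Chi-square test of association between the two categorical variables.
table = pd.crosstab(df["hospital_size"], df["uses_reminders"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}, dof = {dof}")
print("significant" if p_value < ALPHA else "not significant", f"at alpha = {ALPHA}")
```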
Findings / Interpretation of Findings: Implications for Practice and Future Research
From the analyzed data, it was found that even though there are CAUTI prevention practices in place in the NICHE hospitals examined, with most of the practices aligning with evidence-based guidelines, there is considerable heterogeneity of practices, which calls for improvement and standardization. Even though house-wide use of silver-coated catheters is noted to have equivocal evidence support, it is used by fewer hospitals because of the excess costs involved. Other prevention practices used are removal triggers, such as reminders and stop orders, and nursing-driven catheter removal protocols. Even though these practices had beneficial results, they were not used by all hospitals. Since the current findings agreed with earlier studies, it can be argued that they are valid and accurate. The implication of the study is therefore that hospitals should implement local procedures and policies that incorporate evidence-based guidelines. The designed and implemented procedures and policies should achieve regulatory compliance and also standardize practice for providers. One of the major limitations of the study was that it focused on nursing practice, with physician practice out of the study’s scope. Future research should thus not overlook physician training and education, since physicians are also involved in catheter placement. Further, the study used NICHE hospitals, which are not-for-profit hospitals and thus not representative of the general hospital population. The study also relied on self-reports from NICHE hospital coordinators, who might have lacked perfect knowledge of CAUTI prevention (Fink et al, 2012).
Conclusion
Even with increased catheter infections, there are prevention measures that can reduce their frequency if well adhered to. The preventive mechanisms should be standardized and aligned with hospital policies and procedures so that all physicians are encouraged to adhere to them. This study shows how CAUTI can be prevented if corrective measures are taken. There should be policies in place, and medical practitioners should be educated and trained on the same.
References
Abbott, H. (2014). Foundations for operating department practice: Essential theory for practice. London: Open University Press.
Emanuel, E. J. (2008). The Oxford textbook of clinical research ethics. Oxford: Oxford University Press.
Fink, R., Gilmartin, H., Richard, A., Capezuti, E., Boltz, M., & Wald, H. (2012). Indwelling urinary catheter management and catheter-associated urinary tract infection prevention practices in Nurses Improving Care for Health system Elders hospitals. American Journal of Infection Control, 1-6. Retrieved from http://www.ucdenver.edu/academics/colleges/medicalschool/departments/medicine/hcpr/cauti/documents/TeamPublications/Indwelling%20urinary%20catheter%20management%20and%20catheter-associated%20urinary%20tract%20infection%20prevention%20practices.pdf. | https://bestcustomessaywriting.com/tag/research-critique/ |
Once again, our centre has achieved amazing results in the LAMDA Examinations. We had 100% passes with a Distinction rate of 93%: 64 out of 69 students who took the examination at our centre achieved a Distinction. The highest score was 96 out of a possible 100, and we are happy to say that 4 of our students from various grades achieved it. Congratulations to all of our students, parents and teachers for putting in the effort needed to achieve such amazing results.
In the LAMDA Speaking in Public programme, students are taught to write and deliver their own speeches using proper speech-writing techniques as well as verbal and non-verbal techniques to deliver the speech in front of an audience. There is also a segment on ‘Discussion and Conversation’ which teaches students to interact on a one-to-one basis and to hold a conversation on topics that interest them. This segment of the programme has helped many of our students achieve high scores in English Oral Examinations in school.
There is still time to enrol your child in Public Speaking classes for Semester 1. We have very few places left. If you miss registering for Semester 1, the next intake will be in July 2017. We do not take in any students once classes start. The remaining spaces are mainly for lower primary students; all other classes are full. To find out more about the LAMDA Speaking in Public programme, please go to this link:
https://www.lamda.org.uk/examinations/all-examinations/communications-examinations/speaking-in-public
Our programme is 16 weeks long with an In-house test at the end of semester. Students who are keen to acquire an International Certificate which is recognised by MOE can register for the LAMDA exam. The examiner will fly in from London to conduct the examinations at our centre.
All of our trainers are specially selected and trained by our founder, Jackeline Carter. Every year, many adults apply to teach at J Carter Centre, but only a select few who meet the stringent requirements set forth by Jackeline are chosen to undergo training and given an opportunity to teach at J Carter Centre. Trainers’ performances are constantly reviewed, and those who do not maintain the required standard of teaching will no longer be offered assignments at J Carter. This ensures that the quality of teaching is not affected and that every student receives the attention needed, even in classes not taught by our founder.
If you would like to find out more about our programmes or enrol your child in a Semester 1 class, please call our centre at 67372700 and our centre manager will recommend the correct grade and programme for your child. You can also visit our website at http://www.jcartercentre.com for more information.
The vacancies available for Semester 1 are as follows:
Entry 2 – Start Date: Wednesday, 18 January 2017 – 5 places left.
Entry 1 – Start Date: Thursday, 26 January 2017 – 1 place left.
Grade 1 – Start Date: Thursday, 26 January 2017 – 4 places left.
Grade 3 – Start Date: Thursday, 26 January 2017 – 1 place left
Students who enrol in weekday classes will enjoy a $100 weekday discount. | https://jcartercentre.blog/2017/01/09/results-for-november-2016-lamda-speaking-in-public-examinations/?replytocom=319
In his critically acclaimed 1968 paper “Orthomolecular Psychiatry,” published in the journal Science, Dr. Linus Pauling introduced the term orthomolecular. This followed the groundbreaking work that he and Dr. Harvey Itano carried out in 1949, when they discovered that the cause of sickle-cell anemia was an abnormal molecule, ushering in the era of molecular medicine. His hemoglobin research led to revelations about the role of enzymes in brain function. When he discovered that psychiatrists Abram Hoffer and Humphry Osmond were successfully treating schizophrenia with niacin, the B vitamin that prevents pellagra, he gained support for his claim that varying the concentrations of naturally occurring bodily substances can aid in the prevention and treatment of both physical and mental illness. In other words, it is a matter of having the right molecules in the right amounts. The treatment of diabetes with insulin and the prevention of goiter with iodine are further examples of orthomolecular medicine. Although some orthomolecular scientists view their approach as an extension of traditional western medicine, and others as an alternative genre, I am including it here because, for many of us, its wisdom remains hidden behind the conventional pecuniary curtain.
Neural Environment
According to orthomolecular psychiatry, healthy mental functioning requires the existence of a particular blend of substances in the neural environment. This involves the presence in specific concentrations of molecules of numerous substances in the brain. Each individual requires differing amounts of these substances, and what each of us needs usually changes over time. Our environment and genetics play a role in the production and processing of these needed substances. Since the late eighteenth century, scientists have observed and documented various ways the mind is determined by its neuromolecular environment. The mind-altering effects of nitrous oxide, hallucinogenic drugs, and general anesthesia demonstrate this fact (due to their molecular influences on the mind).
Molecules and Mental Illness
Mental illness, usually connected to physical illness, can result from low brain concentrations of thiamine (B1), nicotinic acid or nicotinamide (B3), pyridoxine (B6), cyanocobalamin (B12), biotin (H), ascorbic acid (C), and folic acid. Mental health and behavior are also influenced by changes in the brain concentrations of other naturally occurring substances such as L(+)-glutamic acid, uric acid, and gamma-aminobutyric acid. A comprehensive review would fill the shelves of an evolutionary healing library. My hope, through this post, is to open the curtain just enough to awaken others to this simple, natural light. If there is even a chance that we can prevent, adjunctively support, or treat mental illness through such natural solutions, then shouldn’t we at least give them a try before using the more invasive methods?
Neurologic Uniqueness
Since, as physical beings, our brains are all remarkably unique, the concentrations of substances needed to generate health and healing vary widely. The target ranges, those considered prerequisites of good health, are not necessarily healthy for everyone. These ranges, based on established practice guidelines, may, if strictly adhered to, actually create or exacerbate illness. For example, children with phenylketonuria, a genetic disorder of metabolism in which there is a deficiency of the enzyme needed to convert the amino acid phenylalanine into tyrosine, develop an accumulation of phenylalanine in the bodily fluids, which causes varying degrees of mental deficiency. The natural solution lies in a diet containing reduced amounts of phenylalanine. A “normal” amount is actually too much for these children, since their genetics prevent them from metabolizing it properly. For them, a decrease in the consumption of phenylalanine will result in an approximation of the optimum concentrations, leading to the alleviation of the disease (both mental and physical). The desired amount, for them, is significantly less than the standardized norm. Healthy amounts of the needed substances, for any of us, may be different from those obtained through our regular diet and genetics (too much or too little), and may vary widely from person to person. They may also deviate significantly from medically established target ranges. Sometimes the only way to know for sure is to vary the concentration of the substances and then observe what happens. This can both confirm a diagnosis and point to a course of treatment.
Additionally, molecules of harmful substances can interfere with the optimal concentrations of needed ones, and can impact each person differently. Such potentially damaging substances could derive from household cleaning products, cosmetics, fragrances, personal care products (shampoo, body wash, toothpaste), and water, to name a few. Besides blocking or inhibiting healthy levels of needed substances, they can trigger things like allergies, infections, and even cancer. It is important to learn how much molecules matter.
Established Habits of Thought, Perception, and Emotion
After understanding a problem, and creating the optimal molecular environment, we may have alleviated the physiological contributions to the symptoms. However, we still have the habituated cognitive, perceptual, and emotional patterns to deal with. If we pull in my previous blog post, we could see the problem as a healer trying to be born. If we consider an orthomolecular solution, we might vary the concentrations of niacin and vitamin C. If this helps create the needed molecular environment, then we would still have the cognitive, perceptual, and emotional habits to unravel and connect with truth. With the body restored to good health, after removing the physiological contributions to the problem; it becomes easier to break such mental habits. For example, if your aunt Suzy had symptoms of schizophrenia, learned to think of herself and the world as judgmental and terrifying, and developed patterns and triggers of profound anxiety related to these beliefs; then when she is feeling better due to the orthomolecular changes, she will be better able to learn new habits and contexts of thought, perception, and emotion. Her unhealthful physiological environment will no longer be creating or supporting such mental processes. With self acceptance and proper guidance; following the orthomolecular changes; the healer trapped within her can be born.
Forecasting
My next post will consider an Ayurvedic approach to mental health, which asserts that only a sound mind can keep the body in sound health.
Have you ever used vitamins, enzymes, or amino acids to alleviate symptoms of mental or emotional distress? What have you found most useful? Have you ever known anyone who seemed to be severely mentally ill and then recovered naturally? Do you know how they did it? | https://alternativeshrink.com/2015/04/21/alternative-approaches-to-mental-health-an-orthomolecular-window/ |
By combining a demanding academic program with opportunities for personal growth, we equip students with the tools necessary for success. Topeka Collegiate students learn to approach challenges with confidence. Our emphasis on “Beyond the Book” experiential learning taps students’ genuine intellectual curiosity – and allows them to think creatively and critically.
For 35 years, Topeka Collegiate has been providing exemplary education to hundreds of students. Families choose a Topeka Collegiate education because we focus on academic excellence. Each member of our faculty sees the limitless learning potential of each student. We prepare students by providing a strong foundation for educational success that will benefit them for life.
In addition to academic preparation, Topeka Collegiate provides Beyond the Book experiences for its students, which instill a love of learning, humanitarian ideals, and the confidence to take on any challenge. Topeka Collegiate provides experiences and opportunities to enhance each student's growth academically, physically, and socially.
Our faculty bring out the best in our students, using community resources, field trips, technology, and guest speakers to enrich the whole child.
Our school provides each child with many opportunities to explore, excel, and lead. Each child is confidently challenged to reach his or her full potential at Topeka Collegiate. | https://www.topekacollegiate.org/campus-life/students.cfm |
Postoperative discomfort in oral soft tissue surgery: a comparative perspective evaluation of Nd:YAG Laser, quantic molecular resonance scalpel and cold blade.
The aim of this paper was to compare pain, health-related quality of life (HRQoL) and the need for painkillers during the postoperative course of oral soft tissue surgery performed with neodymium-doped yttrium aluminum garnet (Nd:YAG) laser, quantic molecular resonance (QMR) scalpel and cold blade. One hundred and sixty-three similar surgical interventions were subclassified as follows: group 1 (G1), 77 cases performed with Nd:YAG laser; group 2 (G2), 45 cases performed with QMR scalpel; and group 3 (G3), 41 cases performed with cold blade. Pain was evaluated using a Visual Analogue Scale (VAS), a Numeric Rating Scale (NRS) and a Verbal Rating Scale-6 (VRS-6) on the day of surgery (day 0) and at 1, 3 and 7 days after surgery. HRQoL was evaluated on day 7 using a questionnaire with a 0-45 score range. On day 7, painkillers taken were recorded. No statistically significant differences could be highlighted in the VAS and NRS scores at days 1, 3 and 7. A trend toward significance at day 0 was evident, with VAS and NRS average scores lower in G1 than in G2 and G3. With regard to VRS-6, the scores were statistically lower in G1 than in G2 and G3 at days 1 and 3. The HRQoL score in G1 was statistically lower than in G3. Our study demonstrates that the use of new technologies in oral soft tissue surgery is associated with a reduction of postoperative discomfort. The better HRQoL and the lower postoperative pain observed in laser-treated patients may be related to a possible bio-modulating effect of the laser.
| |
Parsons describes what it’s like to see BSA Hospital staff pull up and unload close to 150 pairs of shoes for his non-profit.
“These are vital for us. We will probably start getting orders beginning of next week from the schools. We’ll give away probably in the ballpark of 170 to 300 pairs of shoes just this month alone,” said Parsons.
For the past month, BSA employees chose cards detailing the shoe sizes needed from a tree set up inside the hospital.
“They go and collect a tag. It tells them what size shoe is needed, if it’s for a male or female child. Then they go out and obviously we get overwhelming support. We love doing it. We love supporting Mission Amarillo,” said Denise Davis, BSA Organizational Development Manager.
Parsons says big donations like this allows them to better serve the community.
“It’s organizations like BSA that put the full weight of their influence behind something like this because we need those large quantities,” said Parsons.
“The community has always been good to BSA and our staff but over the last year and a half to two years during covid, we have felt the love and this is just the way that we want to give it back,” said Davis.
Parsons said they give away around 1,000 pairs of shoes every year to surrounding schools.
| |
The invention belongs to the technical field of biology and in particular relates to a transgenic maize BT176 nucleic acid standard sample and a preparation method thereof. The preparation method comprises the following steps: taking a mature transgenic maize BT176 line as a raw material, extracting total DNA, performing specific PCR amplification and sequencing, and then preparing plasmids, wherein the transgenic maize BT176 nucleic acid standard sample contains transgenic maize BT176 line-specific genes and maize endogenous zein genes. The nucleic acid standard sample prepared by the method has no biological activity; during the processes of construction, amplification and analytical identification, the laboratory environment is monitored, so that problems of biological contamination and carry-over are avoided, positive control sources for molecular biology are provided, and the samples are stable and free from pollution. After the standard sample is prepared, a preparation technology for transgenic-ingredient-detection molecular DNA standard samples and a technology for ensuring their stability can be fully developed, the development of transgenic product detection standard samples in China can be actively advanced, a blank in the measurement field is filled, and the standard sample has great practical significance. | |
The radical practice of psychogeography – aimlessly walking the city – could bring a renewed energy to urban planning, says Tszwai So
A giant mural featuring a windmill on a flank wall beckons me forward and, as I continue my journey, I encounter a floral scent trailing over a row of Edwardian terraced houses. I wander further, the tintinnabulation of a distant church bell lures me to proceed onwards … I meander around my locale, free from the obligation to navigate the city with a purpose. I allow my senses to guide me forward instinctively, wandering and feeling my way through, and in so doing, I am able to lose myself in the urban landscape in the same way I immerse myself in an orchestra performance.
No doubt the practice of psychogeography is subjective rather than scientific. It never observed any precise laws governing the specific effects of the geographical environment on the emotions and behaviour of individuals, as claimed by its eccentric proponent Guy Debord, who coined the term in the first place. Psychogeography was a well-intentioned but somewhat naive endeavour to understand how different places make us feel and behave.
While it inspired an array of artists, writers and filmmakers worldwide, no architects or town planners seem to have ever taken it seriously. Perhaps its vagueness makes it a better recipe for creating enigmatic art which is an end in itself, instead of an informed design strategy. Indeed, if walking the city aimlessly is psychogeography’s ultimate goal, it would be paradoxical to imagine it could be of any practical use to architects.
In psychogeography we rediscover the most innocent, sensuous and visceral interaction with our built environment
In order to fully appreciate the legacy and relevance of psychogeography in urbanism, it is crucial to understand the intellectual underpinnings as well as the anti-capitalist origins of the Situationist International from the 60s – the movement from which psychogeography sprang. One of their most radical ideas was to liberate our minds from the limitations of consumerist homogeneity. The functionalist and utilitarian town planning approach is the outcome of the market’s invisible hand, whereas the mesmerising psychogeographical maps tear apart our conventional understanding of urban spaces, setting out a subjective emotional dimension to our relationship with cities.
Every psychogeographic map is unique because it manifests its author’s subjective reading of the city, as we see in British situationist Ralph Rumney’s works. Through his spellbinding maps we learnt that urbanism is about more than built environment; it is about the people in the city. The situationists sought an emotion-based urbanism and so should we.
Filmmaker David Lynch once told an audience: ‘If you have a golf ball-sized consciousness, when you read a book you will have a golf ball-sized understanding.’ An artist’s job is to expand that consciousness, and we learnt from the situationists that experiencing a city is highly subjective and involves the whole body; it is poetry in motion.
Alas, codification remains the orthodoxy well into the 21st century in planning policy. Technocrats are obsessed with regulating space standards, building heights and even ‘beauty’ – but how can we codify subjective experiences of our city? The lazy assumption that one can tabulate our lived experiences can be explained by capitalism’s raison d’être: the commodification of everything – which ensures we maintain a golf ball-sized understanding of urban spaces.
Psychogeography can certainly bring renewed artistic energy to urban planning and architecture, dismantling architects’ overt reliance on the visual and an almost authoritarian attitude that we know better than anyone else about how people feel about their cities. In psychogeography we rediscover the most innocent, sensuous and visceral interaction with our built environment. | https://www.ribaj.com/culture/psychogeography-allows-us-to-explore-the-sensory-city-tszwai-so |
How Sweden's ancient language Elfdalian is being saved by Minecraft
The church of Älvdalen is being built in the Minecraft universe – pictured above is its tower.
| https://www.thelocal.se/20171011/how-swedens-rare-viking-forest-language-elfdalian-is-being-saved-by-minecraft
- How to evaluate the energy efficiency of an air conditioner?
- What are the criteria for the energy efficiency of an air conditioner?
- What does SEER/SCOP mean?
- What does BTU mean?
- How to choose an air conditioner with a proper BTU / h capacity?
- How many kW in a BTU?
- How to decipher the labeling of the air conditioner?
- Which air conditioners are definitely not energy efficient?
- What are inverter air conditioners?
- Which air conditioner is better: inverter or standard?
- How to improve and reduce the cost of air conditioning? Practical recommendations.
When buying an air conditioner, it is important to understand how much we will have to pay for electricity.
The energy efficiency of an air conditioner is one of the most important indicators that are evaluated when choosing a system. It determines the amount of electricity consumed by the device to achieve a comfortable room temperature.
In other words, the energy efficiency level of the device determines how much you have to pay for comfort.
If the energy efficiency is low, the device uses a lot of electricity, and you will have to pay more for it.
If you choose a high-performance air conditioner, you can save on your electricity bill.
1. How to assess the energy efficiency of an air conditioner?
When assessing energy efficiency, how the air conditioner works is taken into account.
Modern devices can not only cool the air in the room but also heat it. The latter quality is especially useful in mid-seasons when heating is not yet in use.
On average, the air conditioner works in this mode for about one to two months a year (depending on the climatic characteristics of the region).
In cooling mode, the air conditioner works mainly during the warmer months, so most of the electricity bill comes in the summer.
2. What are the criteria for the energy efficiency of an air conditioner?
To find out which air conditioner is better, you need to focus on the following criteria:
- Device type: wall-mounted, cassette, portable, window, or ducted.
- Compressor. Pay attention to inverter models, in which the electronic control board dynamically varies the motor frequency to reduce current consumption.
- Power. The larger the room, the greater the required capacity. Calculate the parameters according to the rule of thumb: not less than 1 kW of cooling capacity per 10 sq. m.
- Energy efficiency class. The system is considered effective if it complies with classes A, A +, A ++ and A +++. That is, the coefficient is equal to or greater than 3.2.
- Unit size. Recommended average dimensions of the indoor unit – height from 24 cm, depth from 18 cm, width from 60 cm. The recommended average dimensions of the outdoor unit – height from 42 cm, width from 65, depth from 25 cm.
- Heating. The option is designed for autumn when the heating season has not begun but it is already cold outside the window, or for mild winters.
- Cooling. The option is provided for the summer season and rooms with windows on the sunny side.
- Dehumidification. The feature removes excess moisture from the air to save the dwelling from mold problems.
- Ventilation. Renews the stagnant air in the room.
- Air cleaning. Removes dust or animal hair.
- Oxygen saturation. These are systems that remove excess nitrogen outside or trap it in their own membranes to saturate the air with oxygen.
- Additional options. Other options include sleep mode, motion sensor, Wi-Fi control, self-diagnostics, outdoor unit defrosting, etc. They are found in almost all modern systems.
3. What does SEER/SCOP mean?
You also need to find some abbreviations in the product description. Energy efficiency indicators are presented in the form of two coefficients:
SEER (Seasonal Energy Efficiency Ratio) – to assess electricity consumption for cooling.
SCOP (Seasonal Coefficient of Performance) – to assess heating efficiency.
The higher the number, the greater the energy efficiency of the air conditioner.
But there is a nuance – these coefficients describe performance when the air conditioner runs at full load. Once the desired room temperature is reached, the device runs at partial load, and in that case efficiency can be five times higher or more. That is, even less energy is consumed.
| Cooling | Energy efficiency class | Heating |
|---|---|---|
| SEER ≥ 8.50 | A+++ | SCOP ≥ 5.10 |
| 6.10 ≤ SEER < 8.50 | A++ | 4.60 ≤ SCOP < 5.10 |
| 5.60 ≤ SEER < 6.10 | A+ | 4.00 ≤ SCOP < 4.60 |
| 5.10 ≤ SEER < 5.60 | A | 3.40 ≤ SCOP < 4.00 |
| 4.60 ≤ SEER < 5.10 | B | 3.10 ≤ SCOP < 3.40 |
| 4.10 ≤ SEER < 4.60 | C | 2.80 ≤ SCOP < 3.10 |
| 3.60 ≤ SEER < 4.10 | D | 2.50 ≤ SCOP < 2.80 |
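To see what these classes mean for a bill, the sketch below estimates seasonal electricity cost from the SEER value; the capacity, running hours and tariff figures are arbitrary assumptions chosen only for illustration.

```python
def seasonal_cooling_cost(cooling_kw, hours, seer, price_per_kwh):
    """Estimate the seasonal electricity cost of cooling.

    SEER is cooling output divided by electrical input over the season,
    so average electrical input power is output power / SEER.
    """
    kwh_consumed = cooling_kw / seer * hours
    return kwh_consumed * price_per_kwh

# A 3.5 kW (12000 BTU/h) unit running 500 h per season at 0.20 per kWh:
print(seasonal_cooling_cost(3.5, 500, 8.5, 0.20))  # class A+++: about 41
print(seasonal_cooling_cost(3.5, 500, 3.6, 0.20))  # class D: about 97
```

Moving from class D to class A+++ roughly halves the seasonal cost in this example, which is the practical meaning of the table above.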
4. What does BTU mean?
The abbreviation BTU stands for British Thermal Unit, a unit of thermal energy in the British system of measurement. One BTU is defined as the amount of heat needed to raise the temperature of 1 pound of water by 1 degree Fahrenheit; it is equal to about 1.06 kilojoules, and a rate of 1 BTU per hour corresponds to roughly 0.3 watts. BTU is not officially included in the international system; however, this unit is widely used in almost all countries of the world to denote the capacity of air conditioners.
As a rule, most models of household air conditioners work in the range of 5000-18000 BTU/h, so a higher BTU rating corresponds to faster cooling of a room.
5. How to choose an air conditioner with a proper BTU / h capacity?
First, you need information about the dimensions of the room in which you plan to place the future air conditioner, whose capacity, as mentioned above, is indicated in the specifications in BTU.
If the room is of simple shape, measure its length and width and multiply them to obtain the floor area in square meters.
If there are several rooms, it is better to measure each one, noting the values obtained, for example, on a smartphone.
Next, multiply the floor area of the room by its height (as a rule, about 300 cm). For example, if the length of the room is 5 meters and the width is 6 meters, its area is 30 m²; 30 × 300 gives 9000, so an air conditioner with a capacity of about 9,000-10,000 BTU/hour is suitable for efficient cooling of this room. It is also possible to install a model with a larger cooling capacity; however, it is likely to be more expensive at the time of purchase and will surely cost more during operation.
Recommended capacity in BTU/h by floor area and room height:

| Floor area | Room height 2.6 m | Room height 3 m | Room height 4 m |
|---|---|---|---|
| 10 m² | 5000 | 5000 | 7000 |
| 14 m² | 5000 | 5000 | 9000 |
| 16 m² | 7000 | 7000 | 9000 |
| 18 m² | 7000 | 7000 | 9000 |
| 20 m² | 9000 | 9000 | 12000 |
| 22 m² | 9000 | 12000 | 12000 |
| 26 m² | 12000 | 12000 | 18000 |
| 28 m² | 12000 | 12000 | 18000 |
| 30 m² | 12000 | 18000 | 18000 |
| 36 m² | 18000 | 18000 | 18000 |
| 40 m² | 18000 | 18000 | 24000 |
| 50 m² | 18000 | 24000 | 28000 |
| 70 m² | 24000 | 28000 | 35000 |
| 80 m² | 28000 | 30000 | 40000 |
| 90 m² | 30000 | 32000 | 42000 |
| 105 m² | 36000 | 40000 | 52000 |
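A minimal sketch of the sizing heuristic from the worked example above. Note that it is cruder than the table, which recommends larger capacities for big rooms, so treat it as a starting point only.

```python
COMMON_CAPACITIES = [5000, 7000, 9000, 12000, 18000, 24000,
                     28000, 35000, 40000, 52000]

def recommended_btu(length_m, width_m, height_cm=300):
    """Floor area (m^2) times ceiling height (cm) gives a raw BTU/h
    figure, rounded up to the nearest common unit size."""
    raw = length_m * width_m * height_cm
    for capacity in COMMON_CAPACITIES:
        if capacity >= raw:
            return capacity
    return COMMON_CAPACITIES[-1]

# The 30 m^2 room from the example: 30 * 300 = 9000 -> a 9000 BTU/h unit.
print(recommended_btu(5, 6))
```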
6. How many kW in a BTU?
1 BTU/h = 0.2931 W (BTU itself is a unit of energy, so the conversion applies to capacity expressed in BTU per hour).

Power match in BTU/h and kW:

| BTU/h | 5000 | 7000 | 9000 | 12000 | 18000 | 24000 | 28000 | 32000 | 36000 | 40000 | 52000 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| kW | 1.5 | 2.1 | 2.5 | 3.5 | 5 | 7 | 8.5 | 9.5 | 10.5 | 11.5 | 15.3 |
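The conversion behind this table is a single multiplication; a short helper makes the rounding visible.

```python
def btu_h_to_kw(btu_per_hour):
    """Convert capacity from BTU/h to kW (1 BTU/h = 0.2931 W)."""
    return btu_per_hour * 0.2931 / 1000

print(round(btu_h_to_kw(12000), 1))  # 3.5, as in the table
print(round(btu_h_to_kw(18000), 1))  # 5.3, which the table rounds to 5
```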
7. How to decipher the labeling of the air conditioner?
The front panel of the indoor unit also shows the model name and a marking made up of a series of numbers and letters.
Let’s see how these values are decoded. Decoding of markings on air conditioners:
GWH or SRK is a domestic air conditioning device, GU or SRR is a type of semi-industrial equipment.
The next two digits – 07, 09, 12, 13, 14, 18, 24, 27, or 30 – indicate the capacity of the split system in thousands of BTU/hour.
“07” corresponds to a power of 2.1 kW, “09” to 2.6 kW, “12” to 3.5 kW, “18” to 5.2 kW, and so on.
The following alphabetical and numeric values indicate the series and model of the device.
The last set of numbers – 1/4 3/8, 1/4 1/2 or 1/4 5/8 – specifies the diameters of the copper tubes, indicated in inches.
In addition to digital and alphabetical values, additional stickers and markings may be present on the air conditioner housing, which show additional functions of the device.
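As a toy illustration of these decoding rules (actual label layouts vary by manufacturer, so the model-string format assumed here is hypothetical):

```python
# Capacity code -> kW, using the correspondences given in the text above.
CAPACITY_KW = {"07": 2.1, "09": 2.6, "12": 3.5, "18": 5.2, "24": 7.0}

def decode_label(model):
    """Decode a simplified model string such as 'GWH09' or 'GU12'."""
    domestic, semi = ("GWH", "SRK"), ("GU", "SRR")
    prefix = next((p for p in domestic + semi if model.startswith(p)), None)
    if prefix is None:
        return None
    family = "domestic" if prefix in domestic else "semi-industrial"
    code = model[len(prefix):len(prefix) + 2]  # capacity in 1000s of BTU/h
    return family, CAPACITY_KW.get(code)

print(decode_label("GWH09"))  # ('domestic', 2.6)
```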
8. Which air conditioners are definitely not energy efficient?
Manufacturers are constantly working to ensure that air conditioners require less and less energy to operate. HVAC equipment produced before 2013 will clearly be energy inefficient.
Air conditioners with a low energy efficiency index automatically include systems of obsolete classes “E”, “F” and “G”.
These products do not comply with new regulations and cannot compete with modern innovative technologies.
9. What are inverter air conditioners?
In the modern market, the most energy-efficient models are inverter ones. Savings are achieved by varying the rotation frequency of the compressor motor: once the set temperature is reached, the motor slows down. This reduces energy consumption by 40-58% compared to conventional air conditioning systems.
10. Which air conditioner is better: inverter or standard?
Inverter and non-inverter (fixed-speed) air conditioners each have their own advantages and disadvantages.
Among other things, inverter models have the following characteristics:
- wide range of temperature conditions,
- reduced noise effect,
- operational cooling/heating,
- environmental friendliness of refrigerants.
In addition, the built-in surge protection ensures a long service life of the equipment.
11. How to improve and reduce the cost of air conditioning? Practical recommendations
We have long been accustomed to the comfort and coolness that air conditioners provide on hot days. However, not everyone knows how to use an air conditioner so that it works efficiently and economically.
First of all, we remind you:
- Any air conditioner needs maintenance. Even if the device works properly, at least once a year (preferably in the spring, while it is not very hot), you need to call a specialist who will check the entire system and perform proper routine maintenance. It is important to remember that the sooner the problem is identified and eliminated, the longer and better the air conditioner will work. In addition, the cost of maintaining air conditioners is always lower than overhauling or replacing them.
- Insulate the attic floor (and/or attic) and the basement so that on hot days as little warm air as possible enters the house. The hotter the air in the house, the longer the air conditioner must work, as a result, it wears out faster and consumes more electricity.
- Make sure that the air conditioners installed are the correct size and capacity to effectively cool the air in their respective rooms. A large and powerful air conditioner, once again, consumes more electricity, and, in addition, in a small room it will not work as efficiently as a smaller and less powerful device, but correctly selected.
- You can install several conventional fans in the house, or a modern ventilation system based on them. Conventional fans allow you to do without expensive air conditioning on days that are not too hot, or at night when the air outside the house is colder than inside.
- Shade the windows in good time with awnings, screens, or special films, so that in the warm season direct sunlight does not penetrate the rooms and heat the walls and interior items.
- Use ceiling fans in the rooms. Moving air creates a feeling of coolness and in fact lowers the reading on our “internal thermometers”, so that on some days you can feel quite comfortable without an air conditioner.
- Do not install lighting fixtures, televisions and other household electronic devices near the thermostat of the air conditioning system. The thermostat will take into account the heat generated by these devices, and the air conditioners will work longer than necessary.
- Set the chronothermostat to automatic mode and raise the set temperature by a few degrees when you are away from home and at night while you sleep. Such a thermostat also allows you to pre-condition the room, for example, shortly before you get home.
- If the outdoor unit of the air conditioner is installed in a shady place, the system will consume about 10% less electricity even on very hot days. In that case, however, remember that the outdoor unit cannot be installed very close to the walls of neighboring buildings or to trees, which can block the flow of air.
- During prolonged and active operation of the air conditioner, its filter should be changed once a month. Experts advise buying pleated filters that trap more dirt and dust than cheaper but less efficient flat ones.
- It is better to replace incandescent lamps in the house with modern LED lamps. LED lamps generate much less heat, which is especially important during the warm season. | https://astraverde.uk/energy-efficient-air-conditioner-myth-or-reality/ |
Identification, characterization, and regulation of the canonical Wnt signaling pathway in human endometrium.
Members of the Wnt family of signaling molecules are important in cell specification and epithelial-mesenchymal interactions, and targeted gene deletion of Wnt-7a in mice results in complete absence of uterine glands and infertility. To assess potential roles of the Wnt family in human endometrium, an endocrine-responsive tissue, we investigated endometrial expression of several Wnt ligands (Wnt-2, Wnt-3, Wnt-4, Wnt-5a, Wnt-7a, and Wnt-8b), receptors [Frizzled (Fz)-6 and low-density lipoprotein receptor-related protein (LRP)-6], inhibitors [FrpHE and Dickkopf (Dkk)-1], and downstream effectors (Dishevelled-1, glycogen synthase kinase-3beta, and beta-catenin) by RT-PCR, real-time PCR and in situ hybridization in the proliferative and secretory phases of the menstrual cycle. No significant menstrual cycle dependence of the Wnt ligands (except Wnt-3), receptors, or downstream effectors was observed. Wnt-3 increased 4.7-fold in proliferative compared with secretory endometrium (P < 0.05). However, both inhibitors showed dramatic changes during the cycle, with 22.2-fold down-regulation (P < 0.05) of FrpHE and 234.3-fold up-regulation (P < 0.001) of Dkk-1 in the secretory, compared with the proliferative phase. In situ hybridization revealed cell-specific expression of different Wnt family genes in human endometrium. Wnt-7a was exclusively expressed in the luminal epithelium, and Fz-6 and beta-catenin were expressed in both epithelium and stroma, without any apparent change during the cycle. Both FrpHE and Dkk-1 expression were restricted to the stroma, during the proliferative and secretory phase, respectively. These unique expression patterns of Wnt family genes in different cell types of endometrium and the differential regulation of the inhibitors during the proliferative and secretory phase of the menstrual cycle strongly suggest functions for a Wnt signaling dialog between epithelial and stromal components in human endometrium. Also, they underscore the likely importance of this family during endometrial development, differentiation and implantation.
| |
Communication and Media Studies
Communication and Media Studies focuses on the emerging digital forms of media in the 21st Century, with emphasis on human and social communication. Students have opportunities to produce innovative media products that increase and enhance global-intercultural awareness.
Courses emphasize the liberal arts and humanities by focusing on critical thinking, research, production, public speaking, and constantly evolving social diversity issues. The curriculum is designed to construct a foundational framework in the areas of communication, law, education, management, international affairs and other relevant fields of study. This framework prepares students for distinguished graduate studies and successful entrepreneurial and professional careers.
Programs of Study
Student Involvement, Media, and Organizations
In its effort to promote quality global learning experiences, the department partners with various organizations to create experiential study-abroad and internship opportunities. Departmental faculty advise the following student clubs/organizations.
The News Argus is produced by and for the students of WSSU, and the target audience is the WSSU campus and surrounding communities. The Argus mission is to fulfill the seven traditional goals and responsibilities of news: inform, educate, entertain, advertise, serve as a watchdog, persuade, and provide a forum.
- Provide students with opportunities to produce good-quality productions.
- Provide students with fundamental experiences in researching, interviewing, writing, and videography.
- Provide students the opportunity to make a positive contribution to the University.
- Provide an outlet for students to express their creativity.
- Provide opportunities for students to demonstrate organizational and time management skills by meeting deadlines completely, correctly, and on time.
- Provide students with opportunities to submit their work in national collegiate journalism competitions.
In addition, more experienced staff members or staff members with expertise in electronic media will assist less experienced staff members with performing multimedia assignments. Any WSSU full-time student is entitled to become an Argus staff member.
RAM-TV is a student media group which provides Winston-Salem State University with non-commercial information, news, educational, and entertainment programming. The organization provides WSSU students, particularly those in the Department of Communication and Media Studies with the opportunity to participate in television production operations and to create television/video programming that encourages the creation of ideas and free expression of issues and concerns. Watch RAM-TV on the campus digital cable channel 69.1.
RAM-TV is designed to provide Winston-Salem State University students with hands-on experience in the television broadcasting/video production field. Toward that goal, the department’s digital HD television production studio located in Hall-Patterson 209 and editing suite located in Hall-Patterson 132 provide students with the tools they need to produce quality programming. RAM-TV is a place where students come together to learn, teach, and share skills they have learned in classes taught by the department. There are no restrictions to being a member of RAM-TV. Any student in good standing with the University can become a member of RAM-TV by attending meetings and expressing an interest.
You can also join by logging on to the RAMSync website for WSSU student organizations: use your WSSU campus email address and password to log in, then go to the Organizations menu at the top of the page to find RAM-TV.
SU Radio is an intranet-based, student-operated station designed to give students enrolled at WSSU the opportunity to gain valuable experience in the field of radio broadcasting. Under the supervision of the Department of Communication and Media Studies and WSNC-FM (an NPR affiliate), students gain real-world training in every aspect of the radio industry. The professional staff members of WSNC advise and guide the transitional learning experiences of interested students who would like to become student staff members at the NPR affiliate. SU Radio is open to students from all majors and classifications, and all are encouraged to join its staff. Please contact Mr. Brian Anthony, General Manager, at [email protected] or call (336) 750-2321 for more information.
The Dow Jones News Fund and American City Business Journals are offering college juniors, seniors and graduate students the opportunity to spend a week in the financial capital of the world before reporting to work at paid summer internships as business reporters.
Interns attend a week of training led by journalists from Investigative Reporters and Editors (IRE), refining skills like making Freedom of Information Act (FOIA) requests, learning computer-assisted reporting, and analyzing and cleaning data in order to tell rich, often hidden, stories before starting internships where they apply these skills to issues like education, government, criminal justice and the economy.
- Dow Jones News Fund - the Fund mentors the next generation of newsroom leaders, promoting journalism fundamentals while advancing new storytelling methods using data and digital innovation.
Paid, Prestigious Internships in Data Journalism, Digital Media, Multi-Platform Editing and Business Reporting.
The National Communication Association Student Club (NCASC) provides a forum for interaction among students, faculty, and others interested in the study, research, criticism, training, and application of the artistic, humanistic, and scientific principles of communication. Upon meeting the academic requirements (appropriate GPA and earned credit hours), NCASC members are encouraged to join one of NCA’s honor societies, Lambda Pi Eta and Sigma Chi Eta. LPH and SCH local chapters sponsor a variety of scholarly and service oriented activities that NCASC members are welcome to attend. For more information, please contact the faculty advisers, Dr. Althea Bradford at [email protected] and Dr. Andrea Patterson-Masuka at [email protected]. | https://wssu.edu/academics/colleges-and-departments/college-of-arts-sciences-business-education/social-sciences/department-of-communication.html |
MONTREAL, June 21, 2021 (GLOBE NEWSWIRE) -- Theratechnologies Inc. (Theratechnologies) (TSX: TH) (NASDAQ: THTX), a biopharmaceutical company focused on the development and commercialization of innovative therapies, today announced new preclinical in vivo findings on the anti-metastatic effect and tolerability of its novel investigational proprietary peptide-drug conjugate (PDC), TH1902.
These results demonstrate that TH1902 has better anti-metastatic activity than docetaxel alone administered at an equimolar concentration in a lung metastasis cancer model expressing the sortilin (SORT1) receptor. Metastasis occurs when cancer spreads from its original site to a distant site or organ, where it continues to grow. It is well known that the survival rate for metastatic cancer is low. The Company intends to present these findings at an upcoming scientific meeting.
“These new results are very encouraging for the development of TH1902 in SORT1+ cancers. It is known that SORT1-receptor expression increases as cancers progress and these new data confirm that by targeting the SORT1 receptor TH1902 could potentially be effective in the treatment of metastasis. Most importantly, these preclinical findings, if confirmed in humans, are promising signs that we may finally be able to inhibit hard-to-treat cancers with a more effective and better-tolerated treatment,” said Dr. Christian Marsolais, Senior Vice President and Chief Medical Officer of Theratechnologies.
The Company will host a webcast today at 11:00 a.m. ET to discuss its SORT1+ Technology and TH1902, which will include additional details on these preclinical findings. To access the live webcast please click here. An archived webcast will also be available on the Company’s website under the ‘Past Events’ section.
About SORT1+ Technology™
Theratechnologies is currently developing SORT1+ Technology™, a platform of new proprietary peptides for cancer drug development targeting SORT1 receptors. SORT1 is a receptor that plays a significant role in protein internalization, sorting and trafficking. It is highly expressed in cancer cells compared to healthy tissue, making it an attractive target for cancer drug development. Expression has been demonstrated in, but is not limited to, ovarian, triple-negative breast, endometrial, skin, lung, colorectal and pancreatic cancers. Expression of SORT1 is associated with aggressive disease, poor prognosis and decreased survival. It is estimated that the SORT1 receptor is expressed in 40% to 90% of cases of endometrial, ovarian, colorectal, triple-negative breast and pancreatic cancers.
The Company’s innovative peptide-drug conjugates (PDCs) generated through its SORT1+ Technology™ demonstrate distinct pharmacodynamic and pharmacokinetic properties that differentiate them from traditional chemotherapy. In contrast to traditional chemotherapy, Theratechnologies’ proprietary PDCs are designed to enable selective delivery of certain anticancer drugs within the tumor microenvironment and, more importantly, directly inside SORT1-positive cancer cells. Commercially available anticancer drugs, like docetaxel, doxorubicin or tyrosine kinase inhibitors, are conjugated to Theratechnologies’ proprietary peptides to specifically target SORT1 receptors. This could potentially improve the efficacy and safety of those agents.
In preclinical data, the Company’s SORT1+ Technology™ has been shown to improve anti-tumor activity and to reduce neutropenia and systemic toxicity compared to traditional chemotherapy. Additionally, in preclinical models, SORT1+ Technology™ has been shown to bypass the multidrug resistance protein 1 (MDR1; also known as P-glycoprotein) and to inhibit the formation of vasculogenic mimicry – two key resistance mechanisms against chemotherapy treatment.
About TH1902
TH1902 combines Theratechnologies’ proprietary peptide to the cytotoxic drug docetaxel. TH1902 is currently Theratechnologies’ lead investigational PDC candidate for the treatment of cancer derived from its SORT1+ Technology™. The FDA granted fast track designation to TH1902 as a single agent for the treatment of all sortilin-positive recurrent advanced solid tumors that are refractory to standard therapy. TH1902 is currently being evaluated in a Phase 1 clinical trial for the treatment of cancers where the sortilin receptor is expressed.
The Company is also evaluating in preclinical research TH1904, a second PDC derived from its SORT1+ Technology™. TH1904 is conjugated to the cytotoxic drug doxorubicin.
The Canadian Cancer Society and the Government of Quebec, through the Consortium Québécois sur la découverte du médicament (CQDM), will contribute a total of 1.4 million dollars towards some of the research currently being conducted for the development of Theratechnologies’ targeted oncology platform.
About Theratechnologies
Theratechnologies (TSX: TH) (NASDAQ: THTX) is a biopharmaceutical company focused on the development and commercialization of innovative therapies addressing unmet medical needs. Further information about Theratechnologies is available on the Company's website at www.theratech.com, on SEDAR at www.sedar.com and on EDGAR at www.sec.gov.
Forward-Looking Information
This press release contains forward-looking statements and forward-looking information, or, collectively, forward-looking statements, within the meaning of applicable securities laws, that are based on our management’s beliefs and assumptions and on information currently available to our management. You can identify forward-looking statements by terms such as "may", "will", "should", "could", “would”, "outlook", "believe", "plan", "envisage", "anticipate", "expect" and "estimate", or the negatives of these terms, or variations of them. The forward-looking statements contained in this press release include, but are not limited to, statements regarding the effects and tolerability of TH1902, the development of TH1902, and the use of TH1902 for the potential treatment of various cancer types.
Forward-looking statements are based upon a number of assumptions and include, but are not limited to, the following: results observed in pre-clinical in vivo research and development work will be replicated in humans, no adverse side effects will be discovered from the administration of TH1902 to humans, the Company will be able to enroll patients for the ongoing Phase 1 trial using TH1902, and the Covid-19 pandemic will not adversely affect the development of TH1902 and other peptides that may be derived from the Company’s SORT1+ Technology™.
Forward-looking statements are subject to a variety of risks and uncertainties, many of which are beyond our control, that could cause our actual results to differ materially from those that are disclosed in or implied by the forward-looking statements contained in this press release. These risks and uncertainties include, among others, the risk that results (whether safety or efficacy, or both) obtained through the administration of our SORT1-targeting PDCs in humans will not be similar to those obtained in animals, the risk that we are unable to enroll patients to complete the ongoing Phase 1 trial using TH1902 or that serious adverse effects resulting from the administration of TH1902 are discovered, leading to a suspension or cancellation of any development work using TH1902, and the risk that new cancer treatments are discovered or introduced which may prove safer and/or more effective than our SORT1+ Technology™ for the cancer types in which we aim to demonstrate efficacy and safety.
We refer potential investors to the "Risk Factors" section of our annual information form dated February 24, 2021 available on SEDAR at www.sedar.com and on EDGAR at www.sec.gov as an exhibit to our report on Form 40-F dated February 25, 2021 under Theratechnologies’ public filings for additional risks regarding the conduct of our business and Theratechnologies. The reader is cautioned to consider these and other risks and uncertainties carefully and not to put undue reliance on forward-looking statements. Forward-looking statements reflect current expectations regarding future events and speak only as of the date of this press release and represent our expectations as of that date.
We undertake no obligation to update or revise the information contained in this press release, whether as a result of new information, future events or circumstances or otherwise, except as may be required by applicable law.
For media inquiries:
Denis Boucher
Vice President, Communications and Corporate Affairs
514-336-7800
[email protected]
For investor inquiries: | https://www.theratech.com/news-releases/news-release-details/theratechnologies-announces-new-preclinical-findings-its-lead |
role of nectar microbes (yeasts and bacteria) in plant-pollinator interactions, and to the potential role of epigenetic variation in natural plant populations as a source of adaptations to pollinators, microbes, herbivores and abiotic stressors. Our aims may be grouped into three main research lines:
- Unravelling three-way links between plant ecology, genetics and epigenetics. Along this line we pursue an understanding of the potential of epigenetics in shaping plant-animal-microbe interactions and adaptation, particularly in stressful and seasonal environments such as Mediterranean ones. To accomplish this aim we focus on (i) the comparison between the genetic and epigenetic diversity, and the spatial structure, of wild plant and microbe populations; (ii) the analysis of transgenerational epigenetic inheritance; and (iii) the role of epigenetics in the resistance to extinction of naturally fragmented populations.
- Uncovering the ecological and evolutionary consequences of nectar microbes in plant-pollinator interactions. The hidden role of yeasts in modifying nectar sugar amount and profile in wild flowers has recently been highlighted by our research group. Now, we want to explore in greater depth the ecological and microevolutionary consequences of nectar yeasts for plant-pollinator interactions. Exploring the effects of other microbes associated with nectar and wild fruits, as well as their biotechnological potential, is also included in our current research agenda.
- Mediterranean ecosystems and endemic plant species are among our priority study subjects. Mediterranean ecosystems, probably as a consequence of their complex paleogeographic and paleoclimatic history, are characterized by extremely heterogeneous landscapes and highly stressful and changing abiotic conditions, which have contributed to the extraordinarily rich Mediterranean plant biodiversity. Unsurprisingly, the Mediterranean Basin is considered one of the most important Biodiversity Hotspots on Earth. Studying the genetic identity, diversity and spatial structure of endemic and conservation-priority plant taxa remains a fundamental research objective. The results are also essential to help managers incorporate genetic diversity and plant-animal interactions as further key factors in the management of natural Mediterranean ecosystems under current conditions of accelerated climate change. | http://web.ebd.csic.es/website1/Lineas/Nplantanimal.aspx |
Fox, Graeme, Preziosi, Richard F, Antwis, Rachael E, Benavides-Serrato, Milena, Combe, Fraser J, Harris, W Edwin, Hartley, Ian R, Kitchener, Andrew C, de Kort, Selvino R, Nekaris, Anne-Isola and Rowntree, Jennifer K (2019) Multi‐individual microsatellite identification: a multiple genome approach to microsatellite design (MiMi). Molecular Ecology Resources.
Abstract
Bespoke microsatellite marker panels are increasingly affordable and tractable to researchers and conservationists. The rate of microsatellite discovery is very high within a shotgun genomic data set, but extensive laboratory testing of markers is required for confirmation of amplification and polymorphism. By incorporating shotgun next‐generation sequencing data sets from multiple individuals of the same species, we have developed a new method for the optimal design of microsatellite markers. This new tool allows us to increase the rate at which suitable candidate markers are selected by 58% in direct comparisons and facilitate an estimated 16% reduction in costs associated with producing a novel microsatellite panel. Our method enables the visualisation of each microsatellite locus in a multiple sequence alignment allowing several important quality checks to be made. Polymorphic loci can be identified and prioritised. Loci containing fragment‐length‐altering mutations in the flanking regions, which may invalidate assumptions regarding the model of evolution underlying variation at the microsatellite, can be avoided. Priming regions containing point mutations can be detected and avoided, helping to reduce sample‐site‐marker specificity arising from genetic isolation, and the likelihood of null alleles occurring. We demonstrate the utility of this new approach in two species: an echinoderm and a bird. Our method makes a valuable contribution towards minimising genotyping errors and reducing costs associated with developing a novel marker panel. The Python script to perform our method of multi‐individual microsatellite identification (MiMi) is freely available from GitHub (https://github.com/graemefox/mimi). | http://repository.nms.ac.uk/2396/ |
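The MiMi script itself is available at the GitHub link above. Purely to illustrate the kind of pattern such tools search for, here is a simplified, hypothetical sketch that scans a sequence for perfect short tandem repeats; it is not the published implementation and performs none of MiMi's multi-individual quality checks.

```python
import re

def find_tandem_repeats(seq, motif_lengths=(2, 3, 4), min_repeats=5):
    """Report perfect tandem repeats, e.g. (AC)6, as (start, motif, count)."""
    hits = []
    for k in motif_lengths:
        # A k-bp motif followed by at least (min_repeats - 1) copies of itself.
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (k, min_repeats - 1))
        for m in pattern.finditer(seq):
            hits.append((m.start(), m.group(1), len(m.group(0)) // k))
    return hits

print(find_tandem_repeats("TTACACACACACACGGTAGATAGATAGATAGATAGAGG"))
# [(2, 'AC', 6), (16, 'TAGA', 5)]
```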
Analysis of enzymatic DNA sequencing reactions by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry.
The products from base-specific, dideoxy-nucleotide chain-termination DNA sequencing reactions catalyzed by the modified T7 DNA polymerase have been analyzed by using the technique of matrix-assisted laser desorption/ionization (MALDI) time-of-flight mass spectrometry. Preliminary experiments were performed to determine detection limits for a synthetic mixture of mixed-base single-stranded DNA which contained a 14-mer, a 21-mer, and a 41-mer; acceptable spectra, showing peaks for each component, were obtainable for samples that contained as little as 5 fmol per component. Initial sequencing reactions were therefore carried out on 2-pmol amounts of a short synthetic template that was 45 nucleotides in length, employing 2 pmol of 12-mer as the primer strand. This provided readable sequence information out to the 19th base past the primer. Using a 21-mer primer, nearly the entire sequence of the template could be read.
| |
Real-time detection and classification of signals or events present in time series data is a fairly common need. Stereotypical examples include identifying high-risk conditions in ICU data streams or classifying signals present in acoustic data from diagnostic or monitoring sensors. Using a combination of stream processing and machine learning is an agile and highly capable approach. It can effectively scale to large, fast data streams and adapt to evolving problem spaces.
Background
The ability to effectively leverage the information contained in time series data has become more challenging as volumes and speeds have increased while at the same time the opportunities are greater than ever. This article describes a highly adaptable and scalable data-driven method for extracting relevant information. With very little modification, the same design pattern can be applied to a wide variety of application domains. Examples include classification of signals present in acoustic data, anomaly detection in maintenance telemetry or real-time recognition of important conditions in streaming medical data.
Goals
In situations where one or more conditions are already known, this approach can be applied to specifically detect those conditions. However, it can also be used for applications where the patterns are evolving over time (concept drift) or there is little or no advance knowledge of what signals or patterns might be present or significant. In this case, the approach can be used to both identify different categories of behavior and classify the current state into one of them. Yet another way it can be applied is to learn what is “normal” even if that tends to change over time, and then alert when something happens that is unusual (anomaly detection).
Approach
Accomplishing all the previously mentioned goals requires a data-driven approach that combines stream processing and machine learning. Stream processing is used to generate feature vectors (fingerprints) representing the current characteristics of the signal in a form that can be used by machine learning technologies. The machine learning part can use the feature vectors to build a model of the system behavior as well as score future input against that model. In certain cases, the model can also be used to predict future state.
One of the characteristics of this approach is that the techniques used for both the stream processing portion and the machine learning operation are not domain-specific. In other words, common general-purpose algorithms can be used to automatically work with different kinds of data. Generating the feature vectors involves combining standard numerical processing techniques such as Fourier Transformation (FT), Discrete Wavelet Transformation (DWT) and Cepstrum Analysis, along with various numerical or statistical values such as normalized root mean square (RMS). Although it is possible to use a single algorithm to generate a shorter feature vector, selectivity (the ability to discriminate between to similar signals) is greatly improved by joining multiple algorithms to produce a longer feature vector. Historically, this was problematic because of the computational load and volume of data—but modern tools such as IBM InfoSphere Streams can effectively scale to extremely large and fast data even for computationally intensive operations such as those described above. Similarly, the availability of cloud-hosted machine learning tools such as the open source Spark and the H2O-based Sparkling Water package running on IBM BigInsights now make it possible to address much larger data sets than ever.
Different types of machine learning
In situations where the pattern or signal you are looking for is already known, some form of supervised machine learning algorithm can be applied to the feature vectors. There is a wide range of potentially applicable supervised algorithms, such as Support Vector Machines (SVM), Random Forest, Naïve Bayes and Neural Networks. Semi-supervised variations of these may also apply if there is a relatively small amount of training data available. Unsupervised techniques are appropriate when there is no preexisting set of labeled data on which to train, or when the result is not initially known. Examples of unsupervised algorithms include K-means clustering (if the number of categories is known, or X-means if it isn't), Hidden Markov models and Principal Component Analysis. Recently, Deep Learning has become very popular; it has the added benefit of being able to work in both unsupervised and supervised modes. Reinforcement learning is yet another kind of machine learning but is more suited to fully autonomous applications.
Regardless of which type of machine learning is relevant to a particular use case, the feature vectors remain the same and would not necessarily have to change between different use-case domains.
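For example, when the number of behavioral categories is known, the same feature vectors can be clustered with K-means. The sketch below uses scikit-learn, with synthetic tones standing in for real sensor windows.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000, endpoint=False)

def tone(freq):
    """One window of a noisy sine wave -- a stand-in for sensor data."""
    return np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(t.size)

def fingerprint(window, n_bands=8):
    """Normalized FFT band energies, as in the sketch above."""
    spectrum = np.abs(np.fft.rfft(window))
    bands = np.array([b.sum() for b in np.array_split(spectrum, n_bands)])
    return bands / bands.sum()

# Two latent categories of behavior: 50 Hz and 120 Hz signals.
X = np.vstack([fingerprint(tone(f)) for f in [50] * 20 + [120] * 20])
model = KMeans(n_clusters=2, n_init=10).fit(X)          # learn categories
print(model.predict(fingerprint(tone(120))[None, :]))   # classify new window
```

Swapping K-means for any of the other algorithms named above changes only the model-fitting lines; the fingerprinting stage is untouched.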
Benefits
Some of the more significant benefits of this approach are scalability, capability and adaptability. InfoSphere Streams can easily handle very large numbers of high data rate signals to generate the feature vectors, and cloud-based machine learning can handle the large volumes of data produced. Selectivity is enhanced because long feature vectors containing multiple representations and many calculated attributes can be used. And the design pattern is extremely adaptable to many different domains. That is, the same code can be used with very little modification for practically any type of time series data.
Learn more about IBM InfoSphere Streams and see how it is different from other stream processing platforms and ideally suited for enterprise-class, low-latency analytics. You can also check out the links above to get more information about Hadoop-based machine learning.
Plus, read the InfoSphere Streams data sheet “A New Paradigm for Information Processing,” and visit this IBM Streams site. | https://www.ibmbigdatahub.com/blog/analyzing-time-series-data-stream-processing-and-machine-learning |
CROSS REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of and priority to U.S. Provisional Application Ser. No. 63/283,865 filed Nov. 29, 2021, the disclosure of which is incorporated in its entirety herein by this reference.
BACKGROUND
Obese and overweight animals have an increased risk of many chronic diseases including heart disease, diabetes, hypertension, stroke, dyslipidemia, certain types of cancer, apnea and osteoarthritis. Therefore, it is essential for overweight and obese animals, including humans and pets, to lose excessive body fat to maintain health and quality of life. Unfortunately, losing excessive body fat or maintaining healthy weight after weight loss is difficult to achieve and various solutions can have adverse consequences, e.g., loss of lean body mass or weight rebound after weight loss.
Obesity is among the most serious health problems in humans and pets and is considered the leading preventable cause of death. Maintaining a healthy weight is critical for optimal metabolism, normal physical activity and good health. There is, therefore, a need for methods and compositions to increase satiety, promote weight loss, and/or maintain healthy weight, for the better health and wellness of animals.
SUMMARY
In one embodiment, a pet food composition can comprise protein, fat, carbohydrates, omega-3 fatty acids, and isoflavones; wherein the protein to carbohydrate is in a ratio ranging from 3.5:1 to 2.5:1 by weight as fed.
In another embodiment, a method for providing a health benefit in an animal can comprise administering a food composition to the animal, wherein the food composition comprises protein, fat, carbohydrates, omega-3 fatty acids, and isoflavones; wherein the protein to carbohydrate is in a ratio ranging from 3.5:1 to 2.5:1 by weight as fed.
Other and further objects, features, and advantages of the invention will be readily apparent to those skilled in the art.
DETAILED DESCRIPTION
Definitions
The term “animal” means any animal that would benefit from the health benefits described herein, including human, avian, bovine, canine, equine, feline, hircine, lupine, murine, ovine, or porcine animals. In one aspect, the animal can be a mammal.
The term “companion animal” means domesticated animals such as cats, dogs, rabbits, guinea pigs, ferrets, hamsters, mice, gerbils, horses, cows, goats, sheep, donkeys, pigs, and the like. In one aspect, the companion animal can be a canine. In another aspect, the companion animal can be a feline.
The term “caloric contribution ratio” refers to the ratio of macronutrients measured as percentages of caloric contribution from the respective food compositions. For example, the caloric contribution ratio of protein to fat would be measured as the caloric percentage of protein from the food composition divided by the caloric percentage of fat from the food composition.
The term “therapeutically effective amount” means an amount of a compound disclosed herein that (i) treats or prevents the particular disease, condition, or disorder, (ii) attenuates, ameliorates, or eliminates one or more symptoms of the particular disease, condition, or disorder, or (iii) prevents or delays the onset of one or more symptoms of the particular disease, condition, or disorder described herein.
The terms “treating”, “treat”, and “treatment” embrace both preventative, i.e., prophylactic, and palliative treatment.
The term “health and/or wellness of an animal” means the complete physical, mental, and social well-being of the animal, not merely the absence of disease or infirmity.
The term “in conjunction” means that the food composition, components thereof, or other compositions disclosed herein are administered to an animal (1) together in a single food composition or (2) separately at the same or different frequency using the same or different administration routes at about the same time or periodically. “Periodically” means that the food composition, components thereof, or other compositions are administered on a schedule acceptable for specific compounds or compositions. “About the same time” generally means that the food composition, components thereof, or other compositions are administered at the same time or within about 72 hours of each other.
The term “food” or “food product” or “food composition” means a product or composition that is intended for ingestion by an animal, including a human, and provides nutrition to the animal.
The term “carbohydrate” refers to carbohydrates that are digestible, e.g., sugars and starches, and does not include fiber, e.g., cellulose or fermentable fibers.
The term “crude fiber” refers to part of insoluble fiber found in the edible portion of the plant cell wall, and crude fiber is a measure of the quantity of indigestible cellulose, lignin, and other components of this type in foods.
The term “total dietary fiber” refers to the portion of plant-derived food that cannot be completely broken down by animal digestive enzymes and includes both soluble and insoluble fibers. Soluble fiber dissolves in water and is fermented in the colon by gut microbiota. Examples of soluble fibers are beta-glucans, guar gum, psyllium, inulin, wheat dextrin, and resistant starches. Insoluble fiber does not dissolve in water. Examples of insoluble fibers are cellulose and lignin.
The term “regular basis” means at least monthly administration and, in one aspect, at least weekly administration. More frequent administration or consumption, such as twice or three times weekly, can be performed in certain embodiments. In one aspect, an administration regimen can comprise at least once daily consumption.
The term “single package” means that the components of a kit are physically associated in or with one or more containers and considered a unit for manufacture, distribution, sale, or use. Containers include, but are not limited to, bags, boxes, cartons, bottles, packages such as shrink wrap packages, stapled or otherwise affixed components, or combinations thereof. A single package may be containers of the food compositions, or components thereof, physically associated such that they are considered a unit for manufacture, distribution, sale, or use.
The term “virtual package” means that the components of a kit are associated by directions on one or more physical or virtual kit components instructing the user how to obtain the other components, e.g., in a bag or other container containing one component and directions instructing the user to go to a website, contact a recorded message or a fax-back service, view a visual message, or contact a caregiver or instructor to obtain instructions on how to use the kit or safety or technical information about one or more components of a kit.
The term “about” means plus or minus 20% of a numeric value; in one aspect, plus or minus 10%; in another aspect, plus or minus 5%; and in one specific aspect, plus or minus 2%. For example, in one aspect where about is plus or minus 20% of a numeric value, the phrase “from about 10% to about 20%” could include a range from 8% to 24% or 12% to 16%, include any subranges therein.
As used herein, embodiments, aspects, and examples using “comprising” language or other open-ended language can be substituted with “consisting essentially of” and “consisting of” embodiments.
The term “complete and balanced” when referring to a food composition means a food composition that contains all known required nutrients in appropriate amounts and proportions based on recommendations of recognized authorities in the field of animal nutrition and are therefore capable of serving as a sole source of dietary intake to maintain life or promote production, without the addition of supplemental nutritional sources. Nutritionally balanced pet food and animal food compositions are widely known and widely used in the art, e.g., complete and balanced food compositions formulated according to standards established by the Association of American Feed Control Officials (AAFCO). In one embodiment, “complete and balanced” can be according to the current standards published by AAFCO as of Jan. 1, 2021.
All percentages expressed herein are by weight of the composition on a dry matter basis unless specifically stated otherwise. The skilled artisan will appreciate that the term “dry matter basis” means that an ingredient's concentration or percentage in a composition is measured or determined after any free moisture in the composition has been removed.
As used herein, ranges are used herein in shorthand, so as to avoid having to list and describe each and every value within the range. Any appropriate value within the range can be selected, where appropriate, as the upper value, lower value, or the terminus of the range.
As used herein, the singular form of a word includes the plural, and vice versa, unless the context clearly dictates otherwise. Thus, the references “a”, “an”, and “the” are generally inclusive of the plurals of the respective terms. For example, reference to “a supplement”, “a method”, or “a food” includes a plurality of such “supplements”, “methods”, or “foods.” Similarly, the words “comprise”, “comprises”, and “comprising” are to be interpreted inclusively rather than exclusively. Likewise, the terms “include”, “including” and “or” should all be construed to be inclusive, unless such a construction is clearly prohibited from the context. Similarly, the term “examples,” particularly when followed by a listing of terms, is merely exemplary and illustrative and should not be deemed to be exclusive or comprehensive.
The methods and compositions and other advances disclosed here are not limited to particular methodology, protocols, and reagents described herein because, as the skilled artisan will appreciate, they may vary. Further, the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to, and does not, limit the scope of that which is disclosed or claimed.
Unless defined otherwise, all technical and scientific terms, terms of art, and acronyms used herein have the meanings commonly understood by one of ordinary skill in the art in the field(s) of the invention, or in the field(s) where the term is used. Although any compositions, methods, articles of manufacture, or other means or materials similar or equivalent to those described herein can be used in the practice of the present invention, certain compositions, methods, articles of manufacture, or other means or materials are described herein.
All patents, patent applications, publications, technical and/or scholarly articles, and other references cited or referred to herein are in their entirety incorporated herein by reference to the extent allowed by law. The discussion of those references is intended merely to summarize the assertions made therein. No admission is made that any such patents, patent applications, publications or references, or any portion thereof, are relevant, material, or prior art. The right to challenge the accuracy and pertinence of any assertion of such patents, patent applications, publications, and other references as relevant, material, or prior art is specifically reserved.
The present methods and compositions are based upon the discovery that specific food components work synergistically to provide health benefits in an animal. Specifically, the present food compositions utilize a ratio of protein to carbohydrate, omega-3 fatty acids, and isoflavones that enhances satiety, preserves lean body mass during weight loss, and provides health benefits as compared to known treatment regimens such as low caloric food compositions, dieting, or the use of costly additives or supplements. However, the use of such treatments can be used in conjunction with the methods and compositions.
In one embodiment, a pet food composition can comprise protein, fat, carbohydrates, omega-3 fatty acids, and isoflavones; wherein the protein to carbohydrate is in a ratio ranging from 3.5:1 to 2.5:1 by weight as fed.
In another embodiment, a method for providing a health benefit in an animal can comprise administering a food composition to the animal, wherein the food composition comprises protein, fat, carbohydrates, omega-3 fatty acids, and isoflavones; wherein the protein to carbohydrate is in a ratio ranging from 3.5:1 to 2.5:1 by weight as fed.
While the present diets generally have high protein and low carbohydrates, the present macronutrient profile is unique, having specific ratios and components that provide unexpected benefits. Notably, the present diets are not ketogenic diets (traditional or modified), i.e., diets that rely on high fat or diets having fat as the predominant component of the diet. Further, the present diet is set apart from general high protein diets as shown in the Examples below. Rather than relying on a single macronutrient component or ratio, the present methods and compositions rely on unique combination of macronutrient ratios and food components that were previously not understood in the art.
Generally, the present compositions comprise a protein. The protein can be crude protein material and may comprise vegetable proteins such as soybean meal, soy protein concentrate, corn gluten meal, wheat gluten, cottonseed, pea protein, canola meal, and peanut meal, or animal proteins such as casein, albumin, and meat protein. Examples of meat protein useful herein include beef, pork, lamb, equine, poultry, fish, and mixtures thereof. The compositions may also optionally comprise other materials such as dried whey and other dairy by-products. In one embodiment, the food compositions can comprise protein in amounts from about 25%, 30%, 35%, 40%, 45%, 50%, or even 55% to about 35%, 40%, 45%, 50%, 55%, or even 60% by weight, including various subranges within these amounts. In one aspect, the protein can be from about 40% to about 60% of the food composition by weight. In another aspect, the protein can be from about 45% to about 55% of the food composition by weight.
Notwithstanding the aforementioned proteins, the present compositions comprise isoflavones. In various embodiments, the isoflavones include at least one of daidzein, 6-O-malonyl daidzein, 6-O-acetyl daidzein, genistein, 6-O-malonyl genistein, 6-O-acetyl genistein, glycitein, 6-O-malonyl glycitein, 6-O-acetyl glycitein, biochanin A, or formononetin. The isoflavones or metabolites thereof can be from soybean (Glycine max) in certain embodiments. Where present, the one or more metabolites preferably include equol. In one embodiment, the food compositions can comprise isoflavones in amounts from about 300 mg, 400 mg, 500 mg, 600 mg, 700 mg, 800 mg, 900 mg, or even 1,000 mg per kg of the food composition to about 500 mg; 600 mg; 700 mg; 800 mg; 900 mg; 1,000 mg; 1,100 mg; 1,200 mg; 1,300 mg; 1,400 mg; or even 1,500 mg per kg of the food composition, including various subranges within these amounts. In one aspect, the isoflavones can be present in an amount from about 100 mg to 1,500 mg per kilogram of the pet food composition. In another aspect, the isoflavones can be present in an amount from about 300 mg to 1,200 mg per kilogram of the pet food composition.
Generally, any type of carbohydrate can be used in the food compositions. Examples of suitable carbohydrates include grains or cereals such as rice, corn, millet, sorghum, alfalfa, barley, soybeans, canola, oats, wheat, rye, triticale and mixtures thereof. In one embodiment, the carbohydrate comprises from about 15% to about 25% of the food composition by weight. In another embodiment, the carbohydrate comprises from about 10% to about 20% of the food compositions by weight. In other aspects, the carbohydrate can be present in amounts from about 5%, 10%, 15%, or even 20%, to about 10%, 15%, 20%, or even 25% by weight.
Generally, the protein and carbohydrates are in ratios that provide a health benefit to the animal. Typically, the ratio of protein to carbohydrate ranges from 3.5:1 to 2.5:1 by weight. In some aspects, the ratio of protein to carbohydrate can range from 3.25:1 to 2.75:1, or even from 3.15:1 to 3:1 by weight.
Generally, the food compositions include fat. Examples of suitable fats include animal fats and vegetable fats. In one aspect, the fat source can be an animal fat source such as tallow, lard, or poultry fat. Vegetable oils such as corn oil, sunflower oil, safflower oil, grape seed oil, soybean oil, olive oil, fish oil and other oils rich in monounsaturated and n-6 and n-3 polyunsaturated fatty acids, may also be used. In one embodiment, the food compositions can comprise fat in amounts from about 15%, 20%, 25%, 30%, 35%, or even 40% to about 20%, 25%, 30%, 35%, 40%, or even 45%, including various subranges within these amounts by weight. In one aspect, the fat comprises from about 20% to about 40% of the food composition by weight. In another aspect, the fat comprises from about 25% to about 35% of the food composition by weight.
Notwithstanding the aforementioned fats, the present compositions comprise omega-3 fatty acids. Non-limiting examples of suitable omega-3 fatty acids include eicosapentaenoic acid (EPA), docosahexaenoic acid (DHA), alpha-linolenic acid (ALA), stearidonic acid (SDA), eicosatrienoic acid (ETE), eicosatetraenoic acid (ETA), heneicosapentaenoic acid (HPA), docosapentaenoic acid (DPA), tetracosapentaenoic acid, tetracosahexaenoic acid (nisinic acid) and mixtures thereof. In one embodiment, the omega-3 fatty acids can range from about 0.1%, 0.2%, 0.5%, 1%, 1.5%, 2%, 2.5%, or even 3% to about 1%, 1.5%, 2%, 2.5%, 3%, 3.5%, 4%, 4.5%, or even 5% of the composition by weight. In some embodiments, the omega-3 fatty acids are present in the food composition in an amount from about 0.1% to about 5% by weight. In some embodiments, the omega-3 fatty acids are present in the food composition in an amount from about 0.5% to about 2.5% by weight.
In addition to the fats and fatty acids discussed herein, the present compositions can comprise omega-6 fatty acids. Non-limiting examples of suitable omega-6 fatty acids include linoleic acid (LA), gamma-linolenic acid (GLA), arachidonic acid (AA, ARA), eicosadienoic acid, calendic acid, dihomo-gamma-linolenic acid (DGLA), docosadienoic acid, adrenic acid, osbond acid, tetracosatetraenoic acid, tetracosapentaenoic acid, and mixtures thereof. In one embodiment, the omega-6 fatty acids can range from about 0.2%, 0.5%, 1%, 2%, or even 3% to about 1%, 2%, 3%, 4%, or even 5% of the composition by weight. In some embodiments, the omega-6 fatty acids are present in the food composition in an amount from about 1% to about 5% by weight. In some embodiments, the omega-6 fatty acids are present in the food composition in an amount from about 1% to about 2% by weight.
The administration can be performed on an as-needed basis, an as-desired basis, a regular basis, or an intermittent basis. In one aspect, the food composition can be administered to the animal on a regular basis. In one aspect, at least weekly administration can be performed. More frequent administration or consumption, such as twice or three times weekly, can be performed in certain embodiments. In one aspect, an administration regimen can comprise at least once daily consumption.
According to the presently described methods, administration, including administration as part of a dietary regimen, can span a period ranging from parturition through the adult life of the animal. In various embodiments, the animal can be a human or a companion animal such as a dog or cat. In certain embodiments, the animal can be a young or growing animal. In other embodiments, administration can begin, for example, on a regular or extended regular basis, when the animal has reached more than about 10%, 20%, 30%, 40%, or 50% of its projected or anticipated lifespan. In some embodiments, the animal can have attained 40%, 45%, or 50% of its anticipated lifespan. In yet other embodiments, the animal can be older, having reached 60%, 66%, 70%, 75%, or 80% of its likely lifespan. A determination of lifespan may be based on actuarial tables, calculations, estimates, or the like, and may consider past, present, and future influences or factors that are known to positively or negatively affect lifespan. Species, gender, size, genetic factors, environmental factors and stressors, present and past health status, past and present nutritional status, and the like may also be taken into consideration when determining lifespan.
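As a worked illustration of these lifespan thresholds (the 12-year projected lifespan below is a hypothetical value, not one taken from this disclosure), the corresponding starting ages can be computed directly:

```python
# Sketch: convert the lifespan percentages discussed above into the ages
# at which administration might begin. The 12-year projected lifespan is
# a hypothetical example value, not taken from this document.

projected_lifespan_years = 12.0

for pct in (10, 20, 30, 40, 50, 60, 66, 70, 75, 80):
    start_age = projected_lifespan_years * pct / 100
    print(f"{pct:>2}% of lifespan -> begin at about {start_age:.1f} years")
```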
Such administration can be performed for a time required to accomplish one or more objectives described herein, e.g., preserving lean body mass in an animal during weight loss. Other administration amounts may be appropriate and can be determined based on the animal's initial weight as well as other variables such as species, gender, breed, age, desired health benefit, etc.
The moisture content for such food compositions varies depending on the nature of the food composition. The food compositions may be dry compositions (e.g., kibble), semi-moist compositions, wet compositions, or any mixture thereof. In one embodiment, the composition can be a pet food composition, and in one aspect, can be a complete and nutritionally balanced pet food. In this embodiment, the pet food may be a “wet food”, “dry food”, or food of “intermediate moisture” content. “Wet food” describes pet food that is typically sold in cans or foil bags and has a moisture content typically in the range of about 70% to about 90%. “Dry food” describes pet food that is of a similar composition to wet food but contains a limited moisture content, typically in the range of about 5% to about 15% or 20% (typically in the form of small biscuit-like kibbles). In one embodiment, the compositions can have a moisture content from about 5% to about 20%. Dry food products include a variety of foods of various moisture contents, such that they are relatively shelf-stable and resistant to microbial or fungal deterioration or contamination. Also, in one aspect, dry food compositions can be extruded food products for either humans or companion animals. In one aspect, the pet food composition can be formulated for a dog. In another aspect, the pet food composition can be formulated for a cat.
The food compositions may also comprise one or more fiber sources. Such fiber sources include fiber that is soluble, insoluble, fermentable, and nonfermentable. Such fibers can be from plant sources such as marine plants, but microbial sources of fiber may also be used. A variety of soluble or insoluble fibers may be utilized, as will be known to those of ordinary skill in the art. The fiber source can be beet pulp (from sugar beet), gum arabic, gum talha, psyllium, rice bran, corn bran, wheat bran, oat bran, carob bean gum, citrus pulp, pectin, fructooligosaccharide, short chain oligofructose, mannanoligofructose, soy fiber, arabinogalactan, galactooligosaccharide, arabinoxylan, cellulose, chicory, or mixtures thereof.
Alternatively, the fiber source can be a fermentable fiber. Fermentable fiber has previously been described to provide a benefit to the immune system of a companion animal. Fermentable fiber or other compositions known to skilled artisans that provide a prebiotic to enhance the growth of probiotics within the intestine may also be incorporated into the composition to aid in the enhancement of the benefits described herein or to the immune system of an animal.
In one embodiment, the food compositions can include a total dietary fiber from about 1% to about 15% by weight. In some aspects, the total dietary fiber can be included in an amount from about 5% to about 15% by weight, or even from about 8% to about 13% by weight. In another embodiment, the food compositions can include crude fiber from about 1% to about 10% by weight. In some aspects, the crude fiber can be included in an amount from about 3% to about 10% by weight, or even from about 3% to about 7% by weight.
In some embodiments, the ash content of the food composition ranges from less than 1% to about 15%. In one aspect, the ash content can be from about 5% to about 10%.
Generally, the food composition can be suitable for consumption by an animal, including humans and companion animals such as dogs and cats, as a meal, component of a meal, a snack, or a treat. Such compositions can include complete foods intended to supply the necessary dietary requirements for an animal. Examples of such food compositions include but are not limited to dry foods, wet foods, drinks, bars, frozen prepared foods, shelf prepared foods, and refrigerated prepared foods.
Food compositions may further comprise one or more substances such as vitamins, minerals, antioxidants, probiotics, prebiotics, salts, and functional additives such as palatants, colorants, emulsifiers, and antimicrobial or other preservatives. Minerals that may be useful in such compositions include, for example, calcium, phosphorous, potassium, sodium, iron, chloride, boron, copper, zinc, magnesium, manganese, iodine, selenium, and the like. Examples of additional vitamins useful herein include such fat-soluble vitamins as A, D, E, and K and water-soluble vitamins including B vitamins, and vitamin C. Inulin, amino acids, enzymes, coenzymes, and the like may be useful to include in various embodiments.
The present methods for increasing satiety can provide a health benefit to the animal. In one embodiment, the health benefit can include preservation of lean body mass, minimization of lean body mass loss during weight loss, reduced body fat, reduced weight, reduced weight gain, reduced insulin resistance, decreased risk of diabetes, decreased risk of prediabetes, lower cholesterol, lower glucose, lower triglycerides, lower insulin, improved insulin sensitivity, lower leptin, prevention of prediabetes, delaying onset of prediabetes, treatment of prediabetes, prevention of diabetes, delaying onset of diabetes, treatment of diabetes, prevention of insulin resistance, delaying onset of insulin resistance, treatment of insulin resistance, prevention of overweight or obesity, delaying onset of overweight or obesity, treatment of overweight or obesity, promoting metabolic health, promoting better blood glucose management, lowering chronic inflammation and proinflammatory cytokines, improving voluntary daytime activity, reducing restlessness at daytime and nighttime, increasing satiety, and combinations thereof.
In various embodiments, the food compositions contain at least one of (1) one or more probiotics; (2) one or more inactivated probiotics; (3) one or more components of inactivated probiotics that promote health benefits similar to or the same as the probiotics, e.g., proteins, lipids, glycoproteins, and the like; (4) one or more prebiotics; and (5) combinations thereof. The probiotics or their components can be integrated into the food compositions (e.g., uniformly or non-uniformly distributed in the compositions) or applied to the food compositions (e.g., topically applied with or without a carrier). Such methods are known to skilled artisans, e.g., U.S. Pat. No. 5,968,569 and related patents.
Typical probiotics include, but are not limited to, probiotic strains selected from Lactobacilli, Bifidobacteria, or Enterococci, e.g., Lactobacillus reuteii, Lactobacillus acidophilus, Lactobacillus animalis, Lactobacillus ruminis, Lactobacillus johnsonii, Lactobacillus casei, Lactobacillus paracasei, Lactobacillus rhamnosus, Lactobacillus fermentum, and Enterococcus faecium, and Bifidobacterium sp. and Enterococcus sp. In some embodiments, the probiotic strain can be selected from the group consisting of Lactobacillus reuteri (NCC2581; CNCM I-2448), Lactobacillus reuteri (NCC2592; CNCM I-2450), Lactobacillus rhamnosus (NCC2583; CNCM I-2449), Lactobacillus reuteri (NCC2603; CNCM I-2451), Lactobacillus reuteri (NCC2613; CNCM I-2452), Lactobacillus acidophilus (NCC2628; CNCM I-2453), Bifidobacterium adolescentis (e.g., NCC2627), Bifidobacterium sp. NCC2657, or Enterococcus faecium SF68 (NCIMB 10415). Generally, the food compositions can contain probiotics in amounts sufficient to supply from about 10^4 to about 10^12 cfu/animal/day, in one aspect, from 10^5 to about 10^11 cfu/animal/day, and in one specific aspect, from 10^7 to 10^10 cfu/animal/day. When the probiotics are killed or inactivated, the amount of killed or inactivated probiotics or their components should produce a similar beneficial effect as the live microorganisms. Many such probiotics and their benefits are known to skilled artisans, e.g., EP1213970B1, EP1143806B1, U.S. Pat. No. 7,189,390, EP1482811B1, EP1296565B1, and U.S. Pat. No. 6,929,793. In one embodiment, the probiotic can be Enterococcus faecium SF68 (NCIMB 10415). In another embodiment, the probiotics can be encapsulated in a carrier using methods and materials known to skilled artisans.
As stated, the food compositions may contain one or more prebiotics, e.g., fructo-oligosaccharides, gluco-oligosaccharides, galacto-oligosaccharides, isomalto-oligosaccharides, xylo-oligosaccharides, soybean oligosaccharides, lactosucrose, lactulose, and isomaltulose. In one embodiment, the prebiotic can be chicory root, chicory root extract, inulin, or combinations thereof. Generally, prebiotics can be administered in amounts sufficient to positively stimulate the healthy microflora in the gut and cause these “good” bacteria to reproduce. Typical amounts range from about one to about 10 grams per serving or from about 5% to about 40% of the recommended daily dietary fiber for an animal. The probiotics and prebiotics can be made part of the composition by any suitable means. Generally, the agents can be mixed with the composition or applied to the surface of the composition, e.g., by sprinkling or spraying. When the agents are part of a kit, the agents can be admixed with other materials or in their own package. Typically, the food composition contains from about 0.1 to about 10% prebiotic, in one aspect, from about 0.3 to about 7%, and in one specific aspect, from about 0.5 to 5%, on a dry matter basis. The prebiotics can be integrated into the compositions using methods known to skilled artisans, e.g., U.S. Pat. No. 5,952,033.
A skilled artisan can determine the appropriate amount of food ingredients, vitamins, minerals, probiotics, prebiotics, antioxidants, or other ingredients to be used to make a particular composition to be administered to a particular animal. Such artisan can consider the animal's species, age, size, weight, health, and the like in determining how best to formulate a particular composition comprising such ingredients. Other factors that may be considered include the desired dosage of each component, the average consumption of specific types of compositions by different animals (e.g., based on species, body weight, activity/energy demands, and the like), and the manufacturing requirements for the composition.
In a further aspect, the present disclosure provides kits suitable for administering food compositions to animals. The kits comprise in separate containers in a single package or in separate containers in a virtual package, as appropriate for the kit component, one or more of (1) one or more ingredients suitable for consumption by an animal; (2) instructions for how to combine the ingredients and other kit components to produce a composition useful for providing a health benefit as described herein; (3) instructions for how to use the food composition to obtain such benefits; (4) one or more probiotics; (5) one or more inactivated probiotics; (6) one or more components of inactivated probiotics that promote health benefits similar to or the same as the probiotics, e.g., proteins, lipids, glycoproteins, and the like; (7) one or more prebiotics; (8) a device for preparing or combining the kit components to produce a composition suitable for administration to an animal; and (9) a device for administering the combined or prepared kit components to an animal. In one embodiment, the kit comprises one or more ingredients suitable for consumption by an animal. In another embodiment, the kit comprises instructions for how to combine the ingredients to produce a composition useful for obtaining a health benefit as described herein.
When the kit comprises a virtual package, the kit is limited to instructions in a virtual environment in combination with one or more physical kit components. The kit contains components in amounts sufficient to obtain a health benefit as described herein. Typically, the kit components can be admixed just prior to consumption by an animal. The kits may contain the kit components in any of various combinations and/or mixtures. In one embodiment, the kit contains a container of food for consumption by an animal. The kit may contain additional items such as a device for mixing ingredients or a device for containing the admixture, e.g., a food bowl. In another embodiment, the food compositions can be mixed with additional nutritional supplements such as vitamins and minerals that promote good health in an animal. The components can each be provided in separate containers in a single package or in mixtures of various components in different packages. In some embodiments, the kits comprise one or more other ingredients suitable for consumption by an animal. In one aspect, such kits can comprise instructions describing how to combine the ingredients to form a food composition for consumption by the animal, generally by mixing the ingredients or by applying optional additives to the other ingredients, e.g., by sprinkling nutritional supplements on a food composition.
In a further aspect, a means for communicating information about or instructions for one or more of (1) using a food composition for obtaining one of the health benefits described herein; (2) contact information for consumers to use if they have a question regarding the methods and compositions described herein; and (3) nutritional information about the food composition can be provided. The communication means can be useful for instructing on the benefits of using the present methods or compositions and communicating the approved methods for administering food compositions to an animal. The means comprises one or more of a physical or electronic document, digital storage media, optical storage media, audio presentation, audiovisual display, or visual display containing the information or instructions. In one aspect, the means can be selected from the group consisting of a displayed website, a visual display kiosk, a brochure, a product label, a package insert, an advertisement, a handout, a public announcement, an audiotape, a videotape, a DVD, a CD-ROM, a computer readable chip, a computer readable card, a computer readable disk, a USB device, a FireWire device, a computer memory, and any combination thereof.
In another aspect, methods for manufacturing a food composition comprising one or more other ingredients suitable for consumption by an animal, e.g., one or more of protein, fat, carbohydrate, fiber, vitamins, minerals, probiotics, prebiotics, and the like, can comprise admixing one or more of the ingredients suitable for consumption by an animal. The composition can be made according to any method suitable in the art.
In another aspect, a package useful for containing compositions described herein can comprise at least one material suitable for containing the food composition and a label affixed to the package containing a word or words, picture, design, acronym, slogan, phrase, or other device, or combination thereof, that indicates that the contents of the package contain the food composition. In some embodiments, the label affixed to the package contains a word or words, picture, design, acronym, slogan, phrase, or other device, or combination thereof, that indicates that the contents of the package contain the food composition with beneficial properties relating to a health benefit described herein. In one aspect, such a device can comprise the words “enhances satiety,” or an equivalent or similar expression, printed on the package. Any package configuration and packaging material suitable for containing the composition can be used herein, e.g., a bag, box, bottle, can, or pouch manufactured from paper, plastic, foil, metal, and the like. In one embodiment, the package contains a food composition adapted for a particular animal such as a human, canine, or feline, as appropriate for the label, in one aspect, a companion animal food composition for dogs or cats. In one embodiment, the package can be a can or pouch comprising a food composition described herein. In various embodiments, the package further comprises at least one window that permits the package contents to be viewed without opening the package. In some embodiments, the window can be a transparent portion of the packaging material. In others, the window can be a missing portion of the packaging material.
EXAMPLES
The invention can be further illustrated by the following examples, although it will be understood that these examples are included merely for purposes of illustration and are not intended to limit the scope of the invention unless otherwise specifically indicated.

Example 1—Cat Study I

Two panels of cats, with 20 cats per panel, were studied to determine the effects of diets on satiety and voluntary food intake in cats. The cats had free access to either the control or the test diet for two days; after a 2-5 day break, the cats were switched to the opposite diet for two more days with free access to the corresponding diets. The number of meals, the time between meals, the time spent on each meal, and the total caloric intake were recorded. The macronutrient breakdown of the diets used is found in Table 1.
TABLE 1

  Test diet:
    Macronutrient    Caloric contribution %    Ratio
    Protein          52                        3.1
    Fat              31                        1.8
    Carbohydrate     17                        1

  Control diet:
    Macronutrient    Caloric contribution %    Ratio
    Protein          38                        1.2
    Fat              31                        1
    Carbohydrate     31                        1
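The “Ratio” column in Table 1 appears to be each macronutrient's caloric contribution divided by the carbohydrate contribution; this is an inference from the numbers rather than a definition stated in the text, but a quick check reproduces the tabulated values to one decimal place:

```python
# Check of Table 1's "Ratio" column: dividing each macronutrient's caloric
# contribution by the carbohydrate contribution appears to reproduce the
# tabulated ratios (to one decimal place).

diets = {
    "Test diet":    {"Protein": 52, "Fat": 31, "Carbohydrate": 17},
    "Control diet": {"Protein": 38, "Fat": 31, "Carbohydrate": 31},
}

for name, macros in diets.items():
    carb = macros["Carbohydrate"]
    for macro, pct in macros.items():
        print(f"{name}: {macro:<12} ratio = {pct / carb:.1f}")
```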
As shown in Tables 2-5, when the cats were fed the test diet, they ate bigger meals and ate at a faster rate (g food/min), but they increased the time between meals and ate fewer meals per day, which led to a significant reduction in voluntary daily caloric intake. These data confirm that the test diet significantly enhanced satiety and reduced voluntary food intake. Further, the increased rate of consumption of the test diet demonstrates that the overall difference in consumption (and the presently claimed benefits) was not due to the test diet having poor palatability.
TABLE 2

  Total Consumption (g)
    Control diet    54.1791
    Test diet       47.2930

TABLE 3

  Avg. Eating Rate (g/min.)
    Control diet    2.5982
    Test diet       2.7776

TABLE 4

  Avg. Cons. per Meal (g)
    Control diet    6.0349
    Test diet       6.5955

TABLE 5

  Total Number of Meals
    Control diet    9.4186
    Test diet       7.4535
As can be seen in Table 5, the test diet significantly reduced the number of meals per day, which is responsible for the reduction in voluntary food intake in the cats. Further, as can be seen in Table 6 below, the test diet resulted in increased time between meals, substantiating that the cats fed the test diet had higher levels of satiety.
TABLE 6

  Avg. Time between Meals (minutes)
    Control diet    94
    Test diet       138
These data confirm that the test diet significantly enhanced satiety, which resulted in the reduction of voluntary food intake in the cats. Reduced voluntary food intake in cats will significantly reduce weight gain, and help cats maintain healthy weight and metabolic health.
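For illustration, the size of these effects can be restated as percent changes computed directly from the means reported in Tables 2-6 (this simply re-expresses the reported data and is not part of the study's own analysis):

```python
# Percent change from the control diet to the test diet, computed from
# the mean values reported in Tables 2-6.

measures = {
    "Total consumption (g)":         (54.1791, 47.2930),
    "Avg. eating rate (g/min)":      (2.5982, 2.7776),
    "Avg. consumption per meal (g)": (6.0349, 6.5955),
    "Total number of meals":         (9.4186, 7.4535),
    "Avg. time between meals (min)": (94, 138),
}

for name, (control, test) in measures.items():
    change = 100 * (test - control) / control
    print(f"{name:<32} {change:+.1f}%")
```

On these numbers, the cats on the test diet ate about 13% less food by weight and about 21% fewer meals per day, while the time between meals rose by roughly 47%.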
Example 2—Cat Study II

Forty-five adult cats were randomized into three groups, with 15 cats per group, based on their baseline maintenance energy requirement (MER), percentage of body fat, body condition score (BCS), and body weight. The groups were fed three different diets with varying ratios of protein to fat to carbohydrates (CHO), as found in Table 7.
TABLE 7

  Macronutrient    Group 1:           Group 2: High Protein,    Group 3: High Protein,
                   Control diet       Moderate CHO              Low CHO
                   %*       Ratio     %*       Ratio            %*       Ratio
  Protein          30.95    1         47.49    2.4              54.23    4.7
  Fat              33.91    1.1       32.66    1.6              34.22    3.0
  Carbohydrate     35.14    1.1       19.85    1                11.55    1

  *Percent of total dietary calories as fed
The cats were fed 25% more than their baseline MERs for a period of 12 months. As shown in Table 8, the average food intake was not significantly different between groups, and in fact, the diet with the highest protein (Group 3) had the highest consumption.
TABLE 8

  Group      Total Consumption (g)    Standard Error
  Group 1    60.2714                  1.9
  Group 2    58.5514                  2.2
  Group 3    61.2767                  2.2
As shown above, the diets of Table 7 provided no satiety benefit. Even high protein diets did not provide a satiety benefit, further showing that the satiety benefit of the test diet of Example 1 was wholly unexpected.
Example 3—Cat Study III

In this cat study, thirty overweight cats were randomized into two groups based on their baseline maintenance energy requirement (MER), body weight, body fat, age, and gender. During the study, all cats were fed 25% less than their baseline MERs. The administered diets are listed in Table 9. The cats were administered each diet for 6 months. Body weight was recorded weekly, and body composition was measured monthly by quantitative magnetic resonance (QMR) technology.
TABLE 9

  Ingredients           Diet I (%)    Diet II (%)
  Protein               53.7          34.6
  Carbohydrate (CHO)    12.1          33.5
  Fat                   15.4          13.6
  Fiber                 4.26          3.74
  Protein:CHO           4.44:1        1:1
At the end of the 6-month study, cats fed Diet I had lost more body weight than the cats fed Diet II (645.54 g vs. 513.50 g); however, both groups of cats lost lean body mass, as shown in Table 10.
TABLE 10

                      Average Lean Loss    Average Fat Loss
                      (grams)              (% change from baseline)
  Cats Fed Diet II    151 g                3.58%
  Cats Fed Diet I     105.9 g              7.60%
Example 4—Dog Study I

In this study, 30 overweight dogs were randomized into two groups, with 15 dogs per group, based on their baseline maintenance energy requirement (MER), body weight, % body fat, and gender. The dogs in both the control and test groups were fed 75% of their baseline MERs during the first 4 months of the weight loss study and then 60% of their baseline MERs during the last 2 months of the weight loss study. Body composition was determined with a DEXA machine. The diets are shown in Table 11.
TABLE 11

  Components                   Test diet (wt %)    Control diet (wt %)
  Moisture                     8.07                8.09
  Protein                      48.70               26.47
  Starch                       15.65               31.60
  Fat                          10.1                14.73
  Crude fiber                  5.00                11.40
  Total dietary fiber          12.93               19.70
  Ash                          5.94                5.19
  n-3 PUFAs*                   1.2166              0.08902
  n-6 PUFAs**                  1.53587             1.62164
  Total Isoflavones (mg/kg)    965.33              138.67

  *Omega-3 Polyunsaturated fatty acids
  **Omega-6 Polyunsaturated fatty acids
There was no significant difference in lean body mass between baseline and any of the three time points (2 months, 4 months, and 6 months) in dogs fed the test diet. By contrast, dogs fed the control diet lost a significant amount of lean body mass at all three time points compared with baseline, as shown in Table 12.
TABLE 12

                 Lean Body Mass - Control diet                    Lean Body Mass - Test diet
  Time Period    Initial (kg)   Final (kg)   Difference (kg)     Initial (kg)   Final (kg)   Difference (kg)
  2 months       20.29          19.60        −0.69               20.09          20.32        0.24
  4 months       20.29          19.75        −0.54               20.09          20.08        0.05
  6 months       20.29          19.54        −0.75               20.09          19.88        −0.15
Dogs in both groups lost a significant amount of body fat compared with baseline. However, dogs fed the test diet lost more body fat than the control dogs (5.93 kg vs. 4.98 kg) at the end of the 6-month weight loss study, as shown in Table 13.
TABLE 13

                 Body Fat - Control diet                          Body Fat - Test diet
  Time Period    Initial (kg)   Final (kg)   Difference (kg)     Initial (kg)   Final (kg)   Difference (kg)
  2 months       13.35          12.00        −1.35               13.38          11.31        −2.07
  4 months       13.35          10.47        −2.88               13.38          9.60         −3.68
  6 months       13.35          8.37         −4.98               13.38          7.35         −5.93
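As an illustrative summary of Tables 12 and 13, using the reported “Difference” columns (which reflect the source's own rounding), one can ask what fraction of each group's total 6-month tissue loss came from fat rather than lean mass:

```python
# Fraction of the 6-month tissue loss that came from fat, using the
# "Difference" columns reported in Tables 12 and 13 (kg).

reported = {
    "Control diet": {"lean": -0.75, "fat": -4.98},
    "Test diet":    {"lean": -0.15, "fat": -5.93},
}

for diet, d in reported.items():
    total_loss = abs(d["lean"]) + abs(d["fat"])
    fat_share = 100 * abs(d["fat"]) / total_loss
    print(f"{diet}: {total_loss:.2f} kg lost, {fat_share:.0f}% of it fat")
```

On the reported figures, roughly 87% of the control group's loss was fat, versus about 98% for the test group.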
Dogs fed the control diet increased their daytime spontaneous activity more than the dogs fed the test diet, compared with their baseline spontaneous daytime activity, even though dogs in both groups were fed 25% less than their baseline maintenance energy requirements (MERs). More strikingly, dogs fed the control diet increased their nighttime spontaneous activity compared with their baseline spontaneous nighttime activity, indicating that the control dogs were more restless at night. By contrast, dogs fed the test diet lowered their spontaneous nighttime activity compared with their baseline spontaneous nighttime activity, indicating that the dogs on the test diet were even less restless during weight loss than at baseline, when they were fed 100% of their MERs without any caloric deficit. These data, shown in Table 14, indicate that the test diet reduces restlessness in dogs compared with the control diet during weight loss.
TABLE 14

  Diet Groups    % change over baseline    % change over baseline
                 in daytime activity       in nighttime activity
  Control        54.94                     9%
  Test           32.73                     −11.61
Example 5—Dog Study II

The objective of this study was to investigate whether soy isoflavones alone, or a combination of soy isoflavones, conjugated linoleic acid (CLA), and carnitine, can promote fat loss and preserve lean body mass in overweight dogs.
The control diet was formulated based on a low-calorie weight loss formula. The isoflavone diet was the control diet supplemented with 10% soy germ meal. The cocktail diet was the control diet supplemented with 10% soy germ meal, 1.5% conjugated linoleic acid (CLA), and 100 ppm L-carnitine. All three diets had comparable levels of protein, fat, fiber, and carbohydrate.
The diets are shown in Table 15 as follows: Ration 1: a traditional weight loss control diet (metabolizable energy=1338.8 kcal/lb). Ration 2: Isoflavone diet (metabolizable energy=1346.3 kcal/lb): the control diet containing 10% soy germ meal (SGM containing 6500 to 8400 mg/kg isoflavones). Ration 3: Cocktail diet (metabolizable energy=1309.8 kcal/lb): the control diet containing 10% SGM, CLA (1.5%), and L-carnitine (100 ppm).
TABLE 15

  Ration    ME* (kcal/lb)    Protein (wt %)    Fat (wt %)    Carbohydrate (wt %)    Crude Fiber (wt %)    Total isoflavone (Aglycone units) (mg/kg)
  1         1338.8           26.3              6.78          44.9                   6.69                  30-80
  2         1346.3           26.8              6.13          44.4                   7.26                  680-950
  3         1309.8           27.0              7.57          42.2                   6.75                  660-930

  *ME = metabolizable energy
Overweight dogs with more than 22% body fat (male dogs) and 26% body fat (female dogs) were randomized into three groups and fed 70% of their MER during the first 3 months of weight loss. A DEXA scan was performed on each dog three months and six months after the initiation of the study. Dogs that failed to reach their ideal body fat levels after the first 3 months of weight loss were fed 55% of their MER during the second 3 months of weight loss.
Changes in body fat and lean body mass after 3 and 6 months of weight loss are summarized in Table 16 (mean values). At both the 3-month mark and the 6-month mark, the isoflavone diet did not prevent the loss of lean body mass. Even when supplemented with other actives, the cocktail diet did not preserve lean body mass at the end of the trial (the 6-month mark).
TABLE 16

                                 Test diets     3-month     6-month
  Change in Lean Tissue (g)      Control        −399.5      −578.3
  from baseline                  Isoflavones    −173        −159.8
                                 Cocktail       +267        −283
  Change in body fat (g)         Control        −4385.9     −7722.6
  from baseline                  Isoflavones    −3889.3     −7097.1
                                 Cocktail       −5158.2     −9198.3
Other Studies

In addition to the above, other published works have demonstrated that omega-3 fatty acids alone do not preserve lean body mass, and that protein-to-carbohydrate ratios alone do not preserve lean body mass.
Diez et al. (Diez, M., Nguyen, P., Jeusette, I., Devois, C., Istasse, L. & Biourge, V., “Weight loss in obese dogs: evaluation of a high-protein, low-carbohydrate diet” J. Nutr. 132: 1685S-1687S (2002)) reported that obese dogs fed a high protein (47.5%), low starch (5.3%) diet with a protein-to-starch ratio of 9:1 had 20% of their weight loss come from lean body mass. In addition, dogs fed a diet with a 2.4:1 protein-to-carbohydrate ratio lost a significant amount of lean body mass after 16 weeks of weight loss (A. Andre, I. Leriche, G. Chaix, C. Thorin, M. Burger, P. Nguyen, “Recovery of insulin sensitivity and optimal body composition after rapid weight loss in obese dogs fed a high-protein medium-carbohydrate diet”, J. of Animal Physiology and Animal Nutrition 2017, 101:21-30). Bender et al. (N. Bender, M. Portmann, Z. Heg, K. Hofmann, M. Zwahlen, M. Egger, “Fish or n3-PUFA intake and body composition: a systematic review and meta-analysis” Obesity Reviews 2014, 15: 657-665) reported that including fish or fish oil in weight loss diets did not result in a significant difference in either fat mass or lean body mass after weight loss in people, compared with control diets.
The protein level of the present test diet (48.7%) was similar to that of Diez's test diet, but the test diet contained more starch (14.2%) as well as isoflavones and omega-3 fatty acids. To the inventors' surprise, these nutrients unexpectedly worked synergistically to promote fat mobilization and preserve lean body mass during the weight loss study in dogs, leading to the unexpected total prevention of loss in lean body mass even after a 40% reduction in caloric intake during the 6-month weight loss.
In the specification, there have been disclosed certain embodiments of the invention. Although specific terms are employed, they are used in a generic and descriptive sense only and not for purposes of limitation. The scope of the invention is set forth in the claims. Obviously, many modifications and variations of the invention are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the invention may be practiced otherwise than as specifically described.
Have you ever thought that the positioning of your bed in your room can be a matter of concern?
If not yet, you should start thinking about it, as it is related to your mental health, which affects your body as well.
Now, this raises the question, which way should your bed face?
Well, before you get to the answer, you first need to know about the famous Chinese philosophy on this matter, named “Feng Shui.”

In Chinese, feng means wind and shui means water. Feng shui is a practice of living in harmony with our surrounding environment, in accordance with the principles of nature.

The important principles of feng shui are the commanding position, the five elements, and the bagua. Facing the bed the right way falls under feng shui, which discusses the effects of bed positioning, and that is what I will emphasize in this article.
Applying Feng Shui Principles
For favorable positioning of your bed, feng shui uses certain principles. Using common sense and observing the condition of the particular room are the first important things to consider when you start applying the principles.

You might also consider the four directions (east, west, south, and north) while deciding on the right position.

Along with the commanding position, placement in relation to doors, a headboard against a solid wall, location away from the toilet wall, and space on both sides of the bed are some of the key factors that should be maintained well.

If it already sounds tempting to learn what direction your bed should face for good feng shui, the following information will give you a clear understanding of it.
Commanding Position
The first principle of feng shui is the commanding position. Your bed should be located in a position where you can easily see the door.
That doesn't mean you need to be exactly in line with the door. It is advised to place the bed diagonally from your door so that you can see the door clearly without being in line with it.

It might seem difficult for some people to position the bed this way, as the layout of the room and the door may not allow it.

In that case, it can be corrected by placing a mirror. The mirror will help you see the reflection of the door when you are lying on the bed.
Moreover, the distance between your bed and the door is also important in feng shui.
The greater the distance, the better it is for you. If your door is to the left, the best position for the bed would be the far right corner of the room. If it is to the right, the best location would be the far left corner of the room.
Opening of the Doors
You won't like it if any of the doors of your room opens up directly onto any part of your bed.

If the door opens up facing your head directly, then over time headaches might become a common problem for you. And if it opens up in line with your legs and feet, you may gradually face foot-related problems.

To avoid such problems, you can use faceted crystal balls, according to feng shui. A crystal ball can be placed halfway between the bed and the door.
Headboard against a Strong Wall
You have to consider the wall against which you place the bed. It should be sound, strong, and secure to protect you from behind.

Some people set their bed in the middle of the room without a solid wall backing it up, but that is the wrong choice. It is also better to avoid windows behind the headboard if possible.

If you live on the ground floor of a building or have eye issues, windows just behind the bed might bother you a lot. Since avoiding them is not possible for some people because of the structure of the room, having a solid wall behind the headboard is important at the very least.
Avoiding Electronics and Toilet Wall
Make sure you don't put your bed just behind the toilet.

No one would want that anyway, as common sense argues against it: it gives a kind of negative vibe, and sound coming from the other side at night might bother you.

So, it's better if you can avoid the toilet wall when positioning your bed.

But if somehow that's not possible, then you had better put a mirror on that wall, which might erase the sense of the toilet being next to it. Another important thing to consider is the electronics around the bed.

You won't have a great sleep if electronics are humming over your head. It is also dangerous, as accidents can happen at any time.
Space on Both Sides of the Bed
For elderly people, it is suggested to leave enough space on the left and right sides of the bed.

It will not be a wise decision to push the bed against one side.

Enough space on both sides helps partners move from the bed to other places easily. It also gives you a balance of yin and yang, masculine and feminine.
Choosing the Right Direction
Besides these principles, feng shui also describes the effects of placing the bed in different directions, which you should keep in mind before you finalize the position.

You might have this question in mind: what direction should your bed face for comfortable sleep?
To answer this, feng shui comes with certain explanations.
If you face the bed to the west, it might help you have a good night's sleep. But it can also bring low motivation and laziness. This position is not for those who have just started their career.

For young starters, an east-facing bed is a more preferable choice than the other directions. According to feng shui, it will give you the feeling that every day is a new day.

It is said that you will have feelings of growth and ambition if you face the bed towards the east.

For an active life, better communication, and enhanced creativity, a south-east positioning of your bed is advised according to feng shui. If you are living a restless life, facing the bed towards the south-west can make you calm and peaceful.

North is commonly known as the death position in bed placement. It can create sleep disorders, insomnia, lethargy, and more.

But for older people, this position is suggested, as it has some good effects, like calming the self with tranquility. South is never suggested by feng shui as a suitable direction for a bed.
Why Should You Face Your Bed in a Certain Direction?
Feng shui uses the energy of people, space, and surrounding things to help positive energy flow.

So, according to feng shui, you need to know how your home should be arranged. And when you think about it, the first object that comes to mind is the bed in your room, which is your comfort zone.

Some people might have doubts about the philosophy of feng shui, but no one can deny the importance of a good night's satisfying sleep. And if the positioning of the bed is wrong, it might cause sleeping problems that will affect your health as well.
So, you should know the right way to place your bed in favor of your health.
Not just feng shui: quantum physics also talks about the relationship between matter and energy.

From a human being like you to the bed in your room, everything is made up of energy, and thus each object is influenced by the wave energy of the things around it.

If you are going through mental and physical health issues, one of the reasons behind them might be a sleep disorder. And facing the bed the wrong way causes sleep disorders in many cases.
So, irrespective of what you prioritize, you can’t deny the effects of bed positioning in your room. That’s why it is important.
Feng shui tells you how human life can reach an ideal state by connecting with and living in the flow of the environment around us.

Before you decide which way to face the bed, make sure some important factors are in check. It is better if the bed is newly bought, not used before by someone else. A solid headboard made of wood is a preferable choice to protect you.
Make sure you clear the clutter and dust balls beneath the bed as well.
Conclusion
At the end of the day, you want comfort, ease, and peace when you come back home after a long, tiring day.

The perfect placement of your bed can provide a sense of relaxation. Your relationship with your bed is no less important than any other relationship you have in life. So, bed positioning is a matter of concern.
Which way should your bed face?
Hopefully, the whole discussion was helpful enough for you to answer this particular question.
After analyzing the feng shui principles, your room's structure, and common sense, you will have a better idea of the right way and direction to face your bed.
| https://thatgardenguru.com/which-way-should-your-bed-face/
Did you know that most nutrient absorption occurs in the small intestine?
And, in fact, here is a very specific stat.
Most of the nutrient transport occurs in the small intestine, whereas the colon is primarily responsible for water and electrolyte transport. (source)
So, why does this matter for you and why am I even talking about it?
Most Nutrient Absorption Occurs in the Small Intestine
The quick and simple reason is this: many of us have problems in and with the small intestine.
These problems then lead to other problems like weight gain (or loss), fatigue, acne, low immune system, and more.
We wonder, “…..but why?”
Well, if most nutrients are absorbed in the small intestine and there is something wrong with the small intestine, then we aren't absorbing all (any?) nutrients, and therefore a downward spiral of events occurs.
Okay, so let’s break this all down more.
The Small Intestine
The small intestine (also referred to as the small bowel) is the specialized tubular organ between the stomach and the large intestine (also called the colon or large bowel).

The small intestine is, in fact, much longer than the large intestine. At approximately 22 feet, it is far longer than the large intestine (at just 6 feet).

These 22-ish feet break down into three parts: the duodenum, the jejunum, and the ileum.
Duodenum
The beginning portion of the small intestine (the duodenum) begins at the exit of the stomach (pylorus) and curves around the pancreas to end in the region of the left upper part of the abdominal cavity where it joins the jejunum.
- receives partially digested food from the stomach
- the shortest segment of the intestine and is about 23 to 28 cm (9 to 11 inches) long
- Hormones are released here to signal that food is present.
- The mucous lining of the last two segments of the duodenum begins the absorption of nutrients, in particular iron and calcium. (source)
- Inflammation of the duodenum is known as duodenitis. Conditions associated with duodenitis include:
- celiac disease
- Crohn’s disease
- Whipple disease
- H. Pylori infection
- peptic ulcers
Jejunum
The jejunum makes up the middle section of the small intestine (a little less than half of the remaining length).
- many blood vessels, which give it its deep red color
- nerves trigger, muscle work increases, and the food churns back and forth, mixing with digestive juices
- one of the main tasks of the jejunum is the absorption of lipophilic nutrients (proteins, fats, cholesterol, and the fat-soluble vitamins A, D, E, and K)
Ileum
The ileum is the longest section of the small intestine.

- the walls begin to thin and narrow in the ileum
- the blood supply is reduced
- food spends the most time here
- most of the water and nutrients are absorbed in the ileum
- “Small collections of lymphatic tissue (Peyer patches) are embedded in the ileal wall, and specific receptors for bile salts and vitamin B12 are contained exclusively in its lining; about 95 percent of the conjugated bile salts in the intestinal contents is absorbed by the ileum.” (source)
Note: We have talked about the ileocecal valve before. The ileocecal valve separates the ileum from the large intestine.
Mucosa
There is another piece of the small intestine that's crucial to understand: the layers of the small intestine wall.

There are four, and they include the outermost serosa, the muscularis, the submucosa, and the innermost mucosa.
Mucosa (Innermost layer)
Contains the epithelium, lamina propria and muscularis mucosae.
- Epithelium: the epithelium is also very important to highlight, as it lines the luminal (lining) surface. There are a number of components to the epithelium:
- Enterocytes – They have an absorptive function. They contain brush border enzymes on the surface which have an important digestive function.
- Goblet cells – Exocrine glands which secrete mucin.
- Crypts of Lieberkuhn
Submucosa
Connective tissue layer, which contains blood vessels, lymphatics and the submucosal plexus.
Muscularis externa
Consists of two smooth muscle layers; the outer longitudinal layer and inner circular layer. The myenteric plexus lies between them.
Adventitia/ Serosa (Outermost layer)
Composed of loosely arranged fibroblasts and collagen, with the vessels and nerves passing through it. The majority of the small intestine's adventitia is covered by mesothelium and is commonly called the serosa.
The mucosa and submucosa form large numbers of folds arranged in a circular fashion in the lumen.
Additionally, these folds contain microvilli to further increase the surface area, which increases absorption.
Sources: HERE.
Small Intestinal Brush Border Enzymes
There’s one more point of focus worth mentioning, and that focus includes the brush border enzymes.
First, what is a brush border enzyme?
The enzymes responsible for this terminal stage of digestion are not free in the intestinal lumen, but rather, tethered as integral membrane proteins in the plasma membrane of the enterocyte. The apical plasma membrane housing these enzymes is composed of numerous microvilli which extend from the cell and constitute the “brush border”. Hence, the enzymes embedded in those microvilli are referred to as brush border enzymes. (source)
In simplistic terms, the brush border is a chemical barrier through which food must pass to be absorbed.
And the focus I want to emphasize today is their correlation to carbohydrates.
Carbohydrate Breakdown
When it comes to the small intestine, digestion, and absorption, I want to quickly illustrate why you might be having a hard time breaking certain carbohydrates down (and the correlation to brush border enzymes).
In the small intestine, brush border enzymes take over the final stage of carbohydrate digestion; carbohydrates that pass undigested into the large intestine are then digested by intestinal bacteria.

The most important brush border enzymes include dextrinase and glucoamylase, which further break down oligosaccharides. (If you're curious about oligosaccharides, I have written about them HERE.)

In addition to dextrinase and glucoamylase, three other brush border enzymes are maltase, sucrase, and lactase.
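Since maltase, sucrase, and lactase come up again below, here is a small lookup table of the sugar each of these disaccharidases splits and the monosaccharides produced; this is standard biochemistry rather than anything specific to this post:

```python
# The three disaccharidase brush border enzymes mentioned above, with the
# sugar each one splits and the monosaccharides produced. (Standard
# biochemistry; dextrinase and glucoamylase act on longer oligosaccharide
# fragments instead.)

brush_border_disaccharidases = {
    "maltase": {"substrate": "maltose", "products": ("glucose", "glucose")},
    "sucrase": {"substrate": "sucrose", "products": ("glucose", "fructose")},
    "lactase": {"substrate": "lactose", "products": ("glucose", "galactose")},
}

for enzyme, info in brush_border_disaccharidases.items():
    a, b = info["products"]
    print(f"{enzyme}: {info['substrate']} -> {a} + {b}")
```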
Alright, now you’re starting to recognize some terms we talk about frequently, right?
So, remember what I said in my huge post on Lactose:
The enzyme lactase breaks the sugar lactose into two compounds. However, lactase is absent in most adult humans, and in them lactose is therefore not digested in the small intestine.
Lactose is not digested until it reaches the small intestine, where the hydrolytic enzyme lactase is located. Lactase (β-galactosidase) is a membrane-bound enzyme located in the brush border epithelial cells of the small intestine. Lactase catalyzes the hydrolysis of lactose into its constituent monosaccharides. SOURCE
Because of all of the above facts and information surrounding the brush border enzymes and digestion, many of us need to supplement with digestive enzymes.
They are a daily supplement for me because I thoroughly understand all of the above.
Major Digestive Enzymes Chart
Furthermore, check out this incredible Major Digestive Enzymes chart via Lumen Learning Courses.
So many enzymes are both produced and released in the small intestine.
A digestive enzyme breaks food down so that it can be fully absorbed, which occurs in the small intestine.
Sources: HERE
What is the MAIN Function of the Small Intestine?
In case this hasn’t been crystal clear yet, let me review this with you one last time.
Functionally, the small intestine is chiefly involved in the digestion and absorption of nutrients.
In case you want more proof for this small-intestine pudding, you can go back through this post and look at all the times I have bolded words about absorption. And by the way, in each of those places I have linked to the sources from which the facts were derived.
Note: one other main function of the small intestine is the production of GI hormones. If you're interested in learning more about GI hormones, I've already written about them HERE.
Sources: HERE, HERE, HERE, and HERE.
Why does this matter?
Well, return to the beginning of this post.
The conversation began around problems like weight gain (or loss), fatigue, acne, low immune system, and more that you might be experiencing.
We can eat and eat and eat, but if we are not absorbing due to a problem with the small intestine, then….
- Those problems will continue occurring and
- You’ll need to uncover what is going on with your small intestine and then ultimately how you can heal it
And if you already know that SIBO is your reason, then you might want to start with Reasonable SIBO.
Any questions? Leave them in the comments below.
If you liked this post, you might also enjoy:
Xox,
SKH
You will heal. I will help. | https://agutsygirl.com/2022/03/15/most-nutrient-absorption-occurs-in-the/ |
When planning to explore a city like San Antonio for supernatural encounters, it is better to first learn about the location and its history. Since we are a part of this industry and run the best ghost tour in San Antonio, we love to share some valuable information with our potential patrons.

Therefore, we will shed some light on two topics in this blog post. The first is the history of San Antonio, and the second is a set of guidelines for explorers visiting such sights for the first time.
A Brief History of San Antonio’s Hauntings
San Antonio is a city with a long and storied history. It was founded in 1718 by Spanish missionaries and settlers, and it quickly became a vital hub for trade and commerce in the American Southwest. In 1836, it was the location of the Battle of the Alamo, which helped secure Texas’ independence from Mexico. Today, it is one of the most populous cities in Texas and a major tourist destination.
San Antonio is also a city with a long history of hauntings. There are numerous stories and legends about ghosts and hauntings in the town, dating back to its founding. Many of these stories center on the city’s most famous landmarks, such as the Alamo and the San Fernando Cathedral. We’ll look at some of the most famous hauntings in San Antonio and try to separate fact from fiction.
The Alamo:
The Alamo is the most famous landmark in San Antonio and one of the city's most haunted places. There are numerous stories about ghosts haunting the grounds of the Alamo, many of them the ghosts of men killed during the Battle of the Alamo, whose spirits are said to roam the grounds where they fell.
In addition to these individual ghosts, there have also been reports of strange lights and sounds inside the Alamo church. Some say that these are manifestations of the battle that took place there; others believe that they are residual energy from all the violence that has taken place on that site over the years. Whatever the case, there is no denying that the Alamo is a place with a lot of history—and many ghost stories!
San Fernando Cathedral
San Fernando Cathedral is another popular tourist destination in San Antonio, and it is said to be haunted by many ghosts. One of these phantoms is said to be that of Father Miguel Hidalgo y Costilla, a Mexican priest who played a pivotal role in Mexico's War of Independence.
Father Hidalgo was excommunicated from the Catholic Church shortly before his death. His ghost is said to haunt San Fernando Cathedral because he was not allowed to be buried there.
Another ghost said to haunt San Fernando Cathedral is that of Fray Damian Massanet, one of the founders of Mission San Antonio de Valero (the Alamo). Fray Damian died in 1714, but his ghost is said to still wander around inside San Fernando Cathedral because he loved it so much in life.
There are also reports of strange noises coming from inside the cathedral late at night and sightings of unexplained lights and shadows. Whether or not these reports are accurate remains to be seen, but one thing is for sure: San Fernando Cathedral has an eerie feeling to it!
San Antonio is a city with a long history—and a long history of hauntings! From The Alamo to San Fernando Cathedral, there are numerous stories about ghosts haunting some of the city’s most famous landmarks. Whether or not you believe in ghosts, there’s no denying that these stories add an extra layer of intrigue to an already fascinating city.
Ghost Tours: A Beginner’s Guide to Spooky Sightseeing
For those who love spookiness, a ghost tour is the perfect way to get your fill of chills and thrills. But with so many different tours, it can be hard to figure out where to start. That's why we've put together this beginner's guide to ghost tours, complete with the necessary info you need to make the most of your spooky sightseeing experience.
What Is a Ghost Tour, and What Is the Role of the Experts?
A ghost tour is a guided walking tour through a haunted area. During the tour, you'll hear stories about the ghosts that are said to haunt the place, as well as any other supernatural activity that has been reported. Many tours focus on historical sites that are said to be haunted, such as battlefields, cemeteries, or mansions.

Most ghost tours are led by professional guides who are well-versed in the history of the area and the stories of its resident ghosts. Self-guided tours, by contrast, typically come with a map and suggestions for where to go and what to look for; these DIY tours are available for those who prefer to explore independently.
What Should I Expect on a Ghost Tour?
When you take a ghost tour, you can expect to do a lot of walking and listening. The pace of the tour will vary depending on the size of the group and the age of the participants; however, most tours last between 1 and 2 hours.
Many of the tales told on ghost tours are based on actual events, which means they can sometimes be pretty graphic. You should also expect to hear some disturbing stories. If you’re easily offended or have young children with you, you must ask about the tour’s content before you book it.
Finally, don't be surprised if you don't see any ghosts during your tour! While some people claim to have seen apparitions while on a ghost tour, it's pretty rare. Most people simply enjoy hearing spooky stories and exploring haunted places.
TIPS FOR TAKING A GHOST TOUR
If you’re thinking about taking a ghost tour, there are certain things you can do to make sure you have a good experience:
Wear comfortable shoes:
You’ll be doing a lot of walking on most ghost tours, so wearing shoes that won’t rub or give you blisters is essential.
Dress for the weather:
Ghost tours typically take place rain or shine, so dress appropriately for the conditions outside.
Bring along bug spray:
If the tour takes place in an outdoor area that is known for mosquitoes or other pests, bring along bug spray to keep them at bay.
Arrive early:
Most ghost tours require advance reservations, but it’s always best to arrive 10-15 minutes early just in case paperwork needs to be filled out before the excursion begins.
A ghost tour is an excellent way to get your fill of all things spooky! With so many different tours available, there's sure to be one that's perfect for you. Remember to wear comfortable shoes, dress for the weather, bring along bug spray if necessary, and arrive early so you don't miss anything!
However, the goal is to help you live your dreams of visiting San Antonio haunted hotel and help you realize the difference between the practical world and the world of spirits.
If, after reading these topics, you are excited and prepared to go on such an excursion, then Alamo City Ghost Tours is your destination. Whether you want to explore the haunted areas or wish to go ghost hunting, they can assist you in the best possible way. So, allow us to help you tour a haunted city like Alamo City. | https://alamocityghosttours.com/2022/11/11/ghost-tours-a-beginners-guide-best-ghost-tour-in-san-antonio/
[…] Sometimes, in order to bring a woman closer to this nature (Life/Death/Life), I ask her to keep a garden. Let this be a psychic one or one with mud, dirt, green, and all the things that surround and help and assail. Let it represent the wild psyche. The garden is a concrete connection to life and death. You could even say there is a religion of garden, for it teaches profound psychological and spiritual lessons. Whatever can happen to a garden can happen to soul and psyche – too much water, too little water, infestations, heat, storm, flood, invasion, miracles, dying back, coming back, boon, healing, blossoming, bounty, beauty.
During the life of the garden, women keep a diary, recording the signs of life-giving and life-taking. Each entry cooks up a psychic soup. In the garden we practice letting thoughts, ideas, preferences, desires, even loves, both live and die. We plant, we pull, we bury. We dry seed, sow it, moisten it, support it, harvest.
The garden is a meditation practice, that of seeing when it is time for something to die. In the garden one can see the time coming for both fruition and for dying back. In the garden one is moving with rather than against the inhalations and exhalations of greater wild Nature.
Through this meditation, we acknowledge that the Life/Death/Life cycle is a natural one. Both life-giving and death-dealing natures are waiting to be befriended, forever loved. In this process, we become like the cyclical wild. We have the ability to infuse energy and strengthen life, and to stand out of the way of what dies. […] – page 105
| https://travellers-to-the-east.org/2015/05/03/women-who-run-with-the-wolves-about-the-garden-kurtlarla-kosan-kadinlar-bahce-hakkinda/
Rows of concrete fins shade the large windows of this nursing home near the Spanish city of Valladolid, which features rooms clustered around a landscaped central courtyard.
Local architect Óscar Miguel Ares Álvares designed the facility for the Spanish village of Aldeamayor de San Martin. Its low-lying profile is informed by its position at the border of an arid plain and a flat landscape of salinated wetlands.
The building seeks to provide a sense of connection with the natural surroundings, while offering its elderly occupants a sheltered environment with a strong feeling of internal community.
White concrete facades that rise up from the dry ground are clad with vertical fins that cast rhythmical patterns of shadow in the strong Spanish sun.
"The exterior is abstract and hard, like the environment," said Ares Álvares. "A seemingly insurmountable barrier, a shell to protect the interior that becomes kind, warm and complex."
One of the building's solid elevations is interrupted by a recessed space surrounded by glazing, where the vertical fins function as louvres to protect the interiors from direct sunlight.
A section of one corner is removed to create a sheltered entrance by the reception area. Corridors extending from either side of the reception follow the outer edge of the building and connect the inhabitants' rooms.
Rooms are arranged in clusters that extend around and into the courtyard at the heart of the care centre. Their staggered grouping and angled roofs emphasise the individuality of each unit.
Spaces between the groups of rooms accommodate informal seating areas where occupants can meet and chat. These are intended to replicate the local practice of bringing seats out onto the street for neighbourly catch ups.
"The perimeter corridor becomes a place rich in nuances and spaces in the manner of a small town where people can speak in front of the door of their room-houses, fleeing the classic configuration of such centres more close to lugubrious hospitals than to kind and welcoming buildings," said the architect.
Each room has a window looking onto the landscaped central area, while full-height glazed surfaces fill the corridors with daylight and doors lead out onto pathways that traverse the courtyard.
Interspersed among the living units are communal facilities including activity and fitness rooms, a medical consultation space and a large hall.
The hall incorporates a window that looks onto the courtyard and is fronted with the same concrete fins found on the exterior. A clerestory window also ensures plenty of natural light enters the space.
Exposed concrete blockwork, white-painted bricks, timber flooring and vertical wooden strips fixed to the walls create a neutral material palette, which accentuates the brightness of the interior spaces.
"The whole work has been governed by the use of simple and cost-effective materials, without fanfare," suggested Ares Álvares. "Geometry, spatiality, light and careful treatment colour and textures to get a warm and cozy interior protected by an abstract and rhythmic limit to the exterior." | https://www.dezeen.com/2016/10/31/aldeamayor-de-san-martin-care-home-elderly-residential-architecture-concrete-courtyards-oscar-miguel-ares-alvares-spain/ |
Introduction Non-adherence to antipsychotic medications among individuals with serious mental illness increases the risk of relapse and hospitalisation. Real-time monitoring of adherence would allow for early intervention. AI2 is both a personal nudging system and a clinical decision support tool that applies machine learning to Medicare prescription and benefits data to raise alerts when patients have discontinued antipsychotic medications without supervision, or when essential routine health checks have not been performed.
Methods and analysis We outline two intervention models using AI2. In the first use case, the personal nudging system, patients receive text messages when AI2 detects an alert for a missed medication or routine health check. In the second use case, the clinical decision support tool, AI2-generated alerts are presented as flags on a dashboard for community mental health professionals. Implementation protocols for different scenarios of AI2, along with a mixed-methods evaluation, are planned to identify pragmatic issues necessary to inform a larger randomised controlled trial, as well as to improve the application.
Ethics and dissemination This study protocol has been approved by The Southern Adelaide Clinical Human Research Ethics Committee. The dissemination of this trial's findings will serve to inform further implementation of AI2 into daily personal and clinical practice.
- healthcare
- record systems
- BMJ Health Informatics
- patient care
- medical informatics
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Background
Schizophrenia and other serious mental illnesses (SMI) such as bipolar disorder and schizoaffective disorder, are fast emerging as one of the world’s most important health problems, with worldwide prevalence rates ranging from 0.8% to 6.8%.1 The Global Burden of Diseases, Injuries and Risk Factors Study 2010 revealed that SMIs were among the top 10 causes of disability, directly accounting for more than 7.4% of disease burden worldwide.2 In Australia, it is estimated that 2%–3% of the population (~600 000 people) are living with an SMI3 and facing numerous barriers to accessing and using health services.4 Treatment and management of SMI requires a multidisciplinary approach and communication between General Practice (GP), emergency departments, specialists, pharmacies, community clinics and allied health services.
Reducing risk of relapse and hospitalisation remains one of the greatest challenges in the treatment of SMI, in particular for people with schizophrenia, with an estimated 80% of this population reported to have relapsed multiple times within the first 5 years of initial treatment or remission from their index episode.5 Mental illness stigma constrains the use of available resources, as do inefficiencies in the distribution of funding and interventions. This combination of stigma and structural discrimination contributes to social exclusion and breaches of the basic human rights of individuals with mental disorders.6 Detecting when people with SMI stop medication is a challenge given limited resources and suboptimal medication monitoring. Early detection of medication non-adherence is important to prevent recurrence of negative symptomatology, relapse resulting in harm to self and others, decreased response to future treatment and, specifically for people with schizophrenia, neurodegeneration.7
Information technology has the potential to improve effectiveness in the way people are monitored, treated and followed up8 and enhance self-efficacy in health management.9 For clinical decision support tools, patient-specific assessments or recommendations can play a critical role in improving prescribing practices, reducing serious medication errors, enhancing the delivery of preventative care services and improving adherence to recommended care standards.10 In a systematic review of 70 clinical studies, decision support systems significantly improved clinical practice in 68% of trials.10 Additionally, computer-based access to complete pharmaceutical profiles and alerts reduces the rate of initiation of potentially inappropriate prescribing, therapeutic duplication, excessive medication and the resulting adverse drug-related events.11 Such systems also enable information exchange within clinical teams, assisting in managing demand for health services and lowering direct medical costs for consumers.12
Consumer-facing information technology can provide pragmatic, accessible and scalable mobile health interventions.13 Furthermore, it has been suggested that the use of eHealth technologies allows individuals to be more proactively involved in health management, which ultimately leads to a greater likelihood of optimal healthcare outcomes.14
One of the most important factors for the successful implementation of such systems in healthcare is users' acceptance and use of the technology. A major factor leading to failed uptake of these systems is an inadequate understanding of the sociotechnical aspects, especially of how individuals and organisations adopt new technology.15 That is, a disparity between the model of healthcare ascribed by these systems and the actual nature of healthcare often results in decreased organisational approval. Sociotechnical theory provides a model against which system implementation into workflow can be better understood.
Rationale
There is considerable potential for the use of digital analytics systems to improve the monitoring of people with SMI between periods of illness, as well as to improve the overall health and well-being of people with SMI. However, a strong evidence base is essential before specific approaches are implemented. The aim of this study is to describe two different scenarios of AI2, and a protocol for a feasibility pilot of these use cases, in order to gather data and uncover pragmatic issues necessary to inform a larger randomised controlled trial, as well as to improve the application.
Method
AI2 application
AI2 is a cloud-based application that sources GP appointments, laboratory tests, and prescription and dispense records from the Medicare data view in Australia's national electronic health record, known as My Health Record (MyHR). Medicare data are uploaded to the MyHR only every 3–4 weeks, so the system operates in near-real time rather than true real time. The AI2 algorithms systematically consider the combined effects of a set of prognostic factors (such as non-adherence to medication schedules or the absence of an appointment, grouped and mapped by service provider and prescription type16) to estimate an individual's risk of relapse, or to detect that a patient has deviated from their individualised care trajectory. The resulting flags are displayed on an internet-based dashboard and are traffic-light colour coded (red = potential high risk, urgent action required; yellow = potential moderate risk, action required; green = unlikely risk, no action required). The application was developed using open-source hosting and development tools, in Java Enterprise Edition using the JBoss Seam and Hibernate frameworks. For an overview of ethical workflow issues associated with this transfer of data, see Bidargaddi et al.16
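The alert logic itself is not published in the protocol. Purely as a minimal sketch, in Java to match the stack named above, a traffic-light flag might be derived from dispense records as follows; the field names and the 14-day grace period are illustrative assumptions, not the actual AI2 rules:

    // Illustrative sketch only -- the real AI2 risk model and thresholds are not published.
    enum Flag { GREEN, YELLOW, RED }

    final class AdherenceFlagger {
        // daysSinceLastDispense and refillIntervalDays would be derived from the
        // Medicare dispense records that AI2 pulls from My Health Record (MyHR).
        static Flag flagFor(int daysSinceLastDispense, int refillIntervalDays) {
            if (daysSinceLastDispense <= refillIntervalDays) {
                return Flag.GREEN;  // on schedule: unlikely risk, no action required
            }
            if (daysSinceLastDispense <= refillIntervalDays + 14) {
                return Flag.YELLOW; // potential moderate risk: action required
            }
            return Flag.RED;        // potential high risk: urgent action required
        }
    }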
Trial plan
Two use cases of the AI2 platform will run concurrently. In the first use case (patient-signup, a personal nudging system), patients will receive text messages directly when red flags are detected. In the second use case (clinic-signup, a clinical decision support tool), the flags are presented on a dashboard for community mental healthcare clinics, allowing healthcare professionals to better prioritise their workload by focusing on patients most at risk (red/yellow flags) while reducing interactions with those who are managing better (green flags). A summary of the study designs for the patient-signup and clinic-signup use cases is displayed in figure 1.
Use case 1: patient-signup—a personal nudging system
Recruitment
Patient participants must satisfy the following inclusion criteria: >18 years of age; have been diagnosed with an SMI, as defined by the Diagnostic and Statistical Manual of Mental Disorders,17 by a psychiatrist; have a sufficient command of the English language; and be eligible for Medicare benefits (eg, an Australian citizen or permanent resident). A person will be ineligible to participate if their treating healthcare professional determines they are unable to provide informed consent, either themselves or with the assistance of a family member or an authorised representative. A stepwise recruitment strategy will be utilised to recruit patients in mental health settings in metropolitan, rural and remote South Australian locations.
Eligible individuals will be approached by their treating healthcare professional during their scheduled visit, where the healthcare professional will state that participation in the trial is voluntary and ask the patient to carefully read over the patient information sheet and consent form. To avoid feelings of coercion, patients will be invited to take the information sheet and consent form home before deciding whether they wish to participate.
Design
Patient participants will be randomised by computer software, with the randomisation probability set at 50/50, to one of two care groups: (1) intervention (50%; participants receive a clinician-mediated text message following an alert) or (2) control (50%; usual care) (figure 2). Alerts will be automatically generated by the AI2 system as described above. The decision to send an alert is made by psychiatrists within the study team acting as 'monitors'. The psychiatrists are trained in how to use the AI2 system, how to correctly interpret the alerts and how to specify appropriate intervention pathways when participants are identified as at risk of relapse and hospitalisation. The participant will then be sent a text message drawn from a bank of individually configurable messages.
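As a sketch of this allocation and messaging step (again in Java; the message wording, the two-message bank and the use of java.util.Random are assumptions introduced here for illustration, since the protocol does not publish the message bank):

    import java.util.List;
    import java.util.Random;

    // Sketch of the 50/50 allocation and the clinician-mediated nudge selection.
    final class TrialAllocation {
        private static final Random RNG = new Random();
        private static final List<String> MESSAGE_BANK = List.of(
            "Our records suggest you may have missed a prescription refill. Please see your GP.",
            "A routine health check appears to be overdue. Please contact your clinic.");

        // Randomisation probability set at 50/50, as in the protocol.
        static boolean assignToIntervention() {
            return RNG.nextBoolean();
        }

        // Monitors choose a message from an individually configurable bank.
        static String pickNudge(int configuredIndex) {
            return MESSAGE_BANK.get(configuredIndex);
        }
    }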
Procedure
After consenting to participate, patients must sign up to the AI2 system online, providing the necessary information (name, DOB, Medicare number) to enable the AI2 system to pull their Medicare data from the MyHR system. Participants must also agree to fill in a post-study questionnaire. Once the trial commences, the participant may or may not receive SMS alerts, depending on their adherence to medication and care plans.
Outcomes
To assess the usefulness of the SMS nudges and patients' experience with the AI2 system, patient-reported experience measures will be administered via both questionnaires and focus groups. Additionally, patients will complete the Medical Interview Satisfaction Scale18 at the end of the trial to assess their level of satisfaction with their healthcare professional in relation to distress relief, communication, rapport and compliance intent.
We will assess the proportion of nudges that resulted in the patient picking up a script or attending a GP appointment within 2 weeks of the nudge. This information will be useful for estimating sample size and power calculations for future trials.
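A minimal sketch of this response-rate calculation (the Nudge record and its daysToFollowUpEvent field are hypothetical names introduced for illustration):

    import java.util.List;

    // daysToFollowUpEvent: days from the nudge to the first script pick-up or GP
    // appointment, or -1 if no such event occurred (hypothetical representation).
    record Nudge(int daysToFollowUpEvent) {}

    final class NudgeMetrics {
        // Proportion of nudges followed by a script pick-up or GP visit within 14 days.
        static double responseRate(List<Nudge> nudges) {
            if (nudges.isEmpty()) return 0.0;
            long actioned = nudges.stream()
                    .filter(n -> n.daysToFollowUpEvent() >= 0 && n.daysToFollowUpEvent() <= 14)
                    .count();
            return (double) actioned / nudges.size();
        }
    }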
Use case 2: clinic-signup—clinical decision support tool
Recruitment
Healthcare clinics (rural and metropolitan), including community mental health, inpatient, primary care and/or hospital or pharmacy services, will be approached to participate in the research. The head of each site must agree to the research and sign the required consent forms. Once a clinic has agreed to participate, the study coordinators will run information sessions at the participating site to recruit individual clinicians. These sessions will provide an overview of the AI2 study design, rationale, aims and intended outcomes, and answer any questions, in order to recruit participating clinicians at each site.
Design
We will use an interrupted time-series analysis, a quasi-experimental design that can evaluate an intervention effect using retrospective and longitudinal health data.19 Enrolled health professionals will be trained to view and attend to the flags for their patients displayed on the AI2 application dashboard during the 12-month intervention period, and indicators from these 12 months will be compared with those from the prior 12 months. Patients will not be randomised and will all receive the same level of care. Refer to figure 3 for this study design.
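The protocol does not specify the statistical model, but a common way to operationalise an interrupted time-series comparison of this kind is segmented regression of the form $Y_t = \beta_0 + \beta_1 t + \beta_2 X_t + \beta_3 (t - T_0) X_t + \varepsilon_t$, where $Y_t$ is the healthcare process or outcome metric in month $t$, $X_t$ indicates whether month $t$ falls within the 12-month intervention period, $T_0$ is the month AI2 was introduced, $\beta_2$ captures the immediate level change and $\beta_3$ the change in trend after introduction. This model is an illustrative assumption, not the analysis prescribed by the authors.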
Procedure
Clinics that agree to participate in the trial will provide the research team with the necessary patient information (name, DOB, Medicare number) to enable the AI2 system to pull patients' Medicare data from the MyHR system. To participate in the research, each clinician must provide online informed consent prior to gaining access to AI2. Importantly, AI2 does not replace or change usual care; it provides additional information. Healthcare professional participants will be provided with training in how to use the AI2 system and how to correctly identify and interpret the alerts. Each clinic will be guided in discussing how best to incorporate AI2 into clinical practice, how best to use the information provided by AI2, and how best to intervene when patients are identified as at risk of relapse and hospitalisation (red alert). After a healthcare professional has evaluated and reviewed an alert, they will record their response to the alert in the AI2 system. This might include no action taken, or details of communication with a patient. The AI2-generated green alerts, indicating ongoing adherence, can also be used when reviewing patients ready for potential discharge from community services, serving as a decision-making aid.
Outcomes
To assess the usefulness of AI2 in improving the decisions of health professionals, we will calculate the proportion of AI2-generated alerts that were actioned and deemed useful by health professionals. To help with sample size calculations for a larger-scale trial, differences between healthcare process and outcome metrics of patients of enrolled health professionals during the 12-month trial period and the prior 12-month period will be calculated by an interrupted time-series analysis. Using linkages with health records, we will derive healthcare process, patient trajectory and outcome metrics for comparison purposes.
To evaluate the usability and acceptability of AI2 and to understand the implementation issues associated with new methods of healthcare delivery and decision support systems, consent for implementation research will be sought from participating clinicians to record, transcribe and analyse meetings and focus groups. The qualitative data will be analysed using an existing consolidated framework.20 Additionally, uptake and usage parameters will be automatically logged by AI2 to understand patterns of use.
At the end of 12 months, all healthcare professionals involved in the intervention will be asked to fill in The Unified Theory of Acceptance and Use of Technology Scale21 that assesses their acceptance of AI2 on dimensions including: performance expectancy, attitude towards using technology, social influence, facilitating conditions and self-efficacy, anxiety and behavioural intention to use the system.
Discussion
This study will allow for the exploration of pragmatic issues that will inform a larger randomised controlled trial, and it enables us to assess the suitability of AI2 in two different use cases and, if necessary, to identify how it can be refined and tailored for both clinical and consumer use.
Strengths
Current models of care for SMI are reactive. AI2, a unique third-party application built on Australia's national electronic health record, provides clinicians with the information and the opportunity to intervene early to prevent relapse and hospitalisation. Lack of connectivity between community and hospital information systems can result in ineffective handovers of medication schedules.22 Many individuals with SMI need regular monitoring and support to ensure they adhere to their medication and treatment plans. Although over 30% of people with SMI are managed by GPs, (1) there is no regular monitoring system for people managed in general practice23 and (2) GPs are often not confident in treating or managing people with severe mental illness.24 Because of fear of relapse, community mental health professionals often delay referring people to primary care.24 As such, AI2 can support better strategic relationships and facilitate more effective initial assessments, care planning and transitions of care.
In addition to the clinical application of AI2, the data collected in the course of this trial can, once de-identified, form the basis of a South Australian Mental Health Registry based on the essential attributes recommended for clinical quality registries.25 There are a number of benefits to using a prospective design, most prominently that the investigative team will be able to study the process of implementation in real time rather than retrospectively, as most studies have done.26 Further, this study design allows for demonstrating the temporal sequence between the intervention and the resulting outcomes.
Limitations
No power analysis was conducted for these pilot trials because the intervention can affect several different health process and health outcome metrics, and these effects are also moderated/mediated by the extent of health professional and patient participation in the intervention process. Given this complexity, we believe the proposed pilot study will allow us to observe which measures are most likely to change in a realistic way. This will provide the data needed for a sample size calculation based on a realistic outcome measure, which is another objective of this pilot study. Additionally, diverging subjective interpretations of AI2-generated alerts by healthcare professionals are a potential source of bias, which will be mitigated by hands-on system training before the start of the trial.
Innovation
Innovative applications play an essential role in improving health outcomes and consumer-centred healthcare services through the use of electronic health records.16 AI2 is the first of its kind of analytic software in Australia, and national authorities are recognising the value of third-party applications for better engaging consumers in their healthcare and for aiding healthcare professionals with digital decision support tools.
Acknowledgments
The authors gratefully acknowledge the support of Health Translation South Australia and Country Health SA.
Footnotes
Contributors LO-N drafted the first version of the manuscript. JS, GS and TB contributed to the revision of the manuscript. NB designed the study, formulated the manuscript structure and edited the final draft. All authors have read and approved the final manuscript.
Funding The project is supported by the Medical Research Future Fund (MRFF) Rapid Applied Research Translation Program, undertaken by Health Translation South Australia.
Competing interests None declared.
Ethics approval This study received approval from The Southern Adelaide Clinical Human Research Ethics Committee (HREA: AK03426, Protocol: 177.17).
Provenance and peer review Not commissioned; internally peer reviewed. | https://informatics.bmj.com/content/27/1/e100084 |
Written by Leanne Kitchen, The Produce Bible is a handy reference guide to fresh fruits and vegetables that will appeal to both gardeners and cooks. Each featured food comes with a bit of history and lore, followed by selection and storage information, varieties, preparation and culinary uses, and two to three representative recipes. The content is organized into four featured food groups: Fruits, Nuts, Vegetables and Herbs. Each is in turn broken down by food type; for example, fruit is organized into citrus, soft, stone and tropical, and vegetables into roots and tubers, stems and bulbs, flowers, leaves, fruit vegetables, seeds and pods, and fungi.
As much a cookbook as a guide, The Produce Bible offers more than 200 recipes in all, including a selection of side dishes, meat and meat-free main dishes, and desserts. They form a nice repertoire of traditional favorites (Brussels sprouts with pancetta, pork chops with braised red cabbage, apple galette, sweet corn chowder) along with some fresh new ideas such as parsnip and leek puree, apple and passion fruit crumble, and plum and rosemary flatbread.
The Produce Bible has a fresh, easy-to-follow design with excellent photography of the fresh produce and many of the finished dishes. Whether you grow your own produce or frequent your local farmers markets, this is a great guide to follow through the seasons. | http://www.cookbookswelove.com/cookbooks/the-produce-bible/
Updated: Feb 11, 2020
Hi friends! Let’s talk about LEONARDO DA VINCI!
Why?
Because he is considered one of the most talented and intelligent people of all time and… guess… he was Italian!
You can find here the life and works of Leonardo da Vinci. You can also have a look at some pictures of Santa Maria's students working on a school project about Leonardo da Vinci's inventions and machines.
• Occupation: Artist, Inventor, Scientist
• Born: April 15, 1452 in Vinci, Italy
• Died: May 2, 1519 in Amboise, Kingdom of France
• Famous works: Mona Lisa, The Last Supper, The Vitruvian Man
• Style/Period: High Renaissance
Biography:
Leonardo da Vinci was an artist, scientist and inventor during the Italian Renaissance. The term Renaissance Man (someone who does many things very well) was coined from Leonardo's many talents and is today used to describe people who resemble da Vinci.
Where was Leonardo da Vinci born?
Leonardo was born in the town of Vinci, Italy on April 15, 1452. Not much is known about his childhood; his father was wealthy and had a number of wives. At the age of 14, Leonardo became an apprentice to the famous artist Verrocchio. This is where he learned about art, drawing, painting and more.
Leonardo the Artist
Leonardo da Vinci is regarded as one of the greatest artists in history. Leonardo excelled in many areas including drawing, painting, and sculpture. He is probably most famous for his paintings and gained great fame during his own time due to his art. Two of his most famous paintings known all over the world are the Mona Lisa and The Last Supper.
Leonardo's drawings are also quite extraordinary. He kept journals full of drawings and sketches, often of the different subjects he was studying. Some of his drawings were previews of later paintings, some were studies of anatomy, and some were closer to scientific sketches. One famous drawing is the Vitruvian Man, a picture of a man with perfect proportions based on the notes of the Roman architect Vitruvius. Other famous drawings include a design for a flying machine and a self-portrait.
Leonardo: Inventor and Scientist
Many of da Vinci's drawings and journals were made in his pursuit of scientific knowledge and inventions. His journals were filled with over 13,000 pages of his observations of the world. He drew pictures and designs of hang gliders, helicopters, war machines, musical instruments, various pumps, and more. He was interested in civil engineering projects and designed a single span bridge, a way to divert the Arno River, and moveable barricades which would help protect a city in the case of attack.
Many of his drawings were on the subject of anatomy. He studied the human body including many drawings of muscles, tendons, and the human skeleton. He had detailed figures of various parts of the body including the heart, arms, and other internal organs. Leonardo didn't just study the human anatomy either. He also had a strong interest in horses as well as cows, frogs, monkeys, and other animals.
Fun Facts about Leonardo da Vinci
• The term Renaissance Man means someone who is good at everything. Leonardo is considered the ultimate Renaissance man.
• Some people claim he invented the bicycle.
• He was very logical and used a process like the scientific method when investigating a subject.
• His Vitruvian man is on the Italian one Euro coin.
• Only around 15 of his paintings are still around.
• The Mona Lisa is also called "La Gioconda", meaning the laughing one.
• Unlike some artists, Leonardo was very famous for his paintings while he was still alive. We have only recently realized what a great scientist and inventor he was. | https://www.santamariayoung.org/post/famous-italian-people-leonardo-da-vinci