content
string | pred_label
string | pred_score
float64 |
---|---|---|
繁体 English 中英
类型错误:<<:'str' 和 'int' 的操作数类型不受支持
[英]TypeError: unsupported operand type(s) for <<: 'str' and 'int'
谁能给我一些关于我 10 岁儿子正在尝试的 Python 项目的广泛指导? 我很少寻找特定的编码解决方案,但我希望这是一个提出问题的好地方。 我想看看我儿子是否正在着手对他的编码项目进行实际操作,以及是否有一种相对简单的方法可以让他学习正确的步骤。 或者,这对于一个喜欢阅读和尝试各种编码项目只是为了好玩的 10 岁孩子来说是不合时宜的? 如您所见,我不是编码员,对此类项目知之甚少,因此将不胜感激!
我儿子喜欢密码学,他告诉我他尝试了下面的 Python 代码。 他希望构建一个类似海绵的功能来加密按摩,使其无法被解密。 这受到他的书“Serious Cryptography”(J. Aumasson 着)中标题为“基于排列的哈希函数:海绵函数”的部分的启发。 当他运行他编写的代码时,他收到错误消息“类型错误:<<:'str'和'int'不受支持的操作数类型”(请参阅他在代码下方终端中的交互)。
非常感谢! 亚历山大
这是他的代码:
import math
import textwrap
plaintext = raw_input("The value to be hashed: ") # Get the user to input the data to be hashed
nonce = raw_input("The nonce to be used: ") # Get the user to input the nonce to be used
key = raw_input("The key to be used: ") # Get the user to input the key to be used
blocks = textwrap.wrap(plaintext, 16) # Split the string into 128-bit blocks
if len(blocks[len(blocks)-1]) < 16: # Check if the last block is less than 128 bits
while len(blocks[len(blocks)-1]) < 16: # Keep iterating the following code
blocks[len(blocks)-1] += "." # Add padding to the end of the block to make it 128-bit
sponge = nonce # Set the sponge's initial state to that of the nonce
for j in blocks: # Absorb all of the blocks
sponge = (sponge << 128) + j # Concatenate the current sponge value and the block
sponge = textwrap.wrap(sponge, 128) # Convert the sponge into 128-bit blocks
for z in sponge: # Keep iterating the following code
z = z^j # XOR the sponge block with the message block
sponge = join(sponge) # Convert the blocks back into a string
sponge = textwrap.wrap(sponge, len(key)*8) # Convert the sponge into blocks with the same length of the key
output = sponge # Create a new variable to save space
del nonce, blocks # Delete variables to save space
while len(output) > 1: # Keep iterating the following code
output[1] = output[1]^output[0] >> output[0] # XOR the second element with the first, then shift forward
del output[0] # Delete the first element, so it can repeat again
tag = ((output^plaintext) <<< sponge) + output # Generate an authentication tag. That's not overkill, is it?
print output # Oh yeah, just print it in hexadecimal, I dunno how to
当他在终端运行脚本时,这是交互:
• 要散列的值:abcioagdsbvasizfuvbosuif
• 要使用的随机数:iugzaliuglieas
• 要使用的密钥:asljdgadskj
例外:
Traceback (most recent call last):
File "DarkKnight-Sponge.py", line 13, in <module>
sponge = (sponge << 128) + j # Concatenate the current sponge value and the block
TypeError: unsupported operand type(s) for <<: 'str' and 'int'
恭喜你儿子! 这个项目对我来说看起来很现实。 我能想到的唯一雄心勃勃的事情是直接钻研像<<^这样的按位运算符,而不是尝试对字符序列实现相应的操作。 位运算符有时看起来像算术黑魔法,因为它们操纵数字的内部二进制表示,我们不像数字的十进制表示或文本那样熟悉。
了解错误信息
TypeError: unsupported operand type(s) for <<: 'str' and 'int'
这个错误非常简单:它表示无法执行sponge << 128操作,因为sponge是一个str ,即(字符)字符串,即文本,而 128 是一个 int,即整数。
想象一下,如果您让计算机计算"three" + 2 它会返回一个错误,因为+需要两个数字,但"three"是一个字符串,而不是一个数字。 同样,如果你让计算机计算"327" + 173 ,它会返回一个错误,因为"327"是文本,而不是数字。
了解发生错误的线路
运算符<<左移运算符。 它将一个数字向左移动一定数量的位。 计算机以二进制表示形式存储数字; 我们人类更习惯于十进制表示,所以让我们用“左移数字”操作来做一个比喻。 “向左移动一个数字”意味着将它乘以 10 的幂。例如,138 向左移动两次就是 13800。我们在右边用零填充。 在二进制表示中,bitshift 的工作原理相同,但改为乘以 2 的幂。 二进制表示的 138 是1110110 将它向左移动两次给出111011000 ,这与将其乘以100 (即 4)相同。
如果spongej都是数字,并且j小于 2^128,则该行:
sponge = (sponge << 128) + j # Concatenate the current sponge value and the block
sponge向左移动 128 位,然后将小于 128 位的数字添加到结果中。 实际上,这是将sponge位与j位连接起来。 回到我们的十进制类比:如果x是一个数字,而y是一个小于 100 的数字,那么数字x * 100 + y是通过连接xy的数字获得的数字。 例如, 1374 * 100 + 56 = 137456
解决问题
我没有读过启发这段代码的密码学书籍,所以我只是从这里开始猜测。
我的理解是这本书期望plaintextnoncekey是数字。 但是,在您儿子的代码中,它们都是文本。 这两种类型的对象之间的区别并非不可调和。 无论如何,在计算机的内存中,所有内容都存储为位序列。 数字是位序列; 一个字符串是一个字符序列,每个字符本身就是一个短的位序列。
我看到三种可能性:(1)在执行操作之前将所有文本转换为数字; (2) 调整操作,使它们可以应用于字符串而不是整数; (3) 将所有文本转换为仅包含字符01字符串并调整操作,以便将它们应用于此类序列。 密码算法在现实世界中的高效实现肯定都选择了第二个选项。 第三个选项显然是三个选项中效率较低的一个,但出于学习目的,这是一个可能的选项。
看看你的代码,我注意到所有使用的操作都是关于序列的操作,而不是关于算术运算。 正如我提到的, (sponge << 128) + j是两个位序列的串联。 稍后在代码^使用的按位异或运算期望两个长度相同的位序列,并返回一个长度相同的序列,在两个序列具有不同位的每个位置处为1 ,在两个序列具有不同位的每个位置处返回0序列具有相同的位。 例如, 00010110 ^ 00110111 = 00100001因为第三位和第八位不同,但所有其他位都相等。
将文本转换为 int
要将文本转换为数字(我称之为选项 2),您可以用这些行替换代码的前三行:
plaintext_string = raw_input("The value to be hashed: ") # Get the user to input the data to be hashed
nonce_string = raw_input("The nonce to be used: ") # Get the user to input the nonce to be used
key_string = raw_input("The key to be used: ") # Get the user to input the key to be used
def string_to_int(txt):
number = 0
for c in txt:
number = (number << 8) + ord(c)
return number
plaintext = string_to_int(plaintext_string)
nonce = string_to_int(plaintext_string)
key = string_to_int(key_string)
这是如何工作的:python 函数ord每个 ascii 字符c映射到一个 8 位数字。 8 位块使用公式number = (number << 8) + ord(c) ,您可以从上面的讨论中识别出来。
这不足以使您的代码正常工作,因为之后直接使用的textwrap.wrap()函数需要一个字符串,而不是一个 int。 一种可能性是用自定义函数text_to_intblocks()替换textwrap.wrap()函数:
def string_to_intblocks(txt, blocksize):
blocks = []
block_number = 0
for i,c in enumerate(txt):
block_number = (block_number << 8) + ord(c)
if i % blocksize == 0:
blocks.append(block_number)
block_number = 0
return blocks
然后将blocks = textwrap.wrap(plaintext, 16)替换为blocks = string_to_intblocks(plaintext_string, 16)
这仍然不足以修复您儿子的代码。 我确信以下六行中存在逻辑错误,尽管修复它需要比我目前对算法有更好的理解:
sponge = nonce # Set the sponge's initial state to that of the nonce
for j in blocks: # Absorb all of the blocks
sponge = (sponge << 128) + j # Concatenate the current sponge value and the block
sponge = textwrap.wrap(sponge, 128) # Convert the sponge into 128-bit blocks
for z in sponge: # Keep iterating the following code
z = z^j # XOR the sponge block with the message block
sponge = join(sponge)
暂无
暂无
声明:本站的技术帖子网页,遵循CC BY-SA 4.0协议,如果您需要转载,请注明本站网址或者原文地址。任何问题请咨询:[email protected].
粤ICP备18138465号 © 2020-2024 STACKOOM.COM
|
__label__pos
| 0.972163 |
Nostradamus Predicting Maelstroms
Learn about our RFC process, Open RFC meetings & more.Join in the discussion! »
virtual-dom
2.1.1 • Public • Published
virtual-dom
A JavaScript DOM model supporting element creation, diff computation and patch operations for efficient re-rendering
build status NPM version Coverage Status Davis Dependency status experimental
Sauce Test Status
Motivation
Manual DOM manipulation is messy and keeping track of the previous DOM state is hard. A solution to this problem is to write your code as if you were recreating the entire DOM whenever state changes. Of course, if you actually recreated the entire DOM every time your application state changed, your app would be very slow and your input fields would lose focus.
virtual-dom is a collection of modules designed to provide a declarative way of representing the DOM for your app. So instead of updating the DOM when your application state changes, you simply create a virtual tree or VTree, which looks like the DOM state that you want. virtual-dom will then figure out how to make the DOM look like this efficiently without recreating all of the DOM nodes.
virtual-dom allows you to update a view whenever state changes by creating a full VTree of the view and then patching the DOM efficiently to look exactly as you described it. This results in keeping manual DOM manipulation and previous state tracking out of your application code, promoting clean and maintainable rendering logic for web applications.
Example
var h = require('virtual-dom/h');
var diff = require('virtual-dom/diff');
var patch = require('virtual-dom/patch');
var createElement = require('virtual-dom/create-element');
// 1: Create a function that declares what the DOM should look like
function render(count) {
return h('div', {
style: {
textAlign: 'center',
lineHeight: (100 + count) + 'px',
border: '1px solid red',
width: (100 + count) + 'px',
height: (100 + count) + 'px'
}
}, [String(count)]);
}
// 2: Initialise the document
var count = 0; // We need some app data. Here we just store a count.
var tree = render(count); // We need an initial tree
var rootNode = createElement(tree); // Create an initial root DOM node ...
document.body.appendChild(rootNode); // ... and it should be in the document
// 3: Wire up the update logic
setInterval(function () {
count++;
var newTree = render(count);
var patches = diff(tree, newTree);
rootNode = patch(rootNode, patches);
tree = newTree;
}, 1000);
View on RequireBin
Documentation
You can find the documentation for the seperate components in their READMEs
For information about the type signatures of these modules feel free to read the javascript signature definition
DOM model
virtual-dom exposes a set of objects designed for representing DOM nodes. A "Document Object Model Model" might seem like a strange term, but it is exactly that. It's a native JavaScript tree structure that represents a native DOM node tree. We call this a VTree
We can create a VTree using the objects directly in a verbose manner, or we can use the more terse virtual-hyperscript.
Example - creating a VTree using the objects directly
var VNode = require('virtual-dom/vnode/vnode');
var VText = require('virtual-dom/vnode/vtext');
function render(data) {
return new VNode('div', {
className: "greeting"
}, [
new VText("Hello " + String(data.name))
]);
}
module.exports = render;
Example - creating a VTree using virtual-hyperscript
var h = require('virtual-dom/h');
function render(data) {
return h('.greeting', ['Hello ' + data.name]);
}
module.exports = render;
The DOM model is designed to be efficient to create and read from. The reason why we don't just create a real DOM tree is that creating DOM nodes and reading the node properties is an expensive operation which is what we are trying to avoid. Reading some DOM node properties even causes side effects, so recreating the entire DOM structure with real DOM nodes simply isn't suitable for high performance rendering and it is not easy to reason about either.
A VTree is designed to be equivalent to an immutable data structure. While it's not actually immutable, you can reuse the nodes in multiple places and the functions we have exposed that take VTrees as arguments never mutate the trees. We could freeze the objects in the model but don't for efficiency. (The benefits of an immutable-equivalent data structure will be documented in vtree or blog post at some point)
Element creation
createElement(tree:VTree) -> DOMNode
Given that we have created a VTree, we need some way to translate this into a real DOM tree of some sort. This is provided by create-element.js. When rendering for the first time we would pass a complete VTree to create-element function to create the equivalent DOM node.
Diff computation
diff(previous:VTree, current:VTree) -> PatchObject
The primary motivation behind virtual-dom is to allow us to write code independent of previous state. So when our application state changes we will generate a new VTree. The diff function creates a set of DOM patches that, based on the difference between the previous VTree and the current VTree, will update the previous DOM tree to match the new VTree.
Patch operations
patch(rootNode:DOMNode, patches:PatchObject) -> DOMNode newRootNode
Once we have computed the set of patches required to apply to the DOM, we need a function that can apply those patches. This is provided by the patch function. Given a DOM root node and a set of DOM patches, the patch function will update the DOM. After applying the patches to the DOM, the DOM should look like the new VTree.
Original motivation
virtual-dom is heavily inspired by the inner workings of React by facebook. This project originated as a gist of ideas, which we have linked to provide some background context.
Tools
Install
npm i virtual-dom
DownloadsWeekly Downloads
17,987
Version
2.1.1
License
MIT
Last publish
Collaborators
• avatar
|
__label__pos
| 0.936424 |
Habitat fragmentation and a decrease in population size may lead to
Habitat fragmentation and a decrease in population size may lead to a loss in population genetic diversity. in the northernmost limit of distribution of the species, represent three genetic clusters. These results Rabbit Polyclonal to SLC27A4 are in agreement with the pattern of geographic distribution of the studied populations. (Bertol.) Kuntze, also known as the Brazilian pine, is usually a dioecious wind-pollinated species whose seeds are dispersed mainly by authocory. is one of the most important trees in its natural range of distribution, due to its economical, social and ecological relevance. As a result of the high quality of its timber, the wood is used for construction in general, furniture making and the production of long-fibre cellulose (Carvalho, 2003). Furthermore, through being rich in starch, the seeds constitute an important source of nutrients for humans (Reitz was formerly about 200,000 square kilometers (Reitz natural range still remains, thus placing the species in the critically endangered category (IBAMA, 1992; IUCN, 2008). Nowadays, araucaria forests are limited to altitudes above 600 m, over a broad organic range in the three southernmost expresses of Brazil (Rio Grande perform Sul, Santa Paran and Catarina, between latitudes 24 and 30 S. The types is 133099-04-4 certainly sparsely pass on throughout various other expresses in Brazil also, such as for example Minas Gerais, S?o Rio and Paulo de Janeiro, seeing that isolated, relict populations, between latitudes 18 and 24 S with higher altitudes (1200 m). In addition, it occurs as a little extant inhabitants in the Province of Missiones, in Argentine (Hueck, 1972; Mattos, 1994) (Body 1). However, before, the species north was spread further. Ruschi (1950) details a no more existent inhabitants of araucaria through the southern region from the condition of Esprito Santo 133099-04-4 (Serra perform Capara, Latitude 20 26′ S, 1700 m elevation). Predicated on palynological research, Ledru (1996) reported the current presence of pollen records through the Later Pleistocene in the Lagoa Campestre lake in Salitre, in the condition of Minas Gerais (19 S, 46 46′ W, at 970 m). Research predicated on the Bioclim algorithm (Busby, 1991), as stated by Koch (2007), which consider data on types incident, mean pluviometry and mean temperatures, concur that araucaria forests may appear at lower latitudes. Body?1 Map displaying the estimated original distribution of in Brazil, location lately Quaternary pollen information containing (Kershaw and Wagstaff, 2001) as well as the sampled populations: RS-1, RS-2, RS-3, PR, MG, RJ-1, RJ-2, RJ-3 … It’s possible the fact that depletion of wide areas of araucaria forests may have led to a decrease in genetic diversity, to the point of interfering in its use for conservation and exploitation of its genetic resources. At present, a large number of approaches have been undertaken by using various markers, all of which point to the fact that, notwithstanding araucaria forests having undergone drastic reduction in areas of natural distribution, a considerable level of genetic diversity has still been maintained (Shimizu in southeastern Brazil. The main goal of the present study was to evaluate the reduction in genetic diversity of five populations in southeastern Brazil, when compared with other populations from the south (the descendants of continuous forests). Methodology Sampling In order to analyse a large part of the natural range of (1995). Selective amplifications were done around the pre-amplified fragments by using six primer-enzyme combinations (PECs): at 5% level were assessed using Arlequin 133099-04-4 (version 3.11, Excoffier from = 2 to = 12 were tried, and twelve replicates were run for each was chosen according to that suggested by Evanno (2005). Outcomes Genetic variety The 6 primer-enzyme combos found in this ongoing function yielded a complete of 673 unambiguously scoreable fragments. The percentage of polymorphic loci for every primer-enzyme mixture was higher in the southern populations and low in populations RJ-2, RJ-3 and RJ-4. The best percentage was discovered with PEC populations examined. Indirect procedures of gene movement and interactions between populations Needlessly to say, the hereditary length (Nei, 1978) was the best between RS-1 and MG (mean = 0.076), that are 1034 km apart approximately, and the cheapest between populations RS-1 and RS-2 (mean = 0.006) (Desk 4). Regardless of their physical proximity, the excellent stage in this evaluation was RJ-1 getting more distant through the various other four populations from southeastern Brazil (RJ-2, RJ-3, MG) and RJ-4 than through the southern populations. Desk?4 Unbiased Nei genetic ranges (Nei, 1978) among all of the nine populations (below diagonal) and geographic length between populations (km) (above diagonal). The UPGMA.
Leave a Reply
Your email address will not be published. Required fields are marked *
|
__label__pos
| 0.634213 |
How bad are Ref Errors on a NX Controller ?
m.Bernerm.Berner FreelancerPosts: 14
edited January 2019 in NetLinx Studio
Hey
Just wondering how bad those Ref Errors really are.
For example Index 0 or Index to Large...
What is the NX controller doing in case of such an error ?
Just ignoring or doing something stupid ?
Manuel
Comments
Sign In or Register to comment.
|
__label__pos
| 0.930266 |
ERROR: type should be string, got "https://insu.hal.science/insu-02539763Bernard, MaximeMaximeBernardGR - Géosciences Rennes - UR - Université de Rennes - INSU - CNRS - Institut national des sciences de l'Univers - OSUR - Observatoire des Sciences de l'Univers de Rennes - UR - Université de Rennes - INSU - CNRS - Institut national des sciences de l'Univers - UR2 - Université de Rennes 2 - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement - CNRS - Centre National de la Recherche ScientifiqueSteer, PhilippePhilippeSteerGR - Géosciences Rennes - UR - Université de Rennes - INSU - CNRS - Institut national des sciences de l'Univers - OSUR - Observatoire des Sciences de l'Univers de Rennes - UR - Université de Rennes - INSU - CNRS - Institut national des sciences de l'Univers - UR2 - Université de Rennes 2 - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement - CNRS - Centre National de la Recherche ScientifiqueGallagher, KerryKerryGallagherGR - Géosciences Rennes - UR - Université de Rennes - INSU - CNRS - Institut national des sciences de l'Univers - OSUR - Observatoire des Sciences de l'Univers de Rennes - UR - Université de Rennes - INSU - CNRS - Institut national des sciences de l'Univers - UR2 - Université de Rennes 2 - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement - CNRS - Centre National de la Recherche ScientifiqueEgholm, David L.David L.EgholmDepartment of Earth Sciences [Aarhus] - Aarhus University [Aarhus]The effects of ice and hillslope erosion and detrital transport on the form of detrital thermochronological age probability distributions from glacial settingsHAL CCSD2020[SDU.STU.GC] Sciences of the Universe [physics]/Earth Sciences/Geochemistry[SDU.STU.GM] Sciences of the Universe [physics]/Earth Sciences/GeomorphologyDubigeon, Isabelle2020-04-10 12:37:192023-07-12 05:16:272020-04-10 12:37:19enConference papers1The impact of glaciers on the Quaternary evolution of mountainous landscapes remains controversial. While in-situ low-temperature thermochronology offers insights on past rock exhumation and landscape erosion, it also suffers from biases due to the difficulty of sampling bedrocks buried under the ice of glaciers. Detrital thermochronology attempts to bypass this issue by sampling sediments at, e.g. the catchment outlet, that may originate from beneath the ice. However, the age distribution resulting from detrital thermochronology does not only inform on the catchment exhumation, but also on the patterns and rates of surface erosion and sediment transport. In this study, we use a new version of a glacial landscape evolution model, iSOSIA to address the role of erosion and sediment transport by the ice on the form of synthetic detrital age distributions and thus, for inferred catchment erosion from such data. Sediments are tracked as Lagrangian particles that can be formed by bedrock erosion, transported by ice or hillslope processes and deposited. We apply our model to the Tiedemann glacier (British Columbia, Canada), which has simple morphological characteristics, such as a straight form and no connectivity with large tributary glaciers. Synthetic detrital age distributions are generated by specifying an erosion history, then sampling sediment particles at the frontal moraine of the modelled glacier. The detrital ages are represented as synoptic probability density functions (SPDFs).A characterization of sediment transport shows that 1500 years are required to reach an equilibrium for detrital particles age distributions, due to the large range of particle transport times from their sources to the frontal moraine. Second, varying sampling locations and strategies at the glacier front lead to varying detrital SPDFs, even at equilibrium. These discrepancies are related to (i) the selective storage of a large proportion of sediments in small tributary glaciers and in lateral moraines, (ii) the large range of particle transport times, due to varying transport lengths and to a strong variability of glacier ice velocity, (iii) the heterogeneous pattern of erosion, (iv) the advective nature of glacier sediment transport along ice streamlines that leads to a poor lateral mixing of particle detrital signatures inside the frontal moraine. Third, systematic comparisons between (U-Th)/He and fission track detrital ages, with different age-elevation profiles and relative age uncertainties, show that (i) the age increasing rate with elevation largely controls the ability to track sediment sources, and (ii) qualitative first-order information about distribution of erosion may still be extracted from thermochronological system with high variable uncertainties (> 30 %). Overall, our distributions in glaciated catchments are strongly impacted by erosion and transport processes and by their spatial variability. Combined with bedrock age distributions, detrital thermochronology can offer a means to constrain the transport pattern and time of sediment particles. However, results also suggest that detrital age distributions of glacial features like frontal moraines, are likely to reflect a transient case as the time required to reach detrital thermochronological equilibrium is of the order of the short-timescale glaciers dynamic variability, as little ice ages or recent glaciers recessions." |
__label__pos
| 0.520222 |
OSJOY
How To Fix The Garbled Code On Notepad Files?
How To Fix The Garbled Code On Notepad Files
5
(22)
Summary: Usually, we will record some information in a notepad, such as passwords, keys, ideas, notes, etc. But sometimes, when we open it after a while, we find many garbled characters inside. What should we do? This article will show how to fix the garbled code on notepad files.
When working extensively with Plain Text files with the TXT file extension, individuals may encounter documents with garbled text instead of the expected content. This issue often occurs when the corrupted text document is written in a non-Latin alphabet-based foreign language. Still, it can happen with any file if there are inconsistencies in the settings used during the file’s saving process.
To solve the garbled code on notepad files, you must first understand the cause of the problem. The primary reason is the wrong document format. Generally, there are several reasons for garbled characters:
Methods to Fix the Garbled Code on Notepad Files
Method 1: Convert the File Extension
First, let’s solve the problem caused by the file format disorder. Just change the file extension back to the original format. For example, if your file is originally a word file, you can change its file extension to .doc or .docx. If it is a web page file, change it to .html, etc. Then open these files with the appropriate program.
Method 2: Change the Encoding Option
Mismatched encoding formats for text files can also lead to garbled code. In such case, you need to change the encoding form as follows:
1. Open the garbled file with Notepad and select Save>Save As.
2. Choose ANSI or UTF-8 as encoding format and click Save.
Method 3: Change the System Locale
When opening various text documents with Notepad results in gibberish, it’s typically a system issue. The problem could be that the encoding used for the file is not compatible with the native character encoding used by the version of Windows you have.
So if the previous method doesn’t work, we can change the default system language in Windows.
1. Search control panel in the search bar and open it.
2. Click on Change date, time or number formats under Clock and Region.
3. Select the Administrative tab and click on Change system locale option.
4. In the next window, choose the language you need from the drop-down menu and click OK.
5. Close Control Panel and restart your computer to make the changes effective.
Method 4: Use Restore Previous Version
You will also get a Notepad garbled codes issue when a Notepad file gets corrupted. Several common reasons can cause Notepad file corruption or damage, such as file header corruption, improper system shutdown, virus attacks, and incomplete download and compression issues.
Many Windows users enable Windows backup tools such as file history to protect data security and regularly back up their files. If this is the case for you, here are some steps you can take when the above two methods fail:
1. Go to the location where the Notepad file is stored.
2. Right-click on the corrupted file and select Previous Version from the context menu.
3. Choose the correct previous version and click on Restore.
4. Now, open the Notepad file to see if the problem is fixed.
Method 5: Use Microsoft Word to Repair Notepad Files
1. Open the Word app, then click File and select Options.
2. Shift to the Advanced tab and navigate to the General section.
3. Check Confirm file format conversion on open and click OK to save changes.
4. Click File and select Open.
5. When the Open window appears, hit the Browse button.
6. Choose All Files and select Recover Text from Any File (*.*).
7. Select the corrupted notepad file from the list and click Open.
8. Now, you can check whether this method helps to repair the text file.
Method 6: Open Notepad Files in the Browser
The simplest and most effective way to view the original file is by opening it with a web browser. Web browsers are designed to translate data between encoding schemes. You can use either Microsoft Edge or Google Chrome to open the text file, and it will display the language correctly.
Conclusion
The garbled code of the Notepad file is not a big problem. When you encounter garbled code on Notepad files, the six recommended methods in this article should solve the problem in no time. If you have any other valuable methods or suggestions, welcome to leave a comment below.
How useful was this post?
Click on a star to rate it!
Average rating 5 / 5. Vote count: 22
No votes so far! Be the first to rate this post.
Exit mobile version
|
__label__pos
| 0.938609 |
Impact of Artificial Intelligence System and Volumetric Density on Risk Prediction of Interval, Screen-Detected, and Advanced Breast Cancer
J Clin Oncol. 2023 Jun 10;41(17):3172-3183. doi: 10.1200/JCO.22.01153. Epub 2023 Apr 27.
Abstract
Purpose: Artificial intelligence (AI) algorithms improve breast cancer detection on mammography, but their contribution to long-term risk prediction for advanced and interval cancers is unknown.
Methods: We identified 2,412 women with invasive breast cancer and 4,995 controls matched on age, race, and date of mammogram, from two US mammography cohorts, who had two-dimensional full-field digital mammograms performed 2-5.5 years before cancer diagnosis. We assessed Breast Imaging Reporting and Data System density, an AI malignancy score (1-10), and volumetric density measures. We used conditional logistic regression to estimate odds ratios (ORs), 95% CIs, adjusted for age and BMI, and C-statistics (AUC) to describe the association of AI score with invasive cancer and its contribution to models with breast density measures. Likelihood ratio tests (LRTs) and bootstrapping methods were used to compare model performance.
Results: On mammograms between 2-5.5 years prior to cancer, a one unit increase in AI score was associated with 20% greater odds of invasive breast cancer (OR, 1.20; 95% CI, 1.17 to 1.22; AUC, 0.63; 95% CI, 0.62 to 0.64) and was similarly predictive of interval (OR, 1.20; 95% CI, 1.13 to 1.27; AUC, 0.63) and advanced cancers (OR, 1.23; 95% CI, 1.16 to 1.31; AUC, 0.64) and in dense (OR, 1.18; 95% CI, 1.15 to 1.22; AUC, 0.66) breasts. AI score improved prediction of all cancer types in models with density measures (PLRT values < .001); discrimination improved for advanced cancer (ie, AUC for dense volume increased from 0.624 to 0.679, Δ AUC 0.065, P = .01) but did not reach statistical significance for interval cancer.
Conclusion: AI imaging algorithms coupled with breast density independently contribute to long-term risk prediction of invasive breast cancers, in particular, advanced cancer.
Publication types
• Research Support, N.I.H., Extramural
MeSH terms
• Artificial Intelligence
• Breast / diagnostic imaging
• Breast Density
• Breast Neoplasms* / pathology
• Early Detection of Cancer / methods
• Female
• Humans
• Mammography / methods
• Retrospective Studies
|
__label__pos
| 0.996274 |
The Benefits of Delta-9 Gummies: A Comprehensive Guide
Table of Contents
1. Table of Contents
2. What Are Delta-9 Gummies?
3. How Delta-9 Gummies Work
4. Health Benefits of Delta-9 Gummies
5. Potential Side Effects
6. How to Choose Quality Delta-9 Gummies
7. Usage Tips and Dosages
8. Are Delta-9 Gummies Legal?
9. Closing Thoughts
With its main psychoactive ingredient, delta-9 gummies are a well-liked substitute for more conventional cannabis intake techniques like smoking or vaping. New and seasoned users will find these edibles intriguing since they offer accurate dosage, simplicity of usage, and a wide range of tastes. Delta-9 gummies may affect mood, hunger, and pain perception by promoting homeostasis through interactions with the body’s endocannabinoid system. Although they provide advantages like reduced stress, pain management, and better sleep, buying high-quality goods is essential, and being mindful of potential adverse effects is necessary. Recognizing their legal status and using Delta-9 gummies responsibly can improve your overall experience with them.
What Are Delta-9 Gummies?
The main psychoactive ingredient in cannabis, delta-9-tetrahydrocannabinol (THC), is found in popular edible forms such as Delta 9 gummies. In contrast to conventional delivery techniques like vaping or smoking, gummies provide a discrete and palatable substitute. Because of their exact dose and ease of usage, Delta 9 gummies are highly recommended and may be used by both rookie and expert users. These candies are available in various flavors and intensities, so customers may choose from various possibilities to fit their tastes.
How Delta-9 Gummies Work
After consumption, Delta-9 gummies engage the body’s endocannabinoid system, which is essential to preserving homeostasis. Numerous physiological functions, such as mood, hunger, pain perception, and immunological response, are regulated by this system. When you ingest Delta-9 gummies, the THC is broken down in the liver to produce an even more powerful substance known as 11-hydroxy-THC. Compared to smoking or vaping, this change may have a more substantial and more persistent effect. For a more thorough explanation of the functioning of the endocannabinoid system, please read this study paper.
Health Benefits of Delta-9 Gummies
Delta-9 gummies offer numerous health benefits, including stress and anxiety relief, pain management, improved sleep, and appetite stimulation. They provide:
• A calming effect.
• Promoting relaxation and well-being.
• Making them a popular choice for those with daily pressures.
Delta-9 THC interacts with neural pathways, providing relief from chronic and acute pain, especially for conditions like arthritis, fibromyalgia, and migraines. The relaxing properties of Delta-9 gummies help individuals fall asleep faster and enjoy a more restful sleep. Additionally, Delta-9 THC increases hunger, aiding in nutritional intake for individuals with conditions like cancer or HIV/AIDS.
Potential Side Effects
Even though Delta-9 gummies provide several health advantages, it’s essential to be aware of any possible negative effects. Frequent adverse effects include transient anxiety, vertigo, and dry mouth. These can be lessened by beginning with a low dosage and monitoring your body’s reaction. Drinking lots of water and eating the gummies in a cozy, familiar setting is also advised. Furthermore, especially at larger dosages, some users may suffer minor hallucinations or an elevated heart rate. If you are concerned about any pre-existing problems, always get medical advice.
How to Choose Quality Delta-9 Gummies
Selecting top-notch Delta-9 gummies will significantly improve your experience. Seek out goods undergoing independent laboratory testing to guarantee purity and efficacy. The presence of no dangerous pollutants in the product may be reliably determined by looking at the certificate of analysis (COA). Examine the findings of microbiological, heavy metal, and pesticide tests while examining COAs. Choose products that do not utilize artificial additives and instead use natural substances. Examining consumer feedback and learning more about the producer can also give important information about the product’s quality.
Usage Tips and Dosages
It is advised that first-time users begin with a modest dose of Delta-9 THC, often 5–10 mg. You should then monitor your reactions and progressively raise the amount if needed. Gummies should be consumed in a comfortable environment, mainly if you are new to THC edibles. It’s also crucial to exercise patience since, depending on your metabolism and previous eating habits, the effects of edibles may take anywhere from 30 minutes to 2 hours to manifest.
Are Delta-9 Gummies Legal?
The legality of Delta-9 gummies varies by location. In some regions, they are completely legal, while in others, they may face restrictions or be entirely prohibited. The 2018 Farm Bill legalized hemp-derived Delta-9 THC products with a THC concentration of less than 0.3%. However, state laws can be more restrictive. It’s essential to check your local laws before purchasing or consuming Delta-9 THC products to ensure compliance. Keeping updated on changes in legislation can also help you stay informed about your legal rights and responsibilities.
Closing Thoughts
Gummies made from delta-9 provide a practical and efficient approach to benefit from THC. You may improve your experience and make an educated choice by knowing how they operate, their possible advantages and disadvantages, and how to select high-quality items. Always remember to start with a small dosage, research the laws, and use them responsibly. Adding Delta-9 gummies to your health regimen can help you reduce discomfort, manage stress, and have a more peaceful evening.
Leave a Comment
Your email address will not be published. Required fields are marked *
|
__label__pos
| 0.898336 |
Home » Computer Graphics Introduction of Shading
Computer Graphics Introduction of Shading
by
Introduction of Shading
Shading is referred to as the implementation of the illumination model at the pixel points or polygon surfaces of the graphics objects.
Shading model is used to compute the intensities and colors to display the surface. The shading model has two primary ingredients: properties of the surface and properties of the illumination falling on it. The principal surface property is its reflectance, which determines how much of the incident light is reflected. If a surface has different reflectance for the light of different wavelengths, it will appear to be colored.
An object illumination is also significant in computing intensity. The scene may have to save illumination that is uniform from all direction, called diffuse illumination.
Shading models determine the shade of a point on the surface of an object in terms of a number of attributes. The shading Mode can be decomposed into three parts, a contribution from diffuse illumination, the contribution for one or more specific light sources and a transparency effect. Each of these effects contributes to shading term E which is summed to find the total energy coming from a point on an object. This is the energy a display should generate to present a realistic image of the object. The energy comes not from a point on the surface but a small area around the point.
Introduction of Shading
The simplest form of shading considers only diffuse illumination:
Epd=Rp Id
where Epd is the energy coming from point P due to diffuse illumination. Id is the diffuse illumination falling on the entire scene, and Rp is the reflectance coefficient at P which ranges from shading contribution from specific light sources will cause the shade of a surface to vary as to its orientation concerning the light sources changes and will also include specular reflection effects. In the above figure, a point P on a surface, with light arriving at an angle of incidence i, the angle between the surface normal Np and a ray to the light source. If the energy Ips arriving from the light source is reflected uniformly in all directions, called diffuse reflection, we have
Eps=(Rp cos i)Ips
This equation shows the reduction in the intensity of a surface as it’s tipped obliquely to the light source. If the angle of incidence i exceeds90°, the surface is hidden from the light source and we must set Epsto zero.
You may also like
|
__label__pos
| 0.935226 |
scanres.dll
Process name: U32SCAN Dynamic Link Library
Application using this process: U32SCAN Dynamic Link Library
Recommended: Check your system for invalid registry entries.
scanres.dll
Process name: U32SCAN Dynamic Link Library
Application using this process: U32SCAN Dynamic Link Library
Recommended: Check your system for invalid registry entries.
scanres.dll
Process name: U32SCAN Dynamic Link Library
Application using this process: U32SCAN Dynamic Link Library
Recommended: Check your system for invalid registry entries.
What is scanres.dll doing on my computer?
U32SCAN DLL This process is still being reviewed. If you have some information about it feel free to send us an email at pl[at]uniblue[dot]com Non-system processes like scanres.dll originate from software you installed on your system. Since most applications store data in your system's registry, it is likely that over time your registry suffers fragmentation and accumulates invalid entries which can affect your PC's performance. It is recommended that you check your registry to identify slowdown issues.
scanres.dll
In order to ensure your files and data are not lost, be sure to back up your files online. Using a cloud backup service will allow you to safely secure all your digital files. This will also enable you to access any of your files, at any time, on any device.
Is scanres.dll harmful?
scanres.dll has not been assigned a security rating yet.
scanres.dll is unrated
Can I stop or remove scanres.dll?
Most non-system processes that are running can be stopped because they are not involved in running your operating system. Scan your system now to identify unused processes that are using up valuable resources. scanres.dll is used by 'U32SCAN Dynamic Link Library'.This is an application created by 'Unknown'. To stop scanres.dll permanently uninstall 'U32SCAN Dynamic Link Library' from your system. Uninstalling applications can leave invalid registry entries, accumulating over time.
Is scanres.dll CPU intensive?
This process is not considered CPU intensive. However, running too many processes on your system may affect your PC’s performance. To reduce system overload, you can use the Microsoft System Configuration Utility to manually find and disable processes that launch upon start-up.
Why is scanres.dll giving me errors?
Process related issues are usually related to problems encountered by the application that runs it. A safe way to stop these errors is to uninstall the application and run a system scan to automatically identify any PC issues.
Process Library is the unique and indispensable process listing database since 2004 Now counting 140,000 processes and 55,000 DLLs. Join and subscribe now!
Toolbox
ProcessQuicklink
|
__label__pos
| 0.959678 |
VLSI CMOS Logic MCQ Quiz – Objective Question with Answer for VLSI CMOS Logic MCQ
1. In Pseudo-nMOS logic, n transistor operates in
A. cut off region
B. saturation region
C. resistive region
D. non-saturation region
Answer: B
In Pseudo-nMOS logic, n transistor operates in a saturation region and the p transistor operates in a resistive region.
2. The power dissipation in Pseudo-nMOS is reduced to about ________ compared to nMOS device.
A. 50%
B. 30%
C. 60%
D. 70%
Answer: C
The power dissipation in Pseudo-nMOS is reduced to about 60% compared to nMOS devices.
3. Pseudo-nMOS has higher pull-up resistance than nMOS devices.
A. true
B. false
Answer: A
Pseudo-nMOS has higher pull-up resistance than nMOS devices and thus inverter pair delay is larger.
4. In dynamic CMOS logic _____ is used.
A. two-phase clock
B. three-phase clock
C. one phase clock
D. four-phase clock
Answer: D
In dynamic CMOS logic, a four-phase clock is used in which actual signals are used to derive the clocks.
5. In clocked CMOS logic, output in evaluated in
A. on period
B. off period
C. both periods
D. half of on period
Answer: A
In clocked CMOS logic, the logic is evaluated only in on the period of the clock. And owing to the extra transistor in series, slower rise time and fall times are expected.
6. In clocked CMOS logic, rise time and fall time are
A. faster
B. slower
C. faster first and then slows down
D. slower first and then speeds up
Answer: B
In clocked CMOS logic, rise time and fall time are slower because of more number of transistors in series.
7. In CMOS domino logic _____ is used.
A. two-phase clock
B. three-phase clock
C. one phase clock
D. four-phase clock
Answer: C
In CMOS domino logic, a single-phase clock is used. Clock signals distributed on one wire are called a single or one-phase clock.
8. CMOS domino logic is the same as ______ with an inverter at the output line.
A. clocked CMOS logic
B. dynamic CMOS logic
C. gate logic
D. switch logic
Answer: B
CMOS domino logic is the same as that of the dynamic CMOS logic with an inverter at the output line.
9. CMOS domino logic occupies
A. smaller area
B. larger area
C. smaller & larger area
D. none of the mentioned
Answer: A
CMOS domino logic structure occupies a smaller area than conventional CMOS logic as only n-block is used.
10. CMOS domino logic has
A. smaller parasitic capacitance
B. larger parasitic capacitance
C. low operating speed
D. very large parasitic capacitance
Answer: A
CMOS domino logic has smaller parasitic capacitance and higher operating speed.
Scroll to Top
|
__label__pos
| 0.999689 |
How Do You Stop Multiple Seizures?
Seizures are a medical emergency. Whether the seizure is a first-time onset or a recurring episode, it is advisable to dial 911 and call for help. A group of drugs called benzodiazepines is usually administered to stop multiple seizures.
Seizures are a medical emergency. Whether the seizure is a first-time onset or a recurring episode, it is advisable to dial 911 and call for help. A group of drugs called benzodiazepines is usually administered to stop multiple seizures.
Seizures are a medical emergency. Whether the seizure is a first-time onset or a recurring episode, it is advisable to dial 911 and call for help.
A group of drugs called benzodiazepines is usually administered to stop multiple seizures. These work by altering the level of a chemical messenger in the brain called gamma-aminobutyric acid or GABA. However, their side effects may include drowsiness and dizziness. Benzodiazepines are usually considered rescue medications. A caregiver may identify the cluster symptoms and they can start a rescue treatment right away until help arrives. They can give these medications in the following ways.
• Rectal method
• This method is usually used when a patient is having a seizure.
• The caregiver may inject a gel, Diastat (diazepam), into the rectum using a syringe without a needle.
• This method of administration works much faster than other methods.
• Side effects may include sleepiness, dizziness, headache and pain.
• Nasal method
• Valium (diazepam) and Nayzilam (midazolam) are simple options and the body absorbs them quickly.
• The caregiver may spray them into the nose to stop cluster seizures.
• Midazolam works quicker than diazepam, but it doesn't last long in the body.
• Side effects of nasal diazepam and midazolam include nasal irritation, fatigue, watery eyes and an odd taste in the mouth.
• Cheek method
• The caregiver can also put midazolam inside the cheek. This is also called the buccal method. However, it is not always possible to access the cheek in a person having major (tonic-clonic) seizures. It may be an option for those who have partial seizures or absence seizures.
• Side effects include a bitter taste and risk of aspiration (when the medication gets into the airways or lungs).
• Besides, this method might not be right for people who tend to vomit or create a lot of saliva during a seizure.
What is a seizure?
A seizure is an uncontrolled, sudden change in the brain's normal electrical activity. During a seizure, brain cells fire uncontrollably. This briefly affects the way a person behaves, moves, thinks or feels. Recurrent seizures are called epilepsy. Seizures are usually categorized into three types depending on the onset, which include
1. Unknown onset
• The beginning of a seizure is unknown, which is known as an unknown onset seizure.
• A seizure could also be called an unknown onset if it’s not witnessed or seen by anyone. For example, when seizures happen at night or in a person who lives alone.
• Unknown onset seizure may later be diagnosed as a focal or generalized seizure.
2. Generalized seizures: Generalized seizures are characterized by widespread electrical discharges in both sides of the brain. They are further divided into six types.
• Tonic seizures: The seizure may cause a patient to fall or collapse. Body stiffening is usually noticed. The back, arm and leg muscles are affected most often.
• Clonic seizures: It usually affects the face, neck and arms and may last for several minutes. It includes jerking, rhythmic muscle movements.
• Tonic-clonic seizures/grand mal seizures: This is the most common type of seizure. They involve a loss of consciousness, stiffening of the body and shaking or jerking. It is sometimes followed by a loss of bladder or bowel control.
• Myoclonic seizures: They are short and involve uncontrollable jerking. Usually, the jerking is seen in the arms and/or legs and lasts for only a second or two.
• Atonic seizures/drop attack seizures: This type of seizure may cause the person suffering to drop objects. Usually, a sudden collapse is noted. It usually involves a sudden loss of muscle tone, a head drop or leg weakening.
• Absence seizures/petit mal seizures: People who have absence seizures usually lose awareness for a short time and have no memory of the seizure afterward. This type of seizure usually begins between the ages of 4 and 14 years old. It may resemble daydreaming. Subtle body movement may accompany the seizure.
3. Partial seizures/focal seizures: Usually, this begins in one side of the brain and falls into one of the following groups.
• Simple partial seizures: This type of seizure may alter emotions or change the way things look, smell, feel, taste or sound. It may also result in involuntary jerking of a body part (such as an arm or leg) or spontaneous sensory symptoms (such as tingling, dizziness and flashing lights).
• Complex partial seizures: They usually alter consciousness or responsiveness. The person having the seizure may appear to be staring into space or moving without purpose. Some common movements include hand rubbing, chewing, swallowing and repetitive motion, such as bicycling leg movements or walking in circles.
Treatment options for seizures
Medication
• Doctors may prescribe an anti-epileptic drug or anticonvulsant to treat seizures. These drugs are taken every day, sometimes several times a day and/or for as long as needed.
• Common drugs include Dilantin (phenytoin), Tegretol (carbamazepine), Depakote (valproic acid) and Luminal (phenobarbital). These drugs may be used alone or in combination with each other when seizures are difficult to control.
• Most of them have side effects, such as fatigue, drowsiness, nausea and blurred vision.
Surgery
• Doctors usually consider surgery when the condition is not improved by medication. Surgery is done in the portion of the brain responsible for seizures (e.g., brain resection, disconnection or stimulation).
References
Medscape Medical Reference
|
__label__pos
| 0.943744 |
Emily Damuth, MD
Goldilocks in the ICU: Oxygenation targets for mechanical ventilation
Like all medical therapies, we have learned that treatment with oxygen comes at a cost. The medical literature is replete with the detriments of hyperoxia in the management of myocardial infarction, acute stroke, cardiac arrest and septic shock. What is the optimal oxygenation target for critically ill patients requiring mechanical ventilation? Three landmark trials can guide us: Oxygen-ICU, ICU-ROX and LOCO2. The end to the oxygenation fairytale remains to be told, but perhaps Goldilocks is “just right.”
Mastering mechanical ventilation: what is mechanical power?
Over the last three decades since the introduction of the term ventilator-induced lung injury (VILI), we have recognized that positive pressure mechanical ventilation can injure the lungs. It is widely recognized that the cornerstone of lung protective ventilation requires control of tidal volume and transpulmonary pressure. On the other hand, there has been considerably less focus on the impact of respiratory rate and flow on VILI. Mechanical power unites the causes of ventilator-induced lung injury in a single variable that incorporates both the elastic and resistive load of the positive pressure breath.6 In other words, mechanical power quantifies the energy delivered to the lung during each positive pressure breath by assessing the relative contribution of pressure, volume, flow and respiratory rate.
Category (Day):
Venous thrombosis after VV ECMO: What is the true prevalence?
Venous thromboembolism is considered one of the most preventable causes of in-hospital death. Venovenous extracorporeal membrane oxygenation (VV ECMO) utilization for severe respiratory failure has increased in the decade following the 2009 influenza A H1N1 pandemic and the publication of the CESAR trial.1 The interaction between a patient’s blood and the ECMO circuit produces an inflammatory response that can provoke both thrombotic and bleeding complications. In a systematic review of patients with H1N1 treated with VV ECMO published in 2013, the incidence of cannula-associated deep venous thrombosis (CaDVT) was estimated to be as low as 10 percent; however, more recent data suggests the incidence of venous thrombosis after decannulation is much higher. Additionally, a significant proportion of CaDVT are distal thrombi located in the vena cava, which would be missed with a traditional ultrasound diagnostic approach after decannulation from VV ECMO.
Category (Day):
A Novel Coronavirus (2019-nCoV)
While most coronaviruses cause mild respiratory illness consistent with the common cold, two lethal coronaviruses have been previously identified, including the acute respiratory syndrome coronavirus (SARS-CoV) in 2002 demonstrating 10% mortality and the Middle East respiratory syndrome coronavirus (MERS-CoV) in 2012 producing 37% mortality. In December 2019, a novel coronavirus (2019-nCoV) was isolated from a cluster of patients with pneumonia in Wuhan, China. As reported in the Lancet last week, two thirds of the affected patients in a case series had a history of exposure to the Huanan seafood market.
Category (Day):
Preventing ventilator-induce lung injury (VILI): Optimizing PEEP titration in ARDS
Lung-protective mechanical ventilation with low tidal volume and restricted plateau pressure improves survival in ARDS. However, the optimal approach to PEEP titration to minimize VILI is still debated. Should oxygenation, lung compliance, driving pressure or transpulmonary pressure guide adjustment of PEEP in ARDS?
Category (Day):
Leave the sedation alone! Diagnosis and management of patient-ventilator asynchrony
Patient-ventilator asynchrony is underrecognized yet associated with increased mortality, ICU length of stay and duration of mechanical ventilation in critical illness. How do you diagnose and treat it? Hint: the answer is rarely deep sedation or paralysis!
Category (Day):
Management of status epilepticus
A 72-year-old man develops generalized tonic-clonic activity at home. He receives lorazepam 4 mg intravenously during the 7-minute transport to the ED. He continues to have witnessed convulsions on your examination. Point-of-care glucose is normal. After supporting his airway, breathing and circulation, what medication should be administered second line for status epilepticus (SE)?
Category (Day):
Pages
|
__label__pos
| 0.941646 |
Segmentation Dataset for run length encoding
PR:
I was quite confuse with the SplitData Class and the kwargs, I was able to pull up the semi-finish notebook only because of the lesson 3 notebook. I follow the SegmentationDataset closely and run the debugger step by step. I notice that even though x and y should be a file list, the argument passed when creating the dataset was actually an array, I cannot figure out this part yet.
class SegmentationDataset(ImageDataset):
"A dataset for segmentation task."
def __init__(self, x:FilePathList, y:FilePathList, classes:Collection[Any], div=False, convert_mode='L'):
class SplitData():
"Regroups `train` and `valid` data, inside a `path`."
path:PathOrStr
train:LabelList
valid:LabelList
.....omitted
def datasets(self, dataset_cls:type, **kwargs)->'SplitDatasets':
"Create datasets from the underlying data using `dataset_cls` and passing the `kwargs`."
dss = [dataset_cls(*self.train.items.T, **kwargs)]
kwg_cls = kwargs.pop('classes') if 'classes' in kwargs else None
if hasattr(dss[0], 'classes'): kwg_cls = dss[0].classes
if kwg_cls is not None: kwargs['classes'] = kwg_cls
dss.append(dataset_cls(*self.valid.items.T, **kwargs))
cls = getattr(dataset_cls, '__splits_class__', SplitDatasets)
return cls(self.path, *dss)
Notebook:
I’m thinking we can just have an argument passed to SegmentationDataset to use open_mask_rle if it’s rle encoded. That’s nice work, thanks for your help!
1 Like
Thank you! Some sort of flag like rle=True? I guess the only difference for SegmentationDataset and SegmentationRLEDataset was the get_y_() and it takes an extra argument shape for telling open_mask_rle what is the size of the mask.
Even easier now that we can specify a mask_opener function, it’s jsut one more line in the data block API:
data = (ImageFileList.from_folder(path_img)
.label_from_func(get_y_fn)
.split_by_fname_file('../valid.txt')
.datasets(SegmentationDataset, classes=codes)
.set_attr(mask_opener=open_mask_rle)
.transform(get_transforms(), size=size, tfm_y=True)
.databunch(bs=bs)
.normalize(imagenet_stats))
1 Like
Just merged your PR. Can you add documentation for the three functions you introduced now?
Thanks a lot @sgugger. Am I supposed to make changes in fastai/fastai/doc_scr?
Btw, I have raised an issue on Github, seems that the links are broken on CONTRIBUTE.md. @stas Maybe you are maintaining this doc? https://github.com/fastai/fastai/blob/master/CONTRIBUTING.md
Made a PR for the doc, I am not sure how to strip out the metadata for input cell (cell number, etc). Also I add a small sample csv for mask_rle. Currently this file stay in the same folder docs/img/ despite the fact that it is a csv.
Thanks for adding this!
You need to run tools/run-after-git-clone to automatically get your notebooks stripped. We can’t merge unless you do that step.
I figure I should run tools/fastai-nbstripout -d file too, but seems that it’s still does not doing the job right…I saw a lot of noise with nbdiff.
Should I only re-run those cell that I added?
I saw something like this with nbdiff
Frankly, we don’t tend to do that, although it would make things a bit cleaner. :slight_smile:
Ok, that’s great to hear. I was afraid I was not doing it in a right way. :slight_smile:
Hi @sgugger, I just look at the mask_opener, how could I pass a parameter to the Segmentation dataset? The open_mask_rle () require an extra parameter for image shape
def _get_y(self,i): return self.mask_opener(self.y[i])
I found that I have not pushed the latest version of the demo notebook previously, I was attempting to pass an extra shape argument to the dataset.
It seems like the docs need an update; is an ImageFileList an ItemList subclass? The docs mention ImageItemList, but not ImageFileList, so it’s tricky to figure out what methods the ImageFileList is expected to have.
1 Like
You can just hit the tab completion to see what method of a class has? I think it will be easier than looking up docs for method.
The docs don’t mention ImageFileList because this function doesn’t exist anymore, it was only there during temporary development
1 Like
What should we be using? I only arrived at ImageFileList because ImageItemList (from the docs) isn’t recognized—NameError: name 'ImageItemList' is not defined. I just did a pull, but no change…
UPDATE: Okay, it must be something with my environment. After pulling, I can now see that ImageFileList in data.py is replaced with ImageItemList, but I still get the above error from my notebook. Strange.
Okay… I see what’s going on now. I installed fastai using anaconda, but the version in site-packages is out of date (and updating doesn’t seem to give me ImageItemList—the source file in site-packages still has ImageFileList). Can anyone advise on how to get anaconda to use the current github version (i.e., my local fastai repo)?
Okay, got it. I followed the advice here: https://stackoverflow.com/questions/19042389/conda-installing-upgrading-directly-from-github
Not the accepted answer, but the one that advises on installing git and pip from anaconda.
1 Like
I think you posted the wrong link? I’m having similar issues as well.
Wow, yeah, obviously I did… bizarre (that looks like my jupyter notebook link!). I’ll edit that post with the correct link!
|
__label__pos
| 0.590406 |
1244
A company named RT&T has a network of n switching stations connected by m high-speed communication links. Each customer’s phone is directly connected to one station in his or her area. The engineers of RT&T have developed a prototype video-phone system that allows two customers to see each other during a phone call. In order to have acceptable image quality, however, the number of links used to transmit video signals between the two parties cannot exceed 4. Suppose that RT&T’s network is represented by a graph. Design an efficient algorithm that computes, for each station, the set of stations it can reach using no more than 4 links.
|
__label__pos
| 0.859003 |
Entity API
Describes how to define and manipulate content and configuration entities.
Entities, in Drupal, are objects that are used for persistent storage of content and configuration information. See the Information types topic for an overview of the different types of information, and the Configuration API topic for more about the configuration API.
Each entity is an instance of a particular "entity type". Some content entity types have sub-types, which are known as "bundles", while for other entity types, there is only a single bundle. For example, the Node content entity type, which is used for the main content pages in Drupal, has bundles that are known as "content types", while the User content type, which is used for user accounts, has only one bundle.
The sections below have more information about entities and the Entity API; for more detailed information, see https://drupal.org/developing/api/entity
Defining an entity type
Entity types are defined by modules, using Drupal's Plugin API (see the Plugin API topic for more information about plugins in general). Here are the steps to follow to define a new entity type:
• Choose a unique machine name, or ID, for your entity type. This normally starts with (or is the same as) your module's machine name. It should be as short as possible, and may not exceed 32 characters.
• Define an interface for your entity's get/set methods, extending either \Drupal\Core\Config\Entity\ConfigEntityInterface or \Drupal\Core\Entity\ContentEntityInterface.
• Define a class for your entity, implementing your interface and extending either \Drupal\Core\Config\Entity\ConfigEntityBase or \Drupal\Core\Entity\ContentEntityBase, with annotation for @ConfigEntityType or @ContentEntityType in its documentation block.
• The 'id' annotation gives the entity type ID, and the 'label' annotation gives the human-readable name of the entity type. If you are defining a content entity type that uses bundles, the 'bundle_label' annotation gives the human-readable name to use for a bundle of this entity type (for example, "Content type" for the Node entity).
• The annotation will refer to several controller classes, which you will also need to define:
• For content entities, the annotation will refer to a number of database tables and their fields. These annotation properties, such as 'base_table', 'data_table', 'entity_keys', etc., are documented on \Drupal\Core\Entity\EntityType. Your module will also need to set up its database tables using hook_schema().
• For content entities that are displayed on their own pages, the annotation will refer to a 'uri_callback' function, which takes an object of the entity interface you have defined as its parameter, and returns routing information for the entity page; see node_uri() for an example. You will also need to add a corresponding route to your module's routing.yml file; see the node.view route in node.routing.yml for an example, and see Entity routes below for some notes.
• Define routing and links for the various URLs associated with the entity. These go into the 'links' annotation, with the link type as the key, and the route machine name (defined in your module's routing.yml file) as the value; see Entity routes below for some routing notes. Typical link types are:
• canonical: Default link, either to view (if entities are viewed on their own pages) or edit the entity.
• delete-form: Confirmation form to delete the entity.
• edit-form: Editing form.
• admin-form: Form for editing bundle or entity type settings.
• Other link types specific to your entity type can also be defined.
• If your content entity has bundles, you will also need to define a second plugin to handle the bundles. This plugin is itself a configuration entity type, so follow the steps here to define it. The machine name ('id' annotation) of this configuration entity class goes into the 'bundle_entity_type' annotation on the entity type class. For example, for the Node entity, the bundle class is \Drupal\node\Entity\NodeType, whose machine name is 'node_type'. This is the annotation value for 'bundle_entity_type' on the \Drupal\node\Entity\Node class. Also, the bundle config entity type annotation must have a 'bundle_of' entry, giving the machine name of the entity type it is acting as a bundle for.
• Additional annotations can be seen on entity class examples such as \Drupal\node\Entity\Node (content) and \Drupal\user\Entity\Role (configuration). These annotations are documented on \Drupal\Core\Entity\EntityType.
Entity routes
Entity routes, like other routes, are defined in *.routing.yml files; see the Menu and routing topic for more information. Here is a typical entry, for the block configure form:
block.admin_edit:
path: '/admin/structure/block/manage/{block}'
defaults:
_entity_form: 'block.default'
_title: 'Configure block'
requirements:
_entity_access: 'block.update'
Some notes:
• path: The {block} in the path is a placeholder, which (for an entity) must always take the form of {machine_name_of_entity_type}. In the URL, the placeholder value will be the ID of an entity item. When the route is used, the entity system will load the corresponding entity item and pass it in as an object to the controller for the route.
• defaults: For entity form routes, use _entity_form rather than the generic _content or _form. The value is composed of the entity type machine name and a form controller type from the entity annotation (see Defining an entity type above more more on controllers and annotation). So, in this example, block.default refers to the 'default' form controller on the block entity type, whose annotation contains:
controllers = {
"form" = {
"default" = "Drupal\block\BlockForm",
Defining a content entity bundle
For entity types that use bundles, such as Node (bundles are content types) and Taxonomy (bundles are vocabularies), modules and install profiles can define bundles by supplying default configuration in their config/install directories. (See the Configuration API topic for general information about configuration.)
There are several good examples of this in Drupal Core:
Loading, querying, and rendering entities
To load entities, use the entity storage manager, which is an object implementing \Drupal\Core\Entity\EntityStorageInterface that you can retrieve with:
$storage = \Drupal::entityManager()->getStorage('your_entity_type');
// Or if you have a $container variable:
$storage = $container->get('entity.manager')->getStorage('your_entity_type');
Here, 'your_entity_type' is the machine name of your entity type ('id' annotation on the entity class), and note that you should use dependency injection to retrieve this object if possible. See the Services and Dependency Injection topic for more about how to properly retrieve services.
To query to find entities to load, use an entity query, which is a object implementing \Drupal\Core\Entity\Query\QueryInterface that you can retrieve with:
// Simple query:
$query = \Drupal::entityQuery('your_entity_type');
// Or, if you have a $container variable:
$query_service = $container->get('entity.query');
$query = $query_service->get('your_entity_type');
If you need aggregation, there is an aggregate query avaialable, which implements \Drupal\Core\Entity\Query\QueryAggregateInterface:
$query \Drupal::entityQueryAggregate('your_entity_type');
// Or:
$query = $query_service->getAggregate('your_entity_type');
Also, you should use dependency injection to get this object if possible; the service you need is entity.query, and its methods getQuery() or getAggregateQuery() will get the query object.
In either case, you can then add conditions to your query, using methods like condition(), exists(), etc. on $query; add sorting, pager, and range if needed, and execute the query to return a list of entity IDs that match the query.
Here is an example, using the core File entity:
$fids = Drupal::entityQuery('file')
->condition('status', FILE_STATUS_PERMANENT, '<>')
->condition('changed', REQUEST_TIME - $age, '<')
->range(0, 100)
->execute();
$files = $storage->loadMultiple($fids);
The normal way of viewing entities is by using a route, as described in the sections above. If for some reason you need to render an entity in code in a particular view mode, you can use an entity view builder, which is an object implementing \Drupal\Core\Entity\EntityViewBuilderInterface that you can retrieve with:
$view_builder = \Drupal::entityManager()->getViewBuilder('your_entity_type');
// Or if you have a $container variable:
$view_builder = $container->get('entity.manager')->getViewBuilder('your_entity_type');
Then, to build and render the entity:
// You can omit the language ID if the default language is being used.
$build = $view_builder->view($entity, 'view_mode_name', $language->id);
// $build is a render array.
$rendered = drupal_render($build);
Access checking on entities
Entity types define their access permission scheme in their annotation. Access permissions can be quite complex, so you should not assume any particular permission scheme. Instead, once you have an entity object loaded, you can check for permission for a particular operation (such as 'view') at the entity or field level by calling:
$entity->access($operation);
$entity->nameOfField->access($operation);
The interface related to access checking in entities and fields is \Drupal\Core\Access\AccessibleInterface.
The default entity access controller invokes two hooks while checking access on a single entity: hook_entity_access() is invoked first, and then hook_ENTITY_TYPE_access() (where ENTITY_TYPE is the machine name of the entity type). If no module returns a TRUE or FALSE value from either of these hooks, then the entity's default access checking takes place. For create operations (creating a new entity), the hooks that are invoked are hook_entity_create_access() and hook_ENTITY_TYPE_create_access() instead.
The Node entity type has a complex system for determining access, which developers can interact with. This is described in the Node access topic.
See also
Internationalization
Entity CRUD, editing, and view hooks
Functions
Namesort descending Location Description
hook_entity_access drupal/core/modules/system/entity.api.php Control entity operation access.
hook_entity_create_access drupal/core/modules/system/entity.api.php Control entity create access.
hook_ENTITY_TYPE_access drupal/core/modules/system/entity.api.php Control entity operation access for a specific entity type.
hook_ENTITY_TYPE_create_access drupal/core/modules/system/entity.api.php Control entity create access for a specific entity type.
Classes
Namesort descending Location Description
ConfigEntityBase drupal/core/lib/Drupal/Core/Config/Entity/ConfigEntityBase.php Defines a base configuration entity class.
ConfigEntityListBuilder drupal/core/lib/Drupal/Core/Config/Entity/ConfigEntityListBuilder.php Defines the default class to build a listing of configuration entities.
ConfigEntityStorage drupal/core/lib/Drupal/Core/Config/Entity/ConfigEntityStorage.php Defines the storage class for configuration entities.
ConfigEntityType drupal/core/lib/Drupal/Core/Entity/Annotation/ConfigEntityType.php Defines a config entity type annotation object.
ContentEntityBase drupal/core/lib/Drupal/Core/Entity/ContentEntityBase.php Implements Entity Field API specific enhancements to the Entity class.
ContentEntityDatabaseStorage drupal/core/lib/Drupal/Core/Entity/ContentEntityDatabaseStorage.php Defines a base entity controller class.
ContentEntityType drupal/core/lib/Drupal/Core/Entity/Annotation/ContentEntityType.php Defines a content entity type annotation object.
ContentTranslationHandler drupal/core/modules/content_translation/src/ContentTranslationHandler.php Base class for content translation handlers.
EntityConfirmFormBase drupal/core/lib/Drupal/Core/Entity/EntityConfirmFormBase.php Provides a generic base class for an entity-based confirmation form.
EntityForm drupal/core/lib/Drupal/Core/Entity/EntityForm.php Base class for entity forms.
EntityListBuilder drupal/core/lib/Drupal/Core/Entity/EntityListBuilder.php Defines a generic implementation to build a listing of entities.
EntityType drupal/core/lib/Drupal/Core/Entity/Annotation/EntityType.php Defines an Entity type annotation object.
EntityType drupal/core/lib/Drupal/Core/Entity/EntityType.php Provides an implementation of an entity type and its metadata.
EntityViewBuilder drupal/core/lib/Drupal/Core/Entity/EntityViewBuilder.php Base class for entity view controllers.
Interfaces
Namesort descending Location Description
AccessibleInterface drupal/core/lib/Drupal/Core/Access/AccessibleInterface.php Interface for checking access.
ConfigEntityInterface drupal/core/lib/Drupal/Core/Config/Entity/ConfigEntityInterface.php Defines the interface common for all configuration entities.
ContentEntityInterface drupal/core/lib/Drupal/Core/Entity/ContentEntityInterface.php Defines a common interface for all content entity objects.
EntityStorageInterface drupal/core/lib/Drupal/Core/Entity/EntityStorageInterface.php Defines a common interface for entity storage classes.
EntityViewBuilderInterface drupal/core/lib/Drupal/Core/Entity/EntityViewBuilderInterface.php Defines a common interface for entity view controller classes.
File
drupal/core/modules/system/core.api.php, line 344
Documentation landing page and topics, plus core library hooks.
|
__label__pos
| 0.683011 |
Alloys For Hydrogen Storage
Alloys For Hydrogen Storage
Hydrogen storage alloys are intermetallic compounds that can absorb, store and release hydrogen in large quantities reversibly. Hydrogen storage alloys have the characteristics of large hydrogen storage, no pollution, safety and reliability. During hydrogen absorption and discharge of hydrogen storage alloys, thermal effect occurs. Heat is given off by hydrogen absorption and absorbed by dehydrogenation. Hydrogen storage alloys absorb hydrogen when temperature is lowered or pressure is increased. Conversely, when temperature is increased or pressure is lowered, hydrogen is released.
Classification:
• Rare earth series hydrogen storage alloys: The rare earth series hydrogen storage alloys can be expressed by general formula AB5 and has CaCu5 hexagonal structure. LaNi5 is a typical example, which was discovered by Philips Laboratory in 1969. The hydrogen storage capacity of LaNi5 alloy can reach to 1.37wt% and the dehydrogenation does not need high temperature and high pressure. LaNi5 cannot meet the large demand of industrial production because of the high price of pure rare earth metals. In order to reduce the cost, people use mixed rare earth metal elements to replace part of La in LaNi5, and replace Ni with Co, Al, Mn, Fe, Cr, Cu, Si, Sn to improve the performance. Therefore, multi-mixed rare earth hydrogen storage alloys are developed.
The crystal structure of LaNi5 alloy.Figure 1. The crystal structure of LaNi5 alloy.
• Titanium series hydrogen storage alloys: At present, a variety of titanium hydrogen storage alloys have been developed, such as ferrotitanium, titanium-manganese, titanium-chromium, titanium-zirconium, titanium-nickel, titanium-copper and others. Among them, ferrotitanium is AB type, and the others are AB2 type alloys. Among titanium hydrogen storage alloys, ferrotitanium and manganese hydrogen storage alloys are the most practical. FeTi alloy is a typical AB type hydrogen storage alloy with CsCl structure. As a kind of hydrogen storage material, the hydrogen storage capacity of FeTi alloy is even slightly higher than LaNi5. TiMnx is also a promising type of hydrogen storage alloys. This type alloy has a hydrogen absorption capacity of 1.89wt %, and is easily activated at room temperature.
• Magnesium series hydrogen storage alloys: MgNi2 is a typical magnesium material, which has great potential for light and high energy hydrogen storage materials. Both in terms of material price and theoretical hydrogen storage capacity, it is superior to rare earth alloy and titanium alloy, and its theoretical capacity is up to 1000mAh/ g, which is about 2.7 times that of LaNi5 alloy. However, MgNi2 alloy can only absorb and release hydrogen at low temperature. Furthermore, the reaction speed is very slow, and it is difficult to activate, which makes its practical application difficult.
An example of magnesium series hydrogen storage alloys.Figure 2. An example of magnesium series hydrogen storage alloys.
• Zirconium series hydrogen storage alloys: Zirconium series alloys are represented by ZrV2, ZrCr2, ZrMn2 and others, which can be expressed by the general formula AB2. Zirconium series alloys have the advantages of large hydrogen absorption, fast reaction with hydrogen, easy activation and no lag effect, so they are promising new hydrogen storage materials.
References
1. Liu Y, Pan H, Gao M, et al. Advanced hydrogen storage alloys for Ni/MH rechargeable batteries [J]. Journal of Materials Chemistry, 2011, 21.
2. Edalati K, Uehiro R, Ikeda Y, et al. Design and synthesis of a magnesium alloy for room temperature hydrogen storage[J]. Acta Materialia, 2018:88-96.
Please kindly note that our products and services are for research use only.
Have a question? Get a Free Consultation
Verification code
|
__label__pos
| 0.880998 |
Banner Health
Making healthcare easier
INSTALL
Common Cold
What is the common cold?
The common cold is a viral infection that affects the upper respiratory tract, primarily the nose and throat. It is one of the most frequent illnesses people experience, with millions of cases occurring each year.
The common cold is typically not severe and symptoms, while unpleasant, usually improve on their own without medical treatment.
Most people recover within seven to ten days. However, symptoms may last longer for children, older adults and individuals with weakened immune systems.
What are symptoms of a cold?
A common cold can cause different symptoms, and they can be more or less severe for each person. The most common cold symptoms include:
• Stuffy or runny nose
• Sore throat
• Cough
• Sneezing
• Mild body aches or a mild headache
• Low-grade fever
• General feeling of being unwell (malaise)
Cold symptoms typically come on slowly and can linger for a week or more. They're usually uncomfortable but not serious. However, you should see a doctor if they get worse or don't go away.
Causes of common colds
Viruses cause the common cold, with rhinoviruses being the most common. Other viruses that can cause colds include coronaviruses, respiratory syncytial virus (RSV) and adenoviruses. These viruses also infect the upper respiratory tract, causing typical cold symptoms.
Colds are highly contagious and can spread easily from person to person. Viruses primarily transmit through these ways:
• Airborne droplets: When someone who is sick coughs, sneezes or talks, they release small drops into the air with the virus. Others nearby can breathe in these drops and get sick too.
• Direct contact: Touching objects with the virus on them, like doorknobs or phones, and then touching your face can spread it.
Who is at higher risk for colds?
Several factors can increase your risk of catching the common cold. These include:
• Age: Infants and young children under six years old are at the greatest risk for colds. Their immune systems are still developing, and they often have close contact with other children in daycare or school settings.
• Weakened immune system: People with weak immune systems, from chronic health conditions or poor nutrition, are more likely to get infections like colds.
• Time of year: Colds are more common during the fall and winter months. People spending more time indoors and close to others helps viruses spread easily.
• Smoking: Smokers and those exposed to secondhand smoke have a higher risk of catching colds. Smoking damages the respiratory tract and weakens the immune system, making it easier for viruses to take hold.
• Exposure to infected individuals: Close contact with someone who has a cold increases your risk of catching it. This includes living in the same household, working in close quarters or being in crowded places.
Practicing good hygiene and avoiding exposure to known sources of infection can help protect you and make it less likely you will get sick. This is especially important for individuals with conditions like cancer, high blood pressure and heart disease.
How to prevent colds
Here are some effective strategies to prevent getting colds:
• Wash your hands frequently with soap and water. Make sure to wash for at least 20 seconds. Hand washing is especially important after being in public places, touching surfaces or being near someone who is sick. If soap and water are not available, use an alcohol-based hand sanitizer.
• Try to avoid close contact with people who have colds. This includes keeping a safe distance and avoiding physical contact such as handshakes and hugs.
• Stay home if you are sick to avoid spreading the virus to others.
• Clean and disinfect frequently touched surfaces like doorknobs, light switches, keyboards and phones to prevent the spread of viruses through touch.
• Avoid touching your eyes, nose and mouth with unwashed hands, as this is a common way for viruses to enter your body.
• When you cough or sneeze, use a tissue or the inside of your elbow to cover your mouth and nose. Throw away tissues immediately and wash your hands to prevent spreading the virus to others.
Conditions often confused with a cold
Since several illnesses have similar signs and symptoms with the common cold, it can be easy to confuse them. Here's how to know if you may have a cold or another conditions:
Flu (influenza)
• Similarities: Fever, cough, sore throat, runny or stuffy nose, body aches
• Differences: Flu symptoms are usually more severe and can include high fever, chills and extreme fatigue. The flu often comes on suddenly and can lead to more serious health problems like pneumonia.
Allergies
• Similarities: Runny or stuffy nose, sneezing, coughing
• Differences: Allergies often cause itchy eyes, nose, or throat, and symptoms persist longer. Allergies do not typically cause a fever. They are triggered by allergens such as pollen, dust or pet dander.
Sinus infection (sinusitis)
• Similarities: Runny or stuffy nose, headache, cough
• Differences: Sinus infections may cause facial pain or pressure, thick yellow or green mucus, and can last longer than a cold. Sinusitis may also cause bad breath and a reduced sense of smell.
COVID-19
• Similarities: Fever, cough, sore throat, fatigue
• Differences: COVID-19 can cause unique symptoms such as loss of taste or smell, shortness of breath and has the potential for more serious complications like severe respiratory issues. Testing is necessary to confirm COVID-19, as symptoms can overlap with both colds and flu.
Understanding these differences can help you determine whether you are dealing with a common cold or another condition that may require different treatment or medical attention. If you're unsure, it's always best to consult a health care provider.
How do you treat a cold?
Treating a cold means relieving symptoms and helping your body's immune system as it fights off the virus. Here are some effective ways to treat a cold:
• Rest and hydration: Get plenty of rest. Sleep helps your body heal and recover. Drink lots of fluids like water, herbal teas and clear broths to stay hydrated. Proper hydration helps thin mucus and keep your throat moist.
• Use over-the-counter (OTC) medications to relieve symptoms: You can use over-the-counter medications, also known as cold medicines, to alleviate symptoms. Pain relievers such as acetaminophen or ibuprofen can reduce fever and ease aches. Decongestants can help with nasal congestion and cough syrups or lozenges can soothe a sore throat and suppress coughs.
• Take medication as directed: Be sure to follow the dosage instructions and be aware of any side effect each medicine may cause. Also, people who have high blood pressure need to take extra precautions when choosing cold medicine to avoid potential complications.
• Use a humidifier or saline nasal spray: A humidifier adds moisture to the air, which can help ease congestion and soothe irritated nasal passages. Saline nasal sprays can also help by moisturizing the nasal passages and loosening mucus.
• Drink warm fluids like tea or soup: Drinking warm fluids, such as herbal tea, broth, or soup, can provide comfort and help soothe a sore throat. The steam from hot liquids can also help ease congestion.
Remember, antibiotics treat bacterial infections and are not effective against viral infections like the common cold. If your symptoms persist or worsen, or if you have any concerns, consult a health care provider.
When to see a doctor for a cold
While the common cold is usually mild, there are times when you should seek medical attention. Here are some signs that it's time to see a doctor:
• Symptoms that worsen or don’t improve after 7 to 10 days.
• A high fever (over 102°F or 39°C) or a fever that lasts more than a few days.
• Severe symptoms like difficulty breathing, chest pain, persistent headaches or confusion.
• Symptoms that suggest a secondary infection, such as an ear infection or sinusitis, including severe ear pain, facial pain or pressure, thick yellow or green nasal discharge, or a significant change in symptoms after initial improvement.
If you're ever in doubt about your symptoms, it's always best to consult with a health care provider.
Key points
It's important to remember that while the common cold can be uncomfortable, most cases are mild and manageable at home with rest and self-care measures. However, if you experience severe or persistent symptoms, it's essential to seek medical advice to rule out more serious conditions and receive appropriate treatment.
Comprehensive care and support
At Banner Health, our team of health care professionals is dedicated to providing compassionate care and personalized treatment to help you feel better and recover from cold and flu symptoms. Whether you need to schedule an appointment with a primary care provider or require immediate care at one of our urgent care locations, we're here to meet your health care needs.
We understand that convenience is important when seeking health care services. That's why we offer online scheduling for appointments with primary care providers or you can save your spot at a nearby Banner Urgent Care.
|
__label__pos
| 0.993442 |
Unverified Commit e9e7e73d authored by Bernhard Schuster's avatar Bernhard Schuster Committed by GitHub
Browse files
test: add unit test to catch missing distribution to subsystems faster (#2495)
* test: add unit test to catch missing distribution to subsystems faster
* add a simple count
* introduce proc macro to generate dispatch type
* refactor
* refactor
* chore: add license
* fixup unit test
* fixup merge
* better errors
* better fmt
* fix error spans
* better docs
* better error messages
* ui test foo
* Update node/subsystem/dispatch-gen/src/lib.rs
Co-authored-by: default avatarBastian Köcher <[email protected]>
* Update node/network/bridge/src/lib.rs
Co-authored-by: default avatarBastian Köcher <[email protected]>
* Update node/subsystem/Cargo.toml
Co-authored-by: default avatarBastian Köcher <[email protected]>
* Update node/subsystem/dispatch-gen/src/lib.rs
Co-authored-by: default avatarBastian Köcher <[email protected]>
* Update node/subsystem/dispatch-gen/src/lib.rs
Co-authored-by: default avatarBastian Köcher <[email protected]>
* Update node/network/bridge/src/lib.rs
Co-authored-by: Andronik Ordian's avatarAndronik Ordian <[email protected]>
* fix compilation
* use find_map
* drop the silly 2, use _inner instead
* Update node/network/bridge/src/lib.rs
Co-authored-by: Andronik Ordian's avatarAndronik Ordian <[email protected]>
* Update node/subsystem/dispatch-gen/src/lib.rs
Co-authored-by: default avatarBastian Köcher <[email protected]>
* nail deps down
* more into()
* flatten
* missing use statement
* fix messages order
Co-authored-by: default avatarBastian Köcher <[email protected]>
Co-authored-by: Andronik Ordian's avatarAndronik Ordian <[email protected]>
parent 01657e9d
Pipeline #125947 passed with stages
in 34 minutes and 13 seconds
......@@ -188,9 +188,9 @@ dependencies = [
[[package]]
name = "assert_matches"
version = "1.4.0"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "695579f0f2520f3774bb40461e5adb066459d4e0af4d59d20175484fb8e9edf1"
checksum = "9b34d609dfbaf33d6889b2b7106d3ca345eacad44200913df5ba02bfd31d2ba9"
[[package]]
name = "async-channel"
......@@ -5526,6 +5526,7 @@ dependencies = [
"polkadot-node-primitives",
"polkadot-node-subsystem-test-helpers",
"polkadot-primitives",
"polkadot-procmacro-subsystem-dispatch-gen",
"polkadot-statement-table",
"sc-network",
"smallvec 1.6.1",
......@@ -5686,6 +5687,17 @@ dependencies = [
"sp-version",
]
[[package]]
name = "polkadot-procmacro-subsystem-dispatch-gen"
version = "0.1.0"
dependencies = [
"assert_matches",
"proc-macro2",
"quote",
"syn",
"trybuild",
]
[[package]]
name = "polkadot-rpc"
version = "0.8.29"
......@@ -6402,9 +6414,9 @@ dependencies = [
[[package]]
name = "quote"
version = "1.0.7"
version = "1.0.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aa563d17ecb180e500da1cfd2b028310ac758de548efdd203e18f283af693f37"
checksum = "c3d0b9745dc2debf507c8422de05d7226cc1f0644216dfdfead988f9b1ab32a7"
dependencies = [
"proc-macro2",
]
......@@ -9893,6 +9905,20 @@ dependencies = [
"structopt",
]
[[package]]
name = "trybuild"
version = "1.0.41"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "99471a206425fba51842a9186315f32d91c56eadc21ea4c21f847b59cf778f8b"
dependencies = [
"glob",
"lazy_static",
"serde",
"serde_json",
"termcolor",
"toml",
]
[[package]]
name = "twox-hash"
version = "1.5.0"
......
......@@ -65,6 +65,7 @@ members = [
"node/primitives",
"node/service",
"node/subsystem",
"node/subsystem/dispatch-gen",
"node/subsystem-test-helpers",
"node/subsystem-util",
"node/jaeger",
......
......@@ -14,27 +14,22 @@
// You should have received a copy of the GNU General Public License
// along with Substrate. If not, see <http://www.gnu.org/licenses/>.
use browser_utils::{browser_configuration, init_logging_and_telemetry, set_console_error_panic_hook, Client};
use log::info;
use wasm_bindgen::prelude::*;
use browser_utils::{
Client,
browser_configuration, init_logging_and_telemetry, set_console_error_panic_hook,
};
/// Starts the client.
#[wasm_bindgen]
pub async fn start_client(chain_spec: String, log_level: String) -> Result<Client, JsValue> {
start_inner(chain_spec, log_level)
.await
.map_err(|err| JsValue::from_str(&err.to_string()))
start_inner(chain_spec, log_level).await.map_err(|err| JsValue::from_str(&err.to_string()))
}
async fn start_inner(chain_spec: String, log_directives: String) -> Result<Client, Box<dyn std::error::Error>> {
set_console_error_panic_hook();
let telemetry_worker = init_logging_and_telemetry(&log_directives)?;
let chain_spec = service::PolkadotChainSpec::from_json_bytes(chain_spec.as_bytes().to_vec())
.map_err(|e| format!("{:?}", e))?;
let chain_spec =
service::PolkadotChainSpec::from_json_bytes(chain_spec.as_bytes().to_vec()).map_err(|e| format!("{:?}", e))?;
let telemetry_handle = telemetry_worker.handle();
let config = browser_configuration(chain_spec, Some(telemetry_handle)).await?;
......
......@@ -28,10 +28,8 @@ use polkadot_subsystem::{
SubsystemResult, jaeger,
};
use polkadot_subsystem::messages::{
NetworkBridgeMessage, AllMessages, AvailabilityDistributionMessage,
BitfieldDistributionMessage, PoVDistributionMessage, StatementDistributionMessage,
CollatorProtocolMessage, ApprovalDistributionMessage, NetworkBridgeEvent,
AvailabilityRecoveryMessage,
NetworkBridgeMessage, AllMessages,
CollatorProtocolMessage, NetworkBridgeEvent,
};
use polkadot_primitives::v1::{Hash, BlockNumber};
use polkadot_node_network_protocol::{
......@@ -565,35 +563,7 @@ async fn dispatch_validation_events_to_all<I>(
I: IntoIterator<Item = NetworkBridgeEvent<protocol_v1::ValidationProtocol>>,
I::IntoIter: Send,
{
let messages_for = |event: NetworkBridgeEvent<protocol_v1::ValidationProtocol>| {
let av_d = std::iter::once(event.focus().ok().map(|m| AllMessages::AvailabilityDistribution(
AvailabilityDistributionMessage::NetworkBridgeUpdateV1(m)
)));
let b = std::iter::once(event.focus().ok().map(|m| AllMessages::BitfieldDistribution(
BitfieldDistributionMessage::NetworkBridgeUpdateV1(m)
)));
let p = std::iter::once(event.focus().ok().map(|m| AllMessages::PoVDistribution(
PoVDistributionMessage::NetworkBridgeUpdateV1(m)
)));
let s = std::iter::once(event.focus().ok().map(|m| AllMessages::StatementDistribution(
StatementDistributionMessage::NetworkBridgeUpdateV1(m)
)));
let ap = std::iter::once(event.focus().ok().map(|m| AllMessages::ApprovalDistribution(
ApprovalDistributionMessage::NetworkBridgeUpdateV1(m)
)));
let av_r = std::iter::once(event.focus().ok().map(|m| AllMessages::AvailabilityRecovery(
AvailabilityRecoveryMessage::NetworkBridgeUpdateV1(m)
)));
av_d.chain(b).chain(p).chain(s).chain(ap).chain(av_r).filter_map(|x| x)
};
ctx.send_messages(events.into_iter().flat_map(messages_for)).await
ctx.send_messages(events.into_iter().flat_map(AllMessages::dispatch_iter)).await
}
#[tracing::instrument(level = "trace", skip(events, ctx), fields(subsystem = LOG_TARGET))]
......@@ -635,8 +605,12 @@ mod tests {
use polkadot_subsystem::{ActiveLeavesUpdate, FromOverseer, OverseerSignal};
use polkadot_subsystem::messages::{
StatementDistributionMessage, BitfieldDistributionMessage,
AvailabilityDistributionMessage,
AvailabilityRecoveryMessage,
ApprovalDistributionMessage,
BitfieldDistributionMessage,
PoVDistributionMessage,
StatementDistributionMessage
};
use polkadot_node_subsystem_test_helpers::{
SingleItemSink, SingleItemStream, TestSubsystemContextHandle,
......@@ -818,45 +792,47 @@ mod tests {
event: NetworkBridgeEvent<protocol_v1::ValidationProtocol>,
virtual_overseer: &mut TestSubsystemContextHandle<NetworkBridgeMessage>,
) {
// Ordering must match the enum variant order
// in `AllMessages`.
assert_matches!(
virtual_overseer.recv().await,
AllMessages::AvailabilityDistribution(
AvailabilityDistributionMessage::NetworkBridgeUpdateV1(e)
AllMessages::StatementDistribution(
StatementDistributionMessage::NetworkBridgeUpdateV1(e)
) if e == event.focus().expect("could not focus message")
);
assert_matches!(
virtual_overseer.recv().await,
AllMessages::BitfieldDistribution(
BitfieldDistributionMessage::NetworkBridgeUpdateV1(e)
AllMessages::AvailabilityDistribution(
AvailabilityDistributionMessage::NetworkBridgeUpdateV1(e)
) if e == event.focus().expect("could not focus message")
);
assert_matches!(
virtual_overseer.recv().await,
AllMessages::PoVDistribution(
PoVDistributionMessage::NetworkBridgeUpdateV1(e)
AllMessages::AvailabilityRecovery(
AvailabilityRecoveryMessage::NetworkBridgeUpdateV1(e)
) if e == event.focus().expect("could not focus message")
);
assert_matches!(
virtual_overseer.recv().await,
AllMessages::StatementDistribution(
StatementDistributionMessage::NetworkBridgeUpdateV1(e)
AllMessages::BitfieldDistribution(
BitfieldDistributionMessage::NetworkBridgeUpdateV1(e)
) if e == event.focus().expect("could not focus message")
);
assert_matches!(
virtual_overseer.recv().await,
AllMessages::ApprovalDistribution(
ApprovalDistributionMessage::NetworkBridgeUpdateV1(e)
AllMessages::PoVDistribution(
PoVDistributionMessage::NetworkBridgeUpdateV1(e)
) if e == event.focus().expect("could not focus message")
);
assert_matches!(
virtual_overseer.recv().await,
AllMessages::AvailabilityRecovery(
AvailabilityRecoveryMessage::NetworkBridgeUpdateV1(e)
AllMessages::ApprovalDistribution(
ApprovalDistributionMessage::NetworkBridgeUpdateV1(e)
) if e == event.focus().expect("could not focus message")
);
}
......@@ -1546,4 +1522,38 @@ mod tests {
}
});
}
#[test]
fn spread_event_to_subsystems_is_up_to_date() {
// Number of subsystems expected to be interested in a network event,
// and hence the network event broadcasted to.
const EXPECTED_COUNT: usize = 6;
let mut cnt = 0_usize;
for msg in AllMessages::dispatch_iter(NetworkBridgeEvent::PeerDisconnected(PeerId::random())) {
match msg {
AllMessages::CandidateValidation(_) => unreachable!("Not interested in network events"),
AllMessages::CandidateBacking(_) => unreachable!("Not interested in network events"),
AllMessages::CandidateSelection(_) => unreachable!("Not interested in network events"),
AllMessages::ChainApi(_) => unreachable!("Not interested in network events"),
AllMessages::CollatorProtocol(_) => unreachable!("Not interested in network events"),
AllMessages::StatementDistribution(_) => { cnt += 1; }
AllMessages::AvailabilityDistribution(_) => { cnt += 1; }
AllMessages::AvailabilityRecovery(_) => { cnt += 1; }
AllMessages::BitfieldDistribution(_) => { cnt += 1; }
AllMessages::BitfieldSigning(_) => unreachable!("Not interested in network events"),
AllMessages::Provisioner(_) => unreachable!("Not interested in network events"),
AllMessages::PoVDistribution(_) => { cnt += 1; }
AllMessages::RuntimeApi(_) => unreachable!("Not interested in network events"),
AllMessages::AvailabilityStore(_) => unreachable!("Not interested in network events"),
AllMessages::NetworkBridge(_) => unreachable!("Not interested in network events"),
AllMessages::CollationGeneration(_) => unreachable!("Not interested in network events"),
AllMessages::ApprovalVoting(_) => unreachable!("Not interested in network events"),
AllMessages::ApprovalDistribution(_) => { cnt += 1; }
// Add variants here as needed, `{ cnt += 1; }` for those that need to be
// notified, `unreachable!()` for those that should not.
}
}
assert_eq!(cnt, EXPECTED_COUNT);
}
}
......@@ -23,6 +23,7 @@ polkadot-node-network-protocol = { path = "../network/protocol" }
polkadot-primitives = { path = "../../primitives" }
polkadot-statement-table = { path = "../../statement-table" }
polkadot-node-jaeger = { path = "../jaeger" }
polkadot-procmacro-subsystem-dispatch-gen = { path = "dispatch-gen" }
sc-network = { git = "https://github.com/paritytech/substrate", branch = "master" }
smallvec = "1.6.1"
sp-core = { git = "https://github.com/paritytech/substrate", branch = "master" }
......
[package]
name = "polkadot-procmacro-subsystem-dispatch-gen"
version = "0.1.0"
authors = ["Parity Technologies <[email protected]>"]
edition = "2018"
description = "Small proc macro to create the distribution code for network events"
[lib]
proc-macro = true
[dependencies]
syn = { version = "1.0.60", features = ["full"] }
quote = "1.0.9"
proc-macro2 = "1.0.24"
assert_matches = "1.5.0"
[dev-dependencies]
trybuild = "1.0.41"
// Copyright 2021 Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
// Polkadot is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Polkadot is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Polkadot. If not, see <http://www.gnu.org/licenses/>.
use proc_macro2::TokenStream;
use quote::{quote, ToTokens};
use std::fmt;
use syn::{parse2, Error, Fields, FieldsNamed, FieldsUnnamed, Ident, ItemEnum, Path, Result, Type, Variant};
#[proc_macro_attribute]
pub fn subsystem_dispatch_gen(attr: proc_macro::TokenStream, item: proc_macro::TokenStream) -> proc_macro::TokenStream {
let attr: TokenStream = attr.into();
let item: TokenStream = item.into();
let mut backup = item.clone();
impl_subsystem_dispatch_gen(attr.into(), item).unwrap_or_else(|err| {
backup.extend(err.to_compile_error());
backup
}).into()
}
/// An enum variant without base type.
#[derive(Clone)]
struct EnumVariantDispatchWithTy {
// enum ty name
ty: Ident,
// variant
variant: EnumVariantDispatch,
}
impl fmt::Debug for EnumVariantDispatchWithTy {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}::{:?}", self.ty, self.variant)
}
}
impl ToTokens for EnumVariantDispatchWithTy {
fn to_tokens(&self, tokens: &mut proc_macro2::TokenStream) {
if let Some(inner) = &self.variant.inner {
let enum_name = &self.ty;
let variant_name = &self.variant.name;
let quoted = quote! {
#enum_name::#variant_name(#inner::from(event))
};
quoted.to_tokens(tokens);
}
}
}
/// An enum variant without the base type, contains the relevant inner type.
#[derive(Clone)]
struct EnumVariantDispatch {
/// variant name
name: Ident,
/// The inner type for which a `From::from` impl is anticipated from the input type.
/// No code will be generated for this enum variant if `inner` is `None`.
inner: Option<Type>,
}
impl fmt::Debug for EnumVariantDispatch {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}(..)", self.name)
}
}
fn prepare_enum_variant(variant: &mut Variant) -> Result<EnumVariantDispatch> {
let skip = variant.attrs.iter().find(|attr| attr.path.is_ident("skip")).is_some();
variant.attrs = variant.attrs.iter().filter(|attr| !attr.path.is_ident("skip")).cloned().collect::<Vec<_>>();
let variant = variant.clone();
let span = variant.ident.span();
let inner = match variant.fields.clone() {
// look for one called inner
Fields::Named(FieldsNamed { brace_token: _, named }) if !skip => named
.iter()
.find_map(
|field| {
if let Some(ident) = &field.ident {
if ident == "inner" {
return Some(Some(field.ty.clone()))
}
}
None
},
)
.ok_or_else(|| {
Error::new(span, "To dispatch with struct enum variant, one element must named `inner`")
})?,
// technically, if it has no inner types we cound not require the #[skip] annotation, but better make it consistent
Fields::Unnamed(FieldsUnnamed { paren_token: _, unnamed }) if !skip => unnamed
.first()
.map(|field| Some(field.ty.clone()))
.ok_or_else(|| Error::new(span, "Must be annotated with skip, even if no inner types exist."))?,
_ if skip => None,
Fields::Unit => {
return Err(Error::new(
span,
"Must be annotated with #[skip].",
))
}
Fields::Unnamed(_) => {
return Err(Error::new(
span,
"Must be annotated with #[skip] or have in `inner` element which impls `From<_>`.",
))
}
Fields::Named(_) => {
return Err(Error::new(
span,
"Must be annotated with #[skip] or the first wrapped type must impl `From<_>`.",
))
}
};
Ok(EnumVariantDispatch { name: variant.ident, inner })
}
fn impl_subsystem_dispatch_gen(attr: TokenStream, item: TokenStream) -> Result<proc_macro2::TokenStream> {
let event_ty = parse2::<Path>(attr)?;
let mut ie = parse2::<ItemEnum>(item)?;
let message_enum = ie.ident.clone();
let variants = ie.variants.iter_mut().try_fold(Vec::<EnumVariantDispatchWithTy>::new(), |mut acc, variant| {
let variant = prepare_enum_variant(variant)?;
if variant.inner.is_some() {
acc.push(EnumVariantDispatchWithTy { ty: message_enum.clone(), variant })
}
Ok::<_, syn::Error>(acc)
})?;
let mut orig = ie.to_token_stream();
let msg = "Generated by #[subsystem_dispatch_gen] proc-macro.";
orig.extend(quote! {
impl #message_enum {
#[doc = #msg]
pub fn dispatch_iter(event: #event_ty) -> impl Iterator<Item=Self> + Send {
let mut iter = None.into_iter();
#(
let mut iter = iter.chain(std::iter::once(event.focus().ok().map(|event| {
#variants
})));
)*
iter.filter_map(|x| x)
}
}
});
Ok(orig)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn basic() {
let attr = quote! {
NetEvent<foo::Bar>
};
let item = quote! {
/// Documentation.
#[derive(Clone)]
enum AllMessages {
Sub1(Inner1),
#[skip]
/// D3
Sub3,
/// D4
#[skip]
Sub4(Inner2),
/// D2
Sub2(Inner2),
}
};
let output = impl_subsystem_dispatch_gen(attr, item).expect("Simple example always works. qed");
println!("//generated:");
println!("{}", output);
}
#[test]
fn ui() {
let t = trybuild::TestCases::new();
t.compile_fail("tests/ui/err-*.rs");
t.pass("tests/ui/ok-*.rs");
}
}
#![allow(dead_code)]
use polkadot_procmacro_subsystem_dispatch_gen::subsystem_dispatch_gen;
/// The event type in question.
#[derive(Clone, Copy)]
enum Event {
Smth,
Else,
}
impl Event {
fn focus(&self) -> std::result::Result<Inner, ()> {
unimplemented!("foo")
}
}
/// This should have a `From<Event>` impl but does not.
#[derive(Clone)]
enum Inner {
Foo,
Bar(Event),
}
#[subsystem_dispatch_gen(Event)]
#[derive(Clone)]
enum AllMessages {
/// Foo
Vvvvvv(Inner),
/// Missing a `#[skip]` annotation
Uuuuu,
}
fn main() {
let _x = AllMessages::dispatch_iter(Event::Else);
}
error: Must be annotated with #[skip].
--> $DIR/err-01-missing-skip.rs:32:5
|
32 | Uuuuu,
| ^^^^^
error[E0599]: no variant or associated item named `dispatch_iter` found for enum `AllMessages` in the current scope
--> $DIR/err-01-missing-skip.rs:36:27
|
27 | enum AllMessages {
| ---------------- variant or associated item `dispatch_iter` not found here
...
36 | let _x = AllMessages::dispatch_iter(Event::Else);
| ^^^^^^^^^^^^^ variant or associated item not found in `AllMessages`
#![allow(dead_code)]
use polkadot_procmacro_subsystem_dispatch_gen::subsystem_dispatch_gen;
/// The event type in question.
#[derive(Clone, Copy, Debug)]
enum Event {
Smth,
Else,
}
impl Event {
fn focus(&self) -> std::result::Result<Intermediate, ()> {
Ok(Intermediate(self.clone()))
}
}
#[derive(Debug, Clone)]
struct Intermediate(Event);
/// This should have a `From<Event>` impl but does not.
#[derive(Debug, Clone)]
enum Inner {
Foo,
Bar(Intermediate),
}
#[subsystem_dispatch_gen(Event)]
#[derive(Clone)]
enum AllMessages {
/// Foo
Vvvvvv(Inner),
#[skip]
Uuuuu,
}
|
__label__pos
| 0.962479 |
Mathematical notebook revisited
Arjen Markus (19 december 2011) I am fascinated by mathematics in general and geometry in particular. So, this weekend I extended my old mathematical notebook with the specific purpose of visualising classical geometrical constructions. For this I needed an extended version - which you will find below (some convenience procedures and a different user-interface).
The idea is to use the flexibility of the text widget to allow for an interactive display of the construction: via tags the example below highlights the steps in the construction of lines tangent to a circles. All in the classical tradition of the ancient Greek.
I use the tag mechanism of the canvas to delete those items I do not need anymore - right now: just used by the "Reset" button.
(Note: it is not perfect yet - opening a second file for instance does not refresh the text widget properly)
Example: Tangent lines to a circle
Here is a screenshot:
Matematical notebook - screenshot
# Classical construction:
# Tangent lines to a circle
#
<h1>Using compasses and straightedge</h1>
<p>
The classical tools for geometrical constructions, such as the hexagon
or the bissectrix of an angle are the compasses and the straightedge.
<p>
Here we show how to use these tools to construct the lines through a
given point that are tangent to a circle. The construction is shown in
steps.
(This illustrates that it is easy to make interactive displays).
<p>
First we draw the point P and the circle C (see the canvas on the
right-hand side)
<p>
Then we take the following steps (press the Next button):
<ul>
<tag>step1
<li>
Draw a line piece through the centre of the circle and the point.
<tag>step2
<li>
Construct the line through the middle of this line piece, perpendicular
to that line.
<tag>step3
<li>
The midpoint is the centre of a circle through the centre of C and P.
<tag>step4
<li>
Finally the points where the two circles intersect are the points where
the two tangents lines meet circle C.
<tag>step5
<li>
Draw the tangent lines.
</ul>
<tag>normal
This way we have constructed the tangent lines.
@init {
proc resetTags {} {
variable TXT
variable fontNormal
$TXT tag configure step1 -foreground lightgrey -font $fontNormal
$TXT tag configure step2 -foreground lightgrey -font $fontNormal
$TXT tag configure step3 -foreground lightgrey -font $fontNormal
$TXT tag configure step4 -foreground lightgrey -font $fontNormal
$TXT tag configure step5 -foreground lightgrey -font $fontNormal
}
resetTags
proc showStep {step} {
variable CNV
variable xcentre
variable ycentre
variable radius
variable xpoint
variable ypoint
variable xp ;# Midpoint between point P and centre of circle C
variable yp
variable radiusM ;# Radius of the circle through P and centre
variable tangentPoints
switch $step {
"0" {
$CNV delete steps
}
"1" {
#
# Step: line piece connecting point and centre
#
set id [polyline [list $xcentre $ycentre $xpoint $ypoint]]
$CNV itemconfigure $id -tag steps
}
"2" {
#
# Midpoint of the line piece
#
set id1 [circle $xcentre $ycentre 1.5]
set id2 [circle $xpoint $ypoint 1.5]
$CNV itemconfigure $id1 -tag steps
$CNV itemconfigure $id2 -tag steps
#
# Compute the two intersection points
# and draw them, as well as the line piece between them
#
set points [circleIntersection [list $xcentre $ycentre 1.5] \
[list $xpoint $ypoint 1.5]]
set id [polyline $points]
$CNV itemconfigure $id -tag steps
set xp [expr {([lindex $points 0] + [lindex $points 2]) / 2.0}]
set yp [expr {([lindex $points 1] + [lindex $points 3]) / 2.0}]
set id1 [point [lindex $points 0] [lindex $points 1] black]
set id2 [point [lindex $points 2] [lindex $points 3] black]
set id3 [point $xp $yp red]
$CNV itemconfigure $id1 -tag steps
$CNV itemconfigure $id2 -tag steps
$CNV itemconfigure $id3 -tag steps
}
"3" {
#
# Step: circle through centre and point
#
set radiusM [expr {hypot($xcentre-$xp,$ycentre-$yp)}]
set id [circle $xp $yp $radiusM red]
$CNV itemconfigure $id -tag steps
}
"4" {
#
# Step: intersection points of the two circles
#
set tangentPoints [circleIntersection [list $xcentre $ycentre $radius] \
[list $xp $yp $radiusM]]
set id1 [point [lindex $tangentPoints 0] [lindex $tangentPoints 1] green]
set id2 [point [lindex $tangentPoints 2] [lindex $tangentPoints 3] green]
$CNV itemconfigure $id1 -tag steps
$CNV itemconfigure $id2 -tag steps
}
"5" {
#
# Step: draw the tangent lines
#
set id1 [infiniteLine [lindex $tangentPoints 0] [lindex $tangentPoints 1] $xpoint $ypoint green]
set id2 [infiniteLine [lindex $tangentPoints 2] [lindex $tangentPoints 3] $xpoint $ypoint green]
$CNV itemconfigure $id1 -tag steps -width 2
$CNV itemconfigure $id2 -tag steps -width 2
}
}
}
}
@canvasright 400 400 {
variable state
variable xcentre
variable ycentre
variable xpoint
variable ypoint
variable radius
variable CNV
set state 0
scale {-2.5 -2.0 1.5 2.0}
set xcentre -1.0
set ycentre 0.0
set xpoint 0.7
set ypoint 0.6
set radius 0.8
set id [circle $xcentre $ycentre $radius black] ;# Circle C
$CNV itemconfigure $id -width 2
point $xpoint $ypoint black ;# Point P
set state 0
}
@button Reset {
resetTags
$CNV delete steps
set state 0
}
@button Next {
variable state
resetTags
incr state
$TXT tag configure step$state -foreground black -font $fontBold
showStep $state
}
Example: The witch of Agnesi
Here is a second example, as it presents an animated construction, you will have to run it yourself to see it.
# witch_agnesi.txt --
# Construct the curve known as the witch of Agnesi
#
<h1>Witch of Agnesi</h1>
<p>
With compasses and straightedge you can construct all manner of curves,
though sometimes the process is more mechanical than mathematical.
<p>
Here is an example: the witch of Agnesi is constructed by drawing
a line through a fixed point on a circle and using the intersection
through that circle and a tangent line to define a new point.
<p>
The process is illustrated in the figure on the right. Press the
"Go" button to see how it works.
<p>
The curve has the parametric form:
<pre>
x = 2 a cot t
y = a (1-cos(2t))
</pre>
where "a" is the radius of the circle (cf. mathworld.wolfram.com/WitchofAgnesi.html).
@init {
proc intersectionCircleLine {circle line} {
foreach {xc yc radius} $circle {break}
foreach {x1 y1 x2 y2} $line {break}
set dx [expr {$x2 - $x1}]
set dy [expr {$y2 - $y1}]
set length [expr {hypot($dx,$dy)}]
set xn [expr {-$dy/$length}]
set yn [expr {$dx/$length}]
set mu [expr {($x1-$xc)*$xn + ($y1-$yc)*$yn}]
set xmid [expr {$xc + $mu * $xn}]
set ymid [expr {$yc + $mu * $yn}]
set dist [expr {sqrt($radius**2 - ($xmid-$xc)**2 - ($ymid-$yc)**2)/$length}]
set xi1 [expr {$xmid + $dx * $dist}]
set yi1 [expr {$ymid + $dy * $dist}]
set xi2 [expr {$xmid - $dx * $dist}]
set yi2 [expr {$ymid - $dy * $dist}]
return [list $xi1 $yi1 $xi2 $yi2]
}
proc drawWitch {x} {
variable CNV
variable coords
$CNV delete line
$CNV delete witch
set line [list 0.0 -1.0 $x 1.0]
set id1 [polyline $line black]
set id2 [point $x 1.0 red]
$CNV itemconfigure $id1 -tag line
$CNV itemconfigure $id2 -tag line
#
# Determine the points of intersection and select the
# right one - the one with y > -1.0
#
set intersectionPoints [intersectionCircleLine {0.0 0.0 1.0} $line]
foreach {xp yp} $intersectionPoints {
if { $yp > -0.999 } {
break
}
}
#
# Draw the auxiliary lines
#
set id1 [polyline [list $x 1.0 $x -1.0] red]
set id2 [polyline [list $xp $yp $x $yp] red]
set id3 [point $xp $yp red]
set id4 [point $x $yp red]
$CNV itemconfigure $id1 -tag line
$CNV itemconfigure $id2 -tag line
$CNV itemconfigure $id3 -tag line
$CNV itemconfigure $id4 -tag line
lappend coords $x $yp
if { [llength $coords] > 2 } {
set id [polyline $coords red]
$CNV itemconfigure $id -tag witch
}
#
# Note: the procedure is defined within the MathData namespace
#
if { $x < 4.0 } {
after 100 [list ::MathData::drawWitch [expr {$x+0.1}]]
} else {
$CNV delete line
$CNV itemconfigure $id -width 2
}
}
}
@canvasright 400 400 {
scale {-4.0 -4.0 4.0 4.0}
infiniteLine -2.0 1.0 2.0 1.0 black
circle 0.0 0.0 1.0 black
point 0.0 -1.0 black
console show
puts [intersectionCircleLine {0.0 0.0 1.0} {0.0 0.0 0.0 1.0}]
}
@button Go {
variable coords
set coords {}
drawWitch -4.0
}
Code: Updated mathbook
# mathbook.tcl --
# Script to show notes on mathematical subjects
#
# TODO:
# - Implement a number of useful drawing commands
# - Implement a formula renderer (a basic one _is_ available)
# - Implement more convenient bindings
# - Describe the application
#
# Missing commands:
# @refresh - define your own refresh method
# @label - allow a label (useful for variable text)
# @button - allow a pushbutton
#
package require Tcl 8.5
package require Tk
if { [tk windowingsystem] == "x11" } {
. configure -background #dcdad5
option add *background #dcdad5
option add *foreground black
option add *borderWidth 1 widgetDefault
option add *activeBorderWidth 1 widgetDefault
option add *selectBorderWidth 1 widgetDefault
option add *font -adobe-helvetica-medium-r-normal-*-12-*-*-*-*-*-*
option add *padX 2
option add *padY 4
option add *Listbox.background white
option add *Listbox.selectBorderWidth 0
option add *Listbox.selectForeground white
option add *Listbox.selectBackground #4a6984
option add *Entry.foreground black
option add *Entry.background white
option add *Entry.selectBorderWidth 0
option add *Entry.selectForeground white
option add *Entry.selectBackground #4a6984
option add *Text.background white
option add *Text.selectBorderWidth 0
option add *Text.selectForeground white
option add *Text.selectBackground #4a6984
option add *Menu.activeBackground #4a6984
option add *Menu.activeForeground white
option add *Menu.activeBorderWidth 0
option add *Menu.highlightThickness 0
option add *Menu.borderWidth 2
option add *MenuButton.activeBackground #4a6984
option add *MenuButton.activeForeground white
option add *MenuButton.activeBorderWidth 0
option add *MenuButton.highlightThickness 0
option add *MenuButton.borderWidth 0
option add *highlightThickness 0
option add *troughColor #bdb6ad
}
# MathData --
# Namespace for the user-defined commands and data
#
namespace eval ::MathData:: {
variable CNV ""
variable TXT ""
variable fontNormal "Courier 10"
variable fontBold "Courier 10 bold"
variable fontItalic "Courier 10 italic"
}
# scale --
# Set up the scaling for the given canvas
# Arguments:
# data List of data (x, y, x, y ...)
# Result:
# None
# Side effects:
# Scaling parameters set
# Note:
# TODO: Should make sure there is some scaling involved
# if only using pixels
#
proc ::MathData::scale { data } {
variable CNV
variable SCALE
set width [$CNV cget -width]
set height [$CNV cget -height]
set xmin 1.0e30
set xmax -1.0e30
set ymin 1.0e30
set ymax -1.0e30
foreach {x y} $data {
if { $x < $xmin } { set xmin $x }
if { $x > $xmax } { set xmax $x }
if { $y < $ymin } { set ymin $y }
if { $y > $ymax } { set ymax $y }
}
if { $xmin == $xmax } { set xmax [expr {$xmax+1.0}] }
if { $ymin == $ymax } { set ymax [expr {$ymax+1.0}] }
set SCALE(xscale) [expr {$width/double($xmax-$xmin)}]
set SCALE(yscale) [expr {$height/double($ymax-$ymin)}]
set SCALE(xmin) $xmin
set SCALE(xmax) $xmax
set SCALE(ymin) $ymin
set SCALE(ymax) $ymax
}
# polyline --
# Draw a line consisting of multiple points
# Arguments:
# data List of data (x, y, x, y ...)
# colour Colour to use (default: black)
# Result:
# Canvas ID of the polyline
# Side effects:
# Line drawn according to current scales
#
proc ::MathData::polyline { data {colour black} } {
variable CNV
variable SCALE
set xscale $SCALE(xscale)
set yscale $SCALE(yscale)
set xmin $SCALE(xmin)
set xmax $SCALE(xmax)
set ymin $SCALE(ymin)
set ymax $SCALE(ymax)
set pixels {}
foreach {x y} $data {
set px [expr {$xscale*($x-$xmin)}]
set py [expr {$yscale*($ymax-$y)}]
lappend pixels $px $py
}
$CNV create line $pixels -fill $colour
}
# circle --
# Draw a circle with given centre and radius
# Arguments:
# xcentre X-coordinate of the centre
# ycentre Y-coordinate of the centre
# radius Radius of circle
# colour Colour to use (default: black)
# filled Filled or not (default: not)
# Result:
# Canvas ID of the circle
# Side effects:
# Line drawn according to current scales
#
proc ::MathData::circle { xcentre ycentre radius {colour black} {filled 0} } {
variable CNV
variable SCALE
set xscale $SCALE(xscale)
set yscale $SCALE(yscale)
set xmin $SCALE(xmin)
set xmax $SCALE(xmax)
set ymin $SCALE(ymin)
set ymax $SCALE(ymax)
set pixels {}
foreach {x y} [list [expr {$xcentre-$radius}] [expr {$ycentre-$radius}] \
[expr {$xcentre+$radius}] [expr {$ycentre+$radius}] ] {
set px [expr {$xscale*($x-$xmin)}]
set py [expr {$yscale*($ymax-$y)}]
lappend pixels $px $py
}
$CNV create oval $pixels -fill [expr {$filled? $colour : {}}] -outline $colour
}
# point --
# Draw a point with given coordinates
# Arguments:
# xpoint X-coordinate
# ypoint Y-coordinate
# colour Colour to use (default: black)
# Result:
# Canvas ID of the point
# Side effects:
# Line drawn according to current scales
#
proc ::MathData::point { xpoint ypoint {colour black} } {
variable CNV
variable SCALE
set xscale $SCALE(xscale)
set yscale $SCALE(yscale)
set xmin $SCALE(xmin)
set xmax $SCALE(xmax)
set ymin $SCALE(ymin)
set ymax $SCALE(ymax)
set pixels {}
set px [expr {$xscale*($xpoint-$xmin)}]
set py [expr {$yscale*($ymax-$ypoint)}]
lappend pixels [expr {$px-1}] [expr {$py-1}] [expr {$px+1}] [expr {$py+1}]
$CNV create rectangle $pixels -fill $colour -outline $colour
}
# text --
# Draw a text string at a given position
# Arguments:
# x X coordinate
# y Y coordinate
# string String to show
# Result:
# Canvas ID of the text object
# Side effects:
# String drawn
#
proc ::MathData::text { x y string } {
variable CNV
variable SCALE
set xscale $SCALE(xscale)
set yscale $SCALE(yscale)
set xmin $SCALE(xmin)
set xmax $SCALE(xmax)
set ymin $SCALE(ymin)
set ymax $SCALE(ymax)
set px [expr {$xscale*($x-$xmin)}]
set py [expr {$yscale*($ymax-$y)}]
$CNV create text $px $py -text $string -anchor nw
}
# axes --
# Draw two lines representing the axes
# Arguments:
# None
# Result:
# None
# Side effects:
# Two lines drawn (no labels yet)
#
proc ::MathData::axes { } {
variable CNV
variable SCALE
set width [$CNV cget -width]
set height [$CNV cget -height]
set xscale $SCALE(xscale)
set yscale $SCALE(yscale)
set xmin $SCALE(xmin)
set xmax $SCALE(xmax)
set ymin $SCALE(ymin)
set ymax $SCALE(ymax)
set px0 [expr {$xscale*(0.0-$xmin)}]
set py0 [expr {$yscale*($ymax-0.0)}]
$CNV create line $px0 0 $px0 $height -fill black
$CNV create line 0 $py0 $width $py0 -fill black
}
# func --
# Repeatedly run a function and return xy-pairs
# Arguments:
# funcname Name of the function (procedure)
# xmin Minimum x-value
# xmax Maximum x-value
# nosteps Number of steps (inbetween; default: 50)
# Result:
# List of x, y values
#
proc ::MathData::func { funcname xmin xmax { nosteps 50 } } {
set coords {}
set xstep [expr {($xmax-$xmin)/$nosteps}]
for { set i 0 } { $i <= $nosteps } { incr i } {
set x [expr {$xmin+$i*$xstep}]
set y [$funcname $x]
lappend coords $x $y
}
return $coords
}
# circleIntersection --
# Determine the points where two circles intersect
# Arguments:
# circle1 X-, Y-coordinate and radius of first circle
# circle2 X-, Y-coordinate and radius of second circle
# Result:
# List of x/y coordinates or empty
#
proc ::MathData::circleIntersection { circle1 circle2 } {
set coords {}
foreach {x1 y1 r1} $circle1 {break}
foreach {x2 y2 r2} $circle2 {break}
#
# Do we have an intersection?
#
set distc [expr {sqrt(($x1-$x2)**2 + ($y1-$y2)**2)}]
if { $distc**2 <= $r1**2+$r2**2 } {
set a [expr {0.5 * ($distc + ($r1**2-$r2**2)/$distc)}]
set dist [expr {sqrt($r1**2 - $a**2)}]
set dx [expr {$x2-$x1}]
set dy [expr {$y2-$y1}]
set dd [expr {hypot($dx,$dy)}]
set xc [expr {$x1 + $a * $dx/$dd}]
set yc [expr {$y1 + $a * $dy/$dd}]
set xn [expr {$dist * $dy/$dd}]
set yn [expr {-$dist * $dx/$dd}]
set xp1 [expr {$xc + $xn}]
set xp2 [expr {$xc - $xn}]
set yp1 [expr {$yc + $yn}]
set yp2 [expr {$yc - $yn}]
set coords [list $xp1 $yp1 $xp2 $yp2]
}
return $coords
}
# infiniteLine --
# Draw a line through two points that extends indefinitely
# Arguments:
# xp1 X-coordinate first point
# yp1 Y-coordinate first point
# xp2 X-coordinate second point
# yp2 Y-coordinate second point
# colour Colour of the line (default: black)
# Result:
# Canvas ID of the line
#
proc ::MathData::infiniteLine { xp1 yp1 xp2 yp2 {colour black} } {
variable CNV
variable SCALE
set dx [expr {$xp2-$xp1}]
set dy [expr {$yp2-$yp1}]
if { abs($dx) > abs($dy) } {
set xn1 $SCALE(xmin)
set lambda [expr {($xn1 - $xp1) / $dx}]
set yn1 [expr {$yp1 + $dy * $lambda}]
set xn2 $SCALE(xmax)
set lambda [expr {($xn2 - $xp1) / $dx}]
set yn2 [expr {$yp1 + $dy * $lambda}]
} else {
set yn1 $SCALE(ymin)
set lambda [expr {($yn1 - $yp1) / $dy}]
set xn1 [expr {$xp1 + $dx * $lambda}]
set yn2 $SCALE(ymax)
set lambda [expr {($yn2 - $yp1) / $dy}]
set xn2 [expr {$xp1 + $dx * $lambda}]
}
polyline [list $xn1 $yn1 $xn2 $yn2] $colour
}
# MathBook --
# Namespace for the mathbook commands and data
#
namespace eval ::MathBook:: {
variable count 0
variable CNV
variable CNVRIGHT ""
variable CNVCODE
variable TXT
variable REFRESH
}
# @init --
# Execute code once (when reading the notebook file)
# Arguments:
# code Code to run
# Result:
# Nothing
#
proc ::MathBook::@init { code } {
namespace eval ::MathData $code
}
# @canvas --
# Create a canvas of given size
# Arguments:
# width Width in pixels
# height Height in pixels
# code Code to execute
# Result:
# Nothing
# Side effect:
# Canvas created
#
proc ::MathBook::@canvas { width height code } {
variable CNV
variable CNVCODE
variable TXT
variable count
incr count
set CNV $TXT.cnv$count
set ::MathData::CNV $CNV
set CNVCODE($CNV) $code
canvas $CNV -width $width -height $height -bg white
$TXT insert end "\n"
$TXT window create end -window $CNV
$TXT insert end "\n"
namespace eval ::MathData $code
}
# @canvasright --
# Create a canvas of given size on the right of the text
# Arguments:
# width Width in pixels
# height Height in pixels
# code Code to execute
# Result:
# Nothing
# Side effect:
# Canvas created
#
proc ::MathBook::@canvasright { width height code } {
variable CNV
variable CNVCODE
variable CNVRIGHT
if { $CNVRIGHT eq "" } {
set CNVRIGHT [canvas .cnv -width $width -height $height -bg white]
grid configure $CNVRIGHT -row 0 -column 2
} else {
$CNVRIGHT configure -width $width -height $height
}
set CNV $CNVRIGHT
set ::MathData::CNV $CNV
namespace eval ::MathData $code
}
# @entry --
# Create an entry widget of given width
# Arguments:
# name Name of the associated variable
# width Width of the widget (in characters)
# Result:
# Nothing
# Side effect:
# Entry created
#
proc ::MathBook::@entry { name width } {
variable TXT
variable count
incr count
set entry $TXT.entry$count
entry $entry -textvariable ::MathData::$name -width $width
$TXT window create end -window $entry
bind $entry <Return> ::MathBook::Refresh
}
# @label --
# Create a label widget of given width
# Arguments:
# name Name of the associated variable
# width Width of the widget (in characters)
# Result:
# Nothing
# Side effect:
# Label created
#
proc ::MathBook::@label { name width } {
variable TXT
variable count
incr count
set label $TXT.label$count
label $label -textvariable ::MathData::$name -width $width \
-background white -anchor nw -font $::MathData::fontNormal
$TXT window create end -window $label
}
# @button --
# Create a pushbutton
# Arguments:
# label Label for the pushbutton
# code Code to apply
# Result:
# Nothing
# Side effect:
# Button created
#
proc ::MathBook::@button { label code } {
variable TXT
variable count
variable buttoncount
incr buttoncount
set button .buttons.button$buttoncount
button $button -text $label -command "namespace eval ::MathData [list $code]" -width 10
grid configure $button -column $buttoncount -row 0
}
# @refresh --
# Define a refresh method - called before the canvas methods
# Arguments:
# code Code to be run on refresh
# Result:
# None
# Side effect:
# Defines the REFRESH variable
#
proc ::MathBook::@refresh { code } {
variable REFRESH
set REFRESH $code
}
# Refresh --
# Refresh the canvases and labels etc.
# Arguments:
# None
# Result:
# None
# Side effect:
# Canvases refreshed and whatever occurs in the @refresh method
#
proc ::MathBook::Refresh { } {
variable CNV
variable CNVCODE
variable REFRESH
variable TXT
variable count
if { [info exists REFRESH] } {
namespace eval ::MathData $REFRESH
}
foreach {name code} [array get CNVCODE] {
set ::MathData::CNV $name
$name delete all
namespace eval ::MathData $code
}
}
# initMainWindow --
# Create the main window
# Arguments:
# None
# Result:
# None
# Side effect:
# Main window created
#
proc ::MathBook::initMainWindow { } {
variable TXT
variable count
set count 0
set buttoncount -1
set menu [menu .mb -type menubar]
. configure -menu .mb
.mb add cascade -label File -underline 0 -menu .mb.file
menu .mb.file -tearoff 0
.mb.file add command -label Open -command ::MathBook::OpenTextFile
.mb.file add command -label Exit -command exit
set tf .textframe
set tw $tf.text
set buttons .buttons
set TXT $tw
set ::MathData::TXT $tw
frame $tf
scrollbar $tf.scrollx -orient horiz -command "$tw xview"
scrollbar $tf.scrolly -command "$tw yview"
text $tw -yscrollcommand "$tf.scrolly set" \
-xscrollcommand "$tf.scrollx set" \
-fg black -bg white -font "courier 10" \
-wrap word
grid $tw $tf.scrolly
grid $tf.scrollx x
grid $tw -sticky news
grid $tf.scrolly -sticky ns
grid $tf.scrollx -sticky ew
grid columnconfigure $tf 0 -weight 1
grid rowconfigure $tf 0 -weight 1
frame $buttons
button $buttons.refresh -text Refresh -command ::MathBook::Refresh -width 10
grid $tf - -sticky news
grid $buttons.refresh -sticky news
grid $buttons -sticky news
grid columnconfigure . 0 -weight 1
grid columnconfigure . 1 -weight 1
grid rowconfigure . 0 -weight 1
$tw tag configure bigbold -font "helvetica 12 bold"
$tw tag configure normal -font "courier 10"
$tw tag configure preform -font "courier 10" -background "lightgrey"
$tw tag configure indent -lmargin2 16
}
# fillTextWindow --
# Fill the text window
# Arguments:
# filename Name of the notebook file to use
# Result:
# None
# Side effect:
# Text window filled
#
proc ::MathBook::fillTextWindow { filename } {
variable TXT
set infile [open $filename "r"]
set just ""
set indent ""
set tag normal
while { [gets $infile line] >= 0 } {
set trimmed [string trim $line]
#
# Analyse the contents ...
#
if { [string first "#" $trimmed] == 0 } {
continue
}
# Ignore empty lines, unless in preformatted text
if { $trimmed == "" } {
if { $just != "" } {
$TXT insert end "\n" $tag
}
continue
}
if { [string first "@" $trimmed] == 0 } {
RunWholeCommand $infile $line
continue
}
if { [string first "<h1>" $trimmed] == 0 } {
set tag bigbold
set trimmed [string map {<h1> "" </h1> ""} $trimmed]
}
if { [string first "<b>" $trimmed] == 0 } {
set tag [list bold $indent]
set trimmed [string map {<b> "" </b> ""} $trimmed]
}
if { [string first "<i>" $trimmed] == 0 } {
set tag [list italic $indent]
set trimmed [string map {<i> "" </i> ""} $trimmed]
}
if { [string first "<pre>" $trimmed] == 0 } {
$TXT insert end "\n"
set tag "preform"
set just "\n"
continue
}
if { [string first "</pre>" $trimmed] == 0 } {
$TXT insert end "\n"
set tag [list normal $indent]
set just ""
continue
}
if { [string first "<p>" $trimmed] == 0 } {
$TXT insert end "\n\n"
continue
}
if { [string first "<br>" $trimmed] == 0 } {
$TXT insert end "\n"
continue
}
if { [string first "<ul>" $trimmed] == 0 } {
set indent "indent"
continue
}
if { [string first "</ul>" $trimmed] == 0 } {
$TXT insert end "\n\n"
set indent ""
continue
}
if { [string first "<li>" $trimmed] == 0 } {
$TXT insert end "\n* " indent
continue
}
if { [string first "<tag>" $trimmed] == 0 } {
set tag [list [lindex $tag 0] $indent [string trim [string range $trimmed 5 end]]]
continue
}
if { $just == "" } {
$TXT insert end "$trimmed " $tag
} else {
$TXT insert end "$line\n" $tag
}
if { $tag == "bigbold" || $tag == "italic" || $tag == "bold" } {
set tag "normal"
}
}
close $infile
$TXT configure -state disabled
wm title . "$filename - MathBook"
}
# OpenTextFile --
# Select a text file and display the contents
# Arguments:
# None
# Result:
# None
# Side effect:
# The contents is shown
#
proc ::MathBook::OpenTextFile {} {
variable TXT
variable CNVRIGHT
set types {
{{Text Files} {*.txt}}
{{All Files} *}
}
set filename [tk_getOpenFile -filetypes $types -parent . -title "Select mathbook file"]
if { $filename != "" } {
$TXT delete 1.0 end
if { $CNVRIGHT != "" } {
destroy $CNVRIGHT
}
fillTextWindow $filename
}
}
# RunWholeCommand --
# Run an embedded command
# Arguments:
# infile Handle to the file
# line First line of the command
# Result:
# None
# Side effect:
# Whatever the command does
#
proc ::MathBook::RunWholeCommand { infile line } {
variable TXT
while { ! [info complete $line] } {
if { [gets $infile nextline] >= 0 } {
append line "\n$nextline"
} else {
break
}
}
eval $line
}
# main --
# Get the whole thing going
#
::MathBook::initMainWindow
if { [llength $argv] > 0 } {
::MathBook::fillTextWindow [lindex $argv 0]
} else {
$::MathBook::TXT insert end "-- please open a mathbook file --"
wm title . "MathBook"
}
JM I may have done something wrong, but I am getting this error when opening the sample file...
wrong # args: should be "linsert list index element ?element ...?"
wrong # args: should be "linsert list index element ?element ...?"
while executing
"linsert $tag 0"
(procedure "fillTextWindow" line 81)
invoked from within
"fillTextWindow $filename"
(procedure "::MathBook::OpenTextFile" line 13)
invoked from within
"::MathBook::OpenTextFile"
arjen - 2011-12-20 03:11:03
The linsert should be an lindex and the given example does not load properly (i.e. no canvas on the right) so that is something I have to look into. Try:
wish mathbook.tcl tangentcircle.txt
instead. I have _not_ seen the error message you report though. Are you using Tcl/Tk 8.4 or 8.5/8.6?
arjen - 2011-12-20 03:13:53
Just answered my own question: you get this error with Tcl/Tk 8.4. I will have corrected this. I will look at the other problem later.
-- Deletion should be done in the right order - that was the mistake. Corrected.
Jorge - 2011-12-20 22:28:13
correct, I just change to 8.6 and works fine (it still throws error messages with 8.4, ** vs pow for example). Thanks, nice job.
arjen - 2011-12-21 02:58:12
Thanks for the compliment. As for ** versus pow(), I have to add a package require Tcl 8.5 in there.
|
__label__pos
| 0.998021 |
What are the negative side effects of too much screen time?
22 March 2019
Active Health
From young children to working adults to even the elderly, most of us Singaporeans get a fair amount of daily screen time from the numerous digital devices that we own. We consume so much content from our digital screens that it’s easy to forget the negative side effects of too much screen time. However, with research showing that excess screen time can impair brain development or even lead to long-term medical conditions such as diabetes, it’s time to stop ignoring the risks of overdosing on our screens.
negative side effects of too much screen time Photo: Active Health
The consequences of too much screen time
• Physical strain to your eyes and body
Spending long hours staring at a screen definitely takes a toll on your body, especially your eyes. Excessive screen time not only strains your eyes and leaves them feeling dry, but can also lead to retina damage and blurred vision. Myopia is already a big problem that plenty of Singaporeans face and staring incessantly at a screen only worsens existing conditions. Furthermore, being constantly hunched over (like how so many people tend to do with their smartphones) also affects your posture and can cause stiffness and pain both the neck and shoulder.
Sleep deprivation
The amount of screen time you clock has a direct impact on how much sleep you are getting, given that the blue light emitted from digital screens interferes with the production of the sleep hormone melatonin in your body. This is why using digital devices right before bedtime makes it much harder for you to fall asleep. Research has found that Singaporeans aren’t getting enough sleep and cutting our screen time certainly makes for a good solution to this problem!
• Increased risk of obesity
The passive, sedentary nature of digital device usage means you are depriving yourself of physical activity and exercise. This contributes to increased weight-gain, especially if you tend to snack quite a bit while watching TV. Furthermore, the numerous fast-food commercials on TV also tempt many of us into eating more unhealthily. Simply watching two more additional hours of TV each day can significantly increase the risk of becoming obese.
• Susceptibility to chronic health conditions
The increased risk of obesity also makes you more vulnerable to chronic diseases such as type 2 diabetes, heart disease and cancer. Scientific research has shown that spending long hours sitting when using digital devices can cause a spike in insulin and blood glucose levels, and also lead to an accumulation of fat in your bloodstream. Spending less time on screens and more on being physically active can definitely help you avoid these problems!
• Loss of cognitive ability
One of the scariest consequences of excessive screen time is its effect on one's mental health. Too much screen time alters the very structure of your brain by causing the grey matter that’s responsible for cognitive processes to shrink, as well as deformity to the white matter that serves as the network to the brain’s signal communication. This manifests itself in the form of poorer concentration, weaker memory, slower information processing and weaker impulse control – these effects are particularly worrying when it comes to children, whose brains are still developing.
• Impaired socialising skills
Using digital devices is a largely solitary activity – we don’t have much real-life interactions when we are preoccupied with what's happening on the screen. This could lead to increasing anti-social tendencies and feelings of withdrawal. With children in particular, this precious opportunity to develop important social skills through playing with their friends is lost when they spend time on digital devices instead.
• Weakened emotional judgment
Too much screen time also affects your ability to register and process emotions. Desensitisation to violent content is one particularly worrying side effect of weakened emotional judgment. According to scientific research, exposure to violent media content can also increase aggression levels, especially in younger children and adolescents.
• Delayed learning in young children
When it comes to young children, the alteration of the brain’s structure due to excessive screen time can impact their learning abilities. In particular, children who watch more TV have more difficulty picking up languages – this delay in learning can be as much as 50% higher for every 30 minutes spent watching the TV. Letting kids watch educational programs may not be the best way to educate them either – young children learn better by physically exploring, and letting them watch shows passively hinders their brains from being active and engaged.
• Lower self-esteem
Finally, spending too much time in the virtual world of screens can also have a negative impact on how you perceive yourself. The time you lose that could have been spent on forming relationships with other people, discovering and honing your passions, and creating new experiences leads to a weakened sense of self-identity and confidence. When the bulk of your time is spent on social media sites, this problem is exacerbated because you may end up worrying more about your virtual self-image instead of your real one. For children and youth, the dangers of cyberbullying and self-image issues are particularly worrying.
Perhaps one of the most worrying consequences of excessive screen usage is how it sends your brain into an addictive state. The rush of the pleasure-inducing dopamine we get from using our digital devices activates our brain’s reward centre and insidiously makes us crave more. This is why many of us find ourselves trapped in a cycle of screen addiction.
negative side effects of too much screen time Photo: Active Health
What leads to screen addiction?
Addiction, be it in any format or to any substance, can be viewed as a biochemical consequence where the body craves having the reward centre of its brain constantly stimulated. Pursuit of pleasurable activities results in the release of dopamine, one of the three major “feel good” hormones (the other two being endorphins and serotonin). Just as how the body responds with a burst of energy upon the consumption of a caffeinated beverage, the mind experiences a sense of euphoria that's the result of a dopamine rush. The problem: the body becomes desensitised to this feeling over time (par for the course with all forms of stimuli). As a result, it seeks similar experiences of a higher intensity to make up for it.
This is essentially how screen addiction is developed within individuals. Digital devices now occupy a significant area within our personal space and technology has integrated itself into multiple facets of our lives. From the essentials like food to luxuries such as home movies, practically all forms of consumer-level technology are geared towards generating a rewarding experience. Addiction can occur at any stage of life too; an elderly individual can just as easily fall victim as a toddler. Frequent exposure, be it the result of a conscious or unconscious decision, is enough to trigger a neurochemical cascade of reactions that may be minor at first, but will in fact snowball over time if left unmitigated.
Information overload
Another reason behind our growing addiction to digital screens is the culture of instant gratification that has risen over the years. As technology progressed, we started to demand more results at a faster rate. Unfortunately, devices like the smartphone and the tablet were perfectly tailored to suit this new demand. Combined with the high-speed capabilities of the Internet, handheld devices were able to spit out content like news, search results, and social media at an astounding rate.
To the brain, information is like food. Unlike food however, information doesn't take up any real physical space. As a result, the brain continues to consume unabated. Does staying up to the wee hours of the morning while hopping between YouTube and Instagram sound familiar? Even the consumption of useless information (as long as when done in a relaxed state) can trigger a release of dopamine. For the mind, the only constraint at play here is fatigue, which is how late-night screen sessions typically end – tapping out due to tiredness.
Now even though no one likes waking up feeling like they only had half the amount of sleep they actually had, the connection it makes between the two events (watching screens till late and feeling dog-tired the following day) is only surface-level. What it DOES remember is that it had a ball of a time the previous night, and with a new day comes another opportunity to relive (or outdo) that exact experience. Hence, the cycle repeats itself.
negative side effects of too much screen time Photo: Active Health
Seeking help
There is no uniform approach to combating addiction. Much of it depends on the individual, the kind of lifestyle they lead, the resources they have access to, and the severity of the issue. While seeking the opinion of qualified health and wellness practitioners like the ones at the Active Health Lab can be helpful when it comes to defining a workable solution, involving close friends and family works just as well. Informing them of your situation and your commitment towards remedying it gives them the opportunity to get on the same page as you, and it becomes much easier to include them in your plans. You can also make an arrangement to hold yourself accountable, such as imposing timed curfews for screen usage and penalties for slip-ups.
Optimising your environment to suit your goals is another step you can take. Things like keeping your smartphone out of the bedroom, designating the dining table as a screen-free zone, and seeking alternative activites to de-stress are extrinsic measures that can keep you on track by eliminating temptation and teaching yourself new ways to experience life.
Dr Richard Swinbourne, PhD., a senior sport dietitian and sleep scientist at Singapore Sport Institute, shared: "It is also a good practice to switch on the night mode and lower screen brightness when you are using your device in the evening. This will allow melatonin to be produced earlier in the night, which leads to improved quality of sleep."
Digital devices have become such an indispensable part of our daily lives that getting rid of them would be almost impossible. Besides, it’s unnecessary to go to such lengths – simply limiting your screen time can go a long way towards protecting you from the consequences of excessive screen. The less time you spend on your screens, the more time you have to spend with your family and friends!
Tags: Screen Time
You May Be Interested In
|
__label__pos
| 0.651609 |
REST example running on Cooja and Sky motes
From Contiki
Revision as of 13:51, 8 November 2014 by Muni (Talk | contribs) (Introduction)
(diff) ← Older revision | Latest revision (diff) | Newer revision → (diff)
Jump to: navigation, search
Back to Contiki Tutorials
Introduction
REST (Representational State Transfer) is an architectural style consisting of coordinated set of architectural constraints APPLIED to components, connectors and data elements within a distributed hypermedia system. It is one in which web services are viewed as resources and can be uniquely identified by their URLs. A web service can be characterized as RESTful if they conform to architectural constraints like client-server, stateless, cacheable, layered system and uniform Interface. The basic REST design principle uses the HTTP or COAP protocol methods for typical CRUD operations:
• POST - Create a resource
• GET - Retrieve a resource
• PUT – Update a resource
• DELETE - Delete a resource
There are various resources that are available at the server. Each resource at the server has a handler function which the REST layer calls to serve the request by the client. The REST server sends the response back to the client with the contents of the resource requested.
You Will Learn
How to use REST layer to develop server-side applications like COAP or HTTP and run it on COOJA and using T-mote sky.
Relavant Files
• /contiki-2.7/apps/rest-common - It contains the codes that are mentioned as target in the rest-example.
• /contiki-2.7/apps/rest-coap - It contains codes for codes mentioned as target in rest-common.
• /contiki-2.7/apps/rest-http - It contains codes for codes mentioned as target in rest-common.
• /contiki-2.7/examples/rest-example - It has the main example of the REST server side and COAP client side codes that are to be uploaded on COOJA and real motes.
Understanding The Code
rest-server-example.c
The REST server has light, led, toggle, helloworld and discover resources. Each resource has a resource handler function which is called when the client requests for that resource and the resource handler function sends the response back to the client. For example, when a HTTP client sends a request to the REST server for light resource, the REST server calls the light_handler function which reads the light sensor values from the mote and sends the light sensor values along with a simple etag back to the client as shown below.
/*A simple getter example. Returns the reading from light sensor with a simple etag*/
RESOURCE(light, METHOD_GET, "light");
void
light_handler(REQUEST* request, RESPONSE* response)
{
read_light_sensor(&light_photosynthetic, &light_solar);
sprintf(temp,"%u;%u", light_photosynthetic, light_solar);
char etag[4] = "ABCD";
rest_set_header_content_type(response, TEXT_PLAIN);
rest_set_header_etag(response, etag, sizeof(etag));
rest_set_response_payload(response, temp, strlen(temp));
}
coap-client-example.c
The COAP client establishes a connection with the server on the COAP port 61616 and sets the et timer to a particular value. Everytime the et timer is expired, the send_data(void) function is called. When it recieves the response from the server for its request, the handle_incoming_data() function is called as shown in the code snippet within the process.
etimer_set(&et, 5 * CLOCK_SECOND);
while(1) {
PROCESS_YIELD();
if (etimer_expired(&et)) {
send_data();
etimer_reset(&et);
} else if (ev == tcpip_event) {
handle_incoming_data();
}
}
The COAP client runs a timer which when resets, the client randomly selects a service_id (resource) using random_rand() function and sends the request to the REST server as seen below in the send_data(void) function.
int data_size = 0;
int service_id = random_rand() % NUMBER_OF_URLS;
coap_packet_t* request = (coap_packet_t*)allocate_buffer(sizeof(coap_packet_t));
init_packet(request);
coap_set_method(request, COAP_GET);
request->tid = xact_id++;
request->type = MESSAGE_TYPE_CON;
coap_set_header_uri(request, service_urls[service_id]);
When the server response returns back to the client, it runs the handle_incoming_data() function which takes the packet, parses the message and prints the payload that is it receives from the response. It can be seen in the code below
static void
handle_incoming_data()
{
PRINTF("Incoming packet size: %u \n", (uint16_t)uip_datalen());
if (init_buffer(COAP_DATA_BUFF_SIZE)) {
if (uip_newdata()) {
coap_packet_t* response = (coap_packet_t*)allocate_buffer(sizeof(coap_packet_t));
if (response) {
parse_message(response, uip_appdata, uip_datalen());
response_handler(response);
}
}
delete_buffer();
}
}
Run the REST-Example on COOJA
HTTP Example
1. Open the Makefile in the /contiki-2.7/examples/rest-example folder. For HTTP server, make sure the WITH_COAP = 0 in the Makefile.
2. Open the terminal. Go to the /contiki-2.7/examples/rest-example folder. Run the following command given below:
make TARGET=cooja rest-server-example.csc
This will open a COOJA terminal and loads the rest-server-example code for simulation. The Network will look as shown:
Networkhttp.PNG
3. Open another terminal. Go to the same directory and connect the COOJA simulation to the router.
make connect-router-cooja
4. Start the Simulation on the COOJA and test the connectivity by pinging the server. The IP addresses of the servers are aaaa::0212:7402:0002:0202 and aaaa::0212:7403:0003:0303.
ping6 aaaa::0212:7402:0002:0202
ping6 aaaa::0212:7403:0003:0303
5. Use curl as a HTTP client we interact with the COOJA motes that are running the REST code. The default HTTP client runs on 8080 port number.
curl -H "User-Agent: curl" aaaa::0212:7402:0002:0202:8080/helloworld #get helloworld plain text
curl -H "User-Agent: curl" aaaa::0212:7402:0002:0202:8080/led?color=green -d mode=off -i #turn off the green led
curl -H "User-Agent: curl" aaaa::0212:7402:0002:0202:8080/.well-known/core -i
curl -X POST -H "User-Agent: curl" aaaa::0212:7402:0002:0202:8080/helloworld #method not allowed
COAP Example
1. Open the Makefile in the /contiki-2.7/examples/rest-example folder. For COAP server, make sure the WITH_COAP = 1 in the Makefile.
2. Open the terminal. Go to the /contiki-2.7/examples/rest-example folder. Run the following command given below:
make TARGET=cooja coap-client-server-example.csc
This opens the COOJA terminal which runs rest-server-example.c over COAP in one node having IP address aaaa::0212:7401:0001:0101 and coap-client-example.c in the another node having IP address aaaa::0212:7402:0002:0202. COAP uses 61616 port number as default. The client in the example periodically accesses the resources of the server and prints the payload which can be seen on the mote output. The Network looks as shown below:
Coapnetwork.PNG
3.Start the Simulation and observe the mote output. The Output is diplayed as shown in the below window.
Coapmoteout.PNG
Run the REST-Example on T-mote Sky
1. Go to the /contiki-2.7/examples/rest-example folder. Connect the motes and program them with rest-server-example.
make TARGET=sky rest-server-example.upload
2. Disconnect the motes and load one other mote with the RPL border router to connect to the rest-server.
cd ../ipv6/rpl-border-router
make TARGET=sky border-router.upload
3. Connect the REST-server to the border router using tunslip6.
make connect-router
4. Open new terminal window for each mote and execute the following command and reset the motes. The IP addresses of these motes get printed after the motes are reset.
make login TARGET=sky MOTE=2 #Shows the prints for first mote
make login TARGET=sky MOTE=3 #For second mote and so on.
5. Ping the motes using the IP addresses we get from step 4. As highlighted below
ping6 <IPv6 Address of the MOTE>
IPaddrmote.PNG
6. If WITH_COAP = 0 i.e. HTTP Server, use the HTTP Client to connect to the REST server. When WITH_COAP = 1. i.e. COAP Server, load the coap-client-example.c to the motes and connect to the REST server (same as COOJA Example).
General Issues You Might Face
1. To use curl as the HTTP client you need to install curl by running the following command in the terminal:
sudo apt-get install curl
2. The memory on the Tmote sky is not big enough to fit in the HTTP example on REST server. Thus while running it on real motes we get the error “region text is full”. To overcome this problem, instead of RPL to decide the route, we can use static routes. We can also reduce the size by changing the Makefile of the border router by setting the WITH_WEBSERVER = 0.
3. Generally the COAP Client is used instead of HTTP client for running over REST Server.
References
1. REST-Example Github page: https://github.com/contiki-os/contiki/tree/master/examples/rest-example
2. A Low-Power CoAP for Contiki - Matthias Kovatsch, Simon Duquennoy, Adam Dunkels
Back to Contiki Tutorials
|
__label__pos
| 0.992658 |
Google Play Services for Location and Activity Recognition
Android Development with Google Play Services
This article was peer reviewed by Marc Towler. Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be!
People like to take their mobile devices everywhere and use them constantly. Some apps take advantage of this and change their behaviour according to the users location and/or current activity to provide a better individualized service.
To get the user’s current location on Android, you can use the Location API that has been part of the Android framework since API level 1, or you can use the Google Location Services API, which is part of Google Play Services. The latter is the recommended method for accessing Android location.
The Google Location Services API, part of Google Play Services, provides a more powerful, high-level framework that automates tasks such as location provider choice and power management. Location Services provides new features such as activity detection that aren’t available in the framework API.
Developers using the framework API, as well as those now adding location-awareness to their apps, are strongly advised to use the Location Services API and this is what we will look at in this article. We will create different apps that show how to get a users current location, update it periodically and detecting the user’s current activity. For example, are they walking, running, on a bicycle, in a vehicle, etc.
Note: The device you use for testing during this tutorial must have support for Google Play Services. You should have a device that runs Android 2.3 or higher and includes the Google Play Store. If you are using an emulator, you need an emulator image with the Google APIs platform based on Android 4.2.2 or higher.
Getting the Last Known Location
The Google Play Services Location API can request the last known location of the user’s device, this is equivalent to the user’s current location.
To get the device’s last known location, use the FusedLocationProviderApi which allows you to specify requirements such as the location accuracy required. High accuracy means more battery power used.
Create a new Android project, name it Example01, set the Minimum SDK version to 2.3.3 (Gingerbread) and select Empty Activity on the next window, leave the default settings in the last window and click on Finish.
Note: I’m assuming that you are using Android Studio 1.4 or later, where the Activity templates have changed. The Blank Activity template in previous versions resulted in an app with an almost empty view, but it now includes a Floating Action Button. We’ll use the Empty Activity for our project, if you are using a previous version, then select Blank Activity.
Include the following dependency in the build.gradle (Module: app) file and sync the gradle files.
compile 'com.google.android.gms:play-services:8.1.0'
To use location services, the app must request permission to do so. Android offers two location permissions: ACCESS_COARSE_LOCATION and ACCESS_FINE_LOCATION. The permission you choose determines the accuracy of the location returned by the API. Fine Location uses the device GPS, cellular data and WiFi to get the most accurate position but it costs battery life. Coarse Location uses the device cellular data and WiFi to get the location. It wont be as accurate as Fine but uses a lot less battery power, returning a location with an accuracy equivalent to a city block.
Add the following permission to the AndroidManifest.xml file as a child of the manifest tag.
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/>
Note: If you have used Google Play Services in an app before, you might be used to adding the following to the manifest file which sets the version number of Google Play Services your app uses.
<meta-data android:name="com.google.android.gms.version" android:value="@integer/google_play_services_version"/>
As of version 7.0 of Google Play Services, if you are using Gradle, it’s included automatically..
We’ll be using the fused location provider to get the device’s location. This information will be presented as a Location object from which you can retrieve the latitude, longitude, timestamp, and other information such as bearing, altitude and velocity of a location.
The apps we’ll create, will display the raw latitude and longitude data of the retrieved Location. In a real app, you might use this information to, for instance, get the location Address, plot the location on a map, change the UI or fire a notification.
Let’s create the UI that will display the latitude and longitude values. Change activity_main.xml as shown.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:paddingLeft="@dimen/activity_horizontal_margin"
android:paddingRight="@dimen/activity_horizontal_margin"
android:paddingTop="@dimen/activity_vertical_margin"
android:paddingBottom="@dimen/activity_vertical_margin"
tools:context=".MainActivity">
<TextView
android:id="@+id/latitude"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentLeft="true"
android:layout_alignParentTop="true"
android:text="Latitude:"
android:textSize="18sp" />
<TextView
android:id="@+id/latitude_textview"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignBaseline="@+id/latitude"
android:layout_marginLeft="10dp"
android:layout_toRightOf="@+id/latitude"
android:textSize="16sp" />
<TextView
android:id="@+id/longitude"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentLeft="true"
android:layout_alignParentTop="true"
android:text="Longitude:"
android:layout_marginTop="24dp"
android:textSize="18sp" />
<TextView
android:id="@+id/longitude_textview"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignBaseline="@+id/longitude"
android:layout_marginLeft="10dp"
android:layout_toRightOf="@+id/longitude"
android:textSize="16sp"/>
</RelativeLayout>
Add the following in MainActivity.java.
private static final String TAG = "MainActivity";
private TextView mLatitudeTextView;
private TextView mLongitudeTextView;
Instantiate the two TextViews by adding the following at the end of onCreate(Bundle).
mLatitudeTextView = (TextView) findViewById((R.id.latitude_textview));
mLongitudeTextView = (TextView) findViewById((R.id.longitude_textview));
When you want to connect to one of the Google APIs provided in the Google Play services library, you need to create an instance of GoogleApiClient. The Google API Client provides a common entry point to all Google Play services and manages the network connection between the user’s device and each Google service.
Before making the connection, you must always check for a compatible Google Play services APK. To do this either use the isGooglePlayServicesAvailable() method or attach a GoogleApiClient.OnConnectionFailedListener object to your client and implement its onConnectionFailed() callback method. We’ll use the latter approach.
If the connection fails due to a missing or out-of-date version of the Google Play APK, the callback receives an error code such as SERVICE_MISSING, SERVICE_VERSION_UPDATE_REQUIRED or SERVICE_DISABLED.
Change the class definition as shown.
public class MainActivity extends AppCompatActivity implements ConnectionCallbacks, OnConnectionFailedListener
Add the necessary imports and implement the following methods from the two interfaces.
@Override
public void onConnected(Bundle bundle) {
}
@Override
public void onConnectionSuspended(int i) {
}
@Override
public void onConnectionFailed(ConnectionResult connectionResult) {
}
The ConnectionCallbacks interface provides callbacks that called when the client connects or disconnects from the service (onConnected() and onConnectionSuspended()) and the OnConnectionFailedListener interface provides callbacks for scenarios that result in a failed attempt to connect the client to the service (onConnectionFailed()).
Before any operation executes, the GoogleApiClient must connect using the connect() method. The client is not considered connected until the onConnected(Bundle) callback has been called.
When your app finishes using this client, call disconnect() to free up resources.
You should instantiate the client object in your Activity’s onCreate(Bundle) method and then call connect() in onStart() and disconnect() in onStop().
Add the following class variables that will hold the GoogleApiClient and Location objects.
private GoogleApiClient mGoogleApiClient;
private Location mLocation;
At the end of onCreate() method, create an instance of the Google API Client using GoogleApiClient.Builder. Use the builder to add the LocationServices API.
mGoogleApiClient = new GoogleApiClient.Builder(this)
.addConnectionCallbacks(this)
.addOnConnectionFailedListener(this)
.addApi(LocationServices.API)
.build();
Change the previously added callback methods as shown.
@Override
public void onConnected(Bundle bundle) {
mLocation = LocationServices.FusedLocationApi.getLastLocation(mGoogleApiClient);
if (mLocation != null) {
mLatitudeTextView.setText(String.valueOf(mLocation.getLatitude()));
mLongitudeTextView.setText(String.valueOf(mLocation.getLongitude()));
} else {
Toast.makeText(this, "Location not Detected", Toast.LENGTH_SHORT).show();
}
}
@Override
public void onConnectionSuspended(int i) {
Log.i(TAG, "Connection Suspended");
mGoogleApiClient.connect();
}
@Override
public void onConnectionFailed(ConnectionResult connectionResult) {
Log.i(TAG, "Connection failed. Error: " + connectionResult.getErrorCode());
}
In the onConnected() method, we get the Location object by calling getLastLocation() and then update the UI with the latitude and longitude values from the object. The Location object returned may in rare cases be null when the location is not available, so we check for this.
onConnectionSuspended() is called if the connection is lost for whatever reason and here we attempt to re-establish the connection. If the connection fails, onConnectionFailed() is called and we just log the error code. You can view the available error codes here.
Override the onStart() and onStop() methods as shown.
@Override
protected void onStart() {
super.onStart();
mGoogleApiClient.connect();
}
@Override
protected void onStop() {
super.onStop();
if (mGoogleApiClient.isConnected()) {
mGoogleApiClient.disconnect();
}
}
These start and disconnect the connection to the service when appropriate.
Run the app and you should see the latitude and longitude displayed.
Example 1 Demo
You can download the completed Example01 project here.
Getting Periodic Location Updates
Some apps, for example fitness or navigation apps, might need to continuously track location data. While you can get a device’s location with getLastLocation(), a more direct approach is to request periodic updates from the fused location provider. The API will then update your app periodically with the best available location, based on the currently-available location providers such as WiFi and GPS. The accuracy of the location is determined by the providers, the location permissions you’ve requested, and the options you set in the location request.
Create another project with similar settings to the last projects and name it Example02.
Add the play services dependency to the build.gradle(Module: app) file.
compile 'com.google.android.gms:play-services:8.1.0'
Add the permission to the manifest file.
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
Change activity_main.xml as below.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:paddingLeft="@dimen/activity_horizontal_margin"
android:paddingRight="@dimen/activity_horizontal_margin"
android:paddingTop="@dimen/activity_vertical_margin"
android:paddingBottom="@dimen/activity_vertical_margin"
tools:context=".MainActivity">
<TextView
android:id="@+id/latitude"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentLeft="true"
android:layout_alignParentTop="true"
android:text="Latitude:"
android:textSize="18sp" />
<TextView
android:id="@+id/latitude_textview"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignBaseline="@+id/latitude"
android:layout_marginLeft="10dp"
android:layout_toRightOf="@+id/latitude"
android:textSize="16sp" />
<TextView
android:id="@+id/longitude"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentLeft="true"
android:layout_alignParentTop="true"
android:text="Longitude:"
android:layout_marginTop="24dp"
android:textSize="18sp" />
<TextView
android:id="@+id/longitude_textview"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignBaseline="@+id/longitude"
android:layout_marginLeft="10dp"
android:layout_toRightOf="@+id/longitude"
android:textSize="16sp"/>
</RelativeLayout>
Change the class definition of MainActivity.
public class MainActivity extends AppCompatActivity implements ConnectionCallbacks, OnConnectionFailedListener, LocationListener
Make the necessary imports. For the LocationListener import import com.google.android.gms.location.LocationListener and not the other suggested import.
LocationListener is used for receiving notifications from the FusedLocationProviderApi when the location has changed.
Implement the methods from the three interfaces.
@Override
public void onConnected(Bundle bundle) {
}
@Override
public void onConnectionSuspended(int i) {
}
@Override
public void onLocationChanged(Location location) {
}
@Override
public void onConnectionFailed(ConnectionResult connectionResult) {
}
The onLocationChanged() method is called when the location changes.
Add the following class variables.
private static final String TAG = "MainActivity";
private GoogleApiClient mGoogleApiClient;
private LocationRequest mLocationRequest;
private String mLastUpdateTime;
private TextView mLatitudeTextView;
private TextView mLongitudeTextView;
LocationRequest is a data object that contains quality of service parameters for requests to the FusedLocationProviderApi. We’ll see its use soon.
Change onCreate() as shown below to override the onStart() and onPause() methods.
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
mLatitudeTextView = (TextView) findViewById((R.id.latitude_textview));
mLongitudeTextView = (TextView) findViewById((R.id.longitude_textview));
mGoogleApiClient = new GoogleApiClient.Builder(this)
.addConnectionCallbacks(this)
.addOnConnectionFailedListener(this)
.addApi(LocationServices.API)
.build();
}
@Override
protected void onStart() {
super.onStart();
mGoogleApiClient.connect();
}
@Override
protected void onStop() {
super.onStop();
if (mGoogleApiClient.isConnected()) {
mGoogleApiClient.disconnect();
}
}
This is similar to what we did in the previous example, so there is no need for explanation.
Change the previously implemented interface methods as below.
@Override
public void onConnected(Bundle bundle) {
mLocationRequest = LocationRequest.create();
mLocationRequest.setPriority(LocationRequest.PRIORITY_HIGH_ACCURACY);
mLocationRequest.setInterval(5000);
mLocationRequest.setFastestInterval(3000);
LocationServices.FusedLocationApi.requestLocationUpdates(mGoogleApiClient, mLocationRequest, this);
}
@Override
public void onConnectionSuspended(int i) {
Log.i(TAG, "Connection Suspended");
mGoogleApiClient.connect();
}
@Override
public void onLocationChanged(Location location) {
mLastUpdateTime = DateFormat.getTimeInstance().format(new Date());
mLatitudeTextView.setText(String.valueOf(location.getLatitude()));
mLongitudeTextView.setText(String.valueOf(location.getLongitude()));
Toast.makeText(this, "Updated: " + mLastUpdateTime, Toast.LENGTH_SHORT).show();
}
@Override
public void onConnectionFailed(ConnectionResult connectionResult) {
Log.i(TAG, "Connection failed. Error: " + connectionResult.getErrorCode());
}
In onConnected() we create the LocationRequest object which stores parameters for requests to the fused location provider. The parameters determine the levels of accuracy requested. To find out about all the options available in the location request, see the LocationRequest class reference. In our example, we set the priority, update interval and the fastest update interval.
setPriority() sets the priority of the request, which gives the Google Play services location services a strong hint about which location sources to use. The following values are supported:
• PRIORITY_BALANCED_POWER_ACCURACY: Use this setting to request location precision to within a city block, which is an accuracy of approximately 100 meters. This is considered a coarse level of accuracy, and is likely to consume less power. With this setting, the location services will probably use WiFi and cell tower positioning. Note that the choice of location provider depends on other factors, such as which sources are available.
• PRIORITY_HIGH_ACCURACY: Use this setting to request the most precise location possible. With this setting, the location services are more likely to use GPS to determine the location.
• PRIORITY_LOW_POWER: Use this setting to request city-level precision, which is an accuracy of approximately 10 kilometers. This is considered a coarse level of accuracy, and is likely to consume less power.
• PRIORITY_NO_POWER: Use this setting if you need negligible impact on power consumption but want to receive location updates when available. With this setting, your app does not trigger any location updates, but receives locations triggered by other apps.
The setInterval() method sets the desired interval in milliseconds for active location updates. This interval is inexact. You may not receive updates at all if no location sources are available, or you may receive them slower than requested. You may receive updates faster than requested if other applications are requesting locations at a faster interval.
The setFastestInterval() method sets the fastest rate for active location updates. This interval is exact, and your application will never receive updates faster than this value. You need to set this rate because other apps will affect the rate at which updates are sent. The Google Play services location APIs send out updates at the fastest rate that any app has requested with setInterval(). If this rate is faster than your app can handle, you may encounter problems with UI flicker or data overflow. To prevent this, you set an upper limit to the update rate.
With the location request set up, we call requestLocationUpdates() to start the regular updates.
The onLocationChanged() method is called with the updated location. Here, we update the UI with the location information. We also set off a Toast message showing the time of update.
Run the app and you should see the updating location data if you move far enough for the readings to change.
Example 2 Demo
The completed Example02 project can be downloaded here.
Activity Recognition
Other than detecting the location of your Android device, the Google Location Services API can also be used to detect the activities that the device, and thus the user, might be undertaking. It can detect activities such as the user being on foot, in a vehicle, on a bicycle or still. It doesn’t give definite data, just the probability of the possibility that an activity is happening. It’s up to the programmer to read this data and decide what to do with it.
To get started, create a new project named Example03 with the same settings as the previous two projects.
Include the dependency in build.gradle (Module: app) file and sync the gradle files.
compile 'com.google.android.gms:play-services:8.1.0'
In the manifest file, include the following activity recognition permission as a child of the manifest tag.
<uses-permission android:name="com.google.android.gms.permission.ACTIVITY_RECOGNITION" />
Change activity_main.xml as below.
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:paddingLeft="@dimen/activity_horizontal_margin"
android:paddingRight="@dimen/activity_horizontal_margin"
android:paddingTop="@dimen/activity_vertical_margin"
android:paddingBottom="@dimen/activity_vertical_margin"
android:orientation="vertical"
tools:context=".MainActivity">
<Button
android:id="@+id/request_updates_button"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:onClick="requestActivityUpdates"
android:text="Request Activity Updates" />
<Button
android:id="@+id/remove_updates_button"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:onClick="removeActivityUpdates"
android:text="Remove Activity Updates" />
<TextView
android:id="@+id/detected_activities_textview"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:textSize="20sp"/>
</LinearLayout>
Usually, applications that make use of Activity Recognition monitor activities in the background and perform an action when a specific activity is detected. To do this without needing a service that is always running in the background consuming resources, detected activities are delivered via an Intent. The application specifies a PendingIntent callback (typically an IntentService) which will be called with an intent when activities are detected. The intent recipient can extract the ActivityRecognitionResult using extractResult(android.content.Intent).
Before the IntentService class, create a class called Constants and change it as shown. It will hold some constant values we’ll use later.
package com.echessa.example03; // Change as appropriate
/**
* Created by echessa on 10/14/15.
*/
public class Constants {
private Constants(){
}
public static final String PACKAGE_NAME = "com.echessa.activityexample"; // Change as appropriate
public static final String STRING_ACTION = PACKAGE_NAME + ".STRING_ACTION";
public static final String STRING_EXTRA = PACKAGE_NAME + ".STRING_EXTRA";
}
Next we create an IntentService. Create a class called ActivitiesIntentService and make it extend IntentService. Change the contents as shown.
package com.echessa.example03; // Change as appropriate
import android.app.IntentService;
import android.content.Intent;
import android.support.v4.content.LocalBroadcastManager;
import com.google.android.gms.location.ActivityRecognitionResult;
import com.google.android.gms.location.DetectedActivity;
import java.util.ArrayList;
/**
* Created by echessa on 10/14/15.
*/
public class ActivitiesIntentService extends IntentService {
private static final String TAG = "ActivitiesIntentService";
public ActivitiesIntentService() {
super(TAG);
}
@Override
protected void onHandleIntent(Intent intent) {
ActivityRecognitionResult result = ActivityRecognitionResult.extractResult(intent);
Intent i = new Intent(Constants.STRING_ACTION);
ArrayList<DetectedActivity> detectedActivities = (ArrayList) result.getProbableActivities();
i.putExtra(Constants.STRING_EXTRA, detectedActivities);
LocalBroadcastManager.getInstance(this).sendBroadcast(i);
}
}
In the above class, the constructor is required. It calls the super IntentService(String) constructor with the name of a worker thread.
In onHandleIntent(), we get the ActivityRecognitionResult from the Intent by using extractResult(). We then use this result to get an array list of DetectedActivity objects. Each activity is associated with a confidence level, which is an int between 0 and 100. Then we create a new Intent on which we are going to send the detected activities. Finally we broadcast the Intent so that it can be picked up.
Paste the following into the manifest file so that the Android system knows about the service. It should be a child of the application tag.
<service
android:name=".ActivitiesIntentService"
android:exported="false" />
In MainActivity implement the ConnectionCallbacks and OnConnectionFailedListener interfaces.
public class MainActivity extends AppCompatActivity implements ConnectionCallbacks, OnConnectionFailedListener
Make the necessary imports and implement their methods.
@Override
public void onConnected(Bundle bundle) {
Log.i(TAG, "Connected");
}
@Override
public void onConnectionSuspended(int i) {
Log.i(TAG, "Connection suspended");
mGoogleApiClient.connect();
}
@Override
public void onConnectionFailed(ConnectionResult connectionResult) {
Log.i(TAG, "Connection failed. Error: " + connectionResult.getErrorCode());
}
You will see an error as we haven’t created the mGoogleApiClient variable yet.
Add the following variables to MainActivity.
private static final String TAG = "MainActivity";
private GoogleApiClient mGoogleApiClient;
private TextView mDetectedActivityTextView;
Paste the following at the end of onCreate().
mDetectedActivityTextView = (TextView) findViewById(R.id.detected_activities_textview);
mGoogleApiClient = new GoogleApiClient.Builder(this)
.addConnectionCallbacks(this)
.addOnConnectionFailedListener(this)
.addApi(ActivityRecognition.API)
.build();
Note that we add the ActivityRecognition.API when creating the Google Api Client and not the location API as we did in the previous examples.
Include the onStart() and onStop() methods to connect and disconnect the client.
@Override
protected void onStart() {
super.onStart();
mGoogleApiClient.connect();
}
@Override
protected void onStop() {
super.onStop();
if (mGoogleApiClient.isConnected()) {
mGoogleApiClient.disconnect();
}
}
In the ActivitiesIntentService class, we broadcast an Intent that has an array of detected activities and we need a receiver class to receive this. Before we create that, Include the following strings in the strings.xml. file.
<string name="in_vehicle">In a vehicle</string>
<string name="on_bicycle">On a bicycle</string>
<string name="on_foot">On foot</string>
<string name="running">Running</string>
<string name="walking">Walking</string>
<string name="still">Still</string>
<string name="tilting">Tilting</string>
<string name="unknown">Unknown activity</string>
<string name="unidentifiable_activity">Unidentifiable activity: %1$d</string>
In MainActivity, add the following method which we’ll use later in our BroadcastReciever. This takes the code for the detected activity type and returns a relevant string related to the activity.
public String getDetectedActivity(int detectedActivityType) {
Resources resources = this.getResources();
switch(detectedActivityType) {
case DetectedActivity.IN_VEHICLE:
return resources.getString(R.string.in_vehicle);
case DetectedActivity.ON_BICYCLE:
return resources.getString(R.string.on_bicycle);
case DetectedActivity.ON_FOOT:
return resources.getString(R.string.on_foot);
case DetectedActivity.RUNNING:
return resources.getString(R.string.running);
case DetectedActivity.WALKING:
return resources.getString(R.string.walking);
case DetectedActivity.STILL:
return resources.getString(R.string.still);
case DetectedActivity.TILTING:
return resources.getString(R.string.tilting);
case DetectedActivity.UNKNOWN:
return resources.getString(R.string.unknown);
default:
return resources.getString(R.string.unidentifiable_activity, detectedActivityType);
}
}
Add the following subclass to MainActivity that extends BroadcastReceiver.
public class ActivityDetectionBroadcastReceiver extends BroadcastReceiver {
@Override
public void onReceive(Context context, Intent intent) {
ArrayList<DetectedActivity> detectedActivities = intent.getParcelableArrayListExtra(Constants.STRING_EXTRA);
String activityString = "";
for(DetectedActivity activity: detectedActivities){
activityString += "Activity: " + getDetectedActivity(activity.getType()) + ", Confidence: " + activity.getConfidence() + "%\n";
}
mDetectedActivityTextView.setText(activityString);
}
}
Above, we get the array of detected activities and iterate through them getting the type and confidence of each. We then append this to a string and update the UI with the string.
In MainActivity add the following variable.
private ActivityDetectionBroadcastReceiver mBroadcastReceiver;
Then instantiate it in onCreate() after the statement that instantiates mDetectedActivityTextView
mBroadcastReceiver = new ActivityDetectionBroadcastReceiver();
Add the following methods to MainActivity.
public void requestActivityUpdates(View view) {
if (!mGoogleApiClient.isConnected()) {
Toast.makeText(this, "GoogleApiClient not yet connected", Toast.LENGTH_SHORT).show();
} else {
ActivityRecognition.ActivityRecognitionApi.requestActivityUpdates(mGoogleApiClient, 0, getActivityDetectionPendingIntent()).setResultCallback(this);
}
}
public void removeActivityUpdates(View view) {
ActivityRecognition.ActivityRecognitionApi.removeActivityUpdates(mGoogleApiClient, getActivityDetectionPendingIntent()).setResultCallback(this);
}
private PendingIntent getActivityDetectionPendingIntent() {
Intent intent = new Intent(this, ActivitiesIntentService.class);
return PendingIntent.getService(this, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
}
Then change the class definition to implement ResultCallback since in the code above we set the result callback to this.
public class MainActivity extends AppCompatActivity implements ConnectionCallbacks, OnConnectionFailedListener, ResultCallback<Status>
The first method above uses requestActivityUpdates() to register for activity recognition updates, while the second unregisters. The activities are detected by periodically waking up the device and reading short bursts of sensor data. It makes use of low power sensors to keep the power usage to a minimum. The activity detection update interval can be controlled with the second parameter. Larger values will result in fewer activity detections while improving battery life. Smaller values will result in more frequent activity detections but will consume more power since the device must be woken more frequently. Activities may arrive several seconds after the requested interval if the activity detection service requires more samples to make a more accurate prediction.
Implement the following ResultCallback method which takes the status and logs out different messages depending on it.
public void onResult(Status status) {
if (status.isSuccess()) {
Log.e(TAG, "Successfully added activity detection.");
} else {
Log.e(TAG, "Error: " + status.getStatusMessage());
}
}
Add the following to MainActivity.
@Override
protected void onResume() {
super.onResume();
LocalBroadcastManager.getInstance(this).registerReceiver(mBroadcastReceiver, new IntentFilter(Constants.STRING_ACTION));
}
@Override
protected void onPause() {
LocalBroadcastManager.getInstance(this).unregisterReceiver(mBroadcastReceiver);
super.onPause();
}
This registers and unregisters the broadcast receiver when the activity resumes and pauses respectively.
Run the app, and on pressing the Request Activity Updates button, you should start getting activity updates. You might need to wait a few seconds for the updates to start showing.
Example 3 Demo
The completed Example03 project can be downloaded here.
Conclusion
We have not exhausted all the capabilities of the Google Play Services Location APIs, there are some topics we haven’t covered such as Geofencing, getting the address of the Location and mapping the Location on a map. We’ll look into the topic in the Maps article that will be part of this Google Play Services series.
Please let me know if you have any questions or comments below.
Sponsors
|
__label__pos
| 0.890316 |
h. Cleanup
Congratulations! You have completed the container orchestration lab and learnt how to deploy your containers using AWS Batch.
In Lab 3 you build and pushed your container to ECR in an automated way using CICD pipeline via CodeCommit and CodeBuild. In Lab 4 you deployed the same container using Batch.
In this section, you will clean all the resources that you created in Lab 3 and Lab 4.
Clean Up
After you complete the workshop, clean up your environment by following these steps:
1. On the Cloud9 terminal run the following commands to delete the AWS Batch environment & resources you created:
aws cloudformation delete-stack --stack-name nextflow-batch-ce-jq --region $AWS_REGION
aws cloudformation delete-stack --stack-name nextflow-batch-jd --region $AWS_REGION
Note, it will take a few mins for the stacks to be deleted.
1. Navigate to the AWS CloudFormation Dashboard of the AWS Management Console and confirm that the stacks are deleted.
2. Navigate to Amazon S3 Dashboard of the AWS Management Console and delete the S3 bucket you created in Lab 4. Or, run the following CLI command on Cloud9.
source s3_vars
aws s3 rb s3://${BUCKET_NAME_RESULTS} --force
1. Navigate to the ECR service in the AWS Management Console and delete the repository you created earlier. Or, run the following CLI command on Cloud9.
REPO_NAME=sc21-container
aws ecr delete-repository --repository-name $REPO_NAME --force --region $AWS_REGION
1. Navigate to CodeCommit in the AWS Management Console and delete the repository you created in Lab 3. Or, run the following CLI command on Cloud9
CODECOMMIT_REPO_NAME=MyDemoRepo
aws codecommit delete-repository --repository-name $CODECOMMIT_REPO_NAME --region $AWS_REGION
1. Navigate to CodeBuild in the AWS Management Console and delete the build project you created in Lab 3. Or, run the following CLI command on Cloud9
CODEBUILD_PROJECT_NAME=MyDemoBuild
aws codebuild delete-project --name $CODEBUILD_PROJECT_NAME --region $AWS_REGION
1. Navigate to CodePipeline in the AWS Management Console and delete the pipeline that you creared in Lab 3. Or, run the following CLI command on Cloud9
CODEPIPELINE_NAME=MyDemoPipeline
aws codepipeline delete-pipeline --name $CODEPIPELINE_NAME --region $AWS_REGION
1. Navigate to IAM and delete the following Policies (click on Policies on the left hand pane) created as part of the labs. Search for the below policies. Click on the Policy -> Action -> Delete. Follow the required steps to confirm deletion.
• CodeBuildBasePolicy-<codebuild-project-name>-<region>
• AWSCodePipelineServiceRole-<region>-<codepipeline-name>
1. Navigate to IAM and delete the following Roles (click on Roles on the left hand pane) created as part of the labs. Search for the below roles. Click on the Role -> Action -> Delete. Follow the required steps to confirm deletion.
• AWSCodePipelineServiceRole-<region>-<codepipeline-name>
• codebuild-<codebuild-project-name>-service-role
• ecsTaskExecutionRole
|
__label__pos
| 0.834838 |
Welcome to Cadabra Q&A, where you can ask questions and receive answers from other members of the community.
0 votes
I tried to isolate terms with a certain order of derivation by defining a Weight property to the derivative.
(x, y)::Coordinate.
(i, j)::Indices(values={x, y}, position=fixed).
\nabla{#}::Derivative.
h_{i j}::Depends(x, y, \nabla{#}).
\delta{#}::KroneckerDelta:
\nabla{#}::Weight(label=order, value=1);
\nabla{#}::WeightInherit(label=order, type=additive);
Then, a test expression
test := A \delta_{i j} + h_{i j}
+ \nabla_{x}{h_{i j}}
+ B \nabla_{y}{h_{i j}}
+ C \nabla_{x}{ \nabla_{y}{ h_{i j} } }
+ \nabla_{x}{ \nabla_{x}{ h_{i j} } };
Finally, I'd like to define terms with a certain order on derivatives,
term0 := @(test).
keep_weight(_, $order=0$);
term1 := @(test).
keep_weight(_, $order=1$);
term2 := @(test).
keep_weight(_, $order=2$);
It results in the following
The zeroth order is Ok, but the first and second yield problems.
Question
What is the right way (if any) of providing Weight to a derivative operator?
P.D.: The same occurs with the PartialDerivative or by changing the WeightInherit type to multiplicative.
related to an answer for: Extending zoom() function
asked in Feature requests by (6.1k points)
1 Answer
+1 vote
Best answer
If you want a derivative to inherit a weight and have a weight of that same type itself, you need to use the self parameter to WeightInherit. So
\nabla{#}::WeightInherit(label=order, self=1, type=multiplicative);
(and drop the Weight property). The type should be multiplicative, which is a confusing way to say that the weights of the child node of the derivative should be combined as if they had been sitting in a product, that is, they should be added up.
(type=additive means that the weights of the children are handled as if the children are sitting in a sum, that is, they should all be equal).
answered by (53.8k points)
selected by
Thank you for the answer @kasper. I have notice, however, that the solution does not work properly when the derivative is a PartialDerivative. I assume that is due to the fact that higher order partial derivatives are denoted as $\partial{xy}$ or $\partial{xx}$.
...
|
__label__pos
| 0.956636 |
Education
How Sunscreen Is Effective In Protecting Your Skin
The problem of sunburn is common nowadays. According to the research conducted, the ozone layer is depleting with each passing day, resulting in the unauthorized entry of UV and other harmful radiations on the earth. The ultraviolet rays directly come in contact with your skin, causing numerous skin problems that can even result into skin cancer at a later stage.
In the initial stage, not everyone gets alert and make use of the right options to stay protected. Once the people get affected by UV radiations, they undergo different medications and therapies to get rid of skin problem. Canadian Pharmacy store allows you to buy any prescribed medicine with the home delivery option.
But all those who take proper precautions, use sunscreen as the boon that keeps them protection against the harmful effects of UV radiations.
Now, before moving further, you should know what is the major reason behind sunburn that causes numerous other skin problems?
Today, almost everyone agrees with the fact that the UV radiations landing on earth are far more than decades ago that has even caused the problem of global warming. The same UV radiations when gets in contact with your skin, it damages the DNA bonds in your skin.
As the bond breaks, the skin becomes dead and the dead skin starts to replicate progressively throughout the affected part. Basically, there are two types of UV radiations i.e. UVB and UVA. The UVA is considered to be more harmful as it reached to the depth of skin and causes internal damages.
How is Sunscreen Effective?
People recommend using sunscreen to stay protected from the unwanted UV radiations. Majorly, the sunscreen performs two actions when it comes in contact with the UV rays. They either absorb the rays or scatter them back.
As your sunscreen works as a protective layer for your skin, it scatters the UV rays hitting your exposed skin area. As a result, the UV rays becomes unable to target your skin and make any harm. In simple terms, the major task of applying sunscreen is to block the UV rays from reaching your skin.
What to Look While Buying Sunscreen?
While buying sunscreen, the only factor that you should look at is the SPF. The SPF stands for Sun Protection Factor that determines the effectiveness of the sunscreen in blocking the UV rays. But the SPF can only inform you about the blocking capability of UVB. If you want to choose a more effective protective product, you should opt broad-spectrum sunscreen.
All in all, the sunscreen is an effective product that can really help you stay protected from the UV rays and skip skin related issues. If you still encounter some sort of skin problems, you should get in touch with your doctor and share your problem along with the details of sunscreen you are using. This will help them better understand the situation and prescribe you accordingly. If you are buying medicines online, you should ensure that the website is correct and authentic.
Leave a Reply
Your email address will not be published. Required fields are marked *
36 ÷ four =
|
__label__pos
| 0.967129 |
Hypnotherapy, NLP & CBT in Watford
Treatment and training in Hypnosis, Hypnotherapy, NLP, CBT and more
What is CBT?
How can Cognitive and Behavioural therapy help?
It is a way of talking about:
• How you think about yourself, the world and other people
• How what you do affects your thoughts and feelings.
CBT can help you to change how you think ("Cognitive") and what you do ("Behaviour)". These changes can help you to feel better. Unlike some of the other talking treatments, it focuses on the "here and now" problems and difficulties. Instead of focussing on the causes of your distress or symptoms in the past, it looks for ways to improve your state of mind now.
It has been found to be helpful in Anxiety, Depression, Panic, Agoraphobia and other phobias, Social phobia, Bulimia, Obsessive compulsive disorder, Post traumatic stress disorder and Schizophrenia
How does it work?
CBT can help you to make sense of overwhelming problems by breaking them down into smaller parts. This makes it easier to see how they are connected and how they affect you. These parts are:
• A Situation - a problem, event or difficult situation
From this can follow:
• Thoughts
• Emotions
• Physical feelings
• Actions
Each of these areas can affect the others. How you think about a problem can affect how you feel physically and emotionally. It can also alter what you do about it. There are helpful and unhelpful ways of reacting to most situations, depending on how you think about them.
The same situation can led to two very different results, depending on how you thought about the situation. How you think has affected how you felt and what you did. In the example in the left hand column, you've jumped to a conclusion without very much evidence for it - and this matters, because it's led to:
• a number of uncomfortable feelings
• an unhelpful behaviour.
If you go home feeling depressed, you'll probably brood on what has happened and feel worse. If you get in touch with the other person, there's a good chance you'll feel better about yourself. If you don't, you won't have the chance to correct any misunderstandings about what they think of you - and you will probably feel worse. This is a simplified way of looking at what happens. The whole sequence, and parts of it, can also feedback like this:
cbt, hypnosis, hypnotherapist, watford bushey radlett
This "vicious circle" can make you feel worse. It can even create new situations that make you feel worse. You can start to believe quite unrealistic (and unpleasant) things about yourself. This happens because, when we are distressed, we are more likely to jump to conclusions and to interpret things in extreme and unhelpful ways.
CBT can help you to break this vicious circle of altered thinking, feelings and behaviour. When you see the parts of the sequence clearly, you can change them - and so change the way you feel. CBT aims to get you to a point where you can "do it yourself", and work out your own ways of tackling these problems.
"Five areas" assessment
This is another way of connecting all the 5 areas mentioned above. It builds in our relationships with other people and helps us to see how these can make us feel better or worse. Other issues such as debt, job and housing difficulties are also important. If you improve one area, you are likely to improve other parts of your life as well.
"5 areas" diagram.
What does CBT involve?
The sessions
CBT can be done individually or with a group of people. It can also be done from a self-help book or computer programme. In England and Wales two computer-based programmes have been approved for use by the NHS. Fear Fighter is for people with phobias or panic attacks, Beating the Blues is for people with mild to moderate depression.
If you have individual therapy:
• You will usually meet with a therapist for between 5 and 20, weekly, or fortnightly, sessions. Each session will last between 30 and 60 minutes.
• In the first 2-4 sessions, the therapist will check that you can use this sort of treatment and you will check that you feel comfortable with it.
• The therapist will also ask you questions about your past life and background. Although CBT concentrates on the here and now, at times you may need to talk about the past to understand how it is affecting you now.
• You decide what you want to deal with in the short, medium and long term.
• You and the therapist will usually start by agreeing on what to discuss that day.
The work
• With the therapist, you break each problem down into its separate parts, as in the example above. To help this process, your therapist may ask you to keep a diary. This will help you to identify your individual patterns of thoughts, emotions, bodily feelings and actions.
• Together you will look at your thoughts, feelings and behaviours to work out:
- if they are unrealistic or unhelpful
- how they affect each other, and you.
• The therapist will then help you to work out how to change unhelpful thoughts and behaviours
• It's easy to talk about doing something, much harder to actually do it. So, after you have identified what you can change, your therapist will recommend "homework" - you practise these changes in your everyday life. Depending on the situation, you might start to:
• Question a self-critical or upsetting thought and replace it with a positive (and more realistic) one that you have developed in CBT
• recognise that you are about to do something that will make you feel worse and, instead, do something more helpful.
• At each meeting you discuss how you've got on since the last session. Your therapist can help with suggestions if any of the tasks seem too hard or don't seem to be helping.
• They will not ask you to do things you don't want to do - you decide the pace of the treatment and what you will and won't try. The strength of CBT is that you can continue to practise and develop your skills even after the sessions have finished. This makes it less likely that your symptoms or problems will return.
How effective is CBT?
• It is one of the most effective treatments for conditions where anxiety or depression is the main problem
• It is the most effective psychological treatment for moderate and severe depression
• It is as effective as antidepressants for many types of depression
What other treatments are there and how do they compare?
CBT is used in many conditions, so it isn't possible to list them all in this leaflet. We will look at alternatives to the most common problems - anxiety and depression.
• CBT isn't for everyone and another type of talking treatment may work better for you.
• CBT is as effective as antidepressants for many forms of depression. It may be slightly more effective than antidepressants in treating anxiety.
• For severe depression, CBT should be used with antidepressant medication. When you are very low you may find it hard to change the way you think until antidepressants have started to make you feel better.
• Tranquillisers should not be used as a long term treatment for anxiety. CBT is a better option.
Problems with CBT
• If you are feeling low and are having difficulty concentrating, it can be hard, at first, to get the hang of CBT - or, indeed, any psychotherapy
• This may make you feel disappointed or overwhelmed. A good therapist will pace your sessions so you can cope with the work you are trying to do
• It can sometimes be difficult to talk about feelings of depression, anxiety, shame or anger
How long will the treatment last?
A course may be from 6 weeks to 6 months. It will depend on the type of problem and how it is working for you. The availability of CBT varies between different areas and there may be a waiting list for treatment.
What if the symptoms come back?
There is always a risk that the anxiety or depression will return. If they do, your CBT skills should make it easier for you to control them. So, it is important to keep practising your CBT skills, even after you are feeling better. There is some research that suggests CBT may be better than antidepressants at preventing depression coming back. If necessary, you can have a "refresher" course.
So what impact would CBT have on my life?
Depression and anxiety are unpleasant. They can seriously affect your ability to work and enjoy life. CBT can help you to control the symptoms. It is unlikely to have a negative effect on your life, apart from the time you need to give up to do it.
What will happen if I don't have CBT?
You could discuss alternatives with your doctor. You could also:
• Read more about the treatment and its alternatives
• If you want to "try before you buy", get hold of a self-help book or CD-Rom and see if it makes sense to you
• Wait to see if you get anyway - you can always ask for CBT later if you change your mind
Watford & District Hypnotherapy Centre offer a free of charge 30 minute no obligation assessment where we can discuss and identify what intervention or mix of interventions best suits your unique set of circumstances. Please do not hesitate to Mail call of contact us via this website with any questions you may have.
When 1+1= 3
In many cases a multi discipline approach where conscious work (1) such as NLP or CBT coupled with eyes closed work (1) such as hypnosis is far more powerful and impactful than any one discipline in it's own right....just using eyes open or eyes closed work on it's own is only utilising half of your brain and simply misses the point of therapy in brining a holistic resolution.
|
__label__pos
| 0.852549 |
What does "Case Hardened" mean when it's stamped on a piece of metal?
Steel is an amazing substance. By mixing in different elements and heating and cooling it in different ways, you can create surprisingly different properties. Some steels are easily bent, while others are so brittle they shatter. Some rust and others don't. The huge variety of steels means that they work in many different situations -- everything from a surgical scalpel to a skyscraper's massive metal frame can be made of steel!
The idea behind case hardening is to have two different types of steel at two different points in time. During manufacturing, what you would like is a relatively soft steel that is easy to bend and machine. For a lock's shank, however, a soft steel is not good because it is easy to cut with a metal saw. So after the piece is formed, you harden it to make it very difficult to cut. Case hardened steel is usually formed by diffusing carbon and/or nitrogen into the outer layer of the steel at high temperature. The carbon combines with the steel to make it nearly glass-like in its hardness. The core of the metal stays soft. This gives you a piece of metal that you cannot cut with a saw, but also will not shatter.
More to Explore
#}
|
__label__pos
| 0.985963 |
Honey Honey - 7 months ago 30
Git Question
What happens if I cancel a 'pull remote changes' midway?
I am just pulling from remote in Xcode.
It has been fetching changes for 15 minutes now ( the 'fetching changes' is still spinning)...for what it normally takes 5-30 seconds. I don't know if there is anything wrong or what. My internet speed is flawless.
Would I break anything if I cancel? ie get a messed up code. Does that ever happen? Is the process atomic?
Answer
Doing a git pull is the same as doing as git fetch followed by a git merge. The latter merge operations takes place completely locally between a local branch and its corresponding tracking branch, and it is not relevant to your actual question.
With regard git fetch, this blog appears to state that Git operations, presumably including git fetch, are atomic:
Git is known to have atomic operations i.e. an environment issue will not cause git to commit half of the files into the repository and leave the rest.
Assuming this to be accurate, then either a remote tracking branch will be completely updated during a git fetch, or it will remain as it was.
|
__label__pos
| 0.960733 |
Gdpr Record Of Processing Activities Example
A closer look at Commissioner...
Sweet Home 3d Example Files
Sweet Home 3D Roof Tutorial YouTube...
Example Of Text That Enumerates
Upload Enumerate text fields in a form....
Example Of Phenomenological Research In Education
A PHENOMENOLOGICAL STUDY OF THE LIVED EXPERIENCES OF...
Design And Construct Contract Example
Design and Build Contract The Joint Contracts Tribunal (JCT)...
Certificate Of Origin Example Canada
Electronic Certificate of Origin (eCO)...
Rc4 Algorithm Explanation With Example
Encryption Types media.datadirect.com...
Cover Letter Example For Teacher With No Experience
Cover Letter Sample Teacher No Experience...
Slack Real Time Messaging Api Example
How to make custom trigger with Azure Functions Medium...
Science Experiment Write Up Example
Science Experiment Recording Sheet Science experiment...
Chemical of equation an example a
23.12.2019 | Saskatchewan
an example of a chemical equation
Chemical Equation Definition & Example - Chemistry
Chemistry Chemical Equations Shmoop Chemistry. Balancing chemical equations - phet interactive simulations, definitions of acids and bases. arrhenius represented in their ionized form in chemical the k a or k b equation as you did earlier. example.
Balancing Chemical Reactions Practice Chemical calculator. 14/11/2018в в· how to balance chemical equations. a chemical equation is a written symbolic representation of a chemical reaction. the reactant chemical(s) are given on the left, chemistry: how to balance equations with polyatomic ions, examples and step by step solutions, three helpful tips and tricks that make balancing chemical equations.
The following diagram shows how to write a chemical equation. scroll down the page for more examples and solutions. conversion of word equation to chemical equation example of balancing the combustion reaction of ethylene, cв‚‚hв‚„. some tips on how to balance more complicated reactions.
Definitions of acids and bases. arrhenius represented in their ionized form in chemical the k a or k b equation as you did earlier. example 11/06/2011в в· mr. causey shows you how to write chemical equations. mr. causey discusses the parts of a chemical equation, the symbols involved and the steps required
When you write an equation for a chemical reaction, for example, if you write the following, it means you have two water molecules: balancing chemical equation with substitution. practice: balancing chemical equations 1. this is the currently selected item. next tutorial. stoichiometry
A chemical equation is a way to predict the way that two or more chemicals will work together. using what chemists know about the way chemicals act, we add the letter an acidвђ“base reaction is a chemical reaction that occurs between an acid and a base, which can be used to determine ph. several theoretical frameworks provide
an example of a chemical equation
Chemical Equation Definition & Example - Chemistry
Balancing more complex chemical equations (video) Khan. Let's take the reaction of hydrogen with oxygen to form water as an example. to write the chemical equation for this in writing chemical equations,, balancing chemical equations - phet interactive simulations); balancing chemical equations - phet interactive simulations, over 250 chemical reaction equations to balance - with the key to check your answers..
an example of a chemical equation
Balancing Chemical Equations Online Math Learning
Balancing Chemical Equations Online Math Learning. Formulas and equations. types of chemical formulas. in earlier topics, example in equation writing, by their empirical formula. being elements, the, let's take the reaction of hydrogen with oxygen to form water as an example. to write the chemical equation for this in writing chemical equations,.
an example of a chemical equation
Chemical Equation Definition & Example - Chemistry
Writing and Balancing Chemical Equations Chemistry for. Reactions can be represented by a chemical equation. chemical equations. fire, for example, is not a chemical - it is infact a form of energy,, example 1. write and balance the chemical equation for each given chemical reaction. hydrogen and chlorine react to make hcl. ethane, c 2 h 6, reacts with oxygen to.
Balancing chemical equation with substitution. practice: balancing chemical equations 1. this is the currently selected item. next tutorial. stoichiometry balancing chemical equations - phet interactive simulations
Write the complete ionic equation by describing water-soluble ionic compounds as separate ions and insoluble example 1 вђ“ predicting precipitation 12/11/2018в в· how to write a chemical equation. practice with some examples. the best way to learn formula writing is to practice with lots of examples.
Balancing of a chemical equation is based on the principle of atom what are balanced chemical equations? what are 6 examples of balanced chemical equations? example of balancing the combustion reaction of ethylene, cв‚‚hв‚„. some tips on how to balance more complicated reactions.
Example 1. write and balance the chemical equation for each given chemical reaction. hydrogen and chlorine react to make hcl. ethane, c 2 h 6, reacts with oxygen to examples of the chemical equations reagents give us feedback about your experience with chemical equation balancer. chemical equations balanced today:
EXCEL PIVOT TABLE David Geffen School think of a list as a simple database, The following table is a sample report which shows summarized expense for three Pivot table simple example pdf Pivot Tables/Charts (Microsoft Excel 2010) You can use pivot tables whenever you want to summarize a large amount of data, For example, click the boxes next
Example 1. write and balance the chemical equation for each given chemical reaction. hydrogen and chlorine react to make hcl. ethane, c 2 h 6, reacts with oxygen to, i. the meaning of a chemical equation a chemical equation is a chemistвђ™s shorthand expression for describing a chemical change. as an example, -).
an example of a chemical equation
Differential Equation Modeling - Chemical Reactions
Read the next post: smart products are an example of the internet of things
Read the previous post: example of direct presentation in literature
|
__label__pos
| 0.999946 |
Was this page helpful?
Your feedback about this content is important. Let us know what you think.
Additional feedback?
1500 characters remaining
Export (0) Print
Expand All
MasterKey Methods
The MasterKey type exposes the following members.
Name Description
Public method AddPasswordEncryption Adds password encryption to the master key.
Public method AddServiceKeyEncryption Adds service key encryption to the master key.
Public method Close Closes the master key.
Public method Create(String) Creates a master key that has the specified password.
Public method Create(String, String, String) Creates a master key from the specified file and that has the specified encryption and decryption passwords.
Public method Discover Discovers a list of type Object. (Inherited from SqlSmoObject.)
Public method Drop Removes the master key from the database.
Public method DropPasswordEncryption Removes the password encryption from the master key by using the associated password.
Public method DropServiceKeyEncryption Drops service key encryption from the master key.
Public method EnumKeyEncryptions Enumerates a list of the current set of key encryptions for the database master key.
Public method Equals (Inherited from Object.)
Public method Export Saves the database master key to the specified system path location by using the specified password.
Protected method FormatSqlVariant Formats an object as SqlVariant type. (Inherited from SqlSmoObject.)
Protected method GetContextDB Gets the context database that is associated with this object. (Inherited from SqlSmoObject.)
Protected method GetDBName Gets the database name that is associated with the object. (Inherited from SqlSmoObject.)
Public method GetHashCode (Inherited from Object.)
Protected method GetPropValue Gets a property value of the SqlSmoObject object. (Inherited from SqlSmoObject.)
Protected method GetPropValueOptional Gets a property value of the SqlSmoObject object. (Inherited from SqlSmoObject.)
Protected method GetPropValueOptionalAllowNull Gets a property value of the SqlSmoObject object. (Inherited from SqlSmoObject.)
Protected method GetServerObject Gets the server of the SqlSmoObject object. (Inherited from SqlSmoObject.)
Public method GetType (Inherited from Object.)
Public method Import(String, String, String) Loads the database master key from the file that is located at the specified system path by using the specified passwords to decrypt and encrypt the master key.
Public method Import(String, String, String, Boolean) Loads the database master key from the file that is located at the specified system path by using the specified passwords to decrypt and encrypt the master key, and with the option to force regeneration.
Public method Initialize() Initializes the object and forces the properties be loaded. (Inherited from SqlSmoObject.)
Public method Initialize(Boolean) Initializes the object and forces the properties be loaded. (Inherited from SqlSmoObject.)
Protected method IsObjectInitialized Verifies whether the object has been initialized. (Inherited from SqlSmoObject.)
Protected method IsObjectInSpace Verifies whether the object is isolated or connected to the instance of SQL Server. (Inherited from SqlSmoObject.)
Public method Open Opens the database master key by using the specified password.
Public method Refresh Refreshes the object and retrieves properties when the object is next accessed. (Inherited from SqlSmoObject.)
Public method Regenerate(String) Regenerates the database master key by using the specified new password.
Public method Regenerate(String, Boolean) Regenerates the database master key by using the specified new password, and with the option to force the regeneration, thus removing all items that cannot be successfully decrypted.
Protected method SetParentImpl Sets the parent of the SqlSmoObject to the newParent parameter. (Inherited from SqlSmoObject.)
Public method ToString Returns a String that represents the referenced object. (Inherited from SqlSmoObject.)
Public method Validate Validates the state of an object. (Inherited from SmoObjectBase.)
Top
Name Description
Explicit interface implemetation Private method IAlienObject.Discover Discovers any dependencies.Do not reference this member directly in your code. It supports the SQL Server infrastructure. (Inherited from SqlSmoObject.)
Explicit interface implemetation Private method IAlienObject.GetDomainRoot Returns the root of the domain. (Inherited from SqlSmoObject.)
Explicit interface implemetation Private method IAlienObject.GetParent Gets the parent of this object. Do not reference this member directly in your code. It supports the SQL Server infrastructure. (Inherited from SqlSmoObject.)
Explicit interface implemetation Private method IAlienObject.GetPropertyType Gets the type of the specified property. (Inherited from SqlSmoObject.)
Explicit interface implemetation Private method IAlienObject.GetPropertyValue Gets the value of the specified property. (Inherited from SqlSmoObject.)
Explicit interface implemetation Private method IAlienObject.GetUrn Gets the Unified Resource Name (URN) of the object.Do not reference this member directly in your code. It supports the SQL Server infrastructure. (Inherited from SqlSmoObject.)
Explicit interface implemetation Private method IAlienObject.Resolve Gets the instance that contains the information about the object from the Unified Resource Name (URN) of the object. (Inherited from SqlSmoObject.)
Explicit interface implemetation Private method IAlienObject.SetObjectState Sets the object state to the specified SfcObjectState value. (Inherited from SqlSmoObject.)
Explicit interface implemetation Private method IAlienObject.SetPropertyValue Sets the property value. (Inherited from SqlSmoObject.)
Explicit interface implemetation Private method ISfcPropertyProvider.GetPropertySet Gets the interface reference to the set of properties of this object.Do not reference this member directly in your code. It supports the SQL Server infrastructure. (Inherited from SqlSmoObject.)
Top
Community Additions
Show:
© 2015 Microsoft
|
__label__pos
| 0.581244 |
1 May
About embedded again: searching for bugs in the Embox project
PVS-Studio corporate blogCProgramming microcontrollersDevelopment for IOT
Рисунок 2
Embox is a cross-platform, multi-tasking real-time operating system for embedded systems. It is designed to work with limited computing resources and allows you to run Linux-based applications on microcontrollers without using Linux itself. Certainly, the same as other applications, Embox couldn't escape from bugs. This article is devoted to the analysis of errors found in the code of the Embox project.
A few months ago, I already wrote an article about checking FreeRTOS, another OS for embedded systems. I did not find errors in it then, but I found them in libraries added by the guys from Amazon when developing their own version of FreeRTOS.
The article that you are reading at the moment, in some way continues the topic of the previous one. We often received requests to check FreeRTOS, and we did it. This time, there were no requests to check a specific project, but I began to receive emails and comments from embedded developers who liked the previous review, and wanted more of them.
Well, the new publication of the column «PVS-Studio Embedded» is completed and is right in front of you. Enjoy reading!
The analysis procedure
The analysis was carried out using PVS-Studio — the static code analyzer for C, C++, C#, and Java. Before the analysis, the project needs to be built — this way we will be sure that the project code is working, and we will also give the analyzer the opportunity to collect the built information that can be useful for better code checking.
The instructions in the official Embox repository offer the ability to build under different systems (Arch Linux, macOS, Debian) and using Docker. I decided to add some variety to my life — to build and analyze the project under Debian, which I've recently installed on my virtual machine.
The build went smoothly. Now I had to move on to the analysis. Debian is one of the Linux-based systems supported by PVS-Studio. A convenient way to check projects under Linux is to trace compiler runs. This is a special mode in which the analyzer collects all the necessary information about the build so that you can then start the analysis with one click. All I had to do was:
1) Download and install PVS-Studio;
2) Launch the build tracking by going to the folder with Embox and typing in the terminal
pvs-studio-analyzer analyze -- make
3) After waiting for the build to complete, run the command:
pvs-studio-analyzer analyze -o /path/to/output.log
4) Convert the raw report to any convenient format The analyzer comes with a special utility PlogConverter, with which you can do this. For example, the command to convert the report to task list (for viewing, for example, in QtCreator) will look like this:
plog-converter -t tasklist -o /path/to/output.tasks /path/to/project
And that's it! It took me no more than 15 minutes to complete these steps. The report is ready, now you can view the errors. So let's get going!
Strange loop
One of the errors found by the analyzer was the strange while loop:
int main(int argc, char **argv) {
....
while (dp.skip != 0 ) {
n_read = read(ifd, tbuf, dp.bs);
if (n_read < 0) {
err = -errno;
goto out_cmd;
}
if (n_read == 0) {
goto out_cmd;
}
dp.skip --;
} while (dp.skip != 0); // <=
do {
n_read = read(ifd, tbuf, dp.bs);
if (n_read < 0) {
err = -errno;
break;
}
if (n_read == 0) {
break;
}
....
dp.count --;
} while (dp.count != 0);
....
}
PVS-Studio warning: V715 The 'while' operator has empty body. Suspicious pattern detected: 'while (expr) {...} while (dp.skip != 0) ;'. dd.c 225
Hm. A weird loop indeed. The expression while (dp.skip != 0) is written twice, once right above the loop, and the second time — just below it. In fact, now these are two different loops: one contains expressions in curly braces, and the second one is empty. In this case, the second loop will never be executed.
Below is a do… while loop with a similar condition, which leads me to think: the strange loop was originally meant as do… while, but something went wrong. I think, this piece of code most likely contains a logical error.
Memory leaks
Yes, they also sneaked in a plug.
int krename(const char *oldpath, const char *newpath) {
char *newpatharg, *oldpatharg;
....
oldpatharg =
calloc(strlen(oldpath) + diritemlen + 2, sizeof(char));
newpatharg =
calloc(strlen(newpath) + diritemlen + 2, sizeof(char));
if (NULL == oldpatharg || NULL == newpatharg) {
SET_ERRNO(ENOMEM);
return -1;
}
....
}
PVS-Studio warnings:
• V773 The function was exited without releasing the 'newpatharg' pointer. A memory leak is possible. kfsop.c 611
• V773 The function was exited without releasing the 'oldpatharg' pointer. A memory leak is possible. kfsop.c 611
The function creates the local variables newpatharg and oldpatharg inside itself. These pointers are assigned the addresses of new memory locations allocated internally using calloc. If a problem occurs while allocating memory, calloc returns a null pointer.
What if only one block of memory can be allocated? The function will crash without any memory being freed. The fragment that happened to be allocated will remain in memory without any opportunity to access it again and free it for further use.
Another example of a memory leak, a more illustrative one:
static int block_dev_test(....) {
int8_t *read_buf, *write_buf;
....
read_buf = malloc(blk_sz * m_blocks);
write_buf = malloc(blk_sz * m_blocks);
if (read_buf == NULL || write_buf == NULL) {
printf("Failed to allocate memory for buffer!\n");
if (read_buf != NULL) {
free(read_buf);
}
if (write_buf != NULL) {
free(write_buf);
}
return -ENOMEM;
}
if (s_block >= blocks) {
printf("Starting block should be less than number of blocks\n");
return -EINVAL; // <=
}
....
}
PVS-Studio warnings:
• V773 The function was exited without releasing the 'read_buf' pointer. A memory leak is possible. block_dev_test.c 195
• V773 The function was exited without releasing the 'write_buf' pointer. A memory leak is possible. block_dev_test.c 195
Here the programmer has shown neatness and correctly processed the case in which only one piece of memory was allocated. Processed correctly… and literally in the next expression made another mistake.
Thanks to a correctly written check, we can be sure that at the time the return -EINVAL expression is executed, we will definitely have memory allocated for both read_buf and write_buf. Thus, with such a return from the function, we will have two leaks at once.
I think that getting a memory leak on an embedded device can be more painful than on a classic PC. In conditions when resources are severely limited, you need to monitor them especially carefully.
Pointers mishandling
The following erroneous code is concise and simple enough:
static int scsi_write(struct block_dev *bdev, char *buffer,
size_t count, blkno_t blkno) {
struct scsi_dev *sdev;
int blksize;
....
sdev = bdev->privdata;
blksize = sdev->blk_size; // <=
if (!sdev) { // <=
return -ENODEV;
}
....
}
PVS-Studio warning: V595 The 'sdev' pointer was utilized before it was verified against nullptr. Check lines: 116, 118. scsi_disk.c 116
The sdev pointer is dereferenced just before it is checked for NULL. It is logical to assume that if someone wrote such a check, then this pointer may be null. In this case, we have the potential dereferencing of the null pointer in the line blksize = sdev->blk_size.
The error is that the check is not located where it is needed. It should have come after the line"sdev = bdev->privdata;", but before the line "blksize = sdev->blk_size;". Then potential accessing by the null address could be avoided.
PVS-Studio found two more errors in the following code:
void xdrrec_create(....)
{
char *buff;
....
buff = (char *)malloc(sendsz + recvsz);
assert(buff != NULL);
....
xs->extra.rec.in_base = xs->extra.rec.in_curr = buff;
xs->extra.rec.in_boundry
= xs->extra.rec.in_base + recvsz; // <=
....
xs->extra.rec.out_base
= xs->extra.rec.out_hdr = buff + recvsz; // <=
xs->extra.rec.out_curr
= xs->extra.rec.out_hdr + sizeof(union xdrrec_hdr);
....
}
PVS-Studio warnings:
• V769 The 'xs->extra.rec.in_base' pointer in the 'xs->extra.rec.in_base + recvsz' expression could be nullptr. In such case, resulting value will be senseless and it should not be used. Check lines: 56, 48. xdr_rec.c 56
• V769 The 'buff' pointer in the 'buff + recvsz' expression could be nullptr. In such case, resulting value will be senseless and it should not be used. Check lines: 61, 48. xdr_rec.c 61
The buf pointer is initialized with malloc, and then its value is used to initialize other pointers. The malloc function can return a null pointer, and this should always be checked. One would think, that there is the assert checking buf for NULL, and everything should work fine.
But not so fast! The fact is that asserts are used for debugging, and when building the project in the Release configuration, this assert will be deleted. It turns out that when working in Debug, the program will work correctly, and when building in Release, the null pointer will get further.
Using NULL in arithmetic operations is incorrect, because the result of such an operation will not make any sense, and you can't use such a result. This is what the analyzer warns us about.
Someone may object that the absence of the check after malloc/realloc/calloc is not crucial. Meaning that, at the first access by a null pointer, a signal / exception will occur and nothing scary will happen. In practice, everything is much more complicated. If the lack of the check does not seem dangerous to you, I suggest that you check out the article "Why it is important to check what the malloc function returned".
Incorrect handling of arrays
The following error is very similar to the example before last:
int fat_read_filename(struct fat_file_info *fi,
void *p_scratch,
char *name) {
int offt = 1;
....
offt = strlen(name);
while (name[offt - 1] == ' ' && offt > 0) { // <=
name[--offt] = '\0';
}
log_debug("name(%s)", name);
return DFS_OK;
}
PVS-Studio warning: V781 The value of the 'offt' index is checked after it was used. Perhaps there is a mistake in program logic. fat_common.c 1813
The offt variable is first used inside the indexing operation, and only then it is checked that its value is greater than zero. But what happens if name turns out to be an empty string? The strlen() function will return 0, followed by epic shooting yourself in the foot. The program will access by a negative index, which will lead to undefined behavior. Anything can happen, including a program crash. Not good at all!
Рисунок 1
Suspicious conditions
Just can't do without them! We find such errors literally in every project that we check.
int index_descriptor_cloexec_set(int fd, int cloexec) {
struct idesc_table *it;
it = task_resource_idesc_table(task_self());
assert(it);
if (cloexec | FD_CLOEXEC) {
idesc_cloexec_set(it->idesc_table[fd]);
} else {
idesc_cloexec_clear(it->idesc_table[fd]);
}
return 0;
}
PVS-Studio warning: V617 Consider inspecting the condition. The '0x0010' argument of the '|' bitwise operation contains a non-zero value. index_descriptor.c 55
In order to get where the error hides, let's look at the definition of the FD_CLOEXEC constant:
#define FD_CLOEXEC 0x0010
It turns out that there is always a nonzero constant in the expression if (cloexec | FD_CLOEXEC) to the right of the bitwise «or». The result of such an operation will always be a nonzero number. Thus, this expression will always be equivalent to the if(true) expression, and we will always process only the then branch of the if statement.
I suspect that this macro constant is used to pre-configure the Embox OS, but even if so this always true condition looks strange. Perhaps authors wanted to use the & operator, but made a typo.
Integer division
The following error relates to one feature of the C language:
#define SBSIZE 1024
static int ext2fs_format(struct block_dev *bdev, void *priv) {
size_t dev_bsize;
float dev_factor;
....
dev_size = block_dev_size(bdev);
dev_bsize = block_dev_block_size(bdev);
dev_factor = SBSIZE / dev_bsize; // <=
ext2_dflt_sb(&sb, dev_size, dev_factor);
ext2_dflt_gd(&sb, &gd);
....
}
PVS-Studio warning: V636 The '1024 / dev_bsize' expression was implicitly cast from 'int' type to 'float' type. Consider utilizing an explicit type cast to avoid the loss of a fractional part. An example: double A = (double)(X) / Y;. ext2.c 777
This feature is as follows: if we divide two integer values, then the result of the division will be integer as well. Thus, division will occur without a remainder, or, in other words, the fractional part will be discarded from the division result.
Sometimes programmers forget about it, and errors like this come out. The SBSIZE constant and the dev_bsize variable are of the integer type (int and size_t, respectively). Therefore, the result of the SBSIZE / dev_bsize expression will also be of the integer type.
But hold on. The dev_factor variable is of the float type! Obviously, the programmer expected to get a fractional division result. This can be further verified if you pay attention to the further use of this variable. For example, the ext2_dflt_sb function, where dev_factor is passed as the third parameter, has the following signature:
static void ext2_dflt_sb(struct ext2sb *sb, size_t dev_size, float dev_factor);
Similarly, in other places where the dev_factor variable is used: everything indicates that a floating-point number is expected.
To correct this error, one just has to cast one of the division operands to the floating-point type. For example:
dev_factor = float(SBSIZE) / dev_bsize;
Then the result of the division will be a fractional number.
Unchecked input data
The following error is related to the use of unchecked data received from outside of the program.
int main(int argc, char **argv) {
int ret;
char text[SMTP_TEXT_LEN + 1];
....
if (NULL == fgets(&text[0], sizeof text - 2, /* for \r\n */
stdin)) { ret = -EIO; goto error; }
text[strlen(&text[0]) - 1] = '\0'; /* remove \n */ // <=
....
}
PVS-Studio warning: V1010 Unchecked tainted data is used in index: 'strlen(& text[0])'. sendmail.c 102
Let's start with considering what exactly the fgets function returns. In case of successful reading of a string, the function returns a pointer to this string. In case if end-of-file is read before at least one element, or an input error occurs, the fgets function returns NULL.
Thus, the expression NULL == fgets(....) checks if the input received is correct. But there is one detail. If you pass a null terminal as the first character to be read (this can be done, for example, by pressing Ctrl + 2 in the Legacy mode of the Windows command line), the fgets function takes it into account without returning NULL. In doing so, there will be only one element in the string supposed for writing which is \0'.
What will happen next? The expression strlen(&text[0]) will return 0. As a result, we get a call by a negative index:
text[ 0 - 1 ] = '\0';
As a result, we can crash the program by simply passing the line termination character to the input. It is rather sloppy and it could potentially be used to attack systems that are using Embox.
My colleague who was developing this diagnostic rule even made a recording of an example of such an attack on the NcFTP project:
I recommend checking out if you still do not believe that it might happen :)
The analyzer also found two more places with the same error:
• V1010 Unchecked tainted data is used in index: 'strlen(& from[0])'. sendmail.c 55
• V1010 Unchecked tainted data is used in index: 'strlen(& to[0])'. sendmail.c 65
MISRA
MISRA is a set of guidelines and rules for writing secure C and C++ code for highly dependable embedded systems. In some way, this is a set of guidelines, following which you will be able to get rid of so called «code smells» and also protect your program from vulnerabilities.
MISRA is used where human lives depend on the quality of your embedded system: in the medical, automotive, aircraft and military industries.
PVS-Studio has an extensive set of diagnostic rules that allow you to check your code for compliance with MISRA C and MISRA C++ standards. By default, the mode with these diagnostics is turned off, but since we are looking for errors in a project for embedded systems, I simply could not do without MISRA.
Here is what I managed to find:
/* find and read symlink file */
static int ext2_read_symlink(struct nas *nas,
uint32_t parent_inumber,
const char **cp) {
char namebuf[MAXPATHLEN + 1];
....
*cp = namebuf; // <=
if (*namebuf != '/') {
inumber = parent_inumber;
} else {
inumber = (uint32_t) EXT2_ROOTINO;
}
rc = ext2_read_inode(nas, inumber);
return rc;
}
PVS-Studio warning: V2548 [MISRA C 18.6] Address of the local array 'namebuf' should not be stored outside the scope of this array. ext2.c 298
The analyzer detected a suspicious assignment that could potentially lead to undefined behavior.
Let's take a closer look at the code. Here, namebuf is an array created in the local scope of the function, and the cp pointer is passed to the function by pointer.
According to C syntax, the name of the array is a pointer to the first element in the memory area in which the array is stored. It turns out that the expression *cp = namebuf will assign the address of the array namebuf to the variable pointed by cp. Since cp is passed to the function by pointer, a change in the value that it points to will affect the place where the function was called.
It turns out that after the ext2_read_symlink function completes its work, its third parameter will indicate the area that the namebuf array once occupied.
There is only one slight hitch: since namebuf is an array reserved on the stack, it will be deleted when the function exits. Thus, a pointer that exists outside the function will point to the freed part of memory.
What will be at that address? No one can say for sure. It is possible that for some time the contents of the array will continue to be in memory, or it is possible that the program will immediately replace this area with something else. In general, accessing such an address will return an undefined value, and using such a value is a gross error.
The analyzer also found another error with the same warning:
• V2548 [MISRA C 18.6] Address of the local variable 'dst_haddr' should not be stored outside the scope of this variable. net_tx.c 82
Рисунок 6
Conclusion
I liked working with the Embox project. Despite the fact that I did not cite all the found errors in the article, the total number of warnings was relatively small, and in general, the project code is of high quality. Therefore, I express my gratitude to the developers, as well as to those who contributed to the project on behalf of the community. You did great!
On this occasion, let me send my best to the developers. Hope that it's not very cold in St. Petersburg right now :)
At this point, my article comes to an end. I hope you enjoyed reading it, and you found something new for yourself.
If you are interested in PVS-Studio and would like to independently check a project using it, download and try it. This will take no more than 15 minutes.
Tags:EmboxPVS-StudioCembedembedded systemsmicrocontrollersoperating systemsLinux
Hubs: PVS-Studio corporate blog C Programming microcontrollers Development for IOT
+3
178 0
Leave a comment
Top of the last 24 hours
|
__label__pos
| 0.831176 |
By pressing the button below I freely give my consent to set or activate cookies and external links. I know their functions because they are described in the privacy policy or explained in more detail in documents or external links implemented there. I have the right to withdraw my data protection consent at any time with effect for the future, by changing my cookie preferences or deleting my cookies. The withdrawal of consent shall not affect the lawfulness of processing based on consent before its withdrawal. With a single action (pressing the button), several consents are granted. These are consents under data protection law as well as those under ePrivacy and telemedia law that are necessary for storing and reading out information and are required as a legal basis for planned further processing of the data read out. I am aware that I can refuse my consent by clicking on the other button or, if necessary, make individual settings. With my action I also confirm that I have read and taken note of the Privacy Policy and the Transparency Document.
Clinical Research
PsycApps, the company behind eQuoo the Emotional Fitness Game, is aware of the great responsibility that comes along with designing and distributing mental wellbeing and mental health products. Everything we develop is based on studies published in peer-reviewed journals and we have conducted multiple clinical trials with eQuoo and our other product, PsycApps. All studies conducted by Psychologist Silja Litvin, have been approved through the Ethics Committee of Ludwig Maximilian University, and will be published in open-source peer-reviewed journals for all to read.
TLDR:
1. We tested the effects of eQuoo on well-being -IT WORKS!
2. We tested the following well-being metrics: Psychological Well-Being (Ryff's Scale) - Positive Relationships with others, Resilience (ARS), Personal Growth (PGIS), a One Item Anxiety Likert Scale
3. It was a 5-week, 3-arm randomized controlled trial with 358 participants
Background
Young adults 18 - 28 years old are the most vulnerable population to mental health issues of which 29% are living with a diagnosed mental illness. With less than 35% having access to therapy and psychological care, it is pressing to develop therapeutic tools that are cost-efficient, effective and 'sticky'. The broad distribution of smartphones offers a compelling platform in the form of applications, but evidence-based apps struggle with high attrition. Additionally, prevention programs have attrition rates of up to 99%, making them difficult to implement. Research suggests gamification to be a valid strategy to intrinsically motivate patients to adhere to prevention and early-stage mobile interventions.
Methods
A game named eQuoo, teaching psychological concepts such as emotional bids, generalization, and reciprocity through psychoeducation, storytelling, and gamification was developed and published on all application platforms. A hypothesis was postulated that using the app over a period of 5 weeks would significantly boost Resilience, Personal Growth.
Psychological well-being: Positive Relations With Others and Anxiety as well as heighten adherence. 358 participants partook in a 5-week, 3-armed randomized controlled trial, of which a third used eQuoo, a third used a 'Treatment as Usual' CBT Journal app and a third was on a waitlist with no intervention. All 3 groups filled out the following questionnaires at 3 time-points: The Adult Resilience Scale, The Personal Growth Inventory Scale, the Psychological well-being: Positive Relations With Others Scale by Ryff and a 1 Item Anxiety Likert scale.
Results
Results of repeated measures of ANOVAs showed statistically significant increases in the well-being metrics and a significant decrease in anxiety when using the app over a timeframe of 5 weeks. The app significantly increased resilience as measured with the ARS by d .37, personal growth as measured by PGIS by d .67, positive relations with others as measured by Ryff's PWB by d .42 and anxiety as measured with a 1 Item Ankiety Likert Scale lowered by d .20. With 90% adherence, eQuoo was able to retain 21% more participants than the control or waitlist group.
Conclusion
eQuoo is a mental health game that significantly raises mental well-being and lowers anxiety as well as maintains high adherence. This allows a deduction that smartphones are a valuable and effective platform - for those who adhere to the intended therapy process - to offer mental health interventions within an app. Using gamification could be the key to achieving the attention and motivation needed to generate higher retention rates and reduce attrition for a certain age group. Future research would benefit from measuring eQuoo's effect on anxiety with a more sensitive tool GAD 7 as well as other widespread mental illness like depression.
TLDR:
1. Mobile interventions work.
2. We developed an EVIDENCE-BASED app called PsycAppsE and trialled it in a 2-arm RCT over 4 weeks.
3. Depression levels were significantly lowered, anxiety levels were lowered.
4. Attrition rates led me to abandon customary mood-tracking and CBT app-delivery and into developing eQuoo.
TLDR:
1. A brief review of existing evidence-based mental health apps
2. The evolution from PsycAppsE to eQuoo and why PsycApps moved towards gaming
3. An outlook on the two upcoming trials with eQuoo
|
__label__pos
| 0.608661 |
Is it safe to use Q-tips to clean my ears?
Share on FacebookTweet about this on TwitterEmail this to someonePrint this page
The American Academy of Otolaryngologists (doctors who specialize in conditions of the eyes, ears, nose and throat) say that: cotton tipped swabs such as “Q-tips” should NEVER be used to clean the ear canal. In fact, swabs can actually push ear wax further into the ear canal, causing a temporary hearing loss due to wax (cerumen) build-up deep in the ear canal.
Earwax is normal, healthy, and naturally protects the inner ear from dust and dirt particles. The safest way to clean the outer ear is to use a facecloth. Sometimes earwax can build up and cause a feeling of fullness in the ears, partial hearing loss, and/or tinnitus (ringing in the ear). If you have any of these symptoms, talk to your health care provider. Your provider may tell you to use a special over-the-counter ear wax removal medicine or he/she may need to remove the earwax in the office.
|
__label__pos
| 0.913408 |
Skip to:
Content
BuddyPress.org
Ignore:
Timestamp:
02/23/2015 01:39:05 AM (9 years ago)
Author:
boonebgorges
Message:
Use 'bp_member_member_type' as the member type cache bucket name.
Using 'bp_member_type' was creating the potential for collisions between WP's
taxonomy cache (which uses the taxonomy name 'bp_member_type' and term IDs as
cache keys) and BP's per-member member type cache (which uses the bucket
'bp_member_type' and user IDs as cache keys). The collisions take place only
when there is a 'bp_member_type' term ID that overlaps with a user ID.
The new cache group 'bp_member_member_type' is chosen to underscore that what's
being cached is the relationship between individual members and the user types
to which they belong.
Props imath, johnjamesjacoby.
Fixes #6242.
File:
1 edited
Legend:
Unmodified
Added
Removed
• trunk/src/bp-members/bp-members-cache.php
r9486 r9533
1515 */
1616function bp_members_prefetch_member_type( BP_User_Query $bp_user_query ) {
17 $uncached_member_ids = bp_get_non_cached_ids( $bp_user_query->user_ids, 'bp_member_type' );
17 $uncached_member_ids = bp_get_non_cached_ids( $bp_user_query->user_ids, 'bp_member_member_type' );
1818
1919 $member_types = bp_get_object_terms( $uncached_member_ids, 'bp_member_type', array(
3333 $cached_member_ids = array();
3434 foreach ( $keyed_member_types as $user_id => $user_member_types ) {
35 wp_cache_set( $user_id, $user_member_types, 'bp_member_type' );
35 wp_cache_set( $user_id, $user_member_types, 'bp_member_member_type' );
3636 $cached_member_ids[] = $user_id;
3737 }
3939 // Cache an empty value for users with no type.
4040 foreach ( array_diff( $uncached_member_ids, $cached_member_ids ) as $no_type_id ) {
41 wp_cache_set( $no_type_id, '', 'bp_member_type' );
41 wp_cache_set( $no_type_id, '', 'bp_member_member_type' );
4242 }
4343}
5454 */
5555function bp_members_clear_member_type_cache( $user_id ) {
56 wp_cache_delete( $user_id, 'bp_member_type' );
56 wp_cache_delete( $user_id, 'bp_member_member_type' );
5757}
5858add_action( 'wpmu_delete_user', 'bp_members_clear_member_type_cache' );
Note: See TracChangeset for help on using the changeset viewer.
|
__label__pos
| 0.749705 |
Ultraviolet index and racial differences in prostate cancer incidence and mortality
loading Checking for direct PDF access through Ovid
Abstract
BACKGROUND
Studies suggest that low levels of vitamin D may be associated with prostate cancer, and darker skin reduces the body's ability to generate vitamin D from sunshine. The impact of sunshine on racial disparities in prostate cancer incidence and mortality is unknown.
METHODS
Using the Surveillance, Epidemiology, and End Results program database, the authors calculated age-adjusted prostate cancer incidence rates among black and white men aged ≥45 years by race and county between 2000 and 2009 (N = 906,381 men). Similarly, county-level prostate cancer mortality rates were calculated from the National Vital Statistics System (N = 288,874). These data were linked with the average monthly solar ultraviolet (UV) radiation index by county and data regarding health, wellness, and demographics. Multivariable regression analysis was used to assess whether increases in the UV index (in deciles) moderated the association between black race and the incidence and mortality of prostate cancer.
RESULTS
Compared with counties in the lowest UV index decile, prostate cancer incidence rates for white and black men were lower in counties with a higher UV index (all Ps ≤ 0.051). Incidence rates were higher for black men versus white men, but the difference by race was less for counties in the fourth to fifth UV index deciles versus those in the first decile (Ps ≤ 0.02). Mortality rates also were found to decrease with increasing UV index for white men (Ps ≤ 0.003), but increase for black men, and an unexplained increase in racial differences in mortality rates was observed with an increasing UV index.
CONCLUSIONS
Racial disparities in the incidence of prostate cancer were larger in some areas with less sunshine. Additional research should confirm the findings of the current study and assess whether optimizing vitamin D levels among black men can reduce disparities. Cancer2013;119:3195–3203. © 2013 American Cancer Society.
CONCLUSIONS
In counties with more sunshine, the incidence of prostate cancer was reported to be 9% to 23% lower in white men and 10% to 34% lower in black men, with 9% lower racial differences in incidence. Similar patterns were observed for prostate cancer mortality for white men, but not for racial disparities.
Related Topics
loading Loading Related Articles
|
__label__pos
| 0.797715 |
CLINICAL INFORMATION 37
advertisement
CLINICAL
INFORMATION
37
1ST 8 Weeks
DX Group
EX Group
CLINICAL SCHEDULE FOR NURSING 222 – 2016
Wednesday, January 20, 2016 – Preclinical Skills Lab
Bring N222 Syllabus and Nurse Kit
Place: Nursing Skills Center
Time: 0800 – 1400
Susan Graven, Instructor
Tuesday, January 26, 2016 – Orientation
Bring N222 Syllabus and Nurse Kit
Wear uniform, CSM patch, and name tag
Meet at Peninsula Hospital Lobby
Time: 0800 – 1400
Susan Graven, Instructor
Thursday, January 21, 2016 – Preclinical Skills Lab
Bring N222 Syllabus and Nurse Kit
Place: Nursing Skills Center
Time: 0800 – 1400
Irene Luciano, Instructor
Friday, January 22, 2016– Orientation
Bring syllabus, wear uniform, CSM patch, and nametag
Meet at Kaiser Redwood City Hospital Lobby
Time: 0800 – 1400
Irene Luciano, Instructor
2nd 8 Weeks
AX Group
BX Group
CX Group
37
Thursday, March 23, 2016 – Preclinical Skills Lab
Bring N222 Syllabus and Nurse Kit
Place: Nursing Skills Center
Time: 0800 – 1400
TBA, Instructor
Friday, March 24, 2016 – Orientation
Bring syllabus, wear uniform, CSM patch, and nametag
Meet at Kaiser Hospital Hospital Lobby
Time: 0800 – 1400
TBA, Instructor
TBA – Preclinical Skills Lab
Bring N222 Syllabus and Nurse Kit
Place: Nursing Skills Center
Time: 0800 – 1400
TBA, Instructor
TBD – Orientation
Bring syllabus, wear uniform, CSM patch, and nametag
Meet at Kaiser Redwood City Hospital Lobby
Time: 0800 – 1400
TBA, Instructor
Tuesday, March 21, 2016 –Preclinical Skills Lab
Bring N222 Syllabus and Nurse Kit
Place: Nursing Skills Center
Time: 0800 – 1400
Susan Graven, Instructor
Wednesday, March 22, 2016 – Orientation
Bring syllabus, wear uniform, CSM patch, and nametag
Meet at Peninsula Hospital Lobby
Time: 0800 – 1400
Susan Graven, Instructor
NURSING 222 MATERNITY NURSING
Pre-clinical Skills Lab Content
∗
∗
See clinical schedule page for the date for your clinical group. There is only
one preclinical skills lab in this course. The other clinical day during the first
week will be used for orientation to the clinical facility.
Bring nursekit and London textbook.
Skills Lab Focus:
Common medications administered to women and neonates
IV maintenance, flow rates – primary and secondary
Assessment: postpartum patient and neonate
Interventions for postpartum patient related to perineal, hemorrhoidal pain,
comfort and hygiene
Breastfeeding – positioning, latch, interventions, problems
Activities:
Review procedure for medication cards and highlight common medications
administered and rationale for administration
IV’s – practice priming, labeling and running primary and secondary bags for
gravity flow and pump
Discuss and practice how to set the IV pump for various rates
Assessment – Postpartum Assessment
View video on postpartum assessment
Practice assessment on manikins
Discuss common interventions for perineal, hemorrhoidal pain
Discuss perineal pads, assessment and removal
Breastfeeding
View video – Breastfeeding – How to
Discuss techniques, latch, positioning, sore nipples and other potential
problems and interventions
Assessment – Newborn Assessment
View videos – Assessment of the Newborn/Gestational Age Assessment
Highlight and discuss assessment techniques
Readings: London, 4th edition, Chapters 24, 25, 26, and 29-30.
37
Clinical Orientation Guidelines
* Categories will vary with clinical agency
LDR ROOMS
1. Locate patient charts and procedure manuals. May be located on computer.
2. Explore an empty room and locate fetal monitoring equipment, sterile supplies, and
neonatal supplies stored in the room.
3. Find the census board where patient’s names or initials and labor status are
recorded.
4. Locate the operating rooms where surgical births take place.
5. Discover the mechanics of the birthing bed.
POSTPARTAL EQUIPMENT OR AREA
1. Locate supplies commonly used, such as peripads, linens, breast pumps, panties, IV
tubing, fluids.
2. Search the medication area, pyxis or cart and book. Identify the most commonly
given medications and how to document them.
3. Find the equipment used for vital signs and place to document.
4. Find charts and procedure manuals. May be located in computer.
5. Located teaching materials, including any audiovisuals.
6. Locate the room or unit refrigerator and tray areas. Find out how meals are served.
7. Find out what the visitor policy is for fathers, siblings, and extended family
members.
NURSERY AREA AND EQUIPMENT
1. Locate commonly used equipment, such as scales, thermometers, medications,
diapers, formula, linens, bottles, etc.
2. Identify the open radiant warmer and its temperature regulating features.
3. Find out where syringes, lancets, blood glucose monitors are kept.
4. Locate the board and equipment used for circumcision.
5. Locate resource materials and patient charts. May be located on computer.
37
CLINICAL NURSING COMPETENCIES MEASURED IN ALL ASPECTS OF CARE
As you progress through the nursing program, each course builds on the knowledge, skills,
and abilities of the previous course. Therefore, you are expected to perform competently in
the information already learned as you satisfactorily progress from course to course.
There are five specific nursing competencies or critical elements for which you are
responsible: Asepsis, Emotional Well-being, Interpersonal Relations, Physical Well-being,
and Professional Behaviors. You are responsible to competently implement these specific
critical elements. Any violation of the following critical elements will result in a clinical
failure. The areas listed below are examples, but are not all inclusive.
A.
ASEPSIS: The prevention of the introduction and/or transfer of organisms. Special
consideration should be given to handwashing.
1. Washes hands as appropriate.
2. Protects self from contamination.
3. Protects patient from contamination.
4. Disposes of contaminated material in designated containers.
5. Confines contaminated material to contaminated area.
6. Establishes a sterile field where required.
B.
EMOTIONAL WELL-BEING: Any action or inaction on the part of the student which
threatens the emotional well-being of the patient or significant others places that
person in emotional jeopardy. This can occur through omission, imminent, or actual
incorrect action by the student. Students must promote emotional well-being.
1. Maintains or respects patient confidentiality, including HIPAA guidelines.
a. Uses only patient initials on CSM worksheets and assignments.
b. Does not discuss patients data with anyone except healthcare staff. Does not
discuss data with patient’s family or significant other unless permission is
given by patient.
c. Does not discuss patient data in public areas such as hallways, elevators, etc.
C.
INTERPERSONAL RELATIONS: The patient-focused verbal and nonverbal interaction
between student
nurse and patient and or significant other.
1. Establishes verbal communication with patient at
beginning of
implementation phase by using at least one (1) of the following actions:
a. Introducing self.
b. Explaining nursing actions to be taken, or
c. Using touch with the patient who is a no communicating adult.
2. Interacts verbally with patient by using at least one (1) of the following methods:
a. Asking questions at least once to determine patient’s response to nursing care.
b. Asking questions at least once to determine patient’s comfort.
c. Directing the focus of communication toward patient-oriented interests.
d. Talking to a no communicating adult.
37
3. Uses language consistent with patient’s level of understanding.
4. Uses verbal expressions that are not excessively familiar, patronizing, demeaning,
abusive, or otherwise unacceptable.
5. Uses physical expressions that are not excessively familiar, patronizing,
demeaning, abusive, or otherwise unacceptable.
D.
1.
2.
3.
4.
E.
PHYSICAL WELL-BEING: Any action or inaction on the part of the student could
threaten the patient’s physical well-being. Students are accountable for the
patient’s safety. Physical well-being includes:
Maintaining the physical well-being of a patient such as reporting deterioration in
the patient’s clinical condition or imminent or actual incorrect action by the
student.
Appropriate use of physical restraints.
Appropriate use of side rails.
Correct use of procedures as learned in skills lab or identified in Clinical
Procedure Manual.
PROFESSIONAL BEHAVIORS: Maintains professional boundaries in all physical,
written, and verbal interpersonal encounters including but not limited to patients,
significant others, staff, peers, and faculty.
CoreCompetencies Rev2010.doc
Rev. 09/10
37
COLLEGE OF SAN MATEO
NURSING DEPARTMENT
CLINICAL EVALUATION
NURSING 222
STUDENT__________________________________
CLINICAL AREA_____________________________
DATE______________________________________
CLINICAL GRADE: PASS_____
NO PASS_____
ABSENCES__________________________________
CODE: P = Pass; NP = No Pass; NI = Needs Improvement
** = 100% Required To Pass; * = 96% Of Starred Criteria Required To Pass (39/41)
All “Competencies of Care” from previous courses are to be met.
ALL NURSING PROGRAM CORE COMPETENCIES MUST BE MET
**A. Asepsis
**B. Emotional Well-being
**C. Interpersonal Relations
**D. Physical Well-being
**E. Professional Behaviors
P
CLINICAL OBJECTIVES
P
I. OPERATIONALIZES THE NURSING PROCESS TO PROMOTE
HOMEOSTASIS
Uses the nursing process, with guidance, to provide safe nursing care for
pregnant women/families with common well-defined health needs.
A. Collects and organizes data from a variety of sources including data on
developmental levels to identify basic patient needs.
*1. Collects data from a variety of sources to identify the woman’s
___
newborn’s and family’s needs including grandparents..................
**2. Is prepared for patient assignment in each perinatal area………… ___
*3. Gathers data pertinent to patient, from patient, family, Kardex,
report, and chart and utilizes critical thinking skills to differentiate
normal from abnormal................................................................... ___
37
NP
COMMENTS - NI AND NP
REQUIRES EXPLANATION
NI
NP
COMMENTS - NI AND NP
REQUIRES EXPLANATION
NI
___
___
___
___
___
___
CLINICAL OBJECTIVES
P
*4. Correlates patient data with nursing theory to prepare for patient
assignment......................................................................................
B. Assess patient status
**1. Assesses the woman’s recovery from the birth process with a
thorough postpartum assessment…………………………………..
*2. Evaluates maternal/infant bonding behaviors.................................
**3. Assesses/describes the adaptation of the newborn to extra-uterine life
and identifies real/potential threats to homeostasis (e.g., vital signs,
skin color, reflexes, elimination, etc.)..............................................
*4. Determines the woman’s ability to meet the biopsychosocial needs
following birth and identifies potential problems..............................
*5. Identifies nursing diagnoses, and states related outcome criteria
written in care plans…………………………………………………
*6. Applies theoretical data to nursing practice.......................................
C. Plans with guidance, individualized nursing interventions designed to
assist women/family to meet their basic needs and to promote their
homeostatic adaptive mechanisms.
**1. Identifies nursing interventions that will assist in meeting stated
outcome criteria – written in care plans ........................................
*2. Involves the woman/family including grandparents in the plan of care…
*3. Presents plan to instructor/resource person prior to care..................
*4. Applies critical thinking principles to a variety of clinical situations…….
D. Consistently performs, with guidance, appropriate nursing interventions
safely and competently.
**1. Applies previously learned knowledge as well as perinatal concepts
to provide safe nursing care...........................................................
**2. Demonstrates knowledge of medications.....................................
*3. Adjusts nursing care to meet the needs of the maternity patient and
extended family members..............................................................
**4. Correctly performs treatments and administration of medications…
**5. Utilizes principles of asepsis and universal precautions......................
*6. Expands comfort measures to include those pertinent to the
perinatal patient.....................................................................
**7. Follows accepted protocols for safe newborn care............................
*8. Identifies priorities when organizing care, utilizing principles of time
management................................................................................
37
NP
COMMENTS - NI AND NP
REQUIRES EXPLANATION
NI
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
CLINICAL OBJECTIVES
E. Recognizes, with guidance, whether nursing intervention(s) met
identified needs.
*1. Explains appropriate rationales for nursing intervention(s) with
references, keeping in mind the specific needs of the perinatal
patient..........................................................................................
*2. Assists in revising nursing care by evaluating whether objectives
were met on the daily care worksheet...........................................
*3. Modifies the nursing care as needed keeping in mind the specific
needs of the perinatal patient....................................................
**F. Demonstrates competent performance of designated skill.....................
II. ASSUME ROLE AS A COMMUNICATOR
A. Utilizes a variety of basic communication skills to support the
woman/family, and to interact with other members of the health team.
*1. Assesses maternal verbal and non-verbal behaviors........................
**2. Communicates data and questions, regarding the woman/family to
appropriate health care professionals..............................................
*3. Communicates results of care during report and conferences............
*4. Evaluates effectiveness and identifies barriers to communication.......
*5. Identifies own limitations when giving information...........................
**6. Utilizes HIPAA guidelines for all verbal and written
communications………………………………………………………
B. Reports and records accurately, with guidance, patient assessments,
nursing interventions and their effectiveness, and other significant
occurrences affecting patient status.
*1. Charts accurately in a legible, pertinent, organized manner using
acceptable abbreviations, grammar and format for postpartum and
neonate patients in both paper and electronic modalities where
applicable……………………………………………………………
*2. Uses Nursing Care Plan as a guide for charting.................................
**3. Reports status of patient during clinical time and prior to leaving
unit................................................................................................
*4. Contributes in conferences..............................................................
P
COMMENTS - NI AND NP
REQUIRES EXPLANATION
NI
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
III. ASSUME ROLE AS A TEACHER
A. Recognize the woman’s/family’s obvious need for health teaching and
either conveys information or takes other appropriate action.................. ___
37
NP
___
___
___
___
___
CLINICAL OBJECTIVES
P
III. ASSUME ROLE AS A TEACHER
*1. Considers obvious factor(s) that may interfere with ability to learn…
**2. Initiates teaching as well as supports the teaching plan of others……
**3. Gives instruction in health promotion that assists in meeting the
needs of the perinatal patient, extended family members and the
newborn..........................................................................................
*4. Uses teaching materials provided by the clinical facility....................
*5. Documents patient/family teaching..................................................
**6. Completes graded teaching presentation............................................
IV. ASSUME ROLE AS A LEADER/MANAGER
A. Recognizes the patient’s/family need for the services of other team
members and/or agencies and discusses the need for appropriate
referrals.
*1. Assumes responsibility for managing care for assigned patient(s)….
*2. Identifies sociocultural differences and seeks help when necessary
(e.g., interpreter, dietitian, etc.)......................................................
**3. Completes post conference cultural presentation……………….…..
*4. Consults with health team members to meet needs that cannot be
met by the student............................................................
B. Identifies priorities and provides care for two to three patients within
the assigned clinical time.
*1. Implements care in stressful situations.............................................
*2. Describe RN role in perinatal care....................................................
V. ASSUME ROLE AS MEMBER WITHIN THE PROFESSION
Practices within the ethical standards and legal framework with guidance.
*1. Researches agency policies and procedures as needed........................
*2. Identifies ethical issues in the clinical area and discusses such with
instructor and at conference............................................................
*3. Describes the interventions used by a nurse functioning as a patient
or family advocate..........................................................................
B. Identifies own learning needs and demonstrates initiative in obtaining
specific experiences.
37
NP
COMMENTS - NI AND NP
REQUIRES EXPLANATION
NI
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
CLINICAL OBJECTIVES
P
NP
COMMENTS - NI AND NP
REQUIRES EXPLANATION
NI
V. ASSUME ROLE AS MEMBER WITHIN THE PROFESSION
*1. Communicates learning goals to instructor in writing on last page of
evaluation……………………………………………………………..
*2. Uses additional learning resources available......................................
C. Evaluates learning experiences and objectively assesses own progress
regularly with the instructor
**1. Completes weekly journal entry and submits to instructor for
review…………………………………………………………………
**2. Completes self evaluation and learning goals at end of the course or
as otherwise indicated by instructor.................................................
**3. Modifies performance based on previous evaluation of clinical
performance and current feedback.................................................
**4. Takes corrective actions when in error; reports such to the instructor
and follows through with appropriate written report.......................
D. Is accountable for his/her professional behavior
*1. Is punctual for clinical (cannot be late more than two times)............
*2. Is punctual for post-conference (cannot be late more than two times
without prior agreement with instructor.)........................................
**3. Is punctual in submitting written assignments..................................
*4. Completes clinical make up assignments according to policy………..
**5. Is punctual with medication and treatments....................................
**6. Follows correct procedure for notifying agency regarding absences
from clinical areas...........................................................................
**7. Follows dress code as described in the Nursing Student Handbook….
37
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
___
MIDTERM:
__________________
DATE
______________________________________________________________________
STUDENT (Signature indicates only that this evaluation has been read.)
__________________
DATE
______________________________________________________________________
INSTRUCTOR
COMMENTS:
STUDENT:
FACULTY:
≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈
FINAL:
__________________
DATE
______________________________________________________________________
STUDENT (Signature indicates only that this evaluation has been read.)
__________________
DATE
______________________________________________________________________
INSTRUCTOR
COMMENTS:
STUDENT:
FACULTY:
STUDENT LEARNING GOALS:
01/15
N222 Clinical Evaluation
37
CLINICAL CHECKLIST FOR LABOR AND DELIVERY
OR LDRP ROOM
* NOTE – with supervision indicates RN or instructor must be present.
OBJECTIVES/SKILLS
1. Assess and document vital signs.
2. Monitor uterine contraction pattern.
3. Observe and assist with electronic fetal monitoring.
4. Begin to interpret monitoring patterns.
5. Assess patient’s pain level.
6. Assist patient with relaxation techniques for pain
management.
7. Perform basic comfort and hygiene measures.
8. Observe a vaginal delivery.
9. Observe a cesarean delivery.
10. Give medications with supervision – IVPB, IM SC, PO
routes. (not IV push)
11. Assist in setting up IV drip.
12. Start IV’s (after week 5-6) with supervision.
13. Perform urinary catheterization with supervision.
14. Follow the instructions of caregivers during a delivery.
15. Give neonatal medications with supervision.
16. Perform postpartal assessments and document.
17. Maintain confidentiality of patient and family.
18. Participate in the transfer of the patient to
the postpartal unit. (differs with agencies)
37
DATE COMPLETED
CLINICAL CHECKLIST FOR POSTPARTUM OR LDRP ROOM
DATE COMPLETED
OBJECTIVES/SKILLS
1. Admits a patient to the unit (differs with agencies).
2. Completes a physical and psychosocial assessment of a vaginal
delivery patient.
3. Completes a physical and psychosocial assessment of a cesarean
birth patient.
4. Monitors vital signs according to agency protocol.
5. Assists with breastfeeding and refers to lactation consultant as
necessary.
6. Instructs patient in self-care measures.
7. Administers routine and prn medications with supervision.
8. Monitors and documents IV therapy.
9. Completes appropriate documentation.
10. Participates in discharge instructions.
11. Maintains confidentiality of patient and family.
12. Reviews and updates care plan as needed.
37
CLINICAL CHECKLIST FOR NEWBORN CARE
DATE COMPLETED
OBJECTIVES/SKILLS
1. Completes a newborn admission assessment (varies with agency)
with supervision.
2. Completes a physical assessment/gestational assessment with
supervision.
3. Assesses maternal/neonate attachment.
4. Performs a neonatal bath.
5. Performs circumcision care.
6. Performs cord care.
7. Performs a neonatal heelstick for glucose.
8. Assists with feeding.
9. Completes appropriate documentation.
10. Administers newborn medication.
11. Participates in discharge instruction.
12. Participates in teaching normal newborn care.
13. Maintains patient and newborn confidentiality.
14. Maintains newborn safety in crib, during transport, positioning,
and accurate identification.
15. Follows agency protocol for neonate abduction prevention.
37
Nursing 222 Maternity Nursing Sample Math Test
1. The IV order is to infuse 1 Liter D5LR over the next 8 hours. After 4 hours, 250 ml are
remaining in the IV bag. How many hours AHEAD is the IV running? _________
2. The order reads: Loading dose of Magnesium Sulfate 4 Gms IV to be administered over a
20 minute period. On hand is a premixed bag of 20 Gm of magnesium sulfate in 500 ml of
H2O.
Set the rate and volume to be infused on the pump.
Rate __________
Volume ________
3. The order reads: Maintenance dose of Magnesium Sulfate 2 Gms per hour IV. Continue
this order using the same premixed bag of 20 Gm of magnesium sulfate in 500 ml of H2O
that you used in #2.
Set the rate and volume to be infused on the pump.
Rate __________
Volume ________
4. Calculate the rate and volume to be infused for an order reading Penicillin 5 million units
mixed in 250 mls of NS to be administered over 2 hour. Set the pump for rate _________
and volume to be infused ________________.
37
5. Cefazolin (Anecef) 1.5 GMS is ordered for a labor patient with a temperature of 39°C. To
reconstitute the drug, the directions indicate to add 10 mls to a 5 Gm. Powered vial for a
dosage of 5 GM= 10mls. Once reconstituted, how many mls would you add to a secondary
bag for a dosage of 1.5 Gms to be given NOW? _________
6. Morphine sulfate is ordered for a neonate weighing 3.2 kg for pain. The range for neonates
is 0.02 mg/kg to 0.05 mg/kg every 3-4 hours. Calculate the correct MAXIMUM dosage for
this neonate. _________
7. Terbutaline 250 mcg sc is ordered for a preterm labor patient. Available is 1mg/ml.
Calculate the correct amount to be given. _________
8. Methergine 200 mcg is ordered for a postpartum patient who is experiencing postpartum
hemorrhage. Calculate the correct amount to be given with available amount of 0.4 mg.ml.
_________
9. Pitocin 20 units in 1000 ml LR is ordered to run at 150 mls per hour. Using a drop factor of
10 gtts.ml, calculate the drop rate. _________
10. Celestone Soluspan (betamethasone) has an available dosage of 6 mg/ml. Prepare a 4mg
injection. _________
37
Answers:
1.
2 hours
2.
300mL/hr, 100mls
3.
50mL/hr, 400mLs
4.
125mL/hr, 250mLs
5.
3mLs
6.
0.16 mg
7.
0.25 mL
8.
0.5 mL
9.
25 drops per minute
10. 0.67mLs
37
CLINICAL
MAKEUP
37
PREPARED CHILDBIRTH CLASS
(Assignment or clinical makeup)
At the completion of this experience, the student will be able to:
1. Compare and contrast three different methods of childbirth education.
2. Demonstrate one relaxation exercise observed during the class.
3. Demonstrate one breathing technique used for labor purposes.
4. Identify two concerns about the labor and delivery process expressed by the participants in
the class.
5. Describe the teaching style used by the instructor.
ASSIGNMENT
Type the answers to the following questions and submit to your clinical instructor one week after the
experience.
1.
2.
3.
4.
5.
Summarize the similarities and differences among three methods of childbirth education.
Describe one breathing technique that you saw demonstrated in the class.
Describe one relaxation exercise that was practiced in the class.
List two concerns about labor and delivery that were expressed by class participants.
Describe one effective teaching technique used by the instructor.
N222ChecklistForL&D.doc
37
ASSIGNMENTS
37
MEDICATION ADMINISTRATION CARDS
OVER VIEW: The purpose of this assignment is to assist in the preparation of common
medications during the maternity rotation. Since the patient stay is so short,
preparation in advance is not practical or possible. A list of commonly ordered
medications from the clinical facilities has been compiled to facilitate your preparation
prior to administration.
See the next page for the list. You are required to prepare a medication card for each
medication listed. *Bring the cards to clinical each day. A preprinted card will be
accepted, provided that the required information is highlighted. Include or highlight
the following information
On each card:
Name (trade and generic)
Usual dosage
Routes of administration
Classification
Action *This needs to be specific to the maternity or neonatal patient. May need to
write in specific information for preprinted card.
Common side effects
Nursing considerations
In addition to using the drug handbook, be sure and use the textbook for any
maternity or neonatal considerations.
N222MedAdministrationCards.doc
37
Medication List
The following is a list of commonly prescribed medications used in the perinatal
areas. See directions on the previous page of the syllabus.
NEONATE
Bacitracin
EMLA
Erythromycin Ointment
Hepatitis B vaccine
Vitamin K
Narcan
Lidocaine
LABOR PATIENT
POSTPARTUM PATIENT
Ampicillin
Benadryl
Colace
Cefotetan
Demerol
Dulcolax
Ferrous sulfate
Gentamycin
Hemabate
Lortab
Lovenox
Magnesium Sulfate
Methergine
Morphine sulfate
Motrin
Norco
Nubain
Pepcid
Percodan
Phenergan
Reglan
Rhogam
Rubella vaccine
TDAP
Toradol
Tylenol with codeine
Vistaril
Benadryl
Betamethasone
Cytotec
Clindamycin
Fentanyl
Keflex
Lidocaine
Lobetalol
Magnesium Sulfate
Nifedipine
Pitocin
Penicillin
Prostaglandin gel (PG gel)
Reglan
Terbutaline
Vancomycin
Zofran
N222MedicationList
37
GUIDELINES FOR THE GRADED TEACHING PRESENTATION
The postpartum or mother/baby experience provides the best opportunity for teaching
the family. Teaching may be provided for a new mother and any other family members
present. Demonstration of a baby bath and return demonstration also works well with a
group of parents. The topic chosen must be approved by the clinical instructor. The
student will sign up for a date to teach at the beginning of the rotation, depending on his
or her schedule.
References:
American Academy of Pediatrics
Teaching manual or other materials at the clinical facility
Guidelines for the Graded Teaching Presentation.doc
37
POSTPARTUM TEACHING PRESENTATION AND PAPER
Grade: Competent/not yet competent
Directions: During this rotation, the student will be responsible for one formal teaching
presentation, which will be graded by the instructor. The student and clinical instructor will agree
on a date and topic. The student will show the instructor a brief outline, notes and objectives prior
to doing the presentation. Following the presentation, the student will type the formal paper to b e
submitted one week after the teaching session.
5 points = competency = 80% of total (4/5 pts) = Cr
I. DATA COLLECTION (0.5 points)
Patient Initials, Room Number, Age
Date and Time of Delivery
Gravida and Para
Maternal Complications: labor, delivery, postpartum
Neonatal Complications:
II. LEARNING NEEDS ASSESSMENT (0.5 points)
State here the subjective and objective evidence for learning needs.
(Example: patient statements, questions indicating knowledge deficit)
III. SUMMARY OF PLAN AND TYPE OF PRESENTATION (0.5 points)
Indicate topic presented and type of presentation
(Example: baby bath – demonstration and discussion of normal newborn
deviations, reflexes and needs.)
IV. OUTLINE OF CONTENT TO BE PRESENTED (0.5 points)
Summarize in a brief outline form the content of the presentation. Include and
attach a copy of any audiovisual materials given to the patient.
V. OBJECTIVES (0.5 points)
List at least THREE objectives which you hope you have accomplished by the end
of the presentation. Objectives must be stated in measurable form. (Example :
By the end of this presentation, the patient will be able to demonstrate the
correct technique for bathing a baby)
VI.
EVALUATION (0.5 points)
State to what the extent the objectives were accomplished
(Example : The patient was able to demonstrate the correct technique for bathing the
baby, but needs more practice.)
* Describe here how you might change your presentation the next time to better
accomplish the objectives
SEE NEXT PAGE FOR TEACHING PRESENTATION GUIDELINES
N222TeachingPresentation&Paper.doc
37
TEACHING PRESENTATION GUIDELINES
GRADING CRITERIA FOR THE PRESENTATION
STUDENTS ________________________________________
DATE _______________
TOPIC ____________________________________________
(POINTS POSSIBLE FOR PRESENTATION: 2 points)
Competent
1. Topic is appropriate for this
patient. Content is current,
accurate, and well prepared.
0.5
2. Presentation is clear, organized,
speed is well paced, notes can be
referred to, but not read.
Audiovisual aids are used.
0.5
3. Presentation is summarized by
briefly repeating key terms or
asking the patient key questions.
0.5
4. Presentation is completed within
10-15 minutes. (Time limit within
instructor’s discretion, considering
topic)
0.5
TOTAL STUDENT GRADE
_____
PAPER SECTION (3 points possible)
_____
PRESENTATION SECTION (2 points possible)
_____
37
Not Yet Competent
Neonatal Assessment
5 Points
DIRECTIONS: After completing a head-to-toe physical & gestational age assessment on a normal
newborn:
1. Use the Sim Chart to document your assessment including the required information regarding the
infant’s mother and delivery information. Use gestational age form (Dubowitz/Ballard) to
record gestational age assessment.
2. Complete the Maternal-Newborn Attachment Tool.
2. Type the remaining information required below.
3. Submit a paper copy of your Sim Chart record along with the Maternal-Newborn Attachment Tool,
the Gestational age form, and this paper to your clinical instructor ONE WEEK after
completing the newborn assessment.
The due date will vary with your clinical schedule.
Section A
Data Collection
1.
Complete a cover page, stating your name and date of assessment
2.
Newborn’s date of birth
3.
Newborn’s age at time of assessment (in hours or days)
4.
Type of delivery (vaginal vs surgical)-SIM CHART
5.
Apgar Score-SIM CHART
6.
List any complications if applicable-paper and/or SIM CHART
(Section A point value=0.5 points)
Section B
1.
Growth and Development
Weigh, measure, and assess the estimated gestational age using your clinical
agency’s growth chart and gestational age form. -SIM CHART, Dubowitz/Ballard
2.
State the length, head circumference, chest circumference, and weight-SIM CHART
3.
State if normal or abnormal according to the norms of the textbook.
4.
Compare the weight you obtained to the birth weight and explain any differences.
5.
Attach the completed gestational age assessment form to this section. State
whether the neonate is AGA, SGA, or LGA and whether the neonate is term,
preterm, or post-term.
(Section B point value=1 point)
37
Section C
1.
Physical Assessment and Vital Signs
List the vital signs for this neonate and state whether the vitals are within normal
limits for this neonate. -SIM CHART
2.
Complete head-to-toe assessment-use Newborn Patient Worksheet Assessment as
a guide.- SIM CHART
(Section C point value=2 points)
Section D
1.
Psychosocial Adaptation
Using the standard assessment tool on the next page, assess maternal-neonate
bonding of this neonate and mother. Attach the completed tool to this paper and
discuss your observations in a separate typed paragraph.
Example: the score was 7 because…which causes some concerns, etc.
-MATERNAL-NEWBORN ATTACHMENT TOOL
(Section D point value=0.5 points)
Section E
1.
2.
Medications
State the purpose of giving Erythromycin eye ointment to the neonate.
Describe the nursing implications, including dosage, routes, and any potential side
effects.- SIM CHART and paragraph
3.
State the purpose of giving Vitamin K to the neonate.
4.
Describe the nursing implications, including the dosage, route, and any side
effects.-SIM CHART and paragraph
(Section E point value=1 point)
37
MATERNAL‐NEWBORN ATTACHMENT TOOL
This tool is to be scored and attached to the newborn paper. Include an explanation
of your score and your impressions of the interaction.
1 point for each YES answer.
10 points maximum.
8‐10 = positive interaction
5‐7 = continue to observe
0‐4 = refer to health care provider
YES
MATERNAL BEHAVIORS
Appropriate touch‐fingertip‐palming‐
enfolding
Holds neonate in “enface” position + eye
contact
Refers to neonate by name (initially “it”,
but quickly a name)
Responds consistently to neonate cues or
signals
Expresses satisfaction with feeding
method (breast or formula)
NEONATE BEHAVIORS
+ eye contact – mutual gazing
“Tracks” face with eye movement
Grasps finger and holds on
Moves with synchronized movement in
response to parent’s voice
Comforted by parent’s voice or touch
Neonatal Assessment.doc
37
NO
LABOR AND DELIVERY PAPER
(Point Value = 5 points)
DIRECTIONS: Observe a birth, whether vaginal or surgical, collect the following data and answer
questions regarding the experience. Type the answers to the following questions and submit to
your clinical instructor one week after the experience.
PAGE NUMBER: 3‐5 pages
PART I
DATA COLLECTION (O.5 points)
DATE:
STUDENT NAME:
Patient Initials:
Marital Status:
Age:
Ethnic Background:
Gravida, Para:
Length of Labor:
Complications: (examples – long labor, premature membrane rupture, extensive lacerations,
etc.)
PART II
THE BIRTH EXPERIENCE (3.0 points)
Describe the experience in terms of the following:
1. Location of birth, type of birth attendants and support persons present (0.25 points)
2. Interaction of the laboring woman and her partner/coach with each other and with the
nursing/medical staff attending the birth. Discuss any cultural implications and their
significance to the birth. If not apparent implications, state this here.
(0.25 points)
3. a. The type of and effectiveness of analgesia/anesthesia and bearing‐down efforts used
(vaginal birth only) OR
b. The type of and effectiveness of analgesia/anesthesia used for a surgical delivery
(cesarean birth only)
(0.5 points)
4. Immediate care the delivered woman for the first 2 hours following the birth. Immediate
assessments, interventions, and their effectiveness. Be sure to include SPECIFIC
assessments and interventions on YOUR patient. This includes the various postpartal
checks which are routinely made on each patient.
(1 point)
5. Immediate care of the newborn for the first 2 hours following the birth. Include
assessments and interventions following the birth. Include SPECIFIC assessments and
interventions on your neonate.
(1 point)
37
PART III
MATERNAL MEDICATIONS (1 point)
1. List ALL maternal medications used during the labor and delivery process. Include
analgesics, anesthetics, labor induction agents, and any other medications.
2. List the following information for each medication:
a. Name
b. Classification
c. Action
d. Dosage
e. Route
f. Nursing interventions
Be SPECIFIC to your patient’s situation.
PART IV
SUMMARY (0.5 points)
In one paragraph, describe:
a. How was this birth unique?
b. How was care individualized based on this patient’s unique situation?
Labor & Delivery Paper.doc
37
OB/GYN CLINIC EXPERIENCE
OBJECTIVES:
At the completion of this experience, the student will be able to:
1. Summarize the essential components of a prenatal history.
2. Define common terminology used in the history taking process.
3. Describe physiological changes expected to occur in the various trimesters.
4. Observe teaching taking place during the visit, whether prenatal, postpartum, or
gynecological visit.
5. Identify laboratory or screening tests performed and list the significance of abnormal
results.
6. Observe a postpartal checkup.
7. Review a typical prenatal, postpartal, or gynecological chart.
8. Describe the risk screening process for prenatal patients,
9. Identify the process for a typical gynecological checkup visit.
10. Observe typical information given by the advice RN.
ASSIGNMENT
Complete the paper which BEST pertains to your observation in the outpatient setting. Type the
answers to the questions and submit to your instructor one week after the experience.
Some students will observe prenatal visits and others will observe gynecological visits.
Choose the assignment which best fits your observation.
PRENATAL PATIENT ASSIGNMENT
Choose one patient observed as a model for your answers.
1. Identify your patient’s age, trimester of pregnancy, gravida and para.
2. List five factors which may affect the course of pregnancy. Identify any factors which
may affect the course of THIS pregnancy as identified by the patient’s caregiver.
3. Define the following terms: primigravida, multigravida, primipara, and multipara.
Identify which of these terms apply to your patient.
4. Describe typical physiological changes observed in your patient. Include in your
description if these changes are appropriate for your patient’s trimester of pregnancy.
5. Describe any teaching needs observed during the visit and explain how you would meet
these needs.
6. List any laboratory tests performed on this patient and explain the significance of any
abnormal values.
7. If the opportunity arose to listen to advice given over the telephone, describe one
caller’s question and the advice given.
37
GYNECOLOGICAL VISIT
Assignment:
If your patient’s visit was primarily gynecological in nature, i.e., checkup, gynecological problem,
etc., complete this assignment INSTEAD of the prenatal one.
Using one patient as an example, type the answers to the following:
1. Identify your patient’s age, reason for visit, chief complaint, if applicable.
2. Define your patient’s medical diagnosis OR attach pertinent internet printout.
3. List risk factors for this medical diagnosis and identify those which apply to your
patient.
4. Describe signs and symptoms displayed by your patient.
5. If the visit is a routine checkup, describe any testing completed and explain the
rationale for the tests completed.
6. Describe teaching observed during the visit and explain how you would meet these
needs.
7. List any laboratory tests performed on this patient. Explain the significance of any
abnormal values.
8. Describe the treatment prescribed for this patient, if applicable, and explain its
rationale in this case.
Ob‐Gyn Clinic Experience.doc
37
NEONATAL LEVEL II NURSERY
OBSERVATIONAL EXPERIENCE
OBJECTIVES
After completing the observational experience, the student will be able to:
1. Compare and contrast the role of the registered nurse in the Level I nursery with
the role of the registered nurse in the Level II Nursery.
2. Define the medical diagnosis of one patient observed as well as the treatment that
the newborn received for that medical diagnosis.
3. Describe one piece of equipment that was used in the Level II Nursery that is not
routinely used in the Level I nursery.
4. Discuss the care of the newborn who is being fed by alternate means than breast or
bottle. From your research, identify appropriate laboratory studies and other
nursing interventions needed to ensure that the newborn is receiving adequate
nutrition.
5. Describe the reaction of one newborn who was observed during a painful
experience. Identify several nursing interventions which were utilized as comfort
measures.
ASSIGNMENT
During your assigned visit, focus your observations on the objectives as much as possible.
Type your answers to the above objectives and submit to your clinical instructor one week
after the experience.
37
NURSING 222 MATERNITY NURSING
Objectives and Assignment for Lactation Education Experience
OBJECTIVES
1. Identify common concerns of lactating women.
2. List appropriate interventions for common breastfeeding problems.
3. Describe assessment techniques used by the nurse when assisting lactating women.
4. Discuss the purpose and services provided by the lactation education center.
ASSIGNMENT
After observing rounds with the lactation educator and observing activities in the lactation
education center, type the answers to the following questions.
1. Describe the three most common concerns expressed by lactating women while on
rounds with the lactation educator.
2. Using one lactating woman as an example, describe the assessment and
intervention techniques utilized by the lactation educator. List any questions asked
by the woman and any problems discovered during the interaction.
3. Using printed patient teaching materials from the lactation center, list and define
three lactation problems which should be reported to the health care provider.
4. Briefly describe the services provided by the lactation center. Focus on client needs,
community needs, and the benefits for lactating women and their newborn.
Type and submit to the clinical instructor one week after the experience.
Attach any pertinent patient teaching materials.
37
Urology Objectives
1. Provide a brief description of one patient encounter, including patient age, procedure
performed and rationale for testing/procedure
2. List any laboratory tests performed on this patient. Explain the significance of any
abnormal values
3. Describe the role of the RN/MA during this patients encounter
4. Describe any patient learning needs and how the staff addressed them during the
procedure
5. Describe any follow-up instructions or needs anticipated for this patient following the
procedure
6. Describe the differences between a diagnostic vs. therapeutic procedure performed in
this department
Assignment
Complete the paper pertaining to your observation in the outpatient setting. Type the answers
to the questions and submit to your instructor one week after the experience.
1. Identify your patient’s age, reason for visit, chief complaint, if applicable.
2. Define your patient’s medical diagnosis OR attach pertinent internet printout.
3. List risk factors for this medical diagnosis and identify those which apply to your
patient.
4. Describe signs and symptoms displayed by your patient.
5. If the visit is a routine checkup, describe any testing completed and explain the
rationale for the tests completed.
6. Describe teaching observed during the visit and explain how you would meet these
needs.
7. List any laboratory tests performed on this patient. Explain the significance of any
abnormal values.
8. Describe the treatment prescribed for this patient, if applicable, and explain its
rationale in this case.
37
PERINATAL CULTURAL VARIATIONS
POST CONFERENCE ASSIGNMENT AND DISCUSSION
PURPOSE: The purpose of this assignment is to explore various cultural variations of the
perinatal experience and to share this information with students in your clinical group in a
postconference setting.
DUE DATE: To be determined by the clinical instructor.
ASSIGNMENT:
1. Choose a culture, either your own or one that particularly interests you.
2. Sign up with your clinical instructor to avoid duplication of culture.
3. Use the internet to access information regarding perinatal aspects of this culture.
4. On the assigned date, be prepared to discuss the following variations of the culture,
as applicable.
a. Prenatal care
b. Preparation for childbirth
c. Labor and birth
d. Postpartum self‐care
e. Nutrition ‐ prenatal, lactation
f. Role of grandparents and other extended family members
g. Breastfeeding
h. Circumcision
i. Other variations?
5. Submit your notes and internet printout to instructor on the assigned date.
37
ANTEPARTUM TESTING – OBSERVATIONAL EXPERIENCE
(Mills Peninsula Health Services only)
OBJECTIVES:
1. Describe the role of the RN and the Perinatologist in the antepartum testing unit.
2. List and describe three tests utilized in the antepartum testing unit to help assess fetal
well being.
3. Provide a brief description of one patient encounter, include the patient’s age,
gravida, para, gestational age and the rationale for testing.
4. Describe any patient learning needs and how the staff addressed them during the
visit.
Submit the typed answers to the above objectives one week after the visit
to your clinical instructor.
GESTATIONAL DIABETIC EDUCATION OBSERVATIONAL EXPERIENCE
OBJECTIVES:
Provide typed answers to these objectives to your clinical instructor one week
following the experience.
1. Define gestational diabetes.
2. List the population groups considered to be at high risk for gestational diabetes.
3. Describe the testing criteria utilized for pregnant patients.
4. Describe the current treatment regimes prescribed for pregnant patients.
5. Identify the primary learning needs of the newly diagnosed gestational diabetic
patient. Prioritize these needs and describe how you would teach this type of patient.
N222 Antepartum Testing.doc
37
N222 Maternity Nursing
Critical Thinking-Clinical Reflections
Purpose/Objective
Critical thinking is a process or way of thinking that enables the nurse to give his/her
clients the very best individualized care. It involves the use of the mind in forming conclusions,
making decisions, drawing inferences and reflecting. (Gordon, 1995). The purpose of this
assignment is to develop your critical thinking skills in the area of reflection. Reflection is
defined as the process of thinking back or recalling an event to discover the meaning and purpose
of that event. (Miller and Babcock, 1996). As a student nurse, reflecting back on a clinical
experience or an interaction with a professional colleague can help you to understand how the
concepts from the theory class apply to the lab or the hospital experience. It also helps you to
evaluate yourself and to think about how you might have handled the situation differently to
achieve a more satisfying outcome.
Keeping a clinical journal is a way to develop your critical thinking skills in the area of
reflection. The journal should reflect your attitudes, feelings and actual learning experiences as
you enter the clinical setting of this course for the first time. As you progress in this semester of
the nursing program, this journal will be a resource for you to look back at significant
experiences and gain insight into your own professional development as a critical thinker. You
will be able to see the transition now from your novice beginnings in N211 through the specialty
areas of pediatric and maternity nursing.
ASSIGNMENT:
At the end of each week of clinical, starting with the first week of preclinical lab and
orientation, select an experience from either clinical day that stimulated your thinking. This
experience could be an interaction with a client, a family, a fellow student, or a teacher. For
each entry, cover the following four aspects:
1. Describe as completely as possible what happened and what you did.
2. Describe what you were thinking at the time, why you decided to do what you did or say
what you said.
3. Describe what you would do differently the next time when encountering a similar
incident or experience.
4. Describe your strengths and weaknesses in dealing with this particular situation. What
were you feeling?
Your clinical teacher will provide time at the end of the postconference for you to make your
entries. The completed journal will be due to the clinical teacher on the last day of clinical. It
is the teacher’s option to review the journal each week or at the end of the eight weeks.
GRADING: This is a required credit/no credit assignment for completion of N222.
37
EVALUATION SESSION TO DEMONSTRATE COMPETENCY
Purpose: Competency has been integrated into the curriculum to set
standards for student achievement. Nursing students need to be prepared
to perform at an entry level of practice. We are attempting to simulate
different evaluation situations and conditions that are similar to those
which you will encounter as a professional nurse. The purpose of the
evaluation session is to evaluate your performance of a situationally based
skill under testing conditions at a competent level. Competency is defined
as the correct performance, in the designated order the behaviors from
memory within a set time period.
Learning Objective: At the completion of the evaluation session, the student
will correctly administer a medication using the intramuscular route to a
neonate under testing conditions.
Skill: Administration of an intramuscular injection to a neonate.
Time Period: 15 minutes
Procedure: You must pass the medication administration test and the
evaluation session before administering an intramuscular injection to a
neonate. You will need to be determined competent in this section no later
than the end of the third week of class. Initial skill practice will take place
during the 225 lab. Additional practice time is available during open lab
hours. Sign up for a time slot with your clinical instructor.
Competency will be determined by a checklist (which will be distributed to
all students) and completion within the timeframe. You will have more than
one opportunity to achieve competency. If you have questions regarding the
skill, be sure to clarify with the clinical or skills lab instructor.
37
1. Checks doctor’s orders. Release order, see MAR
2. Obtains equipment-alcohol swabs, 1 mL syringe with needle
medication, 2x2 gauze, and gloves.
3. Pull medication from Pyxis and check-name, dosage, and
expiration date-1st check.
4. Washes hands.
5. Performs 2nd medication check and wipes top of medication
vial with alcohol.
6. Injects dosage amount of air into the vial.
7. Inverts vial and withdraws correct amount of dosage.
8. Expels any air bubbles.
9. Checks dosage for accuracy. Verbalizes correct dosage.
10. Takes all equipment to the crib side.
11. Identifies neonate by checking ID band. Perform 3rd med
check.
12. Applies gloves and identifies correct site.
13. Grasps leg with non-dominant hand and simulates bunching of
muscle.
14. Cleanse the site with alcohol or facility-preferred agent.
15. Verbalizes correct angle of injection
16. Applies pressure with 2x2 gauze pad until no further bleeding.
17. Disposes of syringe in sharps container.
18. Rewraps neonate snugly.
19. Removes all equipment and gloves.
20. Washes hands.
21. Documents correctly using present time and date
22. States I am finished and accesses instructor.
Competent _________________________
Not yet competent ___________________
37
No
N222SkillsChecklist.doc-2016
Partner Check Off #4
Yes
Partner Check Off #3
N222 COMPETENCY
SKILLS CHECKLIST: Administration of an
intramuscular injection to an infant.
Partner Check Off #2
Time allotted: 15 minutes
Partner Check Off #1
Name: ___________________________Date:_____________
Download
Related flashcards
Security
27 cards
Surgery
42 cards
Alternative medicine
24 cards
Security
37 cards
Surgery
45 cards
Create Flashcards
|
__label__pos
| 0.551928 |
Pedal Power Showdown: Alloy Cranks vs. Carbon – Which is Right for You?
When it comes to choosing the right cranks for your bike, there are a lot of factors to consider. One of the most significant decisions is whether to go with alloy or carbon cranks. Both materials have their advantages and disadvantages, and it’s essential to understand the differences to make an informed decision.
Alloy cranks are made from a blend of metals, typically aluminum, and are the most common type of cranks on the market. They are known for their durability, affordability, and ease of maintenance. Carbon cranks, on the other hand, are made from carbon fiber, a lightweight and strong material that is commonly used in high-performance applications. Carbon cranks are known for their stiffness, weight savings, and vibration dampening properties.
Key Takeaways
• Alloy cranks are durable, affordable, and easy to maintain.
• Carbon cranks are lightweight, stiff, and have vibration dampening properties.
• The choice between alloy and carbon cranks ultimately comes down to personal preference and the intended use of the bike.
Understanding Alloy Cranks
Composition of Alloy Cranks
Alloy cranks are made from a combination of metals, usually aluminum, magnesium, and/or zinc. These metals are melted down and cast into molds to create the shape of the crank. The resulting alloy is then heat-treated to increase its strength and durability.
Aluminum is the most common metal used in alloy cranks. It is lightweight, strong, and resistant to corrosion. Magnesium is also used in some alloy cranks because it is even lighter than aluminum, but it is more expensive and can be more brittle. Zinc is sometimes added to alloy cranks to improve their strength and durability.
Benefits of Alloy Cranks
One of the biggest benefits of alloy cranks is their affordability. They are generally less expensive than carbon cranks, which makes them a popular choice for budget-conscious riders. Alloy cranks are also more durable than carbon cranks and can withstand more abuse without cracking or breaking.
Alloy cranks are also easier to maintain than carbon cranks. They are less likely to get damaged during transport or storage, and they can be repaired if they do get damaged. Carbon cranks, on the other hand, are more fragile and can be more difficult to repair if they are damaged.
Another benefit of alloy cranks is their versatility. They can be used for a variety of riding styles, from road cycling to mountain biking. They are also available in a wide range of sizes and styles, so you can find the perfect crank for your bike and your riding style.
Overall, alloy cranks are a great choice for riders who want a durable, affordable, and versatile crankset that can handle a variety of riding styles and conditions.
Understanding Carbon Cranks
Composition of Carbon Cranks
Carbon cranks are made from a combination of carbon fiber and resin. The carbon fiber is a strong, lightweight material that is used to make the crank arm, while the resin is used to bond the fibers together. The resulting composite material is both lightweight and strong, making it a popular choice for high-performance bike components.
The manufacturing process for carbon cranks involves laying up the carbon fibers in a specific pattern, then coating them with resin. The resulting composite material is then cured in an oven to create a solid, rigid structure. The pattern of the carbon fibers can be adjusted to provide different levels of stiffness and strength, which allows manufacturers to tailor the cranks to specific applications.
Advantages of Carbon Cranks
One of the biggest advantages of carbon cranks over alloy cranks is their weight. Carbon cranks are significantly lighter than alloy cranks, which can help reduce the overall weight of your bike. This can make a big difference in terms of performance, particularly when it comes to climbing and acceleration.
Another advantage of carbon cranks is their stiffness. Carbon fiber is an incredibly stiff material, which means that carbon cranks are able to transfer power more efficiently than alloy cranks. This can result in a more responsive and efficient ride, particularly when you’re putting down a lot of power.
Finally, carbon cranks are also more durable than alloy cranks. While alloy cranks can be prone to cracking and breaking over time, carbon cranks are able to withstand more punishment without failing. This means that they can last longer and require less maintenance over time.
Overall, carbon cranks are a great choice for anyone looking to improve the performance of their bike. They offer a number of advantages over alloy cranks, including reduced weight, increased stiffness, and improved durability. If you’re looking to upgrade your bike, carbon cranks are definitely worth considering.
Comparative Analysis
When it comes to choosing between alloy cranks and carbon cranks, there are several factors that you need to consider. In this section, we will provide a comparative analysis of these two types of cranks to help you make an informed decision.
Durability Comparison
One of the most important factors to consider when choosing cranks is durability. In general, alloy cranks are more durable than carbon cranks. Alloy cranks can withstand more impact and abuse, and they are less likely to crack or break under stress. Carbon cranks, on the other hand, are more prone to damage from impacts and may crack or break if subjected to excessive stress.
Weight Comparison
Another important factor to consider is weight. Carbon cranks are generally lighter than alloy cranks. This can be an advantage for riders who are looking to shave off weight from their bikes. However, the weight difference between the two types of cranks may not be significant enough to make a noticeable difference in performance for most riders.
The table below compares the weight of alloy and carbon cranks:
Crank TypeWeight
Alloy700g
Carbon485g
Stiffness Comparison
Stiffness is another factor that can affect performance. In general, carbon cranks are stiffer than alloy cranks. This means that they transfer power more efficiently from the pedals to the chainring, resulting in better acceleration and more responsive handling. However, some riders may prefer the slightly more forgiving feel of alloy cranks, especially on rough terrain.
In conclusion, both alloy and carbon cranks have their advantages and disadvantages. Alloy cranks are generally more durable, while carbon cranks are lighter and stiffer. Ultimately, the choice between the two types of cranks will depend on your personal preferences and riding style.
Application Scenarios
Alloy Cranks in Racing
Alloy cranks are an excellent choice for racing scenarios. They are lightweight, durable, and can handle high power output. The stiffness of the alloy cranks makes them ideal for sprinting and accelerating quickly. They are also cost-effective, making them a great option for those on a budget.
In addition to their performance benefits, alloy cranks are also easy to maintain. They do not require special care or attention, and they can withstand harsh riding conditions. This makes them an ideal choice for racing scenarios where time is of the essence, and you need to focus on your performance rather than maintenance.
Carbon Cranks in Mountain Biking
Carbon cranks are an excellent choice for mountain biking scenarios. They are lightweight, which makes them ideal for climbing and maneuvering through technical terrain. The stiffness of carbon cranks also provides excellent power transfer, making them ideal for aggressive riding styles.
Carbon cranks are also known for their durability. They can withstand harsh riding conditions, such as rocks, roots, and drops. This makes them an ideal choice for mountain biking scenarios where you need a crankset that can handle whatever the trail throws at you.
However, carbon cranks do require special care and attention. They cannot handle the same level of abuse as alloy cranks, and they can be more expensive. If you are a serious mountain biker who wants the best performance and is willing to invest in your equipment, carbon cranks are an excellent choice.
Overall, the choice between alloy and carbon cranks depends on your specific needs and riding style. Alloy cranks are ideal for racing scenarios, while carbon cranks are ideal for mountain biking scenarios. Consider your budget, riding style, and maintenance preferences when choosing between the two.
User Preferences
When it comes to choosing between alloy and carbon cranks, user preferences play a significant role. Some riders prefer the traditional look and feel of aluminum, while others prefer the futuristic and modern look of carbon. Here are a few factors that may influence your preference:
Aesthetics
Some people prefer the aesthetics of carbon components and buy them for the looks rather than functional quality. Carbon cranks can give your bike a high-end look, while aluminum cranks may look more traditional. If you’re someone who values aesthetics, carbon cranks may be the way to go.
Stiffness
Some riders claim that carbon cranks are stiffer and thus result in a more efficient transfer of power. However, this claim is not universally accepted, and some riders may not notice a significant difference in stiffness between alloy and carbon cranks. If you’re someone who values stiffness, you may want to consider carbon cranks.
Weight
Weight is another factor that may influence your preference. Carbon cranks are generally lighter than aluminum cranks, which can make a significant difference in the overall weight of your bike. However, the weight difference may not be significant enough to justify the higher cost of carbon cranks. If you’re someone who values weight, you may want to consider carbon cranks.
Durability
Durability is another factor to consider when choosing between alloy and carbon cranks. Carbon cranks can be more brittle than aluminum cranks and may be more prone to damage from impacts. However, carbon cranks are generally more resistant to corrosion than aluminum cranks. If you’re someone who values durability, you may want to consider aluminum cranks.
Ultimately, the choice between alloy and carbon cranks comes down to personal preference. Consider your priorities and choose the option that best meets your needs.
Final Thoughts
When it comes to choosing between alloy and carbon cranks, there are several factors to consider. Both materials have their pros and cons, and the right choice for you will depend on your personal preferences and riding style.
If you’re looking for a lightweight option, carbon cranks are the clear winner. On average, carbon cranks are 27% lighter than their alloy counterparts, making them a popular choice among competitive riders. However, if you’re a recreational rider, the weight difference may not be as significant.
One advantage of alloy cranks is their durability. Alloy is a strong and reliable material that can withstand heavy use and abuse. Carbon, on the other hand, can be more fragile and prone to damage from impacts or over-tightening.
Another factor to consider is cost. Carbon cranks are generally more expensive than alloy cranks, and may not be worth the investment for casual riders. However, if you’re looking for the best performance and are willing to pay a premium, carbon cranks may be the way to go.
Ultimately, the choice between alloy and carbon cranks comes down to personal preference and priorities. If you prioritize weight and performance, carbon cranks may be the better choice. If you prioritize durability and affordability, alloy cranks may be the way to go.
Leave a Comment
RSS
Follow by Email
|
__label__pos
| 0.853694 |
Search Images Maps Play YouTube News Gmail Drive More »
Sign in
Screen reader users: click this link for accessible mode. Accessible mode has the same essential features but works better with your reader.
Patents
1. Advanced Patent Search
Publication numberUS4121018 A
Publication typeGrant
Application numberUS 05/712,505
Publication dateOct 17, 1978
Filing dateAug 9, 1976
Priority dateAug 9, 1976
Publication number05712505, 712505, US 4121018 A, US 4121018A, US-A-4121018, US4121018 A, US4121018A
InventorsMeer Danilovich Kocherginsky, Lidia Fedorovna Penkova, Viktor Arsenievich Naumenko
Original AssigneeMeer Danilovich Kocherginsky, Lidia Fedorovna Penkova, Naumenko Viktor
Export CitationBiBTeX, EndNote, RefMan
External Links: USPTO, USPTO Assignment, Espacenet
Positive electrode for air-depolarized alkaline primary cell with thickened electrolyte
US 4121018 A
Abstract
A positive electrode for an air-depolarized alkaline primary cell comprising a catalyst for reduction of air oxygen, which catalyst is manganese dioxide, as well as carbon and a hydroxide solution of potassium, and is characterized, according to the invention, in that said catalyst is electrolytic or synthetic manganese dioxide of γ-modification.
The positive electrode of this invention can be employed in cylindrical and disc-type cells and alkaline electrolyte batteries.
Images(1)
Previous page
Next page
Claims(3)
What is claimed is:
1. In an air-depolarized alkaline primary cell comprising a positive electrode and a thickened electrolyte; the improvement, whereby said cell exhibits improved storage life and service life, comprises a positive electrode formed of an air oxygen reduction catalyst, carbon, and an alkaline solution; said air oxygen reduction catalyst being manganese dioxide of γ-modification, and the mass composition of said electrode, in terms of percentage by weight, is as follows:
______________________________________electrolytic or synthetic manganesedioxide of γ-modification 33 to 67carbon 12 to 40potassium hydroxide solution 20 to 28______________________________________
2. A positive electrode as claimed in claim 1, whose porosity is 8.5 to 40 percent.
3. A positive electrode as claimed in claim 1, including manganese ore in said mass composition, with the ratios of said electrolytic or synthetic manganese dioxide of γ-modification and said manganese ore being, respectively, in terms of percentages by weight, of from 16 to 33 and from 17 to 34, and with the weight percent of the other ingredients remaining the same.
Description
The present invention relates to primary alkaline cells intended for conversion of chemical energy into electric energy and, more particularly, to a positive electrode for an air-depolarized alkaline primary cell with thickened electrolyte.
The electrochemical reaction which takes place in air-depolarized cells with thickened electrolyte and an anode of zinc is well known. In fact, it may be called electrochemical "burning" of zinc according to this equation:
2Zn + O2 →2ZnO (1)
as the cell is discharging, air oxygen is adsorbed by the cathode and ionized on the three-phase catalyst-electrolyte-gas boundary. In order to accelerate the processes of adsorption and ionization of oxygen, use is made of catalysts comprising different elements, alloys and oxides or their mixtures.
There are known two types of positive air-depolarized electrodes used in cells with thickened electrolyte. In the first type, the developed three-phase catalyst-electrolyte-gas boundary, which is necessary for the oxygen ionization process, is produced by rendering catalyst particles hydrophobic, for example, with the aid of polytetrafluoroethylene. Using a hydrophobic catalyst is applied onto a thin porous diaphragm of polytetrafluoroethylene (fluorineplastic), through which air oxygen diffuses into the reaction zone, the diaphragm plays the role of a layer which prevents the penetration of the electrolyte to the gas side of the electrode.
Cells employing hydrophobic electrodes on the basis of fluorineplastic have a high specific energy (up to 250 watt-hours per kilogram) and can operate at current densities of up to 50 ma/cm2. Yet such cells have an important drawback which resides in that their capacity is reduced after long intermittent discharges. The storage life of such cells in the usable condition, i.e. without a sealed case, is limited and is about only 3 to 4 months. The cause of reducing the cell's capacity after long discharges and in the course of storage is the penetration of air through the thin porous diaphragm to the thickened electrolyte, its subsequent carbonization and oxidation of the zinc by air oxygen. Another disadvantage of cells with hydrophobic electrodes is their comparatively high price which, in turn, is due to high prices of the catalysts and fluorineplastic.
There are also known positive air electrodes, wherein the three-phase catalyst-electrolyte-gas boundary is produced by impregnating the cathodes with a limited quantity of electrolyte. In this case the electrodes do not have to be rendered hydrophobic. USSR Inventor's Certificate no 117,837 describes a hydrophilic air electrode which comprises an oxygen reduction catalyst in the form of manganese which comprises an oxygen reduction catalyst in the form of manganese ore (MnO2 of β-modification), carbon and an alkaline solution. The use of this type of electrode in air-depolarized cells with thickened alkaline electrolyte ensures a specific energy of up to 250 watt-hours per kilogramme. Yet such electrodes can only operate at current densities of not more than 5 ma/cm2.
It is an object of the present invention to provide a positive air-depolarized electrode with thickened electrolyte, wherein the catalyst is manganese dioxide of γ-modification and which can operate at elevated current densities.
It is another object of the invention to provide cells with the novel positive electrode, which cells have improved storage life and further retain their capacity after long intermittent discharges.
It is still another object of the invention to provide inexpensive positive air electrodes whose manufacture does not require any noble metals or scarce and costly materials.
The foregoing and other objects of the invention are attained by providing a positive electrode for an air-depolarized alkaline primary cell with thickned electrolyte, comprising an air oxygen reduction catalyst in the form of manganese dioxide, as well as carbon and an alkaline solution, in which electrode the catalyst, according to the invention, is electrolytic or synthetic manganese dioxide of γ-modification.
In order to manufacture the proposed type of electrode, it is expedient that use should be made of mass with the following content of the above-mentioned ingredients in terms of percentage by weight:
______________________________________electrolytic or synthetic manganesedioxide (MnO2) of γ-modification 33 to 67carbon 12 to 40potassium hydroxide solution 20 to 28.______________________________________
In order to reduce the cost of the electrode, the latter can be manufactured from a mass containing an addition of manganese ore, the ratio between the ingredients, in terms of percentage by weight, being as follows:
______________________________________electrolytic or synthetic manganesedioxide of γ-modification, 16 to 33manganese ore 17 to 34carbon 12 to 40potassium hydroxide solution 20 to 28.______________________________________
It is desirable that the electrode's porosity should be 8.5 to 40 percent.
The present invention is based upon the discovery of the following effect. In the course of a discharge of a positive electrode comprising electrolytic or synthetic manganese dioxide of γ-modification, and when there is an access of air oxygen to the electrode, the electrode's potential is first rapidly reduced, but then is stabilized at 1.1 V (measured against a zinc reference electrode). In the course of a discharge of such electrodes, oxygen regeneration has been found to take place, as shown in the following equation:
4MnO(OH) + O2 →4MnO2 = 2H2 O (2)
the discharge voltage of alkaline cells incorporating an electrode of the present invention is on the average 0.25 V higher than that of cells wherein manganese ore (MnO2) of β-modification is used as the catalyst.
In addition, freshly manufactured cells of the proposed type have an increased capacity which amounts to 2,400 minutes. They have a longer storage life and are effective despite long intermittent discharges.
The positive electrode of the present invention can be used in cylindrical and disc-type cells and batteries.
Other objects and advantages of the invention will be better understood from the following examples taken with reference to the accompanying drawing which shows three discharge curves.
EXAMPLE 1
There were manufactured three versions of a cell of the R6 type (the designation is given in accordance with the standards of the International Electrotechnical Commision), having a diameter of 14 mm and a height of 49 mm.
In the first cell, the positive electrode was made from mass with the following ratio of its ingredients in terms of percentage by weight:
______________________________________electrolytic manganese dioxide (MnO2)of γ-modification 67acetylene black 12,7potassium hydroxide solution withdensity of 1.5 20.3.______________________________________
In the electrolyte of the second cell, the ratio of the ingredients of the positive electrode, in terms of percentage by weight, was as follows:
______________________________________electrolytic manganese dioxideof γ-modification 33acetylene black 15activated carbon 25potassium hydroxide solution,density 1.5 27.______________________________________
In the third (control) cell, the positive electrode was made from conventional mass with the following ratio between the ingredients, in terms of percentage by weight:
______________________________________manganese ore 33acetylene black 15activated carbon 25potassium hydroxide solution,density 1.5 27.______________________________________
The taps of the positive electrode were made of nickel-plated steel.
The negative electrode was made from powdered zinc. The electrolyte was thickened with starch and flour.
When discharging, the positive electrodes were in communication with air.
The cells were discharged into a 5-ohm fixed resistor during 10 minutes every day. The current density at the positive electrode was 20 ma/cm2. The discharge curves are shown in the attached drawing. Curves 1, 2 and 3 are representative of the first, second and third cell versions, respectively.
The discharge curves show that the use of electrolytic manganese dioxide as the air oxygen reduction catalyst raises the mean discharge voltage by about 0.25 V, as compared to cells with manganese ore (MnO2 of β-modification). As a result, the catalyst of electrolytic manganese dioxide provides for a 600-minute discharge of 0.75 V, as compared to 50 minutes in the case of control cells with manganese ore. Measurements of the positive electrode potential of the cells with electrolytic manganese dioxide showed that in the course of a discharge, the electrode potential was never less than 1.1 V, i.e. in the presence of air oxygen there takes place insignificant reduction of the electrolytic manganese dioxide.
EXAMPLE 2
Batteries of the 3R 12 type (the designation is given in accordance with the standards of the International Electrotechnical Commission), each composed of three cells, were produced. The batteries were composed of disc-type cells according to U.S. Pat. No. 3,607,429 of Sept. 21, 1971). The composition of the positive electrode was as that of Example 1 (first cell version). The porosity of the positive electrode was 8.5 percent. The batteries were discharged into a 15-ohm fixed resistor during 30 minutes a day. The cut-off (end-point) voltage was 2.7 V. The freshly manufactured batteries worked for 2,400 minutes. After being stored for one year, the service life was as long as 2,100 minutes. Being intermittently discharged during 10 minutes a day to each a cut-off (end-point) voltage of 2.7 V, the batteries had worked for 1,800 minutes. It may be noted for the sake of comparison that the best batteries of the 3R 12 type employed at present and using salt electrolyte have a service life of 600 minutes. This example shows that the proposed low-porosity positive electrode reliably protects the thickened alkaline electrolyte from carbonization and prevents the penetration of air oxygen to the zinc over a prolonged period of time.
EXAMPLE 3
Batteries of the 3R 12 type were produced. The mass of the positive electrode was of the following composition, in terms of percentage by weight:
______________________________________electrolytic manganese dioxideof γ-modification 33manganese ore (from the Caucasusmanganese ore deposit) 34acetylene black 12.7potassium hydroxide solution,density, 1.5 20.3.______________________________________
The batteries were discharged into a 15-ohm fixed resistor during 30 minutes a day to reach a cut-off (end-point) voltage of 2.25 V. As in Example 2, the service life of the batteries amounted to 2,400 minutes, yet the mean discharge voltage of these batteries was lower by 0.20 V, as compared to Example 2.
EXAMPLE 4
Cells of the R20 type (the designation is given in accordance with the standards of the International Electrotechnical Commission) were produced. The positive electrodes were made of the mass of Example 1 (first cell version). Being discharged into a 5-ohm fixed resistor during 30 minutes a day, the cells had a capacity of 25 ampere-hours, which is 5 times as high as that of cells with salt electrolyte, and 2 times as high as that of sealed manganese-zinc alkaline cells.
The positive electrodes used in these cells are of low porosity (8.5 percent), so ionization of oxygen in the cells occurs as in the equation:
4MnO(OH) + O2 →4MnO2 + 2H2 O (2)
example 5
batteries of the 6F 22 type (the designation is given in accordance with the standards of the International Electrotechnical Commission) were manufactured, each being composed of six series connected disc-type cells. The composition of the positive electrode was as that of Example 1 (the second cell version). The porosity of the electrode was 40 percent. When discharged into a 900-ohm resistor during 4 hours a day to reach a cut-off (end-point) voltage of 5.4 V, the batteries are operable for 120 hours; when discharged into a 180-ohm, resistor for one hour a day, the batteries are in good working condition for 24 hours. It must be noted for comparison that the service life of the currently popular batteries with salt electrolyte is 3 to 4 times less.
Patent Citations
Cited PatentFiling datePublication dateApplicantTitle
US1760090 *Jul 6, 1928May 27, 1930Burgess Battery CoDry cell
US1899615 *Aug 10, 1925Feb 28, 1933Nat Carbon Co IncAir-depolarized primary battery
US3242013 *Aug 17, 1962Mar 22, 1966Knapsack AgDry cell batteries with a pyrolusite depolarizing agent
US3716411 *Jan 29, 1971Feb 13, 1973Matsushita Electric Ind Co LtdRechargeable alkaline manganese cell
US3902922 *Jul 18, 1974Sep 2, 1975Union Carbide CorpConductive coated vented cathode collector for thin flat cells
Referenced by
Citing PatentFiling datePublication dateApplicantTitle
US4312930 *Aug 25, 1980Jan 26, 1982Union Carbide CorporationMnO2 Derived from LiMn2 O4
US4333993 *Jul 10, 1981Jun 8, 1982Gould Inc.Air cathode for air depolarized cells
US4585710 *Apr 22, 1985Apr 29, 1986Duracell Inc.Zinc/air cell cathode
US4595643 *Jul 21, 1983Jun 17, 1986Matsushita Electrical Industrial Co., Ltd.Air cell electrode and process for preparing its catalyst
US5079106 *Feb 9, 1990Jan 7, 1992Eveready Battery Company, Inc.Air assisted alkaline cells
US5279905 *Mar 9, 1992Jan 18, 1994Eveready Battery Company, Inc.Miniature zinc-air cell having an indium plated anode cupe
US5489493 *Jun 7, 1995Feb 6, 1996Eveready Battery Company, Inc.Alkaline manganese dioxide cell
US6207322Nov 16, 1998Mar 27, 2001Duracell IncAlkaline cell with semisolid cathode
US6428931 *Aug 15, 2000Aug 6, 2002Aer Energy Resources, Inc.Methods for making oxygen reduction catalyst using micelle encapsulation and metal-air electrode including said catalyst
US6444364May 2, 2000Sep 3, 2002The Gillette CompanyHigh performance battery
US6833217May 22, 2002Dec 21, 2004Duracell Inc.Battery cathode
US7718319Sep 25, 2007May 18, 2010Board Of Regents, The University Of Texas SystemCation-substituted spinel oxide and oxyfluoride cathodes for lithium ion batteries
US8722246Apr 1, 2010May 13, 2014Board Of Regents Of The University Of Texas SystemCation-substituted spinel oxide and oxyfluoride cathodes for lithium ion batteries
US20030057967 *Dec 19, 2001Mar 27, 2003Lien Wee LiangCircuit for measuring changes in capacitor gap using a switched capacitor technique
US20030079337 *May 22, 2002May 1, 2003Duracell Inc.Battery cathode
EP0441592A2 *Feb 5, 1991Aug 14, 1991Eveready Battery Company, Inc.Air-assisted alkaline cells
WO2002089239A2 *Apr 16, 2002Nov 7, 2002The Gillette CompanyBattery with an oxidized carbon cathode
WO2002089239A3 *Apr 16, 2002Dec 4, 2003Gillette CoBattery with an oxidized carbon cathode
Classifications
U.S. Classification429/405, 429/224
International ClassificationH01M12/06, H01M4/00, H01M4/50
Cooperative ClassificationH01M12/06, H01M4/00, H01M4/50
European ClassificationH01M4/00, H01M4/50, H01M12/06
|
__label__pos
| 0.512331 |
next up previous
Next: Solution Techniques Up: CSE 2320: Algorithms and Previous: Solution Techniques
Example: Fibonacci
Fibonacci(n)
$\;\;\;\;\;$if n < 2
$\;\;\;\;\;$then return n
$\;\;\;\;\;$else return Fibonacci(n-1) + Fibonacci(n-2)
T(n) = \(\left\{ \begin{array}{ll}
\Theta(1) & {\rm if} \; n < 2 \\
T(n-1) \; + \; T(n-2) \; + \; \Theta(1) & {\rm if} \; n >= 2
\end{array} \right.\)
Copyright © 1998 The University of Texas at Arlington
|
__label__pos
| 0.985801 |
Shingles, or herpes zoster, is an extremely common viral infection. It may cause a range of symptoms, including a skin rash and fatigue. Some people may feel tired even after the shingles rash has cleared.
The same virus that causes chickenpox, the varicella-zoster virus (VZV), also causes shingles. After a person recovers from chickenpox, VZV remains in their body. If the virus reactivates, it can trigger shingles later in life.
This article discusses the symptoms of shingles, causes of fatigue after recovering from shingles, ways to manage shingles fatigue, and when to speak with a healthcare professional.
A person rubbing their right eye as if they are tired 1Share on Pinterest
Johner Images/Getty Images
Shingles often starts with several days of itching, tingling, or pain at the future site of the rash. The virus may then trigger the following symptoms:
• a tender, painful skin rash, usually on one side of the body or in a single stripe on one side of the face
• vision loss if a shingles rash affects the eye
• stomach problems
• chills
• headache
A person may also experience tiredness, or fatigue, while they have shingles.
Learn more about shingles symptoms.
In some cases, even after the shingles rash resolves, complications may develop that cause a person to experience fatigue and other symptoms.
Postherpetic neuralgia (PHN)
PHN causes at least 3 months of burning, aching, or throbbing nerve pain after a person’s shingles rash heals. It develops in the original area of the rash.
The condition develops in 10–18% of people who have shingles, and the risk increases with age. Older adults with shingles have a higher risk of longer lasting, severe PHN, while those who develop shingles under 40 years of age often do not often experience PHN at all.
Not everyone who develops PHN will experience fatigue. However, those with especially long lasting or painful PHN may experience:
These complications can last for months or years. However, they eventually resolve in most people.
Chronic fatigue syndrome (CFS)
CFS, which people may also call myalgic encephalomyelitis (ME), is a condition that causes disabling, constant tiredness. A doctor may diagnose CFS if:
• A person has felt this level of fatigue for more than 6 months.
• The tiredness has not developed from high energy activities, is not lifelong, and does not resolve after sleep.
• Fatigue is causing cognitive problems, such as issues with memory and attention.
• Symptoms worsen when a person assumes or maintains an upright posture.
It is not clear what causes CFS, but the Centers for Disease Control and Prevention (CDC) have identified immune system changes, stress, infections, genetics, and changes in the way the body makes energy as possible causes.
The CDC reports that symptoms qualifying for CFS diagnosis develop in around 1 in 10 people who contract Ross River virus, Epstein-Barr virus, or Coxiella burnetti.
Some researchers have looked into a connection between shingles and CFS.
An older 2009 review suggested a possible link resulting from VZV’s ability to remain inactive in nerve cells after a person experiences chickenpox. Another older study from 2014 found that a group of 9,205 people with previous experience of shingles had a higher rate of CFS than a group of 36,820 people who had never had shingles.
However, studies investigating whether shingles may trigger CFS are limited, and further research is necessary.
People who experience fatigue due to shingles may try the following tips to reduce the symptom:
• Make a sleep routine, but start by restricting sleep: Try to sleep for a set amount of time at a specific time. While pain may make falling asleep difficult, restricting sleep to 4 hours at first might help someone spend the entire time in bed sleeping. A person may gradually increase sleep by 15–30 minutes per night until they start to feel more rested.
• Eat low glycemic index foods: This category includes foods such as whole grains, vegetables with high fiber content, nuts, and healthy oils. These foods release energy slowly, which may help prevent energy lag and sugar crashes.
• Drink plenty of water: Tiredness is one of the earliest symptoms of dehydration. People should make sure to drink water regularly.
• Find healthy ways to manage stress: Stress is a possible trigger for herpes zoster and may worsen symptoms. Stress may also make fatigue worse. Stress management may involve deep breathing techniques, meditation, yoga, doing fun activities, or talking with a loved one about sources of stress.
• Manage priorities: Working too much can contribute to fatigue, especially if PHN pain stops someone from getting a good night’s sleep beforehand. A person may consider focusing on any unavoidable tasks and taking on fewer extra obligations in professional and personal life.
If a person still finds fatigue unmanageable or too much to cope with, they may benefit from speaking with a mental health professional or finding a PHN support group.
A person should speak with a doctor if they notice any symptoms of shingles or any pain after the shingles rash heals.
Healthcare professionals may help someone manage PHN pain and resulting fatigue by prescribing medications such as tricyclic antidepressants and topical pain relief medications.
A doctor can also administer vaccines to prevent shingles and, therefore, any of its potential complications. The CDC recommends the vaccine for those ages 50 years and over, as the risk of shingles and its complications increases with age.
A person may experience fatigue while they have shingles or as part of a complication of shingles, such as PHN or potentially CFS.
People who experience fatigue due to shingles can try to limit this symptom by changing their sleep routine, diet, and stress management techniques. If fatigue persists, they should speak with a healthcare professional for further advice.
The shingles vaccine can help reduce the risk of shingles and its complications.
|
__label__pos
| 0.710943 |
Instillation Abortion
Watchlist
Retrieved
2021-01-18
Source
Trials
Genes
Drugs
Instillation abortion is a rarely used method of late term abortion, performed by injecting a solution into the uterus.
Procedure
Instillation abortion is performed by injecting a chemical solution consisting of either saline, urea, or prostaglandin through the abdomen and into the amniotic sac. The cervix is dilated prior to the injection, and the chemical solution induces uterine contractions which expel the fetus. Sometimes a dilation and curettage procedure is necessary to remove any remaining fetal or placenta tissue.
Instillation methods can require hospitalization for 12 to 48 hours. In one study, when laminaria were used to dilate the cervix overnight, the time between injection and completion was reduced from 29 to 14 hours.
Usage
The method of instillation abortion was first developed in 1934 by Eugen Aburel. It is most frequently used between the 16th and 24th week of pregnancy, but its rate of use has declined dramatically in recent years. In 1968, abortion by the instillation of saline solution accounted for 28% of those procedures performed legally in San Francisco, California. Intrauterine instillation (of all kinds) declined from 10.4% of all legal abortions in the U.S. in 1972 to 1.7% in 1985, falling to 0.8% of the total incidence of induced abortion in the United States during 2002, and 0.1% in 2007.
In a 1998 Guttmacher Institute survey, sent to hospitals in Ontario, Canada, 9% of those hospitals in the province which offered abortion services used saline instillations, 4% used urea, and 25% used prostaglandin. A 1998 study of facilities in Nigeria which provide abortion found that only 5% of the total number in the country use saline.
Complications
Once in common practice, abortion by intrauterine instillation has fallen out of favor, due to its association with serious adverse effects and its replacement by procedures which require less time and cause less physical discomfort.
Saline is in general safer and more effective than the other intrauterine solutions because it is likely to work in one dose. Prostaglandin is fast-acting, but often requires a second injection, and carries more side effects, such as nausea, vomiting, and diarrhea.
Instillation of either saline or prostaglandin is associated with a higher risk of immediate complications than surgical D&C. Dilation and evacuation is also reported to be safer than instillation methods. One study found that the risk of complications associated with the injection of a combination of urea and prostaglandin into the amniotic fluid was 1.9 times that of D&E.
The rate of mortality reported in the United States between 1972 and 1981 was 9.6 per 100,000 for instillation methods. This is in comparison to rates of 4.9 per 100,000 for D&E and 60 per 100,000 for abortion by hysterotomy and hysterectomy.
There have been at least two documented cases of unsuccessful instillation abortions that resulted in live births.
|
__label__pos
| 0.510603 |
Windows 2012 Hosting - MVC 6 and SQL 2014 BLOG
Tutorial and Articles about Windows Hosting, SQL Hosting, MVC Hosting, and Silverlight Hosting
ASP.NET MVC - ASPHostPortal.com :: Internet & Web How to Fix Only One <configSections> Element Error in Web.Config
clock April 11, 2016 19:44 by author Armend
In this article you will learn the solution to the common error "Only one <configSections> element allowed".
Today I was working on Entity Framework and trying to add the connection string to the Web.Config to specify the database. I wrote the connection string like this:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<connectionStrings>
<add name="SQLConnect"
connectionString="Data Source=SAHIL; Initial Catalog=Demo; Integrated Security=SSPI"
providerName="System.Data.SqlClient" />
</connectionStrings>
<configSections>
<sectionnamesectionname="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework,
Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
</configSections>
:
:
:
:
:
:
:
</configuration>
When I run the application, I experienced a strange error that says: "Only one <configSections> element allowed. It must be the first child element of the root <configuration> element".
It took me some time to determine the cause of the error and how to fix it.
Error: "Only one <configSections> element allowed. It must be the first child element of the root <configuration> element".
If you read the error carefully, it states that only one <configSections> element is allowed inside the Web.config and it should be the first child element and placed at the top. The reason for the error is that I accidentally placed the <connectionStrings></connectionStrings> at the top over the <configSections></configSections> and by conventions this is a violation. So, to fix the error, I rearranged the elements and the error was fixed.
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<configSections>
<sectionnamesectionname="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework,
Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
</configSections>
<connectionStrings>
<add name="SQLConnect"
connectionString="Data Source=SAHIL; Initial Catalog=Demo; Integrated Security=SSPI"
providerName="System.Data.SqlClient" />
</connectionStrings>
:
:
:
:
:
:
:
</configuration>
Conclusion
Your feedback and constructive criticism is always appreciated, keep it coming. Until then try to put a ding in the Universe.
ASP.NET MVC - ASPHostPortal.com :: 7 Tips for Developing a Secure ASP.NET Web Application
clock March 7, 2016 20:07 by author Armend
As the usage of the internet and the number of web applications over the internet have gone exponentially high there are bad people who continuously work around the clock to hack them. It may be for personal gain or just as an amateur act. Despite the intention of the bad guy the damage caused to the organization hosting the site or its users should be taken into account. As a professional web application developer it is a must to be aware of the best practices to follow in order to make the application more secure. In this article I will be listing and explaining my top 7 tips for developing a secure asp.net application.
Don’t Let Your Users be Victims of Click Jacking
Have you ever thought about someone framing your website onto theirs, making your users to be the victims of click jacking? Yes, the attackers can load your website onto their site in an iframe. They can then skillfully place their transparent controls over your website and fetch the PII information, user credentials, make them perform an unwanted task like exposing their financial information, etc.
In order to prevent that you will have to use a frame busting technique. The following script will not allow your website to be iframed. This can be placed in your master pages.
<script type="text/javascript" language="javascript">
//Check if the top location is same as the current location
if (top.location.hostname != self.location.hostname) {
//If not then set the top to you current
top.location.href = self.location.href;
}
</script>
In addition to the above script don’t forget to add the following header, which informs the browser to DENY framing of this website. This is supported in all major browsers except IE versions less than 8.
The header should be added in the global.asax application start event.
protected void Application_Start(object sender, EventArgs e)
{
HttpContext.Current.Response.AddHeader("x-frame-options", "DENY");
}
White List the Request URL
Though we have many techniques to perform the security preventions inside the application it is most important to prevent the bad data from being entered into your website at the first place. Most attacks happen through the query string values passed through the URL. It is a best security practice to define a common place like an HttpModule to white list the URL, i.e. sanitize the entire URL with a set of white listed characters and drop all the bad ones. It means you will not encourage any other characters apart from a white listed set defined in your application.
It is important for you to know that black listing is not a foolproof mechanism and it can be broken by the hackers easily.
Practice of Encoding the Data
While processing and sending, the data in the response that is fetched from outside the trust boundary should always be encoded. The type of encoding may differ based on the usage of the non-trusted data. For example perform an HtmlEncode for the data that is sent to the client page.
Label1.Text = Server.HtmlEncode(Request.QueryString["BadValue"]);
Encoding the data will make the XSS scripts inactive and prevent them from being executed. Microsoft has provided the AntiXss library, which provides more sophisticated encoding methods including the JavascriptEncode.
Using Cookies
As a web developer you should take utmost care while using cookies, which may open a back door for the hackers to get into your applications. Following are the best practices while using a cookie to store information.
1. Is your website is hosted under SSL? Then be sure to mark your cookies as secure. This will make them available only in the SSL transmissions.
HttpCookie cookie = new HttpCookie("MySecureCookie");
cookie.Value = "This is a PII information";
cookie.Secure = true;
2. If your website is not SSL enabled then always encrypt the values using a strong encryption mechanism like AES 256 and then store them in the cookies.
Secure the Service Calls (WCF / Web Service)
Are you exposing WCF services through basicHttpBinding? Then think again because the messages transmitted over will be plain text and any intruder will be able to trap the requests and even simulate them easily. Use wsHttpBinding, which will transport the messages in an encrypted format, which makes the life of the intruder hard.
Though you make lots of protections for your WCF or web services it is a best practice to host the services under an SSL layer.
Never Deploy the Application with debug=”true”
It is strongly recommended not to deploy your applications in the production environment with compilation debug=”true” in your web.config. This will result in a big nightmare for performance and security of the application.
This may leak too much information for the attackers, for example the stack trace in the event of an unhandled exception and the debug trace information. Such exposure of the internals will be good bucks for the attackers.
<system.web>
<compilation debug="false" targetFramework="4.0" />
</system.web>
Thinking About Turning Off ViewStateMAC?
Turning off ViewStateMAC will create a security loophole in your asp.net application if you are using Viewstate on your web pages. The intruders will easily be able to intercept, read the 64 bit encoded values and modify them to do some bad things to your website. Having it turned on ensures that the viewstate values are not only encoded but also a cryptographic hash is performed using a secret key.
<pages enableViewStateMac="true"></pages>
I hope this article is useful for the developers who thrive at making their asp.net application an absolutely impossible place for the hackers to deal with.
Happy reading!
ASP.NET MVC - ASPHostPortal.com :: Simple Tips for ASP.NET MVC Model Binding
clock March 1, 2016 18:25 by author Armend
Tips for ASP.NET MVC Model Binding
Model binding in the ASP.NET MVC framework is simple. Your action methods need data, and the incoming HTTP request carries the data you need. The catch is that the data is embedded into POST-ed form values, and possibly the URL itself. Enter the DefaultModelBinder, which can magically convert form values and route data into objects. Model binders allow your controller code to remain cleanly separated from the dirtiness of interrogating the request and its associated environment.
Here are some tips on how to take advantage of model binding in your MVC projects.
Tip #1: Prefer Binding Over Request.Form
If you are writing your actions like this ..
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Create()
{
Recipe recipe = new Recipe();
recipe.Name = Request.Form["Name"];
// ...
return View();
}
Then you are doing it all wrong. The model binder can save you from using the Request and HttpContext properties – those properties make the action harder to read and harder to test. One step up would be to use a FormCollection parameter instead:
public ActionResult Create(FormCollection values)
{
Recipe recipe = new Recipe();
recipe.Name = values["Name"];
// ...
return View();
}
With the FormCollection you don’t have to dig into the Request object, and sometimes you need this low level of control. But, if all of your data is in Request.Form, route data, or the URL query string, then you can let model binding work its magic:
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Create(Recipe newRecipe)
{
// ...
return View();
}
In this example, the model binder will create your newRecipe object and populate it with data it finds in the request (by matching up data with the recipe’s property names). It’s pure auto-magic. There are many ways to customize the binding process with “white lists”, “black lists”, prefixes, and marker interfaces. For more control over when the binding takes place you can use the UpdateModel and TryUpdateModel methods. Just beware of unintentional binding – see Justin Etheredge’s Think Before You Bind.
Tip #2: Custom model binders
Model binding is also one of the extensibility points in the MVC framework. If you can’t use the default binding behavior you can provide your own model binders, and mix and match binders. To implement a custom model binder you need to implement the IModelBinder interface. There is only method involved - how hard can it be?
public interface IModelBinder
{
object BindModel(ControllerContext controllerContext,
ModelBindingContext bindingContext);
}
Once you get neck deep into model binding, however, you’ll discover that the simple IModelBinder interface doesn’t fully describe all the implicit contracts and side-effects inside the framework. If you take a step back and look at the bigger picture you’ll see that model binding is but one move in a carefully orchestrated dance between the model binder, the ModelState, and the HtmlHelpers. You can pick up on some of these implicit behaviors by reading the unit tests for the default model binder.
If the default model binder has problems putting data into your object, it will place the error messages and the erroneous data value into ModelState. You can check ModelState.IsValid to see if binding problems are present, and use ModelState.AddModelError to inject your own error messages. See this very simple tutorial for more information on how ModelState and HtmlHelpers can work together to present validation errors to the user.
If you scroll down the comments to post you’ll see code. If a conversion fails, the code will use ModelState.AddModelError to propagate the error. Both the controller action and the view can look in ModelState to see if there was a binding problem. The controller would need to check ModelState for errors before saving stuff into the database, while the view can check ModelState for errors to give the user validation feedback. One important note is that the HtmlHelpers you use in a view will require ModelState to hold both a value (via ModelState.SetModelValue) and the error (via AddModelError) or you’ll have runtime errors (null reference exceptions). The following code can demonstrate the problem:
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Create(FormCollection Form)
{
// this is the wrong approach ...
if (Form["Name"].Trim().Length == 0)
ModelState.AddModelError("Name", "Name is required");
return View();
}
The above code creates a model error without ever setting a model value. It has other problems, too, but it will create exceptions if you render the following view.
<%= Html.TextBox("Name", Model.Name) %>
Even though you’ve specified Model.Name as the value for the textbox, the textbox helper will see the model error and attempt to display the “attempted value” that the user tried to put in the model. If you didn’t set the model value in model state you’ll see a null reference exception.
Tip #3: Custom Model Binding via Inheritance
If you’ve decided to implement a custom model binder, you might be able to cut down on the amount of work required by inheriting from DefaultModelBinder and adding some custom logic. In fact, this should be your default plan until you are certain you can’t subclass the default binder to achieve the functionality you need. For example, suppose you just want to have some control over the creation of your model object. The DefaultModelBinder will create object’s using Activator.CreateInstance and the model’s default constructor. If you don’t have a default constructor for your model, you can subclass the DefaultModelBinder and override the CreateModel method.
Jimmy Bogard has an example of sub classing the DefaultModelBinder in his post titled “A Better Model Binder”.
Tip #4: Using Data Annotations for Validation
Brad Wilson explains everything beautifully in this post: DataAnnotations and ASP.NET MVC.
I encourage you to go read Brad’s post, but if you are in a hurry, here is a summary:
.NET 3.5 SP1 shipped a System.ComponentModel.DataAnnotations assembly that looks to play a central role as we move forward with the .NET framework. By using data annotations and the DataAnnotationsModelBinder, you can take care of most of your server-side validation by simply decorating your model with attributes.
public class Recipe
{
[Required(ErrorMessage="We need a name for this dish.")]
[RegularExpression("^Bacon")]
public string Name { get; set; }
// ...
}
The DataAnnotationsModelBinder is also a great sample to read and understand how to effectively subclass the default model binder.
Tip #5 : Recognize Binding and Validation As Two Phases
Binding is about taking data from the environment and shoving it into the model, while validation is checking the model to make sure it meets our expectations. These are different different operations, but model binding tends to blur the distinction. If you want to perform validation and binding together in a model binder, you can – it’s exactly what the DataAnnotationsModelBinder will do. You can also find samples like Automatic Model Validation with ASP.NET MVC, xVal, Castle, and a Custom Binder (John McDowall), and Enterprise Library Validation Application Block with MVC Binders (Steve Michelotti). However, one thing that is often overlooked is how the DefaultModelBinder itself separates the binding and validation phases. If all you need is simple property validation, then all you need to do is override the OnPropertyValidating method of the DefaultModelBinder.
Tip #6: Binders Are About The Environment
Earlier I said that “model binders allow your controller code to remain cleanly separated from the dirtiness of interrogating the request and its associated environment”. Generally, when we think of binder we think of moving data from the routing data and posted form values into the model. However, there is no restriction of where you find data for your model. The context of a web request is rich with information about the client. A good example is another Scott Hanselman post on automatically binding the user’s identity into a model see: IPrincipal (User) ModelBinder in ASP.NET MVC for easier testing.
In Conclusion
Model binding is beautiful magic, so take advantage of the built-in magic when you can. I think the topic of model binding could use it’s own dedicated web site. It would be a very boring web site with lots of boring code, but model binding has many subtleties. For instance, we never even got to the topic of culture in this post.
Do you have any model binding tips?
ASP.NET MVC 6 Hosting - ASPHostPortal :: Creating Custom Controller Factory ASP.NET MVC
clock February 20, 2016 00:01 by author Jervis
I was reading about “Control application behavior by using MVC extensibility points” which is one of the objectives for the 70-486 Microsoft certification, and it was not clear to me, the explanation provided. So I decided to write about it to make clear for me, and I hope this help you as well.
An ASP.NET MVC application contains the following class:
public class HomeController: Controller
{
public HomeController(Ilogger logger)//notice the parameter in the constructor
{
}
}
This throw an error with the DefaultControllerFactory see image below.
The application won’t be able to load the Home controller because it have a parameter. You need to ensure that Home Controller can be instantiated by the MVC framework. In order to accomplish this we are going to use dependency injection.
The solution is to create a custom controller factory.
It calls the default constructor of that class. To have the MVC framework create controller class instances for constructors that have parameters, you must use a custom controller factory. To accomplish this, you create a class that implements IControllerFactory and implement its methods. You then call the SetControllerFactory method of the ControllerBuilder class to an instance of your custom controller factory class.
Create the CustomControllerFactory that inherit from IControllerFactory:
public class CustomControllerFactory : IControllerFactory
{
public CustomControllerFactory()
{
}
public IController CreateController(System.Web.Routing.RequestContext requestContext, string controllerName)
{
ILogger logger = new DefaultLogger();
var controller = new HomeController(logger);
return controller;
}
public System.Web.SessionState.SessionStateBehavior GetControllerSessionBehavior(System.Web.Routing.RequestContext requestContext, string controllerName)
{
return SessionStateBehavior.Default;
}
public void ReleaseController(IController controller)
{
var disposable = controller as IDisposable;
if (disposable != null)
{
disposable.Dispose();
}
}
}
You can implement the CreateController() method with a more generic way, using reflection.
public class CustomControllerFactory : IControllerFactory
{
private readonly string _controllerNamespace;
public CustomControllerFactory(string controllerNamespace)
{
_controllerNamespace = controllerNamespace;
}
public IController CreateController(System.Web.Routing.RequestContext requestContext, string controllerName)
{
ILogger logger = new DefaultLogger();
Type controllerType = Type.GetType(string.Concat(_controllerNamespace, ".", controllerName, "Controller"));
IController controller = Activator.CreateInstance(controllerType, new[] { logger }) as Controller;
return controller;
}
}
Set your controller factory in Application_Start by using SetControllerFactory method:
protected void Application_Start()
{
AreaRegistration.RegisterAllAreas();
GlobalConfiguration.Configure(WebApiConfig.Register);
FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
RouteConfig.RegisterRoutes(RouteTable.Routes);
BundleConfig.RegisterBundles(BundleTable.Bundles);
ControllerBuilder.Current.SetControllerFactory(typeof(CustomControllerFactory));
}
This could be one of the objective of the Microsoft certification exam 70-486, more specific for “Develop the user experience”, sub-objective “Control application behavior by using MVC extensibility points”.
Hope this helped you to understand how to do dependency injection in controllers with MVC.
ASP.NET MVC Hosting - ASPHostPortal :: The Difference Between Controller and View in ASP.NET MVC
clock November 12, 2015 20:28 by author Jervis
One of the basic rules of MVC is that views should be only – exactly – views, that is to say: objects that present to the user something that is already “worked and calculated”.
They should perform little, if not none at all, calculation. All the significant code should be in the controllers. This allows better testability and maintainability.
Is this, in Microsoft’s interpretation of MVC, also justified by performance?
We tested this with a very simple code that does this:
– creates 200000 “cat” objects and adds them to a List
– creates 200000 “owner” objects and adds them to a List
– creates 200000 “catowner” objects (the MTM relation among cats and owners) and adds them to a List
– navigates through each cat, finds his/her owner, removes the owner from the list of owners (we don’t know if cats really wanted this, but their freedom on code fits our purposes).
We’ve run this code in a controller and in a razor view.
The result seem to suggest that the code in views runs just as fast as in controllers even if don’t pre-compile views (the compilation time in our test is negligible).
The average result for the code with the logic in the controller is 18.261 seconds.
The average result for the code with the logic in the view is 18.621 seconds.
The performance seems therefore very similar.
Here is how we got to this result.
Case 1: Calculations are in the CONTROLLER
Models:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
namespace WebPageTest.Models
{
public class Owner
{
public string Name { get; set; }
public DateTime DOB { get; set; }
public virtual CatOwner CatOwner { get; set; }
}
public class Cat
{
public string Name { get; set; }
public DateTime DOB { get; set; }
public virtual CatOwner CatOwner { get; set; }
}
public class CatOwner
{
public virtual Cat Cat { get; set; }
public virtual Owner Owner { get; set; }
}
}
Controller:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using WebPageTest.Models;
namespace WebPageTest.Controllers
{
public class HomeController : Controller
{
public ActionResult Index()
{
Stopwatch howLongWillItTake = new Stopwatch();
howLongWillItTake.Start();
List<Owner> allOwners = new List<Owner>();
List<Cat> allCats = new List<Cat>();
List<CatOwner> allCatOwners = new List<CatOwner>();
// create lists with 200000 cats, 200000 owners, 200000 relations
for (int i = 0; i < 200000; i++)
{
//Cat
Cat CatX = new Cat();
CatX.Name = “Cat ” + i.ToString();
CatX.DOB = DateTime.Now.AddDays(i / 10);
//Owner
Owner OwnerX = new Owner();
OwnerX.Name = “Owner ” + i.ToString();
OwnerX.DOB = DateTime.Now.AddDays(-i / 10);
//Relationship “table”
CatOwner CatOwnerXX = new CatOwner();
CatOwnerXX.Cat = CatX;
// Relations
CatOwnerXX.Owner = OwnerX;
CatX.CatOwner = CatOwnerXX;
OwnerX.CatOwner = CatOwnerXX;
//add to list
allCats.Add(CatX);
allOwners.Add(OwnerX);
allCatOwners.Add(CatOwnerXX);
}
// now I remove all the items
foreach (Cat CatToDelete in allCats)
{
Owner OwnerToRemove = CatToDelete.CatOwner.Owner;
allOwners.Remove(OwnerToRemove);
}
// now all cats are free
int numberOfCats = allCats.Count();
int numberOfOwners = allOwners.Count();
howLongWillItTake.Stop();
long elapsedTime = howLongWillItTake.ElapsedMilliseconds;
// give info to the view
ViewBag.numberOfCats = numberOfCats;
ViewBag.numberOfOwners = numberOfOwners;
ViewBag.elapsedTime = elapsedTime;
return View();
}
}
}
View:
<div class=”row”>
<div class=”col-md-12″>
<hr />
<b>Results</b>
<br/>
Cats: @ViewBag.numberOfCats
<br/>
Owners: @ViewBag.numberOfOwners
<br/>
ElapsedTime in milliseconds: @ViewBag.ElapsedTime
<hr />
</div>
</div>
Case 2: Calculations are in the VIEW (pre-compiled)
Models: same as above
Controller:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
namespace WebPageTest.Controllers
{
public class HomeBisController : Controller
{
public ActionResult Index()
{
return View();
}
}
}
View:
@using System;
@using System.Collections.Generic;
@using System.Diagnostics;
@using System.Linq;
@using System.Web;
@using WebPageTest.Models;
@using System.Web.Mvc;
@{
Stopwatch howLongWillItTake = new Stopwatch();
howLongWillItTake.Start();
List<Owner> allOwners = new List<Owner>();
List<Cat> allCats = new List<Cat>();
List<CatOwner> allCatOwners = new List<CatOwner>();
//create lists with 200000 cats, 200000 owners, 200000 relations
for (int i = 0; i < 200000; i++)
{
//Cat
Cat CatX = new Cat();
CatX.Name = “Cat ” + i.ToString();
CatX.DOB = DateTime.Now.AddDays(i / 10);
//Owner
Owner OwnerX = new Owner();
OwnerX.Name = “Owner ” + i.ToString();
OwnerX.DOB = DateTime.Now.AddDays(-i / 10);
//Relationship “table”
CatOwner CatOwnerXX = new CatOwner();
CatOwnerXX.Cat = CatX;
// Relations
CatOwnerXX.Owner = OwnerX;
CatX.CatOwner = CatOwnerXX;
OwnerX.CatOwner = CatOwnerXX;
//add to list
allCats.Add(CatX);
allOwners.Add(OwnerX);
allCatOwners.Add(CatOwnerXX);
}
// now I remove all the items
foreach (Cat CatToDelete in allCats)
{
Owner OwnerToRemove = CatToDelete.CatOwner.Owner;
allOwners.Remove(OwnerToRemove);
}
// now all cats are free
int numberOfCats = allCats.Count();
int numberOfOwners = allOwners.Count();
howLongWillItTake.Stop();
long elapsedTime = howLongWillItTake.ElapsedMilliseconds;
// give info to the view
}
<div class=”row”>
<div class=”col-md-12″>
<hr />
<b>Results</b>
<br />
Cats: @numberOfCats
<br />
Owners: @numberOfOwners
<br />
ElapsedTime in milliseconds: @elapsedTime
<hr />
</div>
</div>
ASP.NET MVC 4 Hosting - ASPHostPortal.com :: A Best Practice for Authenticating Users in ASP.NET MVC 4
clock December 20, 2013 06:16 by author Robert
If your site has even one or two actions where access is restricted to particular users, the smart thing to do is to restrict access to all the actions on your site and then selectively permit access to those actions that all users are allowed to request. That way, an error of omission (forgetting to make a method available) simply prevents users from accessing some action.
Unfortunately, by default ASP.NET MVC works exactly the opposite way: all actions are accessible to all users unless you specifically restrict access by applying the Authorization action filter to the method. Under this scenario, an error of omission (forgetting to put an Authorize attribute on a method) allows all users access to the action. It's literally the worst thing that can happen in a secure environment: unauthenticated and unauthorized access to a resource that should have been secured.
Global Filters provided a solution to this by allowing you to apply the Authorize attribute to all of your action methods, locking non-authenticated users out of your actions by default. You can then selectively override that setting by applying the Authorize attribute to individual methods, specifying specific roles and users authorized to use that action. That works, unless you have some action methods that don't require authentication, methods intended to be accessible to the general public. In that scenario, you can't use Global Filters to secure all of your action methods -- until ASP.NET MVC 4.
Implementing the best practice is possible in ASP.NET MVC 4 with the new AllowAnonymous action filter. The first step is to use the Global Filters in the FilterConfig class in the App_Start folder to apply the Authorize attribute to every action method:
public class FilterConfig
{
public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
filters.Add(new AuthorizeAttribute);
}
}
The next step is to selectively allow access to actions that don't require authentication by decorating them with the AllowAnonymous attribute:
[AllowAnonymous]
Public ActionResult Get()
{
ASP.NET MVC 3 Hosting - ASPHostPortal :: Set up custom error pages to handle errors in “non-AJAX” requests and jQuery AJAX requests
clock May 4, 2012 08:16 by author Jervis
In this blog post I will show how to set up custom error pages in ASP.NET MVC 3 applications to create user-friendly error messages instead of the (yellow) IIS default error pages for both “normal” (non-AJAX) requests and jQuery AJAX requests.
In this showcase we will implement custom error pages to handle the HTTP error codes 404 (“Not Found”) and 500 (“Internal server error”) which I think are the most common errors that could occur in web applications. In a first step we will set up the custom error pages to handle errors occurring in “normal” non-AJAX requests and in a second step we add a little JavaScript jQuery code that handles jQuery AJAX errors.
We start with a new (empty) ASP.NET MVC 3 project and activate custom errors in the Web.config by adding the following lines under <system.web>:
<customErrors
mode="On" defaultRedirect="/Error">
<error redirect="/Error/NotFound" statusCode="404"/>
<error redirect="/Error/InternalServerError" statusCode="500"/>
</customErrors>
Note: You can set
mode=Off” to disable custom errors which could be helpful while developing or debugging. Setting mode=RemoteOnly” activates custom errors only for remote clients, i.e. disables custom errors when accessing via http://localhost/[...]. In this example setting mode=”On” is fine since we want to test our custom errors. You can find more information about the <customErrors> element here.
In a next step we
remove the following line in Global.asax.cs file:
filters.Add(new HandleErrorAttribute());
and add a new
ErrorController (Controllers/ErrorController.cs):
public class ErrorController : Controller
{
public ActionResult Index()
{
return InternalServerError();
}
public ActionResult NotFound()
{
Response.TrySkipIisCustomErrors = true;
Response.StatusCode = (int)HttpStatusCode.NotFound;
return View("NotFound");
}
public ActionResult InternalServerError()
{
Response.TrySkipIisCustomErrors = true;
Response.StatusCode = (int)HttpStatusCode.InternalServerError;
return View("InternalServerError");
}
}
In a last step we add the ErrorController‘s views (Views/Error/NotFound.cshtml and Views/Error/InternalServerError.cshtml) that defines the (error) pages the end user will see in case of an error. The views include a partial view defined in Views/Shared/Error/NotFoundInfo.cshtml respectively Views/Shared/Error/InternalServerErrorInfo.cshtml that contains the concrete error messages. As we will see below using these partial views enables us to reuse the same error messages to handle AJAX errors.
Views/Error/NotFound.cshtml:
@{
ViewBag.Title = "Not found";
}
@{
Html.RenderPartial("Error/NotFoundInfo");
}
Views/Shared/Error/NotFoundInfo.cshtml:
The URL you have requested was not found.
Views/Error/InternalServerError.cshtml:
@{
ViewBag.Title = "Internal server error";
}
@{
Html.RenderPartial("Error/InternalServerErrorInfo");
}
Views/Shared/Error/InternalServerErrorInfo.cshtml:
An internal Server error occured.
To handle errors occurring in (jQuery) AJAX calls we will use jQuery UI to show a dialog containing the error messages. In order to include jQuery UI we need to add two lines to Views/Shared/_Layout.cshtml:
<link href="@Url.Content("~/Content/themes/base/jquery.ui.all.css")" rel="stylesheet" type="text/css" />
<script src="@Url.Content("~/Scripts/jquery-ui-1.8.11.min.js")" type="text/javascript"></script>
Moreover we add the following jQuery JavaScript code (defining the global AJAX error handling) and the Razor snippet (defining the dialog containers) to Views/Shared/_Layout.cshtml:
<script type="text/javascript">
$(function () {
// Initialize dialogs ...
var dialogOptions = {
autoOpen: false,
draggable: false,
modal: true,
resizable: false,
title: "Error",
closeOnEscape: false,
open: function () { $(".ui-dialog-titlebar-close").hide(); }, // Hide close button
buttons: [{
text: "Close",
click: function () { $(this).dialog("close"); }
}]
};
$("#InternalServerErrorDialog").dialog(dialogOptions);
$("#NotFoundInfoDialog").dialog(dialogOptions);
// Set up AJAX error handling ...
$(document).ajaxError(function (event, jqXHR, ajaxSettings, thrownError) {
if (jqXHR.status == 404) {
$("#NotFoundInfoDialog").dialog("open");
} else if (jqXHR.status == 500) {
$("#InternalServerErrorDialog").dialog("open");
} else {
alert("Something unexpected happend :( ...");
}
});
});
</script>
<div id="NotFoundInfoDialog">
@{ Html.RenderPartial("Error/NotFoundInfo"); }
</div>
<div id="InternalServerErrorDialog">
@{ Html.RenderPartial("Error/InternalServerErrorInfo"); }
</div>
As you can see in the Razor snippet above we reuse the error texts defined in the partial views saved in Views/Shared/Error/.
To test our custom errors we define the HomeController (Controllers/HomeController.cs) as follows:
public class HomeController : Controller
{
public ActionResult Index()
{
return View();
}
public ActionResult Error500()
{
throw new Exception();
}
}
and the corresponding view
Views/Home/Index.cshtml:
@{
ViewBag.Title = "ViewPage1";
}
<
script type="text/javascript">
$function () {
$("a.ajax").click(function (event) {
event.preventDefault();
$.ajax({
url: $(this).attr('href'),
});
});
});
</
script>
<
ul>
<li>@Html.ActionLink("Error 404 (Not Found)", "Error404")</li>
<li>@Html.ActionLink("Error 404 (Not Found) [AJAX]", "Error404", new { }, new { Class = "ajax" })</li>
<li>@Html.ActionLink("Error 500 (Internal Server Error)", "Error500")</li>
<li>@Html.ActionLink("Error 500 (Internal Server Error) [AJAX]", "Error500", new { }, new { Class = "ajax" })</li>
</
ul>
To test the custom errors you can launch the project and click one of the four links defined in the view above. The “AJAX links” should open a dialog containing the error message and the “non-AJAX” links should redirect to a new page showing the same error message.
Summarized this blog post shows how to set up custom errors that handle errors occurring in both AJAX requests and “non-AJAX” requests. Depending on the project, one could customize the example code shown above to handle other HTTP errors as well or to show more customized error messages or dialogs.
Reasons why you must trust ASPHostPortal.com
Every provider will tell you how they treat their support, uptime, expertise, guarantees, etc., are. Take a close look. What they’re really offering you is nothing close to what
ASPHostPortal does. You will be treated with respect and provided the courtesy and service you would expect from a world-class web hosting business.
You’ll have highly trained, skilled professional technical support people ready, willing, and wanting to help you 24 hours a day. Your web hosting account servers are monitored from three monitoring points, with two alert points, every minute, 24 hours a day, 7 days a week, 365 days a year. The followings are the list of other added- benefits you can find when hosting with us:
- DELL Hardware
Dell hardware is engineered to keep critical enterprise applications running around the clock with clustered solutions fully tested and certified by Dell and other leading operating system and application providers.
- Recovery Systems
Recovery becomes easy and seamless with our fully managed backup services. We monitor your server to ensure your data is properly backed up and recoverable so when the time comes, you can easily repair or recover your data.
- Control Panel
We provide one of the most comprehensive customer control panels available. Providing maximum control and ease of use, our Control Panel serves as the central management point for your ASPHostPortal account. You’ll use a flexible, powerful hosting control panel that will give you direct control over your web hosting account. Our control panel and systems configuration is fully automated and this means your settings are configured automatically and instantly.
- Excellent Expertise in Technology
The reason we can provide you with a great amount of power, flexibility, and simplicity at such a discounted price is due to incredible efficiencies within our business. We have not just been providing hosting for many clients for years, we have also been researching, developing, and innovating every aspect of our operations, systems, procedures, strategy, management, and teams. Our operations are based on a continual improvement program where we review thousands of systems, operational and management metrics in real-time, to fine-tune every aspect of our operation and activities. We continually train and retrain all people in our teams. We provide all people in our teams with the time, space, and inspiration to research, understand, and explore the Internet in search of greater knowledge. We do this while providing you with the best hosting services for the lowest possible price.
- Data Center
ASPHostPortal modular Tier-3 data center was specifically designed to be a world-class web hosting facility totally dedicated to uncompromised performance and security
- Monitoring Services
From the moment your server is connected to our network it is monitored for connectivity, disk, memory and CPU utilization – as well as hardware failures. Our engineers are alerted to potential issues before they become critical.
- Network
ASPHostPortal has architected its network like no other hosting company. Every facet of our network infrastructure scales to gigabit speeds with no single point of failure.
- Security
Network security and the security of your server are ASPHostPortal’s top priorities. Our security team is constantly monitoring the entire network for unusual or suspicious behavior so that when it is detected we can address the issue before our network or your server is affected.
- Support Services
Engineers staff our data center 24 hours a day, 7 days a week, 365 days a year to manage the network infrastructure and oversee top-of-the-line servers that host our clients’ critical sites and services.
ASP.NET MVC 3 Hosting - ASPHostPortal :: User Activity logging in ASP.NET MVC app using Action Filter and log4net
clock April 6, 2012 07:59 by author Jervis
In this post, I will demonstrate how to use an action filter to log user tracking information in an ASP.NET MVC app. The below action filter will take logged user name, controller name, action name, timestamp information and the value of route data id. These user tracking information will be logged using log4net logging framework.
public class UserTrackerAttribute : ActionFilterAttribute, IActionFilter
{
public override void OnActionExecuted(ActionExecutedContext filterContext)
{
var actionDescriptor= filterContext.ActionDescriptor;
string controllerName = actionDescriptor.ControllerDescriptor.ControllerName;
string actionName = actionDescriptor.ActionName;
string userName = filterContext.HttpContext.User.Identity.Name.ToString();
DateTime timeStamp = filterContext.HttpContext.Timestamp;
string routeId=string.Empty;
if (filterContext.RouteData.Values["id"] != null)
{
routeId = filterContext.RouteData.Values["id"].ToString();
}
StringBuilder message = new StringBuilder();
message.Append("UserName=");
message.Append(userName + "|");
message.Append("Controller=");
message.Append(controllerName+"|");
message.Append("Action=");
message.Append(actionName + "|");
message.Append("TimeStamp=");
message.Append(timeStamp.ToString() + "|");
if (!string.IsNullOrEmpty(routeId))
{
message.Append("RouteId=");
message.Append(routeId);
}
var log=ServiceLocator.Current.GetInstance<ILoggingService>();
log.Log(message.ToString());
base.OnActionExecuted(filterContext);
}
}
The LoggingService class is given below
public class LoggingService : ILoggingService
{
private static readonly ILog log = LogManager.GetLogger
(MethodBase.GetCurrentMethod().DeclaringType);
public void Log(string message)
{
log.Info(message);
}
}
public interface ILoggingService
{
void Log(string message);
}
The LoggingService class is using log4net framework for logging. You can add reference to log4net using
NuGet.
The following command on NuGet console will install log4net into your ASP.NET MVC app.
PM> install-package Log4Net
The below configuration information in web.config will configure log4net for using with Sql Server
Let me add a log4net section to the <configSections> of web.config
<section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler,Log4net"/>
The below is the log4net section in the web.config file
<log4net>
<root>
<level value="ALL"/>
<appender-ref ref="ADONetAppender"/>
</root>
<appender name="ADONetAppender" type="log4net.Appender.AdoNetAppender">
<connectionType value="System.Data.SqlClient.SqlConnection, System.Data, Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
<connectionString value="data source=.\SQLEXPRESS;Database=MyFinance;Trusted_Connection=true;" />
<commandText value="INSERT INTO Log ([Date],[Thread],[Level],[Logger],[Message]) VALUES (@log_date, @thread, @log_level, @logger, @message)" />
<parameter>
<parameterName value="@log_date" />
<dbType value="DateTime" />
<layout type="log4net.Layout.PatternLayout" value="%date{yyyy'-'MM'-'dd HH':'mm':'ss'.'fff}" />
</parameter>
<parameter>
<parameterName value="@thread" />
<dbType value="String" />
<size value="255" />
<layout type="log4net.Layout.PatternLayout" value="%thread" />
</parameter>
<parameter>
<parameterName value="@log_level" />
<dbType value="String" />
<size value="50" />
<layout type="log4net.Layout.PatternLayout" value="%level" />
</parameter>
<parameter>
<parameterName value="@logger" />
<dbType value="String" />
<size value="255" />
<layout type="log4net.Layout.PatternLayout" value="%logger" />
</parameter>
<parameter>
<parameterName value="@message" />
<dbType value="String" />
<size value="4000" />
<layout type="log4net.Layout.PatternLayout" value="%message" />
</parameter>
</appender>
</log4net>
Configure log4net
The below code in the Application_Start() of Global.asax.cs will configure the log4net
log4net.Config.XmlConfigurator.Configure();
The below Sql script is used for creating table in Sql Server for logging information
CREATE TABLE [dbo].[Log] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[Date] [datetime] NOT NULL ,
[Thread] [varchar] (255) NOT NULL ,
[Level] [varchar] (20) NOT NULL ,
[Logger] [varchar] (255) NOT NULL ,
[Message] [varchar] (4000) NOT NULL
) ON [PRIMARY]
ASP.NET MVC 3 Hosting - ASPHostPortal :: ASP.NET MVC 3 Routing
clock March 15, 2012 07:52 by author Jervis
Routing in an ASP.NET module is responsible for mapping incoming browser requests to a particular MVC controller action.
When Requests arrive to an ASP.NET MVC-based web application it actually first passes through the UrlRoutingModule object, which is an HTTP module.
This module parses the request; i.e. the request from the URL and performs route selection. The UrlRoutingModule object selects the first route object that matches the current request.
If no routes match, the UrlRoutingModule object does nothing and lets the request fall back to the regular ASP.NET or IIS request processing.
Now From the selected Route object, the UrlRoutingModule object contains the IRouteHandler object that is associated with the Route object. This is an instance of MvcRouteHandler. The IRouteHandler instance creates an IHttpHandler object that passes it the IHttpContext object. By default, the IHttpHandler instance for MVC is the MvcHandler object. The MvcHandler object then selects the controller that will ultimately handle the request.
See when an ASP.NET MVC web application runs in IIS 7.0, no file name extension is required for MVC projects. So you cannot find the .aspx,.ascx,.asmx extension.
Now when you create a new ASP.NET MVC application, it will automatically configure to use ASP.NET Routing. ASP.NET Routing is set up in two places:
1. Webconfig File
2. Global.asax file
Now interestingly when you create a MVC2 application a route table is automatically created in the application's Global.asax file.
As we all know the Global.asax contains event handlers for ASP.NET application lifecycle events. The route table is created during the Application Start event. So when you open the Global.asax file you will see the following code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;
namespace MvcApplication1
{
// Note: For instructions on enabling IIS6 or IIS7 classic mode,
// visit http://go.microsoft.com/?LinkId=9394801
public class MvcApplication : System.Web.HttpApplication
{
public static void RegisterRoutes(RouteCollection routes)
{
routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.MapRoute(
"Default", // Route name
"{controller}/{action}/{id}", // URL with parameters
new { controller = "Home", action = "Index", id = "" } // Parameter defaults
);
}
protected void Application_Start()
{
RegisterRoutes(RouteTable.Routes);
}
}
}
See here the routing is occuring in the "RegisterRoutes" method.
See in the routes.MapRoute method we are passing 3 parameters.
1. Controller(Name of the controller)
2. Action (The method inside the controller)
3. id (Parameter)
See in the above url before home it is domain. The home is control. The action method is the index with parameter id. Now in the home controller the code will be:
using System.Web.Mvc;
namespace MvcApplication1.Controllers
{
public class HomeController : Controller
{
public ActionResult Index(int id)
{
return View();
}
}
}
Conclusion:
So in this article we have seen what the functionality of routing in ASP.Net MVC 3 is.
Reasons why you must trust ASPHostPortal.com
Every provider will tell you how they treat their support, uptime, expertise, guarantees, etc., are. Take a close look. What they’re really offering you is nothing close to what
ASPHostPortal does. You will be treated with respect and provided the courtesy and service you would expect from a world-class web hosting business.
You’ll have highly trained, skilled professional technical support people ready, willing, and wanting to help you 24 hours a day. Your web hosting account servers are monitored from three monitoring points, with two alert points, every minute, 24 hours a day, 7 days a week, 365 days a year. The followings are the list of other added- benefits you can find when hosting with us:
- DELL Hardware
Dell hardware is engineered to keep critical enterprise applications running around the clock with clustered solutions fully tested and certified by Dell and other leading operating system and application providers.
- Recovery Systems
Recovery becomes easy and seamless with our fully managed backup services. We monitor your server to ensure your data is properly backed up and recoverable so when the time comes, you can easily repair or recover your data.
- Control Panel
We provide one of the most comprehensive customer control panels available. Providing maximum control and ease of use, our Control Panel serves as the central management point for your ASPHostPortal account. You’ll use a flexible, powerful hosting control panel that will give you direct control over your web hosting account. Our control panel and systems configuration is fully automated and this means your settings are configured automatically and instantly.
- Excellent Expertise in Technology
The reason we can provide you with a great amount of power, flexibility, and simplicity at such a discounted price is due to incredible efficiencies within our business. We have not just been providing hosting for many clients for years, we have also been researching, developing, and innovating every aspect of our operations, systems, procedures, strategy, management, and teams. Our operations are based on a continual improvement program where we review thousands of systems, operational and management metrics in real-time, to fine-tune every aspect of our operation and activities. We continually train and retrain all people in our teams. We provide all people in our teams with the time, space, and inspiration to research, understand, and explore the Internet in search of greater knowledge. We do this while providing you with the best hosting services for the lowest possible price.
- Data Center
ASPHostPortal modular Tier-3 data center was specifically designed to be a world-class web hosting facility totally dedicated to uncompromised performance and security
- Monitoring Services
From the moment your server is connected to our network it is monitored for connectivity, disk, memory and CPU utilization – as well as hardware failures. Our engineers are alerted to potential issues before they become critical.
- Network
ASPHostPortal has architected its network like no other hosting company. Every facet of our network infrastructure scales to gigabit speeds with no single point of failure.
- Security
Network security and the security of your server are ASPHostPortal’s top priorities. Our security team is constantly monitoring the entire network for unusual or suspicious behavior so that when it is detected we can address the issue before our network or your server is affected.
- Support Services
Engineers staff our data center 24 hours a day, 7 days a week, 365 days a year to manage the network infrastructure and oversee top-of-the-line servers that host our clients’ critical sites and services.
ASP.NET MVC Hosting - ASPHostPortal :: Create ASP.NET MVC Localization with Language Detection
clock January 6, 2011 05:09 by author Jervis
In this tutorial I will show a simple way to create localization (globalization) for web application using APS.NET MVC framework. It should work fine with MVC 1 and 2 and I’m currently using .NET 3.5 SP1, but .NET 4.0 will work as well. All code is in C# and for language translations I use XML files.
Language files with XML
For translations of different languages I use simple xml files. I store them in App_Data/messages/<locale>.xml, for example en-US.xml or de-DE.xml. Here is the xml structure:
<items>
<item key="home">Home</item>
<item key="products">Products</item>
<item key="services">Services</item>
</items>
You should have identical language files for all desired languages. All translation items should be the same (with equal “key” attributes).
Create Translator class
Main translation work will be done by Translator singleton class. Create “Infrastructure” folder in your MVC project and put class Translator there.
First, let’s make class singleton:
private static Translator instance = null;
public static Translator Instance
{
get
{
if (instance == null)
{
instance = new Translator();
}
return instance;
}
}
private Translator() { }
Add the following fields and properties to the class:
private static string[] cultures = { "en-US", "bg-BG" };
private string locale = string.Empty;
public string Locale
{
get
{
if (string.IsNullOrEmpty(locale))
{
throw new Exception("Locale not set");
}
else
{
return locale;
}
}
set
{
if (Cultures.Contains(value))
{
locale = value;
load();
}
else
{
throw new Exception("Invalid locale");
}
}
}
public static string[] Cultures
{
get
{
return cultures;
}
}
Field "cultures" lists available cultures. "Locale" keeps current culture. And in "set" part of Locale property you can see invocation of load() method. I will talk about it later.
To keep localization data I will create simple dictionary and then use keys from XML for dictionary keys and XML item values as dictionary values. Simple Translate method will do translation job. I have indexer method for easy access.
private Dictionary data = null;
public string Translate(string key)
{
if (data != null && data.ContainsKey(key))
{
return data[key];
}
else
{
return ":" + key + ":";
}
}
public string this[string key]
{
get
{
return Translate(key);
}
}
If some key cannot be found and translated, I return the key with ":" around it, so you can easy find untranslated items.
Finally, for loading XML I use LINQ to XML. I have static caching dictionary, so I don't need reading XML on every request.
private static Dictionary<string, Dictionary<string, string>> cache =
new Dictionary<string, Dictionary<string, string>>();
private void load()
{
if (cache.ContainsKey(locale) == false) // CACHE MISS !
{
var doc = XDocument.Load(
HttpContext.Current.Server.MapPath(
"~/App_Data/messages/" + locale + ".xml"))
cache[locale] = (from item in doc.Descendants("item")
where item.Attribute("key") != null
select new
{
Key = item.Attribute("key").Value,
Data = item.Value,
}).ToDictionary(i => i.Key, i => i.Data);
}
data = cache[locale];
}
public static void ClearCache()
{
cache = new Dictionary<string, Dictionary<string, string>>();
}
You can use translator in your controller like this:
Translator.Instance[key];
After load() methid I have ClearCache method for easy developing (you know, once read, data is cached and you have to restart IIS Application Pool to refresh localization data).
Translator class is ready, I will show you how to use it later.
Create localization helpers
Create static class LocalizationHelpers and put it in "Helpers" folder in your project.
public static string CurrentCulture(this HtmlHelper html)
{
return Translator.Instance.Locale;
}
public static string T(this HtmlHelper html, string key)
{
return html.Encode(Translator.Instance[key]);
}
public static string T(this HtmlHelper html, string key,
params object[] args)
{
return html.Encode(string.Format(
Translator.Instance[key], args));
}
I will use this in html views for translation like this
<%= Html.T("products") %>
If you want params in translated values you can use second T implementation like string.Format. First helper CurrentCulture is used in language select user control to determine current culture.
Create BaseController class
Create BaseController class that extends Controller and put it in "Infrastructure" folder of your MVC project. You should extend all your controller classes from this class. Create simple property for current selected culture (locale)
public string CurrentCulture
{
get
{
return Translator.Instance.Locale;
}
}
You will use this in your controller when you initialize your model, for example.
In the following code I will explain language detection and saving with cookie.
private void initCulture(RequestContext requestContext)
{
string cultureCode = getCulture(requestContext.HttpContext);
requestContext.HttpContext.Response.Cookies.Add(
new HttpCookie("Culture", cultureCode)
{
Expires = DateTime.Now.AddYears(1),
HttpOnly = true,
}
);
Translator.Instance.Locale = cultureCode;
CultureInfo culture = new CultureInfo(cultureCode);
System.Threading.Thread.CurrentThread.CurrentCulture = culture;
System.Threading.Thread.CurrentThread.CurrentUICulture = culture;
}
private string getCulture(HttpContextBase context)
{
string code = getCookieCulture(context);
if (string.IsNullOrEmpty(code))
{
code = getCountryCulture(context);
}
return code;
}
private string getCookieCulture(HttpContextBase context)
{
HttpCookie cookie = context.Request.Cookies["Culture"];
if (cookie == null || string.IsNullOrEmpty(cookie.Value) ||
!Translator.Cultures.Contains(cookie.Value))
{
return string.Empty;
}
return cookie.Value;
}
private string getCountryCulture(HttpContextBase context)
{
// some GeoIp magic here
return "en-US";
}
First I try to get language cookie if there is any (if this is not first time visit). If there is no cookie you can detect browser language, make GeoIP IP address lookup and so on. After finding some valid locale/culture I set response cookie for next page visits. After this I change current thread culture. This is useful if you want to format some date or currency values.
You should call initCulture in overridden Initialize method.
Changes in HomeController
Don't forget to change parent class of all your controller to BaseController. Add following code to your HomeController, so you can change current culture. When you open specified URL, a cookie is set and user is redirected to index page. This URL is like example.com/home/culture/en-US. Clear cache method is for deleting current cache without restarting application pool. Access it with example.com/home/ClearLanguageCache.
public ActionResult Culture(string id)
{
HttpCookie cookie = Request.Cookies["Culture"];
cookie.Value = id;
cookie.Expires = DateTime.Now.AddYears(1);
Response.SetCookie(cookie);
return Redirect("/");
}
public ActionResult ClearLanguageCache(string id)
{
Translator.ClearCache();
return Redirect("/");
}
To change current language I will create special user control which will be included in may Master.Site layout. Create CultureUserControl.ascx and put it in Views/Shared/ folder of your MVC project. Here is the code:
<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
<% if (Html.CurrentCulture() == "bg-BG") { %>
<a id="lang" href="/home/culture/en-US">en</a>
<% } else { %>
<a id="lang" href="/home/culture/bg-BG">bg</a>
<% } %>
In my layout I use <% Html.RenderPartial("CultureUserControl"); %> to include it.
Conclusion
In this simple tutorial I've created localization infrastructure for ASP.NET MVC web application. Translations of different languages are stored in XML files. Then I use Translator class to load them. Current user culture is kept in cookie. You can access Translator class in html views using some helpers. Also all the translation data i cached so it will not be loaded form XML every request.
Hope this tutorial helps.
Reasons why you must trust ASPHostPortal.com
Every provider will tell you how they treat their support, uptime, expertise, guarantees, etc., are. Take a close look. What they’re really offering you is nothing close to what ASPHostPortal does. You will be treated with respect and provided the courtesy and service you would expect from a world-class web hosting business.
You’ll have highly trained, skilled professional technical support people ready, willing, and wanting to help you 24 hours a day. Your web hosting account servers are monitored from three monitoring points, with two alert points, every minute, 24 hours a day, 7 days a week, 365 days a year. The followings are the list of other added- benefits you can find when hosting with us:
- DELL Hardware
Dell hardware is engineered to keep critical enterprise applications running around the clock with clustered solutions fully tested and certified by Dell and other leading operating system and application providers.
- Recovery Systems
Recovery becomes easy and seamless with our fully managed backup services. We monitor your server to ensure your data is properly backed up and recoverable so when the time comes, you can easily repair or recover your data.
- Control Panel
We provide one of the most comprehensive customer control panels available. Providing maximum control and ease of use, our Control Panel serves as the central management point for your ASPHostPortal account. You’ll use a flexible, powerful hosting control panel that will give you direct control over your web hosting account. Our control panel and systems configuration is fully automated and this means your settings are configured automatically and instantly.
- Excellent Expertise in Technology
The reason we can provide you with a great amount of power, flexibility, and simplicity at such a discounted price is due to incredible efficiencies within our business. We have not just been providing hosting for many clients for years, we have also been researching, developing, and innovating every aspect of our operations, systems, procedures, strategy, management, and teams. Our operations are based on a continual improvement program where we review thousands of systems, operational and management metrics in real-time, to fine-tune every aspect of our operation and activities. We continually train and retrain all people in our teams. We provide all people in our teams with the time, space, and inspiration to research, understand, and explore the Internet in search of greater knowledge. We do this while providing you with the best hosting services for the lowest possible price.
- Data Center
ASPHostPortal modular Tier-3 data center was specifically designed to be a world-class web hosting facility totally dedicated to uncompromised performance and security
- Monitoring Services
From the moment your server is connected to our network it is monitored for connectivity, disk, memory and CPU utilization – as well as hardware failures. Our engineers are alerted to potential issues before they become critical.
- Network
ASPHostPortal has architected its network like no other hosting company. Every facet of our network infrastructure scales to gigabit speeds with no single point of failure.
- Security
Network security and the security of your server are ASPHostPortal’s top priorities. Our security team is constantly monitoring the entire network for unusual or suspicious behavior so that when it is detected we can address the issue before our network or your server is affected.
- Support Services
Engineers staff our data center 24 hours a day, 7 days a week, 365 days a year to manage the network infrastructure and oversee top-of-the-line servers that host our clients’ critical sites and services.
About ASPHostPortal.com
We’re a company that works differently to most. Value is what we output and help our customers achieve, not how much money we put in the bank. It’s not because we are altruistic. It’s based on an even simpler principle. "Do good things, and good things will come to you".
Success for us is something that is continually experienced, not something that is reached. For us it is all about the experience – more than the journey. Life is a continual experience. We see the Internet as being an incredible amplifier to the experience of life for all of us. It can help humanity come together to explode in knowledge exploration and discussion. It is continual enlightenment of new ideas, experiences, and passions
photo ahp banner aspnet-01_zps87l92lcl.png
Author Link
Corporate Address (Location)
ASPHostPortal
170 W 56th Street, Suite 121
New York, NY 10019
United States
Sign in
|
__label__pos
| 0.598622 |
Skip to main content
Support
How to Optimize Human Biology: Where Genome Editing and Artificial Intelligence Collide
Genome editing and artificial intelligence (AI) could revolutionize medicine in the United States and globally. Though neither are new technologies, the discovery of CRISPR in genome editing and advances in deep learning for AI could finally grant clinical utility to both. The medical use of these technologies individually could result in their eventual combined use, raising new and troubling ethical, legal, and social questions. If ongoing technical challenges can be overcome, will the convergence of AI and CRISPR result practitioners ‘optimizing’ human health? And could viewing human biology as a machine result in a willingness to optimize biology for reasons other than health alone? Given the rapid technical progress and potential benefits of genome editing and AI, answering these questions will become more pressing in the near future. Such concerns apply not only to the United States but to the international medical community. Notably, China has demonstrated its desire to be a global leader in both genomics and AI, which could indicate the potential of these technologies to converge in China soon. What form should the international governance of these technologies take and how will it be enforced? To ensure responsible progress of genomics and AI in combination, a balance must be struck between promoting innovation and responding to ethical, social, and moral quandaries.
How to Optimize Human Biology: Where Genome Editing and Artificial Intelligence Collide by The Wilson Center on Scribd
Science and Technology Innovation Program
The Science and Technology Innovation Program (STIP) brings foresight to the frontier. Our experts explore emerging technologies through vital conversations, making science policy accessible to everyone. Read more
|
__label__pos
| 0.53024 |
DIY Digital Vacuum Gauge
11,991
34
9
Here are the instructions on how to build your very own digital vacuum gauge. Why would you want a vacuum gauge? If do your own maintenance on your motorcycle like synchronizing carburetors or adjusting the throttle bodies this tool makes the job easier.
The plans and code needed to build this project are all opensource and available online at sourceforge.net
Parts kits, circuit boards and pre-programmed chips for the project are available for a limited time on kickstarter here.
Step 1: How It Works
The digital vacuum gauge is based on the MPXH6250A pressure sensor by Freescale. It outputs a temperature compensated analog voltage that is proportional to an absolute pressure. It has a measuring range of 2.9 PSI to 36.3 PSI absolute. The microprocessor then reads those voltages and displays the results to the screen.
The microprocessor does the following to make measurements:
1. The processor in the vacuum gauge reads the 4 vacuum ports simultaneously
2. Digital filtering is applied to smooth out the vacuum readings (useful on idling engines to get a steady reading)
3. The filtered value is then converted to kPa, PSI, inHg or mmHg
4. The calibration data from the EEPROM is added to each of the vacuum readings
5. The values are displayed on the screen
Bar graphs are also available to show how far off a particular measurement is from the average of the 4 vacuum ports. This helps when tuning your engine as it gives you a visual indicator of close all the ports are to reading the same values.
Step 2: Getting Everything Ready
The first step is to get a PCB board. This can be done by either building your own or purchasing one through my kickstarter here. If you are building your own, head over to the sourceforge page and navigate to files. There are two folders, one called Firmware Files and the other is called Hardware Files. Click on Hardware Files and download the Eagle Files, Parts Placement, and BOM (bill of materials)
The Eagle PCB files can then be submitted to your favorite PCB manufacturing house. Once you have a PCB you can order all the parts in the BOM. The Excel file lists all the parts that are needed, the quantity and part number. The parts are available on Digikey and the LCD screen is purchased through EastRising (buydisplay.com). Go ahead and order all the the parts that you will need.
Finally print out the parts placement PDF. This will help you when you go to solder the components so you know what part goes where.
Step 3: Soldering the Surface Mount Devices
The first step is to solder all the surface mount components onto the board. Follow the parts placement PDF that you printed out. Match the part designation is the reference on the PCB.
Attach the following SMD (Surface Mount Devices)
1. Microprocessor - U1 - (be sure to mount in correct orientation as shown in the photo)
2. Regulator - U2
3. Resistors - R1-R7
4. Capacitors - C1-C9
5. Push button - SW2
6. Pressure Sensors - P1-P4
When soldering the surface mount devices make sure you don't short out any of the pins that are close together. You may need to use some solder wick to remove excess solder.
Step 4: Soldering the Hole Through Devices
The hole through devices need to be assembled in a specific order for the assembly to work The order is as follows:
1. Place double sided sticky tape on back of battery holder and attach to back of PCB
2. Solder battery holder in place on back of the PCB. (Clip excess length from battery holder pins)
3. Insert 16 pin header into PCB
4. Place LCD into position on PCB
5. Attach standoffs and screws between LCD and PCB boards
6. Solder both sides of the 16 pin header to the LCD and PCB
7. Insert the main power switch SW1 and solder
Step 5: Programming
The last step is to program the board. In order to program the microprocessor you need a PicKit3 programmer and a 5 pin header. The source code is available on the source forge page under the folder called Firmware. Download the project file. You will need to have MPLabX installed in order to compile and download the program. If you purchased a PCB with a pre-programmed chip then you can skip this step
1. Insert the 5 pin header into the PCB
2. Note, Pin 1 is towards the middle of the PCB and has a square pad
3. Attach the programmer
4. Turn on the power switched
5. Open MPLabX and load the project
6. Press the program button to download the firmware into the microcontroller.
7. Disconnect the programmer.
Step 6: Navigating the Menu
The main power switch has three positions:
• Down = Off
• Middle = On
• Top = On with back light
The Mode/Cal button is used to change the settings in the device. Pressing the button will bring up the menu. Pressing the button again will move you through the menu system. The item you are currently at will blink. In order to select that item you need to press and hold the button until the screen changes The button needs to be held for approx 2-3 seconds. All the settings and calibration data are save into the device's EEPROM.
The menu system is as follows:
1. Calibrate - Preformed Calibration and zeros the device
1. Relative_Mode - Calibrates to display pressures relative to atmospheric, ie no vacuum will read 0
2. Absolute_Mode - Calibrates to display pressures in absolute units, ie no vacuum will read approx 14psi
2. Units - Changes the display units of the devices
1. kPA
2. PSI
3. inHg
4. mmHg
3. Display - Adjusts the display mode
1. Numerical - Displays the readings as 4 numerical values on the screen, Mini bar graphs are next to each reading that compares that ports reading to the average reading
2. Bar_Graph - displays the for channels as horizontal bar graphs. Each the center is the average of all the bar graphs.
4. Filter - Changes the digital filtering
1. Off - Turns off the digital filtering, Numbers will update very fast but a slow running, high vacuum engine may cause readings to bounce all over the place
2. Low - Adds some filtering to smooth out the bouncing readings, takes a couple of seconds for a vacuum pulse to fully register on the display
3. High - Adds lots of filtering, can take 30 seconds or so for a change in vacuum to fully register. This will provide a good average over a long period of time.
5. Exit - Exits the main menu.
Share
Recommendations
• Arduino Contest 2019
Arduino Contest 2019
• Tape Contest
Tape Contest
• Gardening Contest
Gardening Contest
9 Discussions
0
None
JRMahesh
Question 5 months ago on Step 6
Hi..is that module for sale?
If it is..i would like to purchase it.
Pls advice. Tq.
0
None
edburzminski
Question 11 months ago on Introduction
How would you adapt this in to measure pressure instead instead of vacuum? I have an air horn with compressed air tank in my car to monitor.
0
None
MichaelB1186
1 year ago
Mark- very nice work. I'll admit I didn't realize until the vacuum sensor arrived how extremely small it is. I was hoping to use this project and adapt it for car.
What kind of hose did you use to connect the sensor to the carburetor or source? My idea was to use a very small hose then an adapter to a vacuum hose that would fit on the carb. Hoping you had a better solution.
Thanks!
0
None
stp715a
3 years ago
One of the most complete projects here! Thanks.
Is this sensor an acceptable substitute? Thanks again.
MPXx6115
http://www.nxp.com/products/sensors/pressure-sensors/barometric-pressure-15-to-115-kpa/-115-to-115kpa-gauge-and-absolute-pressure-sensor:MPXx6115
1 reply
0
None
markistuffstp715a
Reply 3 years ago
The MPXH6115 could work but the firmware would require reprogramming to accommodate the sensor's different measurement range. The pin out should match.
0
None
wheeljam
3 years ago
The link to sourceforge isn't working. I'd like to buy or build this
3 replies
0
None
markistuffwheeljam
Reply 3 years ago
Not sure why the link isn't working. Here it is again.
https://sourceforge.net/projects/digitalvacuumguage/files/
I do have some blank PCB boards left over for this from when I did a kickstarter for this project if you are interested.
|
__label__pos
| 0.539717 |
Jump to content
Can we talk about increasing Script memory from 64kb ?
Coffee Pancake
Share
You are about to reply to a thread that has been inactive for 93 days.
Please take a moment to consider if this thread is worth bumping.
Recommended Posts
5 minutes ago, Coffee Pancake said:
As per the title .. script memory being capped at 64kb is arbitrary and half the reason many complex scripted objects end up running a handful of scripts.
I suppose "we" can, but how will this initiate a change?
Can we also talk about adding another language to Second Life to be used in addition to LSL? I used to love the heated debates between residents about what language should be added.
How much 'script memory' would you suggest LSL have access to?
What language would you like to see added and how much memory should it have access to?
(Or shut me down by saying "No additional languages to be discussed in my thread!")
Link to comment
Share on other sites
If you'll allow me to be the devil's advocate, it needs to be capped somewhere, and 64kb seems fine unless you're doing something that involves large lists of strings.
If there were no cap, imagine the amount of memory even a novice scripter could use up (possibly by accident) all on LL's dime.
Link to comment
Share on other sites
1 hour ago, Quistess Alpha said:
If you'll allow me to be the devil's advocate, it needs to be capped somewhere, and 64kb seems fine unless you're doing something that involves large lists of strings.
If there were no cap, imagine the amount of memory even a novice scripter could use up (possibly by accident) all on LL's dime.
Possible solution: Current and future scripts stay 64KB by default, but llSetMemoryLimit can be used to increase the size up to some new maximum, such as 128KB. (Seems like a safe increase as a first experiment, and we'll be able to know when scripts might be using more memory than normal.) This way, most scripters won't be using any more memory than usual.
I'm all for increasing memory capacity somehow (I wish KVP wasn't Premium), but what I'd be concerned about is "what if something does go wrong?" And I don't mean things like bugs, I mean long-term real use-cases. If it does start causing significant degradation of sim performance across the grid and LL is unwilling or unable to improve their hardware with Amazon, going back to 64KB is equivalent to pulling out a barbed arrow. It's gonna hurt more coming out than going in.
Edited by Wulfie Reanimator
• Like 4
• Thanks 1
Link to comment
Share on other sites
More memory is always pretty nice.
Might also wish for some efficient shared memory between scripts in the same object. E.g. allow one script to "export" a read only list to other parts of a script set, to avoid tons of messages and duplication back and forth. That could simplify some things even when the 64kB limit of writeable memory per script survives.
Link to comment
Share on other sites
13 minutes ago, Wulfie Reanimator said:
and why LSO scripts are much faster than Mono scripts for that
Are they faster just because they're a quarter the size of Mono? Or is there some other aspect to the way Mono scripts are loaded that is the reason?
It seems to me that the need for larger scripts (and I accept there is a case for wanting more memory) is mostly for storing data, and in this case, perhaps there could be a more efficient way of storing data?
Link to comment
Share on other sites
if the cap was raised then we would find a way to fill it up and still not have enough room for everything we might want to do
if i wanted anything then I would want KVP to be grid-wide scope please. Just the KVP thanks. I can live without the experience permissions system being grid-wide scope
• Like 1
Link to comment
Share on other sites
9 minutes ago, Profaitchikenz Haiku said:
perhaps there could be a more efficient way of storing data?
Capital idea! How about we store data in a STATIONARY SCRIPT on a STATIONARY SCRIPT SERVER and provide some communication mechanism between stationary scripts and the scripts we have, wherever they may roam.
But, here's the deal: Any new storage mechanism dreamt up must also have an expiration system whereby the owner must actively maintain it or it goes static, then later, evaporates.
Link to comment
Share on other sites
I'd like to have the stack memory counted separately from code and heap. Large incoming messages, and some LSL calls, can briefly use up large amounts of stack and cause a stack-heap collision. Scripts don't get a chance to discard or reject or break up or process a big incoming message before it overflows memory. This makes some scripts unreliable. So I'd like to have 64K of code and data, plus up to 64K of stack. Stack memory is temporary; when you're waiting for an event, there's no stack space in active use, although space may be allocated.
As for larger scripts, that's a problem with technical debt. It recently slipped out at a user group meeting that the server processes are still in 32-bit mode, and thus sims can't use more than 4GB of memory. So there's a hard ceiling on sim memory. Understand, in the Linux world (the servers run Linux) that's rare. Just about everything that runs on Linux is 64-bit today. You have to download old libraries to even build 32-bit programs on LInux.
• Like 1
Link to comment
Share on other sites
1 minute ago, Mollymews said:
KVP
Maybe this is a solved-issue (as many times as I've thought about it, never actually used KVP), but as long as we're dreaming, I'd like a better way to segment the KVP database for different projects. Something like domain+key+value, and an easy way to get only keys associated with a given domain, or even just something like
llSearchKeysValue(string query,integer first, integer count);
which would fetch the names of keys if the name starts with query.
• Like 1
Link to comment
Share on other sites
Just now, Ardy Lay said:
How about we store data in a STATIONARY SCRIPT on a STATIONARY SCRIPT SERVER
That would obviously work but I was thinking more along the lines of speeding up notecard reads, ie find a way of improving what's already implemented.
My thinking is that whilst scripts are server-side and possibly need re-compiling with each TP because it's going to a different server, notecards just need to transit as an asset with no additional work required during the handover.
• Like 1
Link to comment
Share on other sites
17 minutes ago, Profaitchikenz Haiku said:
more efficient way of storing data?
Amazon Dynamo DB, perhaps. It's like KVP, but you can store all the data you can pay for, terabytes if necessary. The first 25GB is free. Someone who needs that should write LSL code to access AWS DynamoDB. It talks HTTP and JSON, so that's not too tough.
Link to comment
Share on other sites
6 minutes ago, animats said:
It talks HTTP and JSON, so that's not too tough.
But would it be fast enough? Imagine you have position and rotation coordinates stored for a journey, in a script you'd possibly use link messages to get the next sets, and the delay is trivial. If you read them from a notecard there's the inherent delays of llreadNotecardLine, but if you have to make a call to an outside web service there's not just the internet transits but the issue of what to do if the request just doesn't get answered?
I realise the other inherent problem with notecards is that they too are capped at an upper limit, but as they don't need to get bytecoded or thrown across the wires by http requests they do seem the most convenient option to look at for better data access.
Link to comment
Share on other sites
2 minutes ago, Profaitchikenz Haiku said:
But would it be fast enough? Imagine you have position and rotation coordinates stored for a journey, in a script you'd possibly use link messages to get the next sets, and the delay is trivial. If you read them from a notecard there's the inherent delays of llreadNotecardLine, but if you have to make a call to an outside web service there's not just the internet transits but the issue of what to do if the request just doesn't get answered?
I realise the other inherent problem with notecards is that they too are capped at an upper limit, but as they don't need to get bytecoded or thrown across the wires by http requests they do seem the most convenient option to look at for better data access.
Does your journey have any stops? How about loading up on waypoints while at the stop then zip through them from memory then do it again at the next stop.
Link to comment
Share on other sites
16 minutes ago, Ardy Lay said:
Does your journey have any stops? How about loading up on waypoints while at the stop then zip through them from memory then do it again at the next stop.
My NPCs actually do that sort of thing, with coarse path planning, fine path planning, and path execution all in different scripts, running concurrently. They give the illusion of being real-time, but are really executing plans made a few seconds previous. If anything goes wrong, they stop, and stand, arms folded, while replanning takes place. All this is to be able to handle overloaded sims with 64K scripts. Huge headache to write and debug.
Link to comment
Share on other sites
14 minutes ago, Profaitchikenz Haiku said:
That would obviously work but I was thinking more along the lines of speeding up notecard reads, ie find a way of improving what's already implemented.
My thinking is that whilst scripts are server-side and possibly need re-compiling with each TP because it's going to a different server, notecards just need to transit as an asset with no additional work required during the handover.
i vote for this. Writeable notecards. As then we can have as much persistent read/write data storage as we can stuff notecards into object contents. Something like:
integer result = llWriteNotecardLine(nameofnotecard, data, linenumber); // which overwrites existing line or appends if linemumber is >= EOF
if (result = -1) llOwnerSay(nameofnotecard + " write fail. Probable cause: Out of memory.");
if (result >= 0) llOwnerSay("line number written to :" + (string)result;
... and probably as well
integer availablememory = llGetNotecardMemory(nameofnotecard);
if (availablememory > 256) result = llWriteNotecardLine(nameofnotecard, somedatalessthan256bytes, linenumber);
... and also
integer result = llDeleteNotecardLines(nameofnotecard, beginline, endline);
llOwnerSay("available memory is: " + (string)result);
Link to comment
Share on other sites
58 minutes ago, Quistess Alpha said:
Maybe this is a solved-issue (as many times as I've thought about it, never actually used KVP), but as long as we're dreaming, I'd like a better way to segment the KVP database for different projects. Something like domain+key+value, and an easy way to get only keys associated with a given domain, or even just something like
llSearchKeysValue(string query,integer first, integer count);
the way this is typically scripted is
string DOMAIN = "MyApp";
string q = DOMAIN + "key1";
q = DOMAIN + "key2";
.. and so on
... in other app:
string DOMAIN = "MyOtherApp";
string q = DOMAIN + "key1";
q = DOMAIN + "key2";
• Like 1
Link to comment
Share on other sites
13 minutes ago, Mollymews said:
the way this is typically scripted is
My point is, that all of the apps need to know before hand what 'key1' and 'key2' are. You can't easily create a key in script a and expect it to be discoverable in script b, unless script b iterates over every single key in the database, (which might be fine for a database only used for one application, but otherwise would be infeasible for large databases.)
If you're dynamically adding keys, you might like to ask questions like: How many keys are there in this domain? Which keys in the database are relevant to this application? You could of course have more keys which store the answers to these questions, but that could get a bit messy to keep accurate.
Edited by Quistess Alpha
Link to comment
Share on other sites
1 hour ago, Profaitchikenz Haiku said:
Are they faster just because they're a quarter the size of Mono? Or is there some other aspect to the way Mono scripts are loaded that is the reason?
There is a distinct difference in the way LSO and Mono scripts are handled.
LSO scripts will always take up 16KB memory.. even if there are no variables or events. They are always allocated the full 16KB of memory.
Mono scripts have dynamic memory. While the maximum memory capacity is 64KB, the script will only take up a fraction of that in most cases.
This is significant when a script needs to be transferred from one sim to the next. To do that, the current sim needs to figure out the current size of a script so that it can be stored for transfer (don't forget bytecode sharing, heap/stack memory). For LSO scripts, it's very easy since they're always guaranteed to be exactly 16KB. The destination sim suffers even more since they have to receive that information, allocate the space, and then initialize those scripts. Mono scripts are very slow to initialize and start up compared to LSO and it can be easily observed in your day-to-day interaction with scripts.
1 hour ago, Profaitchikenz Haiku said:
It seems to me that the need for larger scripts (and I accept there is a case for wanting more memory) is mostly for storing data, and in this case, perhaps there could be a more efficient way of storing data?
No, there are quite a few things that are memory intensive without doing any long-term storage. HTTP requests, large llSetLinkPrimitiveParamsFast calls, multiple raycasts (I'm sure @animats's NPCs would benefit from more memory), general data processing, etc.
Edited by Wulfie Reanimator
• Thanks 1
Link to comment
Share on other sites
3 minutes ago, Quistess Alpha said:
My point is, that all of the apps need to know before hand what 'key1' and 'key2' are. You can't easily create a key in script a and expect it to be discoverable in script b, unless script b iterates over every single key in the database, (which might be fine for a database only used for one application, but otherwise would be infeasible for large databases.)
we can test for the presence of a key with llReadKeyValue http://wiki.secondlife.com/wiki/LlReadKeyValue
if the key doesn't exist then it returns an error
Link to comment
Share on other sites
You are about to reply to a thread that has been inactive for 93 days.
Please take a moment to consider if this thread is worth bumping.
Please sign in to comment
You will be able to leave a comment after signing in
Sign In Now
Share
×
×
• Create New...
|
__label__pos
| 0.592819 |
, Volume 34, Issue 1, pp 27-38
Computerized Depression Screening and Awareness
Abstract
The DEPRESSION Awareness, Recognition andTreatment (D/ART) program under the sponsorship of theNational Institutes of Health has made consistentefforts to help educate many communities around thenation about depression. One important aspect of thiseffort includes offering free screening for depressionto the general public. Since new technology oftenpromotes curiosity and interest, a computerizeddepression screening and awareness program was created touse at fairs and other local events. Individuals whoparticipated completed a computerized version of theCenter for Epidemiological Studies Depressed Mood Scale (CES-D) and then received a one pageprintout that described the common symptoms ofdepression, a score indicative of their level ofdepressed mood, a brief explanation of the score, and atelephone number where additional information could beobtained. This paper details the construction of thecomputerized version of the CES-D including anevaluation of psychometric properties and consumersatisfaction with the program.
|
__label__pos
| 0.652139 |
Home Lifestyle Health Testosterone: How Is It Made And Why Is It Important?
Testosterone: How Is It Made And Why Is It Important?
5 min read
0
0
394
Testosterone
Our body works with various systems working together in a great complexity. One of those systems is the endocrine system which regulates the working of the various hormones in the body. There are various hormones which regulate many functions in the body. One of the most commonly known hormones is thyroxine which regulates the functioning of the thyroid gland. There is another hormone in the body known as testosterone which is commonly believed to be present only in men but that is a myth. It is present in both males and females and is one of the most important regulatory hormones in the body. It is extremely helpful for the reproduction, muscle strength and bone maintenance.
Testosterone
Source
About Testosterone
Testosterone produced by gonads Leydig cells in the testes in all the men and ovaries in women. The small quantities produced under adrenal glands as well androgen encourages the male characteristics development. The greater level of this hormone production in men and women initiates the male reproductive organs development both internal and external to the foetal growth and important for sperm production. The hormone indicates the body to create potential new blood cells, stay bone stronger. And make sure muscles stronger during and later puberty along with improves libido in men and women. This hormone is linked to pubic hair growth, prostate gland and testes, height, aggressive and sexual behaviour. It stimulates the luteinizing hormone secretion and stimulates follicle hormone.
Also, Read:- Are You Obsessed With Selfies? Here Is What Experts Have To Say
How Is Testosterone Made?
Testosterone
Source
Testosterone is produced in small amounts by the adrenal glands at the top of the kidneys and in the testicles in males and ovaries in females. While the hypothalamus finds the body needs additional testosterone production called as gonadotropin creates the way to the pituitary gland in the rear part of the brain. While the pituitary gland identifies the release of gonadotropin hormone. It begins production of two hormones luteinizing and follicle stimulating hormone. While the LH and FSH reach the testicles. They express to carry out two different things FSH starts sperm production when the LH encourages Leydig cells in the testicles to make more testosterone.
What Does Lack Of Testosterone Cause?
If the lack of testosterone secretion in the foetal growth, the foetus masculinization will stop working to occur normally as well deliver increase to lack of sex growth. If the lack of this hormone occurs in the puberty, the boy’s growth gets slow and lack of spurt seen. Moreover, the child may stop to create full sexual characteristics established with men in the puberty such as testes, penis growth, voice deepening and pubic hair growth. Lowered levels can cause disturbance of mood, muscle tone loss, body fat gain, lack of sexual performance and inadequate erections, memory loss, sleep difficulties, osteoporosis, etc.
Load More By Harman
Load More In Health
Leave a Reply
Your email address will not be published. Required fields are marked *
Check Also
Study: Hand Dryer Air To Be Full Of Potentially Harmful Bacteria
A study which was recently published in the American Society for Microbiology’s Applied an…
|
__label__pos
| 0.86868 |
One Reason Why RNA Viruses (Like Covid) Mutate So Quickly? A Little Organic Chemistry
Related articles
New Covid variants are once again emerging, suggesting that a late summer surge is imminent. Why does this keep happening? Some of it can be explained by a simple chemical reaction that would be taught in any standard organic chemistry course.
My colleague Dr. Henry Miller is (rightfully) concerned about evolving Covid variants, especially how they may affect elderly people and others who are vulnerable to severe disease. Furthermore, increasing numbers of people are suffering from severe symptoms of Long Covid, which is manifesting itself in surprising and disturbing ways. Dr. Miller recently quoted Dr. Michael Osterholm, the Director of the Center for Infectious Disease Research and Policy at the University of Minnesota. (Osterholm was one of the infectious disease experts seen on TV at the onset of the pandemic.)
“By week[s] three and four [since the onset of his long Covid symptoms], the fatigue really set in worse than during the illness itself. And I started having memory loss. If you'd asked me, What's a Champagne and orange juice drink? I couldn't have thought of the word mimosa.”
Dr. Michael Osterholm
Given that Covid is not done with us I thought that it would be instructive to examine some of its inner workings, in particular, why is SARS-CoV-2, the virus that causes Covid, so different? It's largely because of its ease of mutation, something most of us know by now. What most of us don't know is that some of this can be explained by some rather simple chemistry (more on this later).
Rapid mutation of RNA viruses
The primary reason for rapid RNA viral mutation is an error – the incorporation of the wrong nucleobase into newly forming RNA and also the absence of enzymes to repair the error, such as those found in DNA. This results in a mutated protein, something I wrote about in 2021 when discussing how such errors can cause single amino acid changes in Covid spike proteins and how this can result in new variants or subvariants of the virus.
But there's another reason (1) that RNA viruses are less stable and more prone to mutation than DNA viruses and anyone who has made it it through a first-year organic chemistry course (as if that's so easy) should understand at least some of this. This is because, at the molecular level, one difference between stable DNA and unstable RNA comes down to a single oxygen atom. Prepare yourselves for a lesson in organic chemistry. Which means waking up Steve and Irving for another episode of...
Steve (left) and Irving, your eternal hosts of The Dreaded Chemistry Lesson From Hell®, seem to think that disturbing their nap for this particular episode is a waste of their time. Like they have any place better to go?
Sorry about the chemistry. But read it anyhow.
What is going on is a fundamental, well-known chemical reaction called ester hydrolysis – the breaking of an ester bond by water, producing a carboxylic acid and an alcohol (Figure 1)
Figure 1. Hydrolysis of an ester consists of water adding to the ester bond and displacing an alcohol (R-OH), forming a carboxylic acid. The red boxes show the original water molecule. The blue hatch line shows the bond that is broken in the reaction.
Phosphate esters, the backbone of DNA and RNA, can also be hydrolyzed. The process (Figure 2) is conceptually identical to that of esters.
Figure 2. Hydrolysis of a phosphate ester. The red squares show the original water molecule. The blue hatch line shows the bond that is broken in the reaction.
What does any of this have to do with the instability of RNA and viral mutation? Time to roll out the chemistry. Sorry.
Anchimeric assistance (Neighboring group participation) Causes RNA Autocleavage
For reasons best left unstated, when a hydroxyl group is either five or six atoms away from an ester or phosphate these groups become much more reactive and break down more easily. This phenomenon is called anchimeric assistance (or neighboring group participation) and its effect is significant (Figure 3).
Figure 3. Comparative hydrolysis rates of two esters. (Top) The hydroxyl group at position 5 of the ester hydrolyzes (breaks down) quickly. (Bottom) When the hydroxy group is at position 4 of the ester the reaction is slower.
Figure 4 demonstrates the concept of neighboring group participation, but why does one carbon atom make a difference? It makes no sense. Except it does. This is one reason people hate organic chemistry. For just about every rule there is an exception. Here's one of them.
Figure 4. Methyl 4-hydroxybutyrate illustrated in two different conformationsThings might get ugly right about now. Good luck.
In Figure 4, the two figures are simply different representations of the same molecule. They are drawn in different configurations (for a reason), but it's still the same molecule.
Now you can see (Figure 5) why I've drawn the molecule this way. When the hydroxyl (OH) group is 5 atoms from the carbonyl group it is within perfect bonding distance to react with the carbonyl (C=O) group. By contrast, when the two groups are 4 atoms apart they are not at an optimal bonding distance and the hydroxyl group does not interact with the carbonyl group.
Figure 5. (Top) The reaction of 4-hydroxybutyric acid methyl ester to gamma-butyrolactone is promoted by the hydroxyl group 5 atoms away. First, the 4-hydroxybutyric acid methyl ester reacts with itself to form gamma-butyrolactone, which then reacts with water (green hatch line) to complete the hydrolysis reaction. This is an example of neighboring group participation. (Bottom) Removing one carbon from the top example makes a big difference. The hydroxyl and ester groups are four atoms apart - not an ideal binding distance. Consequently, heating the molecule does not result in the formation of the corresponding four-membered lactone.
What does this have to do with the instability of RNA?
A lot. Here's why. It's anachiometric assistance again, this time with phosphorous instead of carbon (Figure 6). This effect promotes the autocleavage of RNA, explaining its instability.
Figure 6. Autocleavage of RNA. (Left) As in Figure 5, RNA has a hydroxyl group that is 5 atoms (just the right bonding distance) from the phosphate group (blue arrow) that holds RNA (and DNA) together. This promotes a cyclization reaction, forming a transient cyclic phosphate (red box) in which the critical phosphate bond (red hatch line) has been broken. The resulting RNA fragment is in the green box. (Right) DNA has a hydrogen atom (red) circle in place of the hydroxyl group in RNA. Hydrogen does not participate in neighboring group participation and does not enhance this hydrolysis. This is why RNA is unstable relative to DNA.
So, it really just comes down to this:
DNA differs from RNA by one oxygen atom, a seemingly trivial change in a huge molecule. But that's all it takes to make RNA less stable than DNA, which, in part, explains the seemingly countless variants and subvariants that spontaneously pop up all over the world. Like life itself, it's all based on "simple" chemistry.
NOTE:
(1) ChatGPT provides a nice explanation of how RNA strand damage contributes to mutation. I didn't dare put this in the article for fear of hate mail. Or worse.
"The 2' hydroxyl group is directly involved in the RNA replication process, and its presence contributes to the higher mutation rates observed in RNA viruses compared to DNA viruses. Here's how the 2' hydroxyl group can influence mutation...The combination of the error-prone replication and the lack of proofreading results in a higher mutation rate in RNA viruses. Mutations can accumulate rapidly in the viral genome, leading to the generation of diverse viral populations."
|
__label__pos
| 0.97817 |
What Is Functional Training and How Can It Benefit You?
What Is Functional Training and How Can It Benefit You?
Functional training ranks among the buzziest of fitness buzzwords. But what the heck do trainers mean when they call training functional? Isn’t all training performing some sort of function?
Yes, but when it comes to improving your fitness, functional training is more nuanced. “Ideally, functional training conditions you to perform the actions of daily life [more effectively and efficiently],” says Jim DiGregorio, an exercise physiologist based in Norwood, N.J.
For a more detailed explanation of functional training, read on.
For more fitness tips and tricks sign up for Openfit for free today!
Why is Functional Training Important?
Functional training matters because it helps you develop “real world” fitness. As you go through your day, your movements follow patterns. How you reach for an item on a high shelf, the way you squat to pick up a heavy object, how you get out of a chair — if you’re moving, you’re performing a function that involves a pattern of pushing, pulling, lifting, squatting, etc.
Functional training not only improves specific movement for a sport — for example, better side-to-side mobility for tennis or more efficient strides for running — but also streamlines how you move in general. As DiGregorio points out, your everyday actions improve.
Functional training helps you build strength, power, and mobility that translates beyond the gym.
Why does that matter? Because when you’re more efficient, you put less strain on your muscles, tendons, ligaments, and joints.
You distribute the work throughout your body instead of relying on one muscle group — significantly reducing the risk of overuse injuries and chronic tightness and strain. And that mobility translates at the gym as well, because your movement improves as you’re working out.
Functional training has origins in physical therapy
Functional training emerged after World War I, when soldiers returned home with injuries that affected basic daily functions such as walking, bending, sitting, and standing. Their physical therapy emphasized core strength and mobility (among other things), which are essential for virtually all movement.
Over the years, bodybuilding, powerlifting, and other disciplines have drawn the focus away from improving real-life movement to serving specific fitness objectives, such as creating defined, muscular physiques.
Modern fitness ideology has refocused on function, with an emphasis on compound (multi-joint) movements instead of isolation (single muscle group) exercises. By doing that, the fitness equipment arsenal expanded to include things like slosh pipes, battle ropes, sandbags, kettlebells, and suspension trainers, along with more traditional tools like medicine balls, barbells, and dumbbells.
Functional training focuses on movements, not muscles
From a functional perspective, most gym routines have two problems.
1. They train individual muscle groups (biceps, pecs, quads, hamstrings, etc.) instead of movement patterns (e.g., pushing, pulling, lifting, stepping, walking, crawling, jumping, squatting).
2. They typically occur only in the sagittal plane of motion. That involves forward and backward movements, encompassing most classic exercises like the squat, biceps curl, and even running.
Here’s the thing: Human movement doesn’t usually recruit one muscle group at a time, and it certainly isn’t limited to one plane of motion.
Movement occurs in three planes of motion:
1. sagittal (front and back)
2. frontal (side-to-side)
3. transverse (rotational)
But there’s more to functional training than simply incorporating compound movements and “non-sagittal” exercises like the lateral lunge and dumbbell reverse chop into your routine.
An effective functional training program:
• favors free weights over machines.
• works muscles through their full range of motion (no “half rep” curls or presses).
• incorporates plenty of instability work (to recruit more muscles and fire up your core to re-stabilize your body).
Functional training emphasizes unilateral movement
Unilateral, or single-limb, training is a cornerstone of functional training. If you’ve ever done the Bulgarian split squat, single-arm bent over row, or alternating shoulder press, you’ve done unilateral training. (By contrast, bilateral training trains two limbs simultaneously — think biceps curlbench press, or back squat.)
Unilateral training helps overcome muscle imbalances, and it also adds instability to cultivate balance that translates in the real world. (Think: staying upright on an icy sidewalk versus doing a pistol squat on a wobble board.)
The Weekly Warm-Up
Get at home workout guides, easy recipes, and more in your inbox every week!
By signing up, you agree to receive marketing emails from Openfit. For more details, please review our privacy policy here.
What’s The Difference Between Functional Training and CrossFit?
Doing either CrossFit or functional training will likely help you get better at the other, since both focus on strength, stability, and movement in a way designed to help you function better. But although there’s plenty of overlap between CrossFit and functional training, they aren’t synonymous.
Both functional fitness training and CrossFit can deliver significant benefits, and using them together may boost your progress.
Body Weight vs. Equipment
• Functional training tends to use body weight for resistance. (Though if you want to increase the intensity, you can add some weights.)
• CrossFit usually focuses on equipment, integrating gym staples like barbells, rowing machines, pull-up bars, and more.
Pace and Setup
• CrossFit follows a high-intensity model, so you follow a circuit within a certain (short) timeframe.
• With functional training, you’re focused more on awareness of your movement patterns and perfecting your form, which tends to follow a more relaxed pace. You can incorporate a circuit if you like, or work on one or two movements in a session.
7 Functional Training Exercises You Should Try
These seven functional training exercises will help you sculpt head-turning muscle, but more importantly, they’ll help you become stronger and more powerful in movement patterns outside the gym.
1. Dumbbell reverse chop
functional training- dumbbell reverse chop
Why we like it:
Operating in the transverse (rotational) plane of motion, this total-body move targets the core, shoulders, and quads.
How to do it:
1. Stand with your feet shoulder-width apart, holding a dumbbell in both hands in front of you at arm’s length.
2. Keeping your back flat and core braced, bend your knees and rotate left, lowering the dumbbell to the outside of your left knee. That’s the starting position.
3. In one explosive movement, stand and rotate to the right, pivoting your left foot as you lift the weight above your right shoulder.
4. Reverse the movement to return to the starting position.
5. Do equal reps on both sides.
2. Push-up
functional training- push up
Why we like it:
It’s tough to beat the push-up when it comes to building functional upper body strength.
How to do it:
1. Get on all fours with your feet together, your body straight from head to heels, and your hands in line with (but slightly wider than) your shoulders.
2. Engage your glutes and brace your core to lock your body into position.
3. Keeping your elbows tucked and head down, lower your torso until your chest is within a few inches of the floor.
4. Pause, then push yourself back up to the starting position as quickly as possible.
3. Dumbbell squat
functional training- dumbbell squat
Why we like it:
It’s the king of lower-body exercises. Few other moves engage more muscles below the waist — if you squat with perfect form.
How to do it:
1. Stand with your feet hip- to shoulder-width apart, holding a pair of dumbbells by your sides.
2. Keeping your back flat and core braced, push your hips back, bend your knees, and lower your body until your thighs are parallel to the floor.
3. Pause, then push yourself back up to the starting position.
4. Step-up
functional training- dumbbell step up
Why we like it:
This unilateral exercise helps iron out muscle imbalances while introducing an element of instability that boosts muscle engagement throughout the body. You might feel it most in your quads, but your glutes, calves, and core are also working.
How to do it:
1. Stand tall, holding a pair of dumbbells at arm’s length by your sides, and place your left foot on a bench so that your hip, knee, and ankle are all bent 90 degrees.
2. Keeping your chest up and shoulders back, push your body up with your left leg until it’s straight (keep your right foot elevated).
3. Pause, and then lower your body back to the starting position with control.
4. Perform equal reps on both legs.
5. Bear crawl
Why we like it:
This dynamic core exercise has a hidden benefit: Synchronizing the actions of opposite limbs can profoundly affect neuromuscular communication, balance, coordination, and mobility. In short, it can help you move more powerfully and efficiently in everything you do.
How to do it:
1. Get down on all fours with your arms straight, hands below your shoulders, and your knees bent 90 degrees below your hips. (Only your hands and toes should touch the ground.)
2. Keeping your back flat, crawl forward and backward, moving opposite hands and feet in unison (right hand and left foot, left hand and right foot).
3. Continue moving forward with opposite hands and feet in unison for the desired number of steps, then reverse the movement to work your way back.
6. Bulgarian split squat
functional training- bulgarian split squat
Why we like it:
Another powerful unilateral exercise, the Bulgarian split targets the quads and glutes, while building strength and stability from head to toe.
How to do it:
1. Stand facing away from a bench, holding a pair of dumbbells at arm’s length by your sides.
2. Place the toes of your left foot on the bench behind you.
3. Keeping your torso upright and core braced, lower your body until your right thigh is parallel to the ground (don’t let your left knee touch it).
4. Pause, and then push back up to the starting position.
5. Perform equal reps on both legs.
7. Single-leg foot-elevated hip raise
functional training- single leg elevated hip raise bench
Why we like it:
By engaging your glutes and opening up your hips, this exercise can help counteract the consequences of sitting (read: weak glutes and tight hips — and the back pain that often accompanies them).
How to do it:
1. Lie face-up on the floor with your arms by your sides, your right foot on a bench (or other stable object), and your left foot elevated.
2. Squeeze your glutes and push through your right foot, raising your hips until your body forms a straight line from your right knee to your shoulders.
3. Pause, then return to the starting position.
4. Perform equal reps on both legs.
About
Try Openfit for FREE Today!
Get Started
shares
|
__label__pos
| 0.881251 |
daily pastebin goal
7%
SHARE
TWEET
Untitled
a guest Feb 14th, 2018 57 Never
Not a member of Pastebin yet? Sign Up, it unlocks many cool features!
1. var fs = require('fs')
2. var path = require('path')
3.
4. module.exports = {
5. run: function (clients,message,args,logger){
6. logger.info('!help summoned')
7. var thecommands = ""
8.
9. if(args.length==0){
10. thecommands = "Command list:\n"
11. fs.readdir('./commands',function(err,files){
12. files.forEach(file => {
13. if(file.endsWith('.js')&&(file !== 'help.js')){
14. console.log(file.slice(0,file.indexOf('.')))
15. thecommands = [thecommands,file.slice(0,path.basename(file))].join('\n')
16. }
17. })
18. })
19. }
20.
21. else{
22. for(var i=0;i<args.length;i++){
23. try{
24. var commandname = args[i]
25. thecommands += require(`./${commandname}.js`).help + '\n'
26. }catch(err){
27. logger.info(err)
28. }
29. }
30. }
31.
32. message.reply(thecommands)
33. }
34. }
RAW Paste Data
We use cookies for various purposes including analytics. By continuing to use Pastebin, you agree to our use of cookies as described in the Cookies Policy. OK, I Understand
Top
|
__label__pos
| 0.998534 |
Uncategorized
E-Cigarettes – Most Common And Safest Smoking Threats
May 1, 2021 by wrigh588
vaping health risks
E-Cigarettes – Most Common And Safest Smoking Threats
Vaporizing: HEALTH THREATS, Popularity and Socio-economic Impact Vaping or e cigarettes have become extremely popular over modern times. It comes in a variety of sizes and shapes, incorporating trendy flavors and customizable add-ons. Considered a slick tobacco alternative, they are appealing to both adults and old alike. But recent reports reveal a steadily increasing lung disease associated with vaporizing. Here is a short list of the most frequent vaporizing health risks.
Many teens and high school students don’t realize the adverse affects that utilizing the cigarettes can have on the lungs. With little or no risk to your health, vaporizing cigarette butts can drastically cut down how much toxins inhaled by a smoker. A study published in May 2021 reports that among students, there exists a significant link between the quantity of cigarette butts smoked and the increased risk of cardiovascular disease. As smoking becomes popular amongst teenagers and young adults alike, the chance of vaporizing cigarette butts will continue steadily to increase in tandem.
Among the newest issues to face the world as a result of our reliance on electronic cigarettes is heart disease. In May of 2021, researchers published a study which found strong evidence that longterm unwanted effects of tapering cigarettes are dangerous to your health. The report directly cites two main findings which directly contradict each other. According to the research, long term use of the cigarettes escalates the risk of heart disease. However, the second finding demonstrates although long term unwanted effects of e cigarette smoking may result in a higher risk of cardiovascular disease, it is unlikely to cause death.
Many people may be surprised to find that vaporizing your personal cigarettes can result in serious lung damage. Because some vapers do not yet recognize that the temperature they use to heat up their device can severely damage the cells and tissues within the outer layers of the lungs. Damage due to prolonged smoking can also lead to weakening of the esophageal sphincter, that is responsible for maintaining the strength of the walls of your lungs.
Long term usage of any vaporizing device can also negatively affect the mind. Children who smoke while using the cigarettes have been proven to have slower reflexes and so are more likely to experience short attention spans. Longterm exposure can even bring about the increased loss of mental faculties. This is one of the biggest e-cigarette smoking dangers and one of why it has become so hard for parents to avoid teens from getting involved in vapinger.com this harmful activity.
The final vaporizing health risks we are going to discuss are the effects of long term use of these devices. Nicotine in the liquid used in most vaporizers can stay static in your system for up to six hours once you finish smoking. Therefore you could be exposing your system to cigarette toxins for much longer than you realize. E cigarettes, because of their porous and thin nature, can absorb a substantial amount of tar and toxic gases into the body. Tar deposits can build-up on the lungs and in severe cases cause cancer. Nicotine is highly addictive and will be an incredibly hard habit to break.
One of the most common complications of long-term nicotine use is lung injury. The liquid may enter the lungs through the mouth and nose and get in to the airways, where it continues to irritate the liner of the lungs. The more frequent and longer the smoker uses these devices, the more chance that he / she will establish chronic lung injuries. Chronic nicotine use is among the biggest of using tobacco dangers and the type of lung injury that can develop is very severe.
Chronic bronchitis is a very serious condition that can develop if a smoker will not quit. Chronic bronchitis is due to the constant irritation of the liner of the lungs. If it goes untreated, then it could turn into pneumonia, and this can be deadly. Nicotine has also been associated with cases of skin rash and oral cancer. E using tobacco is probably the biggest verified deaths because of the hazards of the product and the risk that it poses to your wellbeing.
|
__label__pos
| 0.64504 |
This documentation is archived and is not being maintained.
HttpRequest.ApplicationPath Property
Gets the ASP.NET application's virtual application root path on the server.
Namespace: System.Web
Assembly: System.Web (in System.Web.dll)
public string ApplicationPath { get; }
Property Value
Type: System.String
The virtual path of the current application.
Use this property to construct a URL relative to the application root from a page or Web user control that is not in the root directory. This allows pages and shared controls that exist at different levels of a directory structure to use the same code to link to resources at fixed locations in the application.
The following example uses the Write method to HTML-encode and then write the value of the ApplicationPath property to a text file. This code example is part of a larger example provided for the HttpRequest class. It assumes the existence of a StreamWriter object named sw.
// Write request information to the file with HTML encoding.
sw.WriteLine(Server.HtmlEncode(DateTime.Now.ToString()));
sw.WriteLine(Server.HtmlEncode(Request.CurrentExecutionFilePath));
sw.WriteLine(Server.HtmlEncode(Request.ApplicationPath));
sw.WriteLine(Server.HtmlEncode(Request.FilePath));
sw.WriteLine(Server.HtmlEncode(Request.Path));
The following example uses the ApplicationPath property to programmatically construct a path to a resource that is in a fixed location in the application. The page that references the resource does not have to be located in the same directory as the resource.
<%@ Page Language="C#" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<script runat="server">
protected void Page_Load(object sender, EventArgs e)
{
Label1.Text = Request.ApplicationPath;
Image1.ImageUrl = Request.ApplicationPath + "/images/Image1.gif";
Label2.Text = Image1.ImageUrl;
}
</script>
<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
<title>HttpRequest.ApplicationPath Example</title>
</head>
<body>
<form id="form1" runat="server">
<div>
ApplicationPath:<br />
<asp:Label ID="Label1" runat="server" ForeColor="Brown" /><br />
<asp:Image ID="Image1" runat="server" /><br />
ImageUrl:<br />
<asp:Label ID="Label2" runat="server" ForeColor="Brown" />
<br />
</div>
</form>
</body>
</html>
If you run this example in a Web application that is named WebSite1, /WebSite1 will be displayed as the value of the ApplicationPath property and /WebSite1/images/Image1.gif will be displayed as the complete path of the image.
.NET Framework
Supported in: 4, 3.5, 3.0, 2.0, 1.1, 1.0
Windows 7, Windows Vista SP1 or later, Windows XP SP3, Windows XP SP2 x64 Edition, Windows Server 2008 (Server Core not supported), Windows Server 2008 R2 (Server Core supported with SP1 or later), Windows Server 2003 SP2
The .NET Framework does not support all versions of every platform. For a list of the supported versions, see .NET Framework System Requirements.
Show:
|
__label__pos
| 0.57299 |
src/Pure/Makefile
author wenzelm
Fri Feb 28 16:38:55 1997 +0100 (1997-02-28)
changeset 2692 484ec6ca0c50
parent 2235 866dbb04816c
child 2960 a6b56d03ed0d
permissions -rw-r--r--
added Syntax/token_trans.ML;
1 # $Id$
2 #########################################################################
3 # #
4 # Makefile for Isabelle (Pure) #
5 # #
6 #########################################################################
7
8 #The pure part is common to all systems.
9 #Object-logics (like FOL) are loaded on top of it.
10
11 #To make the system, cd to this directory and type
12 # make -f Makefile
13
14 #Environment variable ML_DBASE specifies the initial Poly/ML database, from
15 # the Poly/ML distribution directory.
16 #WARNING: Poly/ML parent databases should not be moved!
17
18 #Environment variable ISABELLECOMP specifies the compiler.
19 #Environment variable ISABELLEBIN specifies the destination directory.
20 #For Poly/ML, ISABELLEBIN must begin with a /
21
22 BIN = $(ISABELLEBIN)
23 COMP = $(ISABELLECOMP)
24 FILES = POLY.ML NJ.ML NJ093.ML NJ1xx.ML ROOT.ML basis.ML library.ML\
25 term.ML symtab.ML type.ML sign.ML\
26 sequence.ML envir.ML pattern.ML unify.ML logic.ML theory.ML thm.ML\
27 net.ML display.ML deriv.ML drule.ML tctical.ML search.ML tactic.ML\
28 goals.ML axclass.ML install_pp.ML\
29 NJ093.ML NJ1xx.ML ../Provers/simplifier.ML
30
31 SYNTAX_FILES = Syntax/ROOT.ML Syntax/ast.ML Syntax/lexicon.ML\
32 Syntax/parser.ML Syntax/type_ext.ML Syntax/syn_trans.ML\
33 Syntax/pretty.ML Syntax/printer.ML Syntax/syntax.ML\
34 Syntax/syn_ext.ML Syntax/mixfix.ML Syntax/symbol_font.ML\
35 Syntax/token_trans.ML
36
37 THY_FILES = Thy/ROOT.ML Thy/thy_scan.ML Thy/thy_parse.ML\
38 Thy/thy_syn.ML Thy/thy_read.ML Thy/thm_database.ML
39
40 #Uses cp rather than make_database because Poly/ML allows only 3 levels
41 $(BIN)/Pure: $(FILES) $(SYNTAX_FILES) $(THY_FILES) $(ML_DBASE)
42 @case `basename "$(COMP)"` in \
43 poly*) echo database=$${ML_DBASE:?'No Poly/ML database specified'};\
44 cp $(ML_DBASE) $(BIN)/Pure; chmod u+w $(BIN)/Pure;\
45 echo 'PolyML.use"POLY";use"ROOT" handle _=> exit 1;' \
46 | $(COMP) $(BIN)/Pure;\
47 discgarb -c $(BIN)/Pure;;\
48 sml*) if [ ! '(' -d $${ISABELLEBIN:?} -a -w $${ISABELLEBIN:?} ')' ];\
49 then echo Bad value for ISABELLEBIN: \
50 $(BIN) is not a writable directory; \
51 exit 1; \
52 fi;\
53 echo 'use"NJ.ML"; use"ROOT.ML" handle _=> exit 1; xML"$(BIN)/Pure" banner;' | $(COMP);;\
54 *) echo Bad value for ISABELLECOMP: $(COMP); \
55 echo " " \"`basename "$(COMP)"`\" is not poly or sml;;\
56 esac
57
58
59 test: $(BIN)/Pure
60
61 .PRECIOUS: $(BIN)/Pure
|
__label__pos
| 0.743795 |
DrivingItalia simulatori di guida Jump to content
VELOCIPEDE
rFactor 2 FAQ, problemi noti e loro soluzione
Recommended Posts
VELOCIPEDE
dal sito ufficiale:
Pre-Purchase questions
Is there a free demo of rFactor2?
Why won’t the demo run, why am I being asked to register or login for a free demo?
What are the minimum system requirements for rFactor2?
What is the difference between the “Standard” and “Lifetime” options when purchasing?
Can I change from Standard to Lifetime after purchase?
What is the activation policy?
What are the “online services” I have to renew for if I purchase the Standard version of rFactor2?
What is the refund policy for rFactor2?
Can I install rFactor2 on another machine?
Can I share my ‘account’?
Is modding allowed?
Problems launching rFactor2
All options in the launcher are greyed out and I cannot click them.
Just the multiplayer option is greyed out on the launcher.
The simulation does not start at all.
Problems running rFactor2
My simulation seems to be running in slow motion.
My controller (most likely Logitech G25 or G27) is rattling and very noisy when I am driving.
Activation questions
All options in the launcher are greyed out and I cannot click them.
How do I reactivate? (on a new system, second machine, etc)
Can I use rFactor2 on a second machine?
How can I ‘save’ my activation data so I do not have to reactivate?
Usage information
How do I install the simulation?
How do I update the simulation?
How do I install content?
How do I update content?
Options information
What does the ‘sync’ option do in the graphics settings?
Email
How can I contact ISI for support?
I have not received any email from you after purchase!
Rights and permissions
Is modding allowed?
Can I use footage of your simulations?
Can I use ISI content in a project?
Can I use rFactor2 on a second machine?
Multiplayer
What is lag/warp, and how can I fix it?
Edited by VELOCIPEDE
Share this post
Link to post
Uff
Primi suggerimenti per provare a diminuire/risolvere alcuni problemi noti.
- force feedback esagerato e rumoroso
If you are experiencing a strong noise/rattle with your controller...
...such as the G25/G27 you can try increasing this value in your Userdata/player controller.ini file (if you saved your own profile you would have to change it there as well):
Steering torque filter="0"
The range is from 0 to 32.
This will filter the FFB effect over a series of frames, which will smooth out the feedback and reduce the rattle. Please note that as you increase this value the FFB may start to feel a bit "spongy" or "numb" and that it will increase latency.
The reason you don't feel/hear this rattle in many other sims is because they by default provide some filtering. By default we do not.
The value you set this to will be down to personal preference. If it feels good to you, and the rattle/noise is acceptable, then go for it. We did some testing locally and came up with some different values.
One of our internal testers felt 4 was good.
Luc settled on a value of 8. He felt any higher felt to numb for him.
I personally like 16. I could still feel the car, catch it when it got loose, it didn't affect my lap times at all, and I didn't perceive any latency, neither visually nor in feel. Yes, it did feel more on the numb side, but after a few laps I found I liked it actually.
So, give this a try and see if you can find something acceptable...
Edit: Please note this is not a permanent solution, but hopefully will get you to something acceptable for the time being...
- stuttering grafico
Open your player.PLR file and find this line:
Flush Previous Frame="0" // Make sure command queue from previous frame is finished (may help prevent stuttering)
Change the 0 to a 1
There is also this line:
Record To Memory="0" // record replays to memory rather than disk (may possibly reduce stuttering, but at your own risk because memory usage will be significant for long races)
Share this post
Link to post
giuliano
Ciao ,scusate ho provato ad aprire un dedicated server ma non riesco a far entrare nessuno, tutti quelli che ci provano compare la scritta server non disponibile,(provato anche con il local server abilitato) avete qualche suggerimento? Come in rfactor ho aperto le porte del gioco nel mio router che, se non ho letto male, sono per rfactor2 44297 e 54297.
Grazie
Share this post
Link to post
el_filo
Ciao ,scusate ho provato ad aprire un dedicated server ma non riesco a far entrare nessuno, tutti quelli che ci provano compare la scritta server non disponibile,(provato anche con il local server abilitato) avete qualche suggerimento? Come in rfactor ho aperto le porte del gioco nel mio router che, se non ho letto male, sono per rfactor2 44297 e 54297.
Grazie
Pure io ho lo stesso problema :doh:
Share this post
Link to post
Marco Erc
Scrivo qui sperando sia giusto, qualcuno sa perchè non vedo i vari monitor d'informazione in Rf2? Quadri dei tempi, posizione, mappa, menù box....insomma vedo i quadrati ma son vuoti!! Che devo fare?? Aiuto...
Share this post
Link to post
Manu15_Matto
Qualcuno mi può aiutare? Entro bella lobby di rfactor e ad ogni server non posso entrare! Il pulsante e oscurato! Anche se ho la mod! E in quelle dove viene scritto mod ci clicca sopra e non mi scarica mai la mod! Mi potete aiutare per piacere!
Share this post
Link to post
Tr51
@ Marco Erc
@ Manu15_Matto
Molti piccoli problemi che avevo li ho risolti così : disinstallazione , riavvio PC , installazione. (io poi faccio un defrag che non c'entra con i problemi , ma fa ordine & compatta)
ps: salvate i file : controller.ini & PLR se avete impostato dati particolari
...sembra assurdo ma ora va benissimo.
Share this post
Link to post
Join the conversation
You can post now and register later. If you have an account, sign in now to post with your account.
Guest
Reply to this topic...
× Pasted as rich text. Paste as plain text instead
Only 75 emoji are allowed.
× Your link has been automatically embedded. Display as a link instead
× Your previous content has been restored. Clear editor
× You cannot paste images directly. Upload or insert images from URL.
×
×
• Create New...
Important Information
By using this site, you agree to our Terms of Use.
|
__label__pos
| 0.82565 |
Дали протеинови шейкове помогне за изграждане на мускули?
0
191
Does Protein Shakes Help Build Muscles
With the advancement in technologies, it has become quite easy to build up muscles and lose weight. въпреки това, if you are a beginner in this field then your initial workout sessions may not be simple. It requires a lot of strength, potential, patience, and hard work to get a fit body. In such cases, protein shakes work as an aid for the practitioner by easing the muscle build up and muscle recovery after a workout. As per the sources, повече от 70% of the athletes consider protein shake as the vital supplement for human’s body.
Now if your question is “does protein shakes help build muscles?”Although there are no clinical reports and evidence proving the fact that protein powder helps in muscles development, the several studies conducted for the bodybuilders stated that it helped them to feel full and stay fit. В общи линии, the doctors recommend the normal man to consume at least 55 grams of protein a day whereas 45 grams of protein is required by a female body. apart from building up muscles and losing weight, protein is termed to be an essential food for a healthy diet as it provides you with adequate energy to perform your day to day activities, it helps is tissues growth and repair.
Дали протеинови шейкове помогне за изграждане на мускули
въпреки това, the quantity of protein requirement for an athlete is a lot more than the normal people. This is the major reason why athletes often prefer having protein shakes rather than eating eggs, риба, месо, and nuts as its tiresome to prepare the protein-rich meals every time. Here I am presenting the facts regarding how the protein shakes are linked with muscles build up?
Does protein shakes help build muscles?
According to the famous surveys, Muscles growth of a person automatically starts diminishing at an age of 20-23 години. Many people believe that consuming protein can help them strengthen their muscles however, this perception does not hold water as easting various protein-related supplements may not prove to be effective enough in the same.
If you really desire to build muscles and recover tissue then a regular exercise and workout with physical activities is a must. въпреки това, workout often leads to the loss of protein from the body and hence, it is recommended to have protein powder to ease your sessions. Всичко на всичко, protein shakes plays a vital role in muscles build up but the same should be preferred along with strength training and exercises. освен това, it is advisable to take the protein shake after 30 minutes of your activity.
There are mainly two types of protein powders i.e., dairy-based protein powder and non-milk protein powder. Although both of the powders are sufficient in muscles growth, the digestion and absorption of the non-milk protein are comparatively complicated than that of dairy-based protein powder. Следователно, it is recommended to opt for the later.
Some of the major dairy-based proteins are as follows:
• Whey protein – one of the most famous and highly preferred protein powders is whey protein. Although whey contains a high quantity of lactose (difficult to digest), the same is lost during the processing and preparation of protein powder and hence, make the digestion easier. This protein powder is enriched with Branched-chain amino acids that are mainly known for their muscles build up features. The studies have proven that whey protein help in building up the muscle mass assist in muscles recovery after long workout sessions and enhance muscles. Besides that, it is also found to be effective enough in losing appetite and reducing the calorie intake per day to a great extent.
• Casein protein – casein is another dairy-based protein powder that is extracted from milk. въпреки това, the absorption of casein protein is comparatively slower than that in whey protein. This is because casein comprises a gel that slows down the digestion process when interacts with stomach acids. Due to the delayed digestion, it diminishes the scale of muscles protein breakdown. освен това, the intake of two to three glasses of casein protein shake, you may feel full for a long period of time
• Egg protein – the egg is considered to be the major source of protein that especially features the loss of appetite and controlling hunger hormones. въпреки това, this protein powder is extracted and prepares out of the eggs white due to which the feeling of fullness may decrease to some extent (as the egg yolk is taken out while the processing of egg protein powder). This protein powder consists of a total of 9 kinds of amino acids that your body is not capable to make itself. Egg protein is termed to be the second most protein powder that contains the highest quantity of branched-chain amino acids after whey protein. Although egg protein is not a dairy based product, the same is considered to be the best alternative for the people having allergies with the above-mentioned protein powders.
заключение
This way protein shakes make muscle build up easy. въпреки това, make sure to consume it only after practicing the workout. Освен това, whether you are a beginner or a professional athlete, it is recommended to avoid taking more than 2-3 glasses of a protein shakes a day. This is because the digestion of plant and dairy-based protein powder is a bit complicated and is a time-consuming process. въпреки това, if you desire to get the fit body with powerful muscles then what is better than opting for these shakes?
ОСТАВЕТЕ КОМЕНТАР
Моля, въведете вашия коментар!
Моля въведете вашето име тук
|
__label__pos
| 0.773665 |
Pin Me
The Role of the Hippocampus in Generalized Anxiety Disorder
written by: Nicholas Kuvaas • edited by: jen2008 • updated: 10/14/2010
The brain is a complex organ which affects so many things including psychological disorders. This article examines the specific relationship between generalized anxiety disorder and the hippocampus.
• slide 1 of 4
A Brief Summary of Generalized Anxiety Disorder
Everyone experiences anxiety at one time in their life. Anxiety can manifest itself as fear, agitation, or worry, and some events can be very anxious. However, if anxiety begins to affect your daily life on a regular basis, it becomes problematic1. Generalized anxiety disorder is excessive and constant worrying about, well, everything that also affects daily life and activities. Life in general such as relationships, work, and day-to-day activities may be the source of excessive and constant worrying, but this is different from specific anxiety disorders in that many things can trigger an anxious response.
Other symptoms related to generalized anxiety disorder appear in two forms, physical symptoms and cognitive symptoms. Physical symptoms may include fatigue, muscle tension, trembling or twitchiness, sweating or nausea, and shortness of breath or rapid heartbeat. The cognitive symptoms are restlessness or feeling on edge, difficulty concentrating, and trouble sleeping due to anxiety. As possible causes of generalized anxiety disorder start to be studied, the brain became the center focus. Overtime, certain areas of the brain have been associated with generalized anxiety disorder, and one of these areas is the hippocampus.
• slide 2 of 4
What Does the Hippocampus Do?
In the brain, attached to the temporal lobe is the hippocampus. Specifically, it lies under the medial temporal lobe and exists on both sides of the brain. If you know where your temple is, the temporal lobe is located directly beneath it. The hippocampus plays an important role in memory, specifically episodic memory (events) and facts2. Its role is so important that damage to it can lead to anterograde amnesia (loss of ability to form new memories while remembering memories that happened before the damage), and it is also easily damaged compared to the rest of the brain making it vulnerable. New research has found that the hippocampus is also related to emotions3, and this is where generalized anxiety disorder and the hippocampus become related.
• slide 3 of 4
The Hippocampus and Generalized Anxiety Disorder
There is a relationship between the hippocampus and generalized anxiety disorder, but it is not only the hippocampus that is involved in this process3. There is also the amygdala which governs emotions and is responsible for fear responses in the brain and directly interacts with the hippocampus. Together, these two brain structures connect an emotion to an event and this leads to a release of stress hormones which increase arousal, a factor related to anxiety. Over time, the memory alone of the event can bring on this reaction leading to a period of anxiety. People who suffer from generalized anxiety disorder may have dysfunctional pathways which release too many or too few neurotransmitters leading to an abnormal anxiety response.
Again, the specifics are still unclear. However, this is believed to be how the hippocampus is related to generalized anxiety disorder, and it is also believed to related to other anxiety disorders such as panic disorder, post traumatic stress disorder, and specific phobias. The brain is a complex mechanism and few things act alone, but, with time, science will determine the mechanisms related to disorders, and this will lead to cures and better solutions.
privacy policy
|
__label__pos
| 0.836923 |
The principal component of car transmission is the gearbox. This efficient feat of engineering innovation propels the gears into motion when the gear selector is activated, providing the driver the facility to change up and down the gears, controlling the speed effectively depending on the nature of the road.
A vehicle is propelled forward when the gearbox converts the speed of the engine into torque. Maximum torque and speed is achieved the gears are separated into separate categories with a corresponding gear attached to the category.
In more cases than not, gearboxes are designed to increase the torque whilst reducing the driveshaft speed of the engine in the car. This is achieved as the driveshaft in the gearbox rotates at a much slower speed than the driveshaft of the engine. This reduction in speed will convert energy into speed thrust, causing the gearbox driveshaft to produce more power, and ultimately more thrust.
The gearbox designs of manual transmission are simple, and require the manual movement of a sliding gear. The gear shifter and the lever are connected, allowing the slider gear to move. When the clutch is activated the sliding gear will disengage from the existing position and slide along the gearbox to re-engage with the rest of the gears, allowing you to select a higher or lower gear.
Contemporary manual gearboxes feature a diagonal gear design, resting alongside the principal gears of the vehicle. This co-ordination allows the sliding gear to effortlessly engage the remainder of the gears, preventing different gears from coming into contact with each other and damaging the transmission.
Automatic gearboxes are ever so slightly different. Unlike manual gearboxes, they select the appropriate gear automatically. The driver doesn’t need to shift the gears as they engine performs this on its own. Automatic gearboxes feature hydraulic systems, detecting the pressure of the fluids in the engine, and choosing the appropriate gear. Automatic gearboxes use torque as opposed to a clutch when selecting gears.
The eternal combustion engine is a wonderful example of engineering ingenuity. The gearbox is one of the principal, and more complicated, examples of innovation and what can be achieved with some considered thought and application.
|
__label__pos
| 0.84844 |
How to Add a Word to the Dictionary in Python
Python is a programming language that is widely used in data science, machine learning, and artificial intelligence. It is a versatile language that allows developers to create powerful and efficient applications. One of the most useful features of Python is its ability to manipulate text. In this article, we are going to discuss how to add a word to the dictionary in Python. The dictionary is a built-in data type in Python, and it is essentially a collection of key-value pairs. It is used to store data in a way that is easy to search and retrieve.
Table of Contents
What is a Dictionary in Python?
Before we dive into how to add a word to the dictionary in Python, it is important to understand what a dictionary is. As mentioned earlier, a dictionary is a collection of key-value pairs. The key is like an index in a list, and the value is the data that is associated with that key. In Python, dictionaries are denoted by curly braces {}. Each key-value pair is separated by a colon, and each pair is separated by a comma. Here is an example of a dictionary:
my_dict = {"apple": 1, "orange": 2, "banana": 3}
In this example, "apple", "orange", and "banana" are the keys, and 1, 2, and 3 are the values. We can access the values in the dictionary by using the keys. For example, if we want to access the value associated with the key "apple", we can do so like this:
print(my_dict["apple"])
This will output the value 1 to the console.
Adding a Word to the Dictionary in Python
Now that we understand what a dictionary is in Python, let’s move on to how to add a word to the dictionary. To add a key-value pair to a dictionary, we simply assign the value to the key. Here is an example:
my_dict = {"apple": 1, "orange": 2, "banana": 3}
my_dict["pear"] = 4
print(my_dict)
In this example, we added the key-value pair "pear": 4 to the dictionary. This is done by assigning the value 4 to the key "pear". The output of this program will be:
{"apple": 1, "orange": 2, "banana": 3, "pear": 4}
As you can see, the new key-value pair has been added to the dictionary.
Updating a Value in the Dictionary
Sometimes we may need to update the value associated with a key in the dictionary. To do this, we simply assign a new value to the key. Here is an example:
my_dict = {"apple": 1, "orange": 2, "banana": 3}
my_dict["apple"] = 5
print(my_dict)
In this example, we updated the value associated with the key "apple" to 5. The output of this program will be:
{"apple": 5, "orange": 2, "banana": 3}
As you can see, the value associated with the key "apple" has been updated to 5.
Checking if a Key Exists in the Dictionary
Before we add a new key-value pair to the dictionary, we may want to check if the key already exists. We can do this using the "in" keyword. Here is an example:
my_dict = {"apple": 1, "orange": 2, "banana": 3}
if "pear" in my_dict:
print("The key 'pear' already exists.")
else:
my_dict["pear"] = 4
print(my_dict)
In this example, we check if the key "pear" already exists in the dictionary. If it does, we print a message to the console. If it does not, we add the key-value pair "pear": 4 to the dictionary and print the updated dictionary to the console.
Conclusion
In conclusion, adding a word to the dictionary in Python is a simple task. We can do this by assigning a value to a key. We can also update a value associated with a key by assigning a new value to the key. Before adding a new key-value pair to the dictionary, we can check if the key already exists using the "in" keyword. Dictionaries are a powerful data type in Python, and they are essential for many applications. By understanding how to manipulate dictionaries, we can create efficient and powerful Python programs.
Leave a Comment
Your email address will not be published. Required fields are marked *
|
__label__pos
| 0.998447 |
florida-scrub-jay.jpg
The trails offer pleasant exploration through several of the dominant plant communities. Florida scrub-jay and gopher tortoise are common in the oak scrub. Seasonal wetlands host wading birds and provide critical breeding habitat for gopher frogs, one of the many upland species found in gopher tortoise burrows. Listen for Bachman’s sparrows in the pine flatwoods and watch for the occasional deer, turkey, and fox squirrel. Open areas, such as former pastures, may host southeastern American kestrels, northern harriers (winter) and other raptors. Diverse wildflowers attract numerous butterfly species.
Wildlife Spotlight: Florida Scrub Jay
Distant cousin to the blue jay, the Florida scrub-jay is the only bird species unique to Florida and is found only in very specific scrub oak habitat. Scrub-jays are about 11 inches long and mostly blue, with pale gray on the back and belly. The plumage of males and females does not differ, but only the female incubates eggs and utters the “hic-cup” call.
Scrub jays are found only in the relict patches of oak scrub, where four low-growing oak species provide acorns, the birds’ most important winter food. From August to November, each bird buries (caches) several thousand acorns just beneath the sand and retrieves them when other foods are scarce. Scrub-jays also eat a variety of insects and other small animals.
Check out other species recorded from Moody Branch WEA, or add observations of your own, by visiting the Moody Branch WEA Nature Trackers project.
Scrub-jays form family groups of 2 to 8 birds that include some young from previous breeding seasons. Family groups defend a specific territory and each bird takes a turn as sentry, sitting atop an exposed perch to watch for predators or intruders. Nesting occurs from March through June. Scrub-jays build their nests about a yard off the ground in shrubs, mostly low-growing oak species, and construct a platform of twigs lined with palmetto or cabbage palm fibers.
Since the early 1900s, the number of Florida scrub-jays has declined by as much as 90%, primarily due to habitat loss from residential development, citrus production and the exclusion of periodic fires necessary to maintain the oak scrub plant community in the low, open conditions favored by the birds and other scrub species. As a result, the Florida scrub-jay is listed as Threatened. Biologists are trying to help scrub-jays by restoring and maintaining quality habitat using prescribed fire and other management methods. Learn more about these birds and what you can do to help.
FWC Facts:
Whooping cranes mate for life, but they will take a new mate after the loss of the original. The pair will return to use and defend the same nesting and wintering territory year after year.
Learn More at AskFWC
|
__label__pos
| 0.528988 |
Anatomy unit 3
Card Set Information
Author:
Anonymous
ID:
41003
Filename:
Anatomy unit 3
Updated:
2010-10-10 05:18:18
Tags:
Face Dissection
Folders:
Description:
Relationships
Show Answers:
Home > Flashcards > Print Preview
The flashcards below were created by user Anonymous on FreezingBlue Flashcards. What would you like to do?
1. Parotid duct is what to the masseter m.?
Lateral (Superficial) and Anterior
2. Parotid Gland to Massester m.?
Posterior and Lateral (superficial)
3. Branches of the facial nerve to the masseter m.?
Lateral (superficial)
4. Facial artery to the mandible?
Lateral (superficial)
5. Facial vein to Facial artery?
Posterior
6. Sternocleidomastoid m. to omohyoid m.
Superficial
7. Sternocleidomastoid m. to the carotid sheath?
superficial
8. External Jugular v. to the sternocleidomastoid m.?
Lateral (superficial)
9. Great Auricular nerve sternocleidomastoid m.?
Posterior and Lateral (superficial)
10. Transverse Cervical n. to the sternocleidomastoid m.?
Posterior and Lateral (superficial)
What would you like to do?
Home > Flashcards > Print Preview
|
__label__pos
| 0.996407 |
PLE glossary
Schmuckbild
A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z
A
Artifact
Synonym for Engineering Asset
Application Engineering
Application engineering is the creation of an individual variant based on the toolkit that domain engineering has created. If domain engineering is called "engineering for reuse", application engineering can be called "engineering with reuse" or "engineering benefitting from reuse". Application engineering basically involves creating a configuration for the desired variant and then deriving the engineering assets according to that configuration. If all the needs of the variant have already been foreseen by domain engineering, the derived assets already constitute the solution variant. The usual case, however, is to take the derived assets as the basis and to complement them with application-specific assets in order to produce the full solution variant.
D
Domain Engineering
Domain engineering is the production and management of the toolkit of feature models, family models, and engineering assets from which various product variants (aka applications) can be built. Differently from application engineering, which focuses on a single variant, domain engineering keeps all the variants in the scope of the product line in perspective. This is why domain engineering can be called "engineering for reuse."
E
Engineering Assets
Any (typically digital) asset which the product is composed of or which is used to help create the products. Typical engineering assets are requirements, models, code, test cases and documentation. These are usually created and maintained as part of the engineering process.
F
Family Model
Family models provide the link between problem space (the world of features and attributes) and solution space (the world of requirements, architecture models, and other engineering assets). More specifically, a family model is a representation of an engineering asset that allows you to control the variation points in this asset.
For example, if your engineering asset is a requirements module, then the corresponding family model will represent each chapter and each individual requirement as a so-called family model element. You can then connect the family model elements to your feature models and define rules such as "if feature A and B are selected, chapter 3 shall be included; otherwise, the chapter shall be excluded". But instead of defining such rules, you can also select (or exclude) family model elements directly when you configure your variant. Which way is better depends on your use case.
Feature Model
A feature model is a means to organize features and express the relationships between them. Feature models are usually represented as trees, in which the more general features appear closer to the root, and the more specialized features appear further away, as so-called "child" features of a more general "parent" feature. Besides generalization and specialization, feature models also express other kinds of relationships between features. For example, some features are options - they can be freely selected, independently of other features - whereas other features are alternatives - if one is selected, then its alternatives cannot be selected at the same time. These relationships (and more) are encoded in a feature model, and together, they form a rule set that describes whether a feature configuration is valid (corresponds to a valid product variant).
H
Holistic Variant Management
Holistic Variant Management is an important part of PLE. Modern systems engineering requires various kinds of shared engineering assets (building blocks) such as requirements, models, code, tests. Instead of managing variability and product configurations separately for each kind of asset, using vendor-specific concepts and tools, the goal of Holistic Variant Management is to maintain all variability-related information in a single source of truth, and to apply this information across all assets in a unified way. This includes not only the management of variability and product configurations, but also the (automated) derivation of variant asset. This way, holistic variant management helps to give all stakeholders (managers, engineers, customers) access to the same consistent information regarding variability and product configuration(s).
P
Product Line Engineering
Product Line Engineering (PLE) is used to identify engineering approaches which focus on the development of multiple similar products as a single product line. Often PLE is a shorthand for Systems and Software Product Line Engineering, where the focus is on engineering physical products which are complex and software-intensive such as cars, control systems or automation components.
V
Variant Assets
Variant Assets are the assets which a variant (product) is composed of. Variant assets can be either (derived from) shared assets (source code, specifications, etc.) or assets which are unique to a given product (e.g. the variant configuration [ISO25680: bill-of-features]).
Variant Management vs. Configuration Management
Although variant management and configuration management are often used interchangeably, there is a clear distinction between these terms. While variant management covers the variability in space (what at a given point in time is configurable and how different product instances aka variants are configured), configuration management covers the variability in time (records different states of assets over time). Effectively, variant management and configuration management complement each other. And they are strongly linked as well, because each change in variability (for example, the addition or removal of a feature) also affects the assets maintained and tracked through configuration management.
Variant Model
A variant model is a configuration of a (part of a) product variant. It holds the information about which features are selected or excluded, as well as which values are given to the configurable attributes in the feature models. Furthermore, the variant model also holds the information how variation points in the engineering assets are configured. Mapped to the models in pure::variants, a variant model contains configuration choices for both feature models and family models.
You might also want to have a look at
|
__label__pos
| 0.734113 |
Home > KNOWLEDGE > Content
Rubber seal Classified by material
Jun 06, 2018
Rubber seal Classified by material
1. Silicone rubber series
Equipped with advanced testing equipment, clean and dust-free workshop. Products are widely used in electronics, medical equipment, food and other industries.
Plastic species use a variety of domestic and imported silicone, the use of temperature can meet -60 ° - +200 °C, the product can meet the oil, steam resistance, medical, edible full transparency, high strength, flame retardant, conductive silicone rubber.
2. Fluorine rubber products
Widely used in automotive, shipbuilding, military, electrical and electronic industries, operating temperature -40 °C +200 °C, can withstand fuel oil, high temperature, freon, resistant to hot water, steam and excellent chemical resistance, welcome Customers choose to customize.
3. PTFE series products
Teflon gaskets have high temperature resistance and corrosion resistance. Even at high temperatures, they do not react with concentrated acids, alkalis, or strong oxidants. They have been widely used as sealing materials for pipes, flanges, and reactions. Kettles, valves, and seals on containers. Shaped parts are produced according to the user's design drawings.
4. All kinds of miscellaneous pieces
The material is selected from nitrile, natural, fluorine rubber, silica gel, etc., widely used in the automotive, machinery, valves and other industries, use different types of rubber to meet the temperature range of -50 °C - +200 °C. The product has good wear resistance and flexion performance, and can be designed as required.
5. Metal rubber seals
The material is metal rubber, which is a homogeneous elastic porous material, which not only has the elasticity of the rubber but also has the excellent characteristics of the metal and can work at a temperature of -150-800°C. At the same time, it can also withstand the pressure, in a certain range has the dual role of pressure and sealing. Its use:
(1) FRP pipe; glass fiber reinforced plastic pipe; glass steel pipe; cable protection pipe; flue gas desulfurization pipe; coal gas drainage pipe; power plant desulfurization and dust removal pipe;
(2) Municipal water supply and drainage pipelines are sealed;
(3) Sealing of various process pipelines (oil, chemical, metallurgy, paper making, sewage, seawater desalination, food brewing and beverage processing, medicine, etc.);
(4) Sewage collection and transportation pipelines, and water pipeline seals;
(5) Sealing of drinking water transportation mains and distribution pipes;
(6) oilfield water injection pipe seal;
(7) Hot water transmission pipelines and hot spring water transmission pipes are sealed.
|
__label__pos
| 0.60107 |
blob: e10ccd299d6666887ba9347308afaa112ee8816f [file] [log] [blame]
/*
* Provide common bits of early_ioremap() support for architectures needing
* temporary mappings during boot before ioremap() is available.
*
* This is mostly a direct copy of the x86 early_ioremap implementation.
*
* (C) Copyright 1995 1996, 2014 Linus Torvalds
*
*/
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <asm/fixmap.h>
#ifdef CONFIG_MMU
static int early_ioremap_debug __initdata;
static int __init early_ioremap_debug_setup(char *str)
{
early_ioremap_debug = 1;
return 0;
}
early_param("early_ioremap_debug", early_ioremap_debug_setup);
static int after_paging_init __initdata;
void __init __weak early_ioremap_shutdown(void)
{
}
void __init early_ioremap_reset(void)
{
early_ioremap_shutdown();
after_paging_init = 1;
}
/*
* Generally, ioremap() is available after paging_init() has been called.
* Architectures wanting to allow early_ioremap after paging_init() can
* define __late_set_fixmap and __late_clear_fixmap to do the right thing.
*/
#ifndef __late_set_fixmap
static inline void __init __late_set_fixmap(enum fixed_addresses idx,
phys_addr_t phys, pgprot_t prot)
{
BUG();
}
#endif
#ifndef __late_clear_fixmap
static inline void __init __late_clear_fixmap(enum fixed_addresses idx)
{
BUG();
}
#endif
static void __iomem *prev_map[FIX_BTMAPS_SLOTS] __initdata;
static unsigned long prev_size[FIX_BTMAPS_SLOTS] __initdata;
static unsigned long slot_virt[FIX_BTMAPS_SLOTS] __initdata;
void __init early_ioremap_setup(void)
{
int i;
for (i = 0; i < FIX_BTMAPS_SLOTS; i++)
if (WARN_ON(prev_map[i]))
break;
for (i = 0; i < FIX_BTMAPS_SLOTS; i++)
slot_virt[i] = __fix_to_virt(FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*i);
}
static int __init check_early_ioremap_leak(void)
{
int count = 0;
int i;
for (i = 0; i < FIX_BTMAPS_SLOTS; i++)
if (prev_map[i])
count++;
if (WARN(count, KERN_WARNING
"Debug warning: early ioremap leak of %d areas detected.\n"
"please boot with early_ioremap_debug and report the dmesg.\n",
count))
return 1;
return 0;
}
late_initcall(check_early_ioremap_leak);
static void __init __iomem *
__early_ioremap(resource_size_t phys_addr, unsigned long size, pgprot_t prot)
{
unsigned long offset;
resource_size_t last_addr;
unsigned int nrpages;
enum fixed_addresses idx;
int i, slot;
WARN_ON(system_state != SYSTEM_BOOTING);
slot = -1;
for (i = 0; i < FIX_BTMAPS_SLOTS; i++) {
if (!prev_map[i]) {
slot = i;
break;
}
}
if (WARN(slot < 0, "%s(%08llx, %08lx) not found slot\n",
__func__, (u64)phys_addr, size))
return NULL;
/* Don't allow wraparound or zero size */
last_addr = phys_addr + size - 1;
if (WARN_ON(!size || last_addr < phys_addr))
return NULL;
prev_size[slot] = size;
/*
* Mappings have to be page-aligned
*/
offset = phys_addr & ~PAGE_MASK;
phys_addr &= PAGE_MASK;
size = PAGE_ALIGN(last_addr + 1) - phys_addr;
/*
* Mappings have to fit in the FIX_BTMAP area.
*/
nrpages = size >> PAGE_SHIFT;
if (WARN_ON(nrpages > NR_FIX_BTMAPS))
return NULL;
/*
* Ok, go for it..
*/
idx = FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*slot;
while (nrpages > 0) {
if (after_paging_init)
__late_set_fixmap(idx, phys_addr, prot);
else
__early_set_fixmap(idx, phys_addr, prot);
phys_addr += PAGE_SIZE;
--idx;
--nrpages;
}
WARN(early_ioremap_debug, "%s(%08llx, %08lx) [%d] => %08lx + %08lx\n",
__func__, (u64)phys_addr, size, slot, offset, slot_virt[slot]);
prev_map[slot] = (void __iomem *)(offset + slot_virt[slot]);
return prev_map[slot];
}
void __init early_iounmap(void __iomem *addr, unsigned long size)
{
unsigned long virt_addr;
unsigned long offset;
unsigned int nrpages;
enum fixed_addresses idx;
int i, slot;
slot = -1;
for (i = 0; i < FIX_BTMAPS_SLOTS; i++) {
if (prev_map[i] == addr) {
slot = i;
break;
}
}
if (WARN(slot < 0, "early_iounmap(%p, %08lx) not found slot\n",
addr, size))
return;
if (WARN(prev_size[slot] != size,
"early_iounmap(%p, %08lx) [%d] size not consistent %08lx\n",
addr, size, slot, prev_size[slot]))
return;
WARN(early_ioremap_debug, "early_iounmap(%p, %08lx) [%d]\n",
addr, size, slot);
virt_addr = (unsigned long)addr;
if (WARN_ON(virt_addr < fix_to_virt(FIX_BTMAP_BEGIN)))
return;
offset = virt_addr & ~PAGE_MASK;
nrpages = PAGE_ALIGN(offset + size) >> PAGE_SHIFT;
idx = FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*slot;
while (nrpages > 0) {
if (after_paging_init)
__late_clear_fixmap(idx);
else
__early_set_fixmap(idx, 0, FIXMAP_PAGE_CLEAR);
--idx;
--nrpages;
}
prev_map[slot] = NULL;
}
/* Remap an IO device */
void __init __iomem *
early_ioremap(resource_size_t phys_addr, unsigned long size)
{
return __early_ioremap(phys_addr, size, FIXMAP_PAGE_IO);
}
/* Remap memory */
void __init *
early_memremap(resource_size_t phys_addr, unsigned long size)
{
return (__force void *)__early_ioremap(phys_addr, size,
FIXMAP_PAGE_NORMAL);
}
#else /* CONFIG_MMU */
void __init __iomem *
early_ioremap(resource_size_t phys_addr, unsigned long size)
{
return (__force void __iomem *)phys_addr;
}
/* Remap memory */
void __init *
early_memremap(resource_size_t phys_addr, unsigned long size)
{
return (void *)phys_addr;
}
void __init early_iounmap(void __iomem *addr, unsigned long size)
{
}
#endif /* CONFIG_MMU */
void __init early_memunmap(void *addr, unsigned long size)
{
early_iounmap((__force void __iomem *)addr, size);
}
|
__label__pos
| 0.945773 |
summaryrefslogtreecommitdiffstats
blob: f8cbca771afdf9eb256e758d7dcad13b0e769a86 (plain) (blame)
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
/*
* Copyright (C) 2015 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include "elf_fake.h"
#include <stdint.h>
#include <string>
class Backtrace;
std::string g_build_id;
void elf_set_fake_build_id(const std::string& build_id) {
g_build_id = build_id;
}
bool elf_get_build_id(Backtrace*, uintptr_t, std::string* build_id) {
if (g_build_id != "") {
*build_id = g_build_id;
return true;
}
return false;
}
|
__label__pos
| 0.909239 |
Telstra Network Disruptions比赛记录
数据准备和探索
• 缺失值处理
• 异常值检查
• 为类型变量添加隐含变量,并将其数字化
• 数据透视表来对分表检测,并寻找相应的特征,包括在某个特征中训练数据占的比例,以及某个特征下多数的类别标签是什么(majority votes)
• 可视化数据分布,并对异常分布(某些值太大)进行log运算转换
• 将连续数据进行归类,比如location,先将其转换为value_counts , 然后根据数量的多少设置阈值| 还有就是对于类别过多的特征,将数量少的归类到“其他”类别
使用XGBoost预训练,观察结果,并提交,决定是否基于此模型继续探索
特征工程
考虑时间因素,此处的特征工程主要基于由XGBoost得到的features importance中前n个影响最大的特征来继续探索 ## 时间序列的特征提取 ### lag features
1
2
3
pandas.DataFrame.shift
pandas.DataFrame.rolling
pandas.DataFrame.expanding
偏移原数据(severity type),并构建统计量 ### pattern based on features(severity, log feature , event type) 此处将features以one-hot编码的形式,嵌入到一个pattern中,并将其作为新的特征
Ensemble
过程
1. xgboost 在特征工程之后进行调参,调参完毕后开始使用stacking技术来对数据集进行处理
2. 基于上面得到的结果,再次使用xgboost进行训练,根据结果决定是否再调参,然后设置不同的seed,进行bagging
3. 训练随机森林(gini , entropy)和 Extra-tree (gini , entropy),基于原始的train和test集合,同时进行调参,使得结果最优。
4. 将得到的5个模型按照序列进行stacking,同时每个模型基于上一次得到的数据集进行训练。
5. 使用xgboost对得到的最后的数据集进行fit,同时根据结果决定调参
6. 然后为xgboost设置不同的seed,进行bagging
7. 最后根据LB score的反馈再次bagging
stacking and bagging
构建多个模型,并对数据进行训练,同时进行调参 1. 随机森林 - gini - entropy 2. Extra-Tree - gini - entropy 3. XGBoost - 采用多个不同的seed进行训练,得到多个不同的结果,然后使用bagging的原理将所有的结果整合起来 4. 逻辑回归
stacking
1. 将训练数据分成两半
2. 使用模型利用训练集的一半来预测另一半
3. 将两份训练集反过来,用同一个模型再训练一次
4. 然后用同样的模型将整个训练集训练一次,同时预测测试集
5. 将以上三个模型的预测结果放在其对应的测试集中,然后得到一份新的训练集和新的测试集
按照上面得到的新的训练集和测试集,再用另外一个xgboost进行训练,从而提高了模型的精度
同时将随机森林和extra-tree按照上面的方法,将两个模型的结果也放在新的训练集和测试集上,从而又得到一份新的训练集和测试集,注意,这里的随机森林和extra-tree并不是并联的,而是串联的方式,extra-tree的训练集中已经有了随机森林的stack了
然后将得到的训练集和测试集再使用xgboost进行训练,继而进行调参,然后再进行一步bagging,得到最终结果。
调整XGBoost正则化相关参数
得到最后结果 ## ensemble 然后根据LB score的反馈,对部分模型再进行bagging,得到最终结果
难点
• 调参
• 过拟合
结果
top 12% 119/975
代码
github
热评文章
|
__label__pos
| 0.566046 |
70-697題庫 70-483學習指南 70-697 Exam Prep
NO.1 You are developing a C# application. The application includes the following code segment, (Line
numbers are included for reference only.)
The application fails at line 17 with the following error message: "An item with the same key
has already been added."
You need to resolve the error.
Which code segment should you insert at line 16?
A. Option A
B. Option D
C. Option B
D. Option C
Answer: A
NO.2 You are testing an application. The application includes methods named CalculateInterest and
LogLine.
The CalculateInterest() method calculates loan interest. The LogLine()
method sends diagnostic messages to a console window.
The following code implements the methods. (Line numbers are included for reference only.)
You have the following requirements:
- The Calculatelnterest() method must run for all build configurations.
- The LogLine() method must run only for debug builds.
You need to ensure that the methods run correctly.
What are two possible ways to achieve this goal? (Each correct answer presents a complete solution.
Choose two.)
A. Insert the following code segment at line 05:
#if DEBUG
Insert the following code segment at line 07:
#endif
B. Insert the following code segment at line 01:
[Conditional(MDEBUG")]
C. Insert the following code segment at line 05:
#region DEBUG
Insert the following code segment at line 07:
#endregion
D. Insert the following code segment at line 01:
#if DE30G
Insert the following code segment at line 10:
#endif
E. Insert the following code segment at line 10:
[Conditional(MDEBUG")]
F. Insert the following code segment at line 01:
#region DEBUG
Insert the following code segment at line 10:
#endregion
G. Insert the following code segment at line 10: [Conditional("RELEASE")]
Answer: A,E
70-483學習指南 70-483考試指南
Explanation:
#if DEBUG: The code in here won't even reach the IL on release.
[Conditional("DEBUG")]: This code will reach the IL, however the calls to the method will not execute
unless DEBUG is on.
http://stackoverflow.com/questions/3788605/if-debug-vs-conditionaldebug
NO.3 You plan to store passwords in a Windows Azure SQL Database database.
You need to ensure that the passwords are stored in the database by using a hash algorithm,
Which cryptographic algorithm should you use?
A. RSA-768
B. AES-256
C. ECDSA
D. SHA-256
Answer: D
70-483證照考試
NO.4 You are developing an application by using C#. The application includes the following code
segment. (Line numbers are included for reference only.)
The DoWork() method must throw an InvalidCastException exception if the obj object is not of type
IDataContainer when accessing the Data property.
You need to meet the requirements. Which code segment should you insert at line 07?
A. var dataContainer = obj as IDataContainer;
B. var dataContainer = (IDataContainer) obj;
C. var dataContainer = obj is IDataContainer;
D. dynamic dataContainer = obj;
Answer: B
70-483 免費DEMO下載: http://www.testpdf.net/70-483.html
你正在為了怎樣通過Microsoft的70-483學習指南考試絞盡腦汁嗎?Microsoft的70-483學習指南考試的認證資格是當代眾多IT認證考試中最有價值的資格之一。在近幾十年裏,IT已獲得了世界各地人們的關注,它已經成為了現代生活中不可或缺的一部分。其中,Microsoft的認證資格已經獲得了國際社會的廣泛認可。所以很多IT人士通過Microsoft的考試認證來提高自己的知識和技能。70-483學習指南就是最重要的考試之一。這個認證資格能為大家帶來很大的好處。
你可以選擇我們的TestPDF.NET為你提供的培訓資料。如果你選擇了TestPDF.NET,通過Microsoft 70-697題庫不再是一個夢想。
70-697考古題代碼: 70-697
題庫名稱: Configuring Windows Devices
一年免費更新,沒有通過全額返還!
70-697題庫 問答數: 52
最近更新: 05-30,2016
70-697 認證考試: >>70-697題庫
70-483考古題代碼: 70-483
題庫名稱: Programming in C#
一年免費更新,沒有通過全額返還!
70-483學習指南 問答數: 236
最近更新: 05-30,2016
70-483 考試指南: >>70-483學習指南
While preparing for Microsoft 70-697 Material exam ", dumps and practice exam in PDF 70-697 Material created by Lotus experts, Free Latest Exam Questions for Citrix 70-697 Material Test, Cisco TelePresence IX5000 Immersive Solutions consist of top quality material 70-697 Material, Because Dumps Study Guide 70-697 Material exam dumps contain all, Guaranteed Adobe Photoshop 70-697 Material Certified, Multi Platform 70-697 Material PDF.
TestPDF.NET的70-483學習指南是一個保證你一次及格的資料。這個考古題的命中率非常高,所以你只需要用這一個資料就可以通過考試。如果不相信就先試用一下。因為如果考試不合格的話TestPDF.NET會全額退款,所以你不會有任何損失。用過以後你就知道70-483學習指南的品質了,因此趕緊試一下吧。問題有提供demo,點擊TestPDF.NET的網站去下載吧。
目前很熱門的Microsoft 70-483學習指南 認證證書就是其中之一。雖然通過Microsoft 70-483學習指南 認證考試不是很容易,但是還是有很多通過Microsoft 70-483學習指南 認證考試的辦法。你可以選擇花大量的時間和精力來鞏固考試相關知識,也可以選擇一些有效的培訓課程。TestPDF.NET提供的針對性模擬測試就很有效,能節約你的寶貴的時間和精力就能達到你想要目標,TestPDF.NET會是你很好的選擇。
|
__label__pos
| 0.582108 |
Skip to content
eoPortal
Satellite Missions Catalogue
CHOMPTT (CubeSat Handling of Multisystem Precision Time Transfer)
Aug 15, 2019
Non-EO
Quick facts
Overview
Mission typeNon-EO
Launch date16 Dec 2018
CHOMPTT (CubeSat Handling of Multisystem Precision Time Transfer)
OPTI Spacecraft Ground Segment Launch Mission Status References
CHOMPTT is a demonstration of precision ground-to-space time-transfer using a laser link to an orbiting CubeSat. The University of Florida-led mission is a collaboration with the NASA Ames Research Center. The 1U optical time-transfer payload was designed and built by the Precision Space Systems Lab at the University of Florida. The payload was integrated with a NASA Ames NODeS (Network & Operation Demonstration Satellite) -derived spacecraft bus to form a 3U spacecraft. The CHOMPTT satellite was successfully launched into low Earth orbit on 16 December 2018 on NASA's ELaNa XIX mission using the Rocket Lab USA Electron vehicle. 1)
Background
Ground-to-space clock synchronization with accuracies below the nanosecond level is important for navigation systems, communications, networking, remote sensing using distributed spacecraft, and tests of fundamental physics. The Global Positioning System is the most widely used tool for synchronizing spatially separated clocks. State-of-the art GPS time transfer is currently accurate to a few nanoseconds. 2)
Several precision time transfer experiments between ground and space, beyond GPS, have been carried out recently and are planned in the near future. OCA (Observatoire de la Côte d'Azur) and CNES (Centre National d'Études Spatiales), France, launched T2L2 (Time-Transfer by Laser Link) in 2008 on the Jason-2 satellite2. Like CHOMPTT, the T2L2 experiment was based on the techniques of satellite laser ranging. It consisted of synchronizing ground and space clocks using short laser pulses travelling between the ground clocks and the satellite instrument. The measured T2L2 time transfer precision was ~50 ps. One-way laser ranging to the LRO (Lunar Reconnaissance Orbiter), commissioned in 2009, has been conducted successfully from NASA's NGSLR (Next Generation Satellite Laser Ranging System) at GGAO (Goddard Geophysical and Astronomical Observatory) in Greenbelt, Maryland. 3) A one-way ranging technique was used, where the Earth laser station measured the transmit times of its outgoing laser pulses and the Lunar Orbiter Laser Altimeter (LOLA), one of the instruments onboard LRO, measured the receive times. The time transfer precision was limited to 100 ns by the NGSLR.
In the near future, the ACES (Atomic Clock Ensemble in Space) mission, sponsored by the European Space Agency, will fly aboard the ISS (International Space Station). 4) ACES is a fundamental physics experiment that will use a new generation of atomic clocks operating in the microgravity environment of space, which will be compared to a network of ultra-stable clocks on the ground. The ACES clock time will be transferred between space and ground by microwave and optical links.
The CHOMPTT mission incorporates the novel compact, low-power OPTI (Optical Precision Time-transfer Instrument), developed by the Precision Space Systems Laboratory (PSSL) at the University of Florida (UF), and a 3U CubeSat bus developed by the NASA Ames Research Center (ARC). The bus is derived from the NASA ARC's Edison Demonstration of Smallsat Networks (EDSN) CubeSat, which was also used for the Network & Operation Demonstration Satellite (NODeS) mission. 5) In the 2018 paper, we describe in detail the instrument and mission design, as well as the results of ground testing of the flight payload. 6) In this paper, we present some of the initial results of the CubeSat mission.
Mission Concept and Goals
The CHOMPTT mission employs an optical time-transfer scheme, which can be significantly more accurate than that for radio frequencies, due to lower propagation uncertainties through the Earth's ionosphere. A second advantage of optical frequencies is the high degree of beam collimation that enables a compact receiver for the space segment of the mission. A single photodetector with an aperture diameter less than 1 mm can be used to receive the optical signal transmitted from the ground, and a cm-scale retroreflector array can be used to return the signal back to the ground segment.
The mission's primary goal is to demonstrate an instantaneous ground-to-space time transfer with a precision of 200 ps, corresponding to a position error of 6 cm, which is sufficient for most navigation applications. A secondary goal of the mission include demonstrating the on-orbit performance of the two chip-scale atomic clocks (CSAC) incorporated into OPTI. Compared to previous experiments, this mission will demonstrate near state-of-the-art optical time transfer performance, but will do so on a power-limited and low cost nanosatellite platform. The 1U OPTI payload uses a time-transfer concept similar to that of the T2L2 mission. However, unlike the T2L2 mission, which was a secondary payload on board the Jason-2 satellite, OPTI will be incorporated into a dedicated CubeSat bus whose attitude control is dictated by the requirements of OPTI. CHOMPTT is the first CubeSat mission dedicated to precision optical time transfer and the first to successfully operate CSACs in space on a CubeSat platform.
The 3U CHOMPTT CubeSat was successfully launched into low Earth orbit on 16 December 2018 on NASA's ELaNa XIX mission using the Rocket Lab USA Electron vehicle. The orbit is a 500 km altitude circular Earth orbit with an inclination of 85º.
The two optical ground segments for the mission utilize a satellite tracking telescope, a pulsed 1064 nm laser system, an atomic clock and precision timing equipment. The primary facility is located at the TISTEF (Townes Institute Science and Technology Experimentation Facility), operated by the University of Central Florida, CREOL College of Optics and Photonics. The TISTEF is physically located at the Kennedy Space Center on Merritt Island, FL. The secondary optical ground segment is operated by EOS Space Systems and located on Mount Stromlo, Australia. The continental separation of these two facilities increases opportunities for time-transfer activities as discussed below. A UHF/VHF ground station located on the UF campus is used to send commands and receive telemetry from the CHOMPTT CubeSat. The mission duration is envisioned to span at least nine months.
The Optical Precision Time-transfer Instrument concept of operations is shown in Figure 1. The time-transfer scheme is similar to that of the T2L2 mission. A satellite laser ranging facility on the ground will transmit short (~2 ns-long) laser pulses to the CHOMPTT CubeSat. These pulses are timed with respect to the atomic clock on the ground and are detected by an avalanche photodetector mounted on the nadir face of OPTI. An event timer records the arrival time with respect to the on-board clock with a typical precision of less than 100 ps. At the same time, a retroreflector array returns the transmitted pulse back to the ground. By comparing the transmitted and received times on the ground and the arrival time of the pulse at the satellite, the time difference between the ground and space clocks can be measured. During a single SLR contact with the satellite, roughly 1,000 such measurements will be performed over a ~100 s interval to estimate the time transfer precision over time scales <100 s.
Figure 1: CHOMPTT time-transfer concept (image credit: CHOMPTT Team)
Figure 1: CHOMPTT time-transfer concept (image credit: CHOMPTT Team)
The equation shown in Figure 1, describes how the three observables are used to compute the ground/space clock discrepancy, χ. A light pulse is transmitted from the ground at time t0ground (referenced to the ground clock) and is received at the satellite at time t1space (referenced to the space clock). The time that the returned pulse is received back at the ground is t2ground. The clock discrepancy is then the difference between the measured arrival time of the pulse at the satellite and the expected arrival time based on time measurements made on the ground. The expected time is the average of the emitted and received times on the ground plus a correction, Δt. The correction, Δt, accounts for the systematic time offset that is the sum of contributions from (a) asymmetry in the atmospheric delay between the uplink and downlink paths of the laser pulse, (b) the geometrical offset between the reflection and detection equivalent locations on OPTI, and (c) general and special relativity. The relativistic time offset is the only contribution that is significant and must be accounted for, while the atmospheric and geometric effects are negligible compared with the mission's 200 ps precision goal (Ref.6) . Estimates of the satellite's ephemeris based on on-board GPS data are used to compute both the relativistic rate difference between the ground and space clocks and the relativistic contribution to Δt.
In the baseline mission concept, shown in Figure 1, the measured arrival time of the optical pulse at the CubeSat is transmitted to the University of Florida ground station using the amateur radio frequency band. By combining these data with the timing measurements obtained at the SLR facility and the orbit determination information, the clock discrepancy, χ, can be calculated.
OPTI (Optical Precision Time-transfer Instrument)
The CHOMPTT spacecraft comprises the space instrument, OPTI, and its host CubeSat bus. The Optical Precision Time-transfer Instrument is a 1U, 1 kg device that incorporates all components of the space segment needed to perform ground-to-space optical time-transfer. All of the critical time-transfer elements are doubly redundant. These include two small atomic clocks, two picosecond event timers and microprocessor-based clock counters, and two nadir-facing avalanche photodetectors.
The design OPTI is shown in Figure 2. The electronics elements comprise two main instrument channels (A and B), a Supervisor, and an optical beacon. The two instrument channels are identical providing redundancy. Each contains one chip scale atomic clock (CSAC), one event timer, one avalanche photodetector, one microcontroller, and the ancillary electronics needed to support each of these components.
Each of the two main instrument channels house their own identical SA.45s chip scale atomic clocks, manufactured by Microsemi Frequency and Time Corporation. The primary output of the CSAC is a 10 MHz square wave. This signal is distributed to the event timer, channel board microprocessor, and the Supervisor using clock distribution electronics. The CSAC also provides temperature and other health and safety information to the Supervisor. The short-term frequency stability (Allan deviation) of the CSAC is one limiting factor in the overall time-transfer precision. The specified short-term (τ = 1 s averaging time) Allan deviation for the CSAC is 3 x 10–10, corresponding to a time error of 300 ps over 1 s. The measured short-term frequency stability of the CHOMPTT CSACs in a laboratory environment prior to launch was three times lower than the specification, equivalent to 100 ps at τ = 1 s.
The event timer on each Channel is the precision electronics component the measures the arrival time of the optical pulses with respect to the on-board atomic clocks. The main even timer component is the TDC-GPX time-to-digital converter manufactured by Acam-Messelectronic GmbH. 7) It has a specified single shot precision of 10 ps, which was measured in the Precision Space Systems Lab to 12 ps (one standard deviation), and a maximum range of 7 µs. Due to the limited range of the TDC-GPX, a separate Texas Instruments MSP-430 microcontroller is incorporated on each instrument channel to count clock cycles over the entire lifetime of the mission. The clock counts recorded by the MSP-430 and the time stamps measured by the TDC-GPX event timer are combined digitally in software.
Figure 2: OPTI payload design (image credit: CHOMPTT Team)
Figure 2: OPTI payload design (image credit: CHOMPTT Team)
Each instrument channel is also equipped with one avalanche photodetector (APD) for recording the received light pulses. The APD are 200 μm active area diameter InGaAs photodiodes manufactured by Laser Components USA, Inc. A high voltage circuit on each channel is tuned to its specific APD to provide a reverse bias voltage that is less than but within 5 volts of the APD's breakdown voltage, which is typically in the range of 50-70 V. The temperature of the APDs are actively controlled during time-transfer events to improve their stability. The APDs are mounted on the edge of the channel electronics board and they protrude through a small hole in the nadir face of the OPTI structure. A bandpass optical filter mounted in front of the APD on the nadir face of OPTI prevents stray light from inadvertently triggering timing events.
The Supervisor acts at the payload controller, and it is the single electrical interface to the spacecraft bus. It uses a Texas Instruments MSP-430 microcontroller to route commands to each of the two channels and retrieve data from each channel, which is stored in flash memory on the Supervisor electronics board, until it is requested by the spacecraft bus. The Supervisor MSP-430 also controls the electronics that drive an optical beacon that aides in the tracking of the CubeSat by the SLR facility. The beacon electronics drive four 0.5 W VCSEL (Vertical Cavity Surface-Emitting Laser) diode arrays. These arrays emit uncollimated 808 nm light with a collective divergence angle of ~14 º (half-angle).
A single retroreflector array is mounted on the nadir face of OPTI. The space-capable array, which was custom designed by PLX, Inc. and consists of six, 1 cm effective diameter hollow retroreflectors integrated into a single package.
The Supervisor, Channels A and B, the retroreflector array, and the optical beacon are mechanically integrated by a custom structure that provides structural integrity during launch, thermal capacity and conductivity, and electromagnetic shielding for the Channel A and Channel B electronics boards. The OPTI structure is constructed from aluminum and is designed to be modular for ease of testing and integration. The OPTI structure is integrated inside a standard Pumpkin, Inc. 3U chassis and is mounted to the chassis by side fasteners. A Pumpkin, Inc. Large-aperture Cover Plate, also shown in Figure 1, mounted on the nadir face of OPTI, accommodates the payload's optical components and serves as the structural end plate of the spacecraft.
In 2014 we reported on the end-to-end time transfer performance of a breadboard version of OPTI,8) and in 2016 we reported on the measured performance of the OPTI Engineering Unit (Ref. 6). In Figure 3 we show the measured performance of the OPTI Flight Model in terms of Allan deviation, σy. On short time scales (τ = 1 second) where the limitation is the time-transfer precision, the measured Allan deviation was 75 x 10–12. This corresponds to a time error of Δt = σy x τ = 75 ps. Over longer time scales the performance is limited by the CSAC, and over the period of one orbit (τ = 6,000s) the time error was <20 ns. These measurements show that the performance of the flight hardware is capable of achieving the 200 ps time-transfer performance goal for the mission.
Figure 3: Measured Allan Deviation during optical time-transfer tests using the OPTI flight hardware (image credit: CHOMPTT Team)
Figure 3: Measured Allan Deviation during optical time-transfer tests using the OPTI flight hardware (image credit: CHOMPTT Team)
Spacecraft
The CHOMPTT satellite is a single 3U CubeSat with a total mass of 3.7 kg. An exploded view of the satellite, showing both the OPTI payload and the CubeSat bus is provided in Figure 4.
Figure 4: The 3U CHOMPTT spacecraft and payload (image credit: CHOMPTT Team)
Figure 4: The 3U CHOMPTT spacecraft and payload (image credit: CHOMPTT Team)
The Command and Data Handling Subsystem uses a Nexus S smartphone as the main processor. It autonomously schedules GPS acquisitions and uplink/downlink operations by propagating its own orbit and predicting when the spacecraft will be over the specified CHOMPTT RF ground stations. Additional distributed Arduino-based processors run other activity tasks such as interfacing with the payload, polling sensor data, and interfacing with the GPS.
The ADCS (Attitude Determination and Control Subsystem) consists of three orthogonal brushless motor reaction wheels and torque coils embedded in the solar panel PCBs (Printed Circuit Boards). Attitude determination uses a magnetometer sensor and inertial measurement unit (IMU) combined with coarse sun sensors also embedded in the solar panels. The ADCS has two distinct modes of operation. The first is magnetic control, which is used to de-tumble the spacecraft and align it with the local magnetic field for GPS acquisition and downlink activities. The second is 3-axis control, which uses the reaction wheels and attitude determination to point the nadir face of the CubeSat toward the SLR facility to enable time-transfer operations. The pre-launch estimate of the pointing accuracy in this mode was ±5º. A Novatel OEMV-1 GPS receiver is used to get position, velocity, and time fixes approximately once every 25 hours for activity scheduling.
The EPS (Electrical Power Subsystem) consists of the body mounted solar arrays, rechargeable lithium ion 18650 battery storage capable of sustaining subsystems during operating loads and orbit eclipses. The EPS also includes a watchdog timer to limit radio transmissions if command from Earth is lost.
The CHOMPTT Communications Subsystem uses two radios to perform two different tasks: beaconing and two-way ground communications. Two-way ground communications is performed with an Astrodev (Astronautical Development, LLC) Lithium 1 UHF transceiver and a deployable tape-measure monopole antenna. The Uplink and downlink rate is 9,600 bit/s under the AX.25 protocol with 1 W transmitted power from the satellite. The Astrodev transceiver is only powered when an uplink and downlink is scheduled over the ground station. The beacon uses a StenSat UHF transmitter with a tape measure monopole antenna, sending packets of data every 60 seconds at 1,200 bit/s when the Lithium transceiver is not on.
Figure 5 is a photo of the flight spacecraft during on-ground mission simulation tests in the University of Florida clean room. The nadir face of the spacecraft showing the OPTI payload can be seen on the right side of the image.
Figure 5: CHOMPTT flight spacecraft during mission simulation tests in the UF clean room (image credit: UF)
Figure 5: CHOMPTT flight spacecraft during mission simulation tests in the UF clean room (image credit: UF)
RF and Optical Ground Segments
The ground segment of the mission consists of a radio frequency (RF) ground station located at the University of Florida, as well as primary and secondary SLR (Satellite Laser Ranging) facilities. The UF RF ground station receives all telemetry and transmits all commands to/from the satellite. Both uplink and downlink communications use the amateur portion of the UHF band. Prior to launch the UF ground station included a Hy-Gain UB-7030 antenna with an iCOM IC-9100 transceiver. However, these equipment were later upgraded to improve the radio link to the satellite. This was needed because the receive sensitivity of the primary Lithium 1 radio system on board the spacecraft was worse than what was measured before launch. See the next section on flight operations for details.
Two SLR facilities are used for the CHOMPTT mission. The primary facility was developed by the Precision Space Systems Lab at UF, the University of Central Florida, NASA ARC, and the Naval Information Warfare Systems Command (SPAWAR). It is located at the TISTEF (Townes Institute Science and Technology Experimentation Facility) at the Kennedy Space Center on Merritt Island, FL. This facility, shown schematically in Figure 6, consists of a high energy, pulsed laser system and precision timing equipment integrated with a series of optical satellite tracking telescopes.
Figure 6: Primary optical ground station configuration at TISTEF, KSC, FL (image credit: UF)
Figure 6: Primary optical ground station configuration at TISTEF, KSC, FL (image credit: UF)
The most critical part of this SLR facility is TISTEF's 50 cm aperture optical telescope, capable of tracking satellites in low Earth orbit. A custom InGaAs avalanche photodetector system is mounted onto the backplane of this telescope to receive the returned laser pulses from OPTI and time-stamp them with respect to the ground atomic clock.
The pulsed laser system is a FLARE 1064-50-50 manufactured by Coherent, Inc. It is a Q-switched laser, producing 2.5 ns-wide, 1 mJ pulses of 1064 nm light. A small fraction of the emitted light is redirected by a 103:1 beam splitter to the first ground detector, APD 0, which records t0ground. The bulk of the laser power is expanded from 1.1 mm diameter to 33 mm by a commercial Galilean beam expander. A pair of steering mirrors (not shown in Figure 6) direct the expanded beam into the entrance of a coudé path that uses a series of dichroic mirrors to align the out-going beam with the 50 cm receive telescope. The last coudé path mirror is a fast steering mirror (FSM), which is used to accommodate the point-ahead angle between the transmitted and returned laser beams.
The laser is driven by rising edge triggers produced by a FPGA (Field Programmable Gate Array)-based pulse modulator using a Microsemi Frequency & Time Corp. SmartFusion2 FPGA. This modulator can produce microsecond-level variations in the nominal 10 Hz repetition rate in order to correlate timing measurements made on the ground with those measured in space.
The ground clock at the SLR facility is a Microsemi Frequency & Time Corp. SA.31m Rubidium Clock. This clock has an Allan deviation that is ~3 times lower than that of the CSAC for averaging times less than 6,000 s. It will therefore not contribute significantly to the overall timing performance of the mission. The SLR facility event timer is an AMS TDC-GPX2 time-to-digital converter. This unit records timing events t0ground and t2ground based on pulse detections made by APD 0 and APD 2, respectively, at the SLR facility.
Tracking of the CHOMPTT CubeSat is performed by the optical beacon incorporated into OPTI. The SLR telescopes will initially follow the azimuth and elevation track of the CubeSat based on orbit solutions using both GPS telemetry and data provided by the CSpOC (Combined Space Operations Center), formerly the JSpOC (Joint Space Operations Center). A series of tracking telescopes equipped with infrared imagers and covering a range of fields of view search for the optical beacon transmitted by OPTI. Once the beacon is detected, the telescope mount is driven by feedback control to keep the beacon signal centered in the image and bore sighted with the transmit laser.
The secondary SLR facility is located on Mount Stromlo, Australia and is owned and operated by EOS Space Systems. Functionally, it is very similar to the TISTEF facility. Key differences are the higher laser power and larger aperture telescopes used by EOS, which improve the optical link margin. The largest EOS SLR receive telescope has an aperture of 1.8 m, while the receive aperture at TISTEF is 0.5 m. The maximum average transmit laser power at EOS is roughly two orders of magnitude higher than the 1 W average power for the TISTEF laser.
Launch
On 16 December 2018, the US small satellite launch company Rocket Lab launched its third orbital mission of 2018, successfully deploying satellites to orbit for NASA. The mission, designated Educational Launch of Nanosatellites (ELaNa)-19 , took place just over a month after Rocket Lab's last successful orbital launch, ‘It's Business Time.' Rocket Lab has launched a total of 24 satellites to orbit in 2018. 9) 10)
Figure 7: Rocket Lab's Electron launch vehicle successfully lifted off at 06:33 UTC (19:33 NZDT) on 16 December 2018 from the Rocket Lab Launch Complex 1 on New Zealand's Māhia Peninsula with the ELaNa-19 payloads (image credit: Rocket Lab)
Figure 7: Rocket Lab's Electron launch vehicle successfully lifted off at 06:33 UTC (19:33 NZDT) on 16 December 2018 from the Rocket Lab Launch Complex 1 on New Zealand's Māhia Peninsula with the ELaNa-19 payloads (image credit: Rocket Lab)
Orbit: After being launched to an elliptical orbit, Electron's Curie engine-powered kick stage separated from the vehicle's second stage before circularizing to a 500 x 500 km orbit at an 85 º inclination. After 56 minutes into the mission, the 13 satellites on board were individually deployed to their precise, designated orbits.
The nanosatellites launched come from NASA's Goddard Space Flight Center, Glenn Research Center and Langley Research Center, along with the U.S. Naval Academy and educational institutions in California, Florida, Idaho, Illinois, New Mexico and West Virginia. There are also CubeSats from the Aerospace Corp. based in Southern California, and the Defense Advanced Research Projects Agency — the research and development arm of the U.S. Defense Department.
Payload Complement of 13 CubeSats
This mission includes 10 ELaNa-19 (Educational Launch of Nanosatellites-19) payloads, selected by NASA's CubeSat Launch Initiative. The initiative is designed to enhance technology development and student involvement. These payloads will provide information and demonstrations in the following areas: 11)
• CeREs (Compact Radiation belt Explorer), a 3U CubeSat of NASA. High energy particle measurement in Earth's radiation belt.
• STF-1 (Simulation-to-Flight-1), a 3U CubeSat (4 kg) of WVU (West Virginia University). The objective is to demonstrate how established simulation technologies may be adapted for flexible and effective use on missions using the CubeSat Platform.
• AlBus (Advanced Electrical Bus), a 3U CubeSat of NASA/GRC to demonstrate power technology for high density CubeSats.
• CHOMPTT (CubeSat Handling Of Multisystem Precision Time Transfer), a 3U CubeSat of UFL (University of Florida). CHOMPTT is equipped with atomic clocks to be synchronized with a ground clock via laser pulses.
• CubeSail, a mission of the University of Illinois at Urbana-Champaign. A low-cost demonstration of the UltraSail solar sailing concept, using two near-identical 1.5U CubeSat satellites to deploy a 260 m-long, 20 m2 reflecting film.
• NMTSat (New Mexico Tech Satellite), a 3U CubeSat developed by the New Mexico Institute of Mining and Technology with the goal to monitor space weather in low Earth orbit and correlate this data with results from structural and electrical health monitoring systems.
• RSat-P (Repair Satellite-Prototype), a 3U CubeSat of the USNA (US Naval Academy ) in Annapolis Maryland to demonstrate capabilities for in-orbit repair systems (manipulation of robotic arms).
• ISX (Ionospheric Scintillation Explorer), a 3U CubeSat of NASA and CalPoly to investigate the physics of naturally occurring Equatorial Spread F ionospheric irregularities by deploying a passive ultra-high frequency radio scintillation receiver.
• Shields-1, a 3U CubeSat of NASA/LaRC, a technology demonstration of environmentally durable space hardware to increase the technology readiness level of new commercial hardware through performance validation in the relevant space environment.
• Da Vinci, a 3U CubeSat of the North Idaho STEM Charter Academy to teach students about radio waves, aeronautical engineering, space propulsion, and geography by sending a communication signal to schools around the world.
In addition to the 10 CubeSats to be launched through NASA's ELaNa program, there are three more nanosatellites set for liftoff on top of the Electron rocket in New Zealand. NASA also provided a launch opportunity for:
• AeroCube 11 consists of two nearly identical 3U CubeSats developed by the Aerospace Corp. in El Segundo, California. The AeroCube 11 mission's two CubeSats, named TOMSat EagleScout and TOMSat R3, will test miniaturized imagers. One of the CubeSats carries a pushbroom imager to collect vegetation data for comparison to the much larger OLI (Operational Land Imager) aboard the Landsat-8 satellite, and the other TOMSat CubeSat has a focal plane array on-board to take pictures of Earth, the moon and stars. Both satellites feature a laser communication downlink.
• SHFT (Space-based High Frequency Testbed), a 3U CubeSat (5 kg) mission of DARPA, developed by NASA/JPL. The objective is to study variations in the plasma density of the ionosphere by collecting high-frequency radio signals, including those from natural galactic background emissions, from Jupiter, and from transmitters on Earth.
Rocket Lab has christened the mission "This One's for Pickering" in honor of the New Zealand-born scientist William Pickering, who was director of the Jet Propulsion Laboratory in Pasadena, California, for 22 years until his retirement in 1976.
Operations and Early Flight Data
The CHOMPTT satellite was successfully launched into low Earth orbit on 16 December 2018 on NASA's Venture Class Launch Services (VCLS) ELaNa XIX mission using the Rocket Lab USA Electron vehicle from Mahia, New Zealand. The satellite was inserted into a near-circular orbit with a perigee altitude of 498.5 km and an apogee altitude of 502.8 km with an inclination of 85.0º. At this altitude, we expect the spacecraft to remain on orbit for approximately 7 years before reentry into Earth's atmosphere (Ref. 1).
After deployment from its CubeSat dispenser on the Electron Kick Stage, the CHOMPTT spacecraft entered a quiescent-mode for 15 minutes where all of the subsystems were off. After the designated quiescent period, the spacecraft bus and payload powered on automatically. The spacecraft ADCS then began a 5 day detumble period using magnetorquers. The magnetorquers were activated for a period of 30 minutes every 2.5 hours during this 5 day period. When the OPTI payload was powered on, it entered a low power 'clock counting mode'. In this mode, only one of the two instrument channels is active with only that channel's CSAC turned on and the associated MSP-430 microprocessor counting clock cycles. During this early period of the mission, the spacecraft's StenSat UHF radio beaconed health and safety information to the ground every 60 seconds. This beacon data provided the first indication that both the spacecraft and payload survived the launch and orbit insertion.
Upon completion of the detumble activity, the satellite entered a cycle of activities that repeats every 25 hours. At the start of these 25 hour cycles, the spacecraft aligns its GPS antenna in the zenith direction and attempts to autonomously acquire GPS data. The spacecraft uses these data to propagate its orbit and determine when it is over either the UF ground station or over NASA ARC in the San Francisco Bay area. When the spacecraft is over one of these locations (alternating between locations each 25 hour cycle), the spacecraft turns on the primary Lithium 1 radio and listens for commands from the ground.
During each 25 hour cycle, spacecraft and payload health and safety data is recorded continuously. Some of these data are transmitted to the ground every 60 seconds via the Stensat beacon radio, except when the Lithium 1 radio is active to avoid interference. Both radios use the same carrier frequency. If no commands are received by the Lithium 1 radio within a specified time limit (typically 14 days), the watchdog timer turns off the beacon transmissions. Other spacecraft and payload activities occur during these 25 hour cycles when specifically commanded to do so.
Acquisition of GPS data and subsequent scheduling of Lithium 1 uplink activities by the spacecraft has occurred successfully during nearly every 25 hour cycle. One notable exception was the very first 25 hour cycle after the de-tumble activity. However, attempts to send commands to the spacecraft using the UF RF ground station were initially unsuccessful. This resulted in a timeout of the watchdog timer, causing beacon transmissions to cease for a short period. After multiple attempts to command the spacecraft from the UF ground station and additional successful commanding operations using the SRI International 18 m parabolic antenna in Stanford, CA, it was determined that the receive sensitivity of the CHOMPTT Lithium 1 radio on orbit was about –85 dBm. This is higher than the pre-launch measurements of –95 dBm taken in the anechoic chamber at NASA ARC. The 10 dB degradation is suspected to be caused by electrical noise from the bus, either due to the increased ADCS activity or the battery charging from the solar panels. Neither of those factors were present in the RF anechoic chamber tests.
From early 2019 until April 2019, flight operations continued in a limited capacity through the use of the SRI 18 m antenna. During this time period, the UF RF ground station was upgraded. The 75 W iCOM radio was replaced with an Ettus SDR (Software Defined Radio) and a 550 W Beko amplifier. The Hy-Gain antenna was also replaced with a M2 436CP30 antenna with 1.5 dB of additional gain. With these upgrades, starting in late May 2019, the UF ground station has been able to close the RF link and command the CHOMPTT spacecraft reliably.
Mission Status
• As of June 2019, over 8,098 spacecraft beacon packets and over 6,038 OPTI payload packets were collected. Both the spacecraft and payload are functioning nominally. The majority of these beacon data were received by the Amateur Radio community and uploaded the PSSL (Precision Space Systems Laboratory) database via our web portal at UF. See: https://pssl.mae.ufl.edu/#chomptt/main/
- Due to the sun-synchronous orbit, high-efficient GOM Space solar panels, high capacity batteries, and low power-state of the spacecraft, the spacecraft has remained in a nominally power positive state over the past six months. The bus battery maximum voltage is 8.4 V and the measured voltage never fell below 7.9 V.
- The spacecraft has temperature sensors on the StenSat, EPS, C&DH smartphone, ADCS, Router, and Lithium PCBs as well as each of the solar panels. The measured on-orbit temperatures are within 10ºC of the estimated values determined by a Thermal Desktop model, with on orbit maximum values near 50ºC. The measured values fall well within the range of the bounding acceptable ratings of –10ºC and +60ºC for all spacecraft and payload components. Figure 8 shows one example of the spacecraft bus temperature variations over a four month period and the predicted steady state value under the hottest conditions. The gap in the data near the beginning of the mission was caused by the watchdog timer timeout, which halted beacon transmissions.
- The OPTI payload temperatures typically fall within the range of 5ºC and 30ºC. This is consistent with prelaunch analyses as is the payload power consumption. Both chip scale atomic clocks continue to function nominally on orbit, although typically only the Channel A CSAC is operating.
Figure 8: C&DH (Nexus S smartphone) temperatures (blue points) and pre-launch steady state high temperature predictions (red line) during the first six months of the mission (image credit: UF)
Figure 8: C&DH (Nexus S smartphone) temperatures (blue points) and pre-launch steady state high temperature predictions (red line) during the first six months of the mission (image credit: UF)
- Spacecraft Pointing Performance: Prelaunch analysis of the CHOMPTT spacecraft ADCS predicted a pointing accuracy that was within ±5º. Once the satellite was on orbit, a post launch calibration of the sun sensor photodiodes and the magnetometers was required to tune the ADCS. Most important was the estimation of the magnetometer bias in three axes and the gains and offsets of the sun sensor photodiodes on all faces of the spacecraft. Once these parameters were determined by analysis of the flight data on the ground, the new biases and gains were uploaded to the spacecraft. This calibration process reduced pointing errors from ±8º to ±0.5º.
- Time-Transfer Operations: During nominal time-transfer operations, the CHOMPTT spacecraft first de-tumbles the spacecraft to a desired body rate. The spacecraft then uses its magnetometers, sun sensors, IMU, and reaction wheels to point itself in an inertially fixed direction that maximizes its contact duration with SLR facility. The OPTI payload is switched from its nominal clockcounting mode to time-transfer mode using one of its two channels. In this mode, the TEC temperature control for the APD is activated, the event timer is switched on and made ready to record timing events with respect to the CSAC, and the 808 nm laser beacon is switched on to assist SLR tracking of the CubeSat. The SLR facility initially uses CHOMPTT's ephemeris as the pointing reference for the optical telescopes. Tracking telescopes with various fields of view, boresighted with the main receive telescope, image OPTI's laser beacon or glints from the Sun reflecting off of the solar panels. Once the beacon or CHOMPTT image is acquired, the tracking telescopes are then used as the primary pointing reference for the SLR facility.
- Active tracking of CHOMPTT is currently only reliable when the SLR facility is in darkness. Otherwise, daytime sky radiance reduces the signal-to-noise of the image and neither CHOMPTT nor its optical beacons can be seen. In addition, the primary pointing reference for the spacecraft ADCS is the Sun's orientation measured by the sun sensors. Because of this, nominal time-transfer operations have only been planned during ‘terminator passes', in which the spacecraft is illuminated by the Sun and the SLR facility is in darkness. Terminator passes only occur just before sunrise or just after sunset at the SLR facility.
- Due to regulatory issues regarding laser safety, CHOMPTT has not yet been lased by the TISTEF facility at KSC. These issues are set to be resolved by mid-June 2019. However, with the assistance of SPAWAR, TISTEF was able to passively track the spacecraft by imaging its optical beacons, which could be seen modulating at 1 Hz as expected on 24 April 2019. Telemetry from current monitors on the OPTI payload also show that the spacecraft optical beacons are functioning properly. This test provides confidence that we can acquire and track CHOMPTT from TISTEF with sufficient pointing accuracy to enable time-transfer operations.
- CHOMPTT has also been successfully tracked from EOS Space Systems' Australian sites. Figure 9 shows an image of CHOMPTT taken by the EOS tracking telescopes in Western Australia. However, EOS Space Systems has not yet had an opportunity to lase the spacecraft from the Mount Stromlo site. This is primarily due to cloud cover during the acceptable terminator passes over the SRL facility or spacecraft mis-pointing at the SLR facility due to late-tracking and saturation of the reaction wheels.
Figure 9: Image of the CHOMPTT spacecraft captured by the EOS WASSA SLR tracking camera on 28 March 2019 (image credit: EOS Space Systems)
Figure 9: Image of the CHOMPTT spacecraft captured by the EOS WASSA SLR tracking camera on 28 March 2019 (image credit: EOS Space Systems)
- CHOMPTT currently relies on spacecraft terminator passes over either SLR facility to facilitate tracking and optical time-transfer. This condition occurs for approximately 1 month, every 3 months, with about 4-6 sufficient passes. While CHOMPTT was able to be tracked during the month of April from both stations, the next opportunity for time-transfer operations during terminator passes from both SLR facilities occurs in July 2019. Additional efforts are also underway to attempt daytime tracking with the EOS facility, where the both the spacecraft and ground station are in direct sun. We have procured a narrow band pass optical filter centered on the 808 nm wavelength of the OPTI beacon lasers. This would block a sufficient portion of the sky irradiance and allow the EOS tracking telescope to image the spacecraft's laser beacons for tracking.
- In summary, the CHOMPTT laser time-transfer technology demonstration mission was successfully launched into low Earth orbit in December of 2018. The 1U OPTI payload was designed to transfer terrestrial time standards to a low Earth orbiting CubeSat using standard Satellite Laser Ranging facilities. The instrument incorporates two small atomic clocks, two picosecond event timers and microprocessor-based clock counters, two nadir-facing avalanche photodetectors, and a single retroreflector array. The measured short term performance of the OPTI flight unit was 75 ps, and over longer time scales, its timing precision is limited by the frequency stability of the on-board chip scale atomic clocks. The 1U OPTI payload was integrated with a 3U CubeSat bus, which has heritage from the NASA Ames Research Center EDSN/NODeS bus. There are two optical ground segments for the mission located at the Kennedy Space Center in Florida and Mount Stromlo in Australia. The NASA spacecraft bus and University of Florida payload are both operational and healthy after six months on-orbit. Both chip scale atomic clocks are performing nominally and the payload thermal environment and power draw are consistent with pre-launch analyses. We were able passively track the spacecraft from both SLR sites with sufficient accuracy, and we intend to perform optical time-transfer operations during the month of July, 2019. A successful demonstration of precision time-transfer by OPTI will enable future missions requiring precision time distribution on compact space platforms (Ref. 1).
References
1) John W. Conklin, Seth Nydam, Tyler Ritz, Nathan Barnwell, Paul Serra, John Hanson, Anh N. Nguyen, Cedric Priscal, Jan Stupl, Belgacem Jaroux, Adam Zufall, "Preliminary results from the CHOMPTT laser time-transfer mission," Proceedings of the 33rd Annual AIAA/USU Conference on Small Satellites, August 3-8, 2019, Logan, UT, USA, paper: SSC19-VI-03, URL: https://digitalcommons.usu.edu/cgi/viewcontent.cgi?article=4407&context=smallsat
2) P Defraigne and G Petit, "Time transfer to TAI using geodetic receivers," Metrologia, Volume 40, Number 4, published 25 June 2003, https://doi.org/10.1088/0026-1394/40/4/307
3) Jan McGarry, Tom Zagwodzki, Ron Zellar, Carey Noll, Greg Neumann, Mark Torrence, Julie Horvath, Bart Clarke, Randy Ricklefs, Mike Pearlman, "Laser ranging to the lunar reconnaissance orbiter: a global network effort," 16th International Workshop On Laser Ranging, Poznan Poland, 2006, URL: https://cddis.nasa.gov/lw16/docs/presentations/llr_5_McGarry.pdf
4) L. Cacciapuoti, Ch. Salomon, "Space clocks and fundamental tests: The aces experiment," The European Physical Journal Special Topics, Volume 172, Issue 1, pp 57–68, June 2009, https://doi.org/10.1140/epjst/e2009-01041-7
5) James Chartres, Hugo Sanchez, John Hanson, "EDSN development lessons learned," Proceedings of the 28th Annual AIAA/USU Conference on Small Satellites,paper: SSC14-VI-7, August 2014, URL: https://digitalcommons.usu.edu/cgi/viewcontent.cgi?article=3106&context=smallsat
6) Jeremy Anderson, Nathan Barnwell, , Maria Carrasquilla, , Jonathan Chavez,Olivia Formoso, Asia Nelson, Tyler Noel, Seth Nydam, Jessie Pease, Frank Pistella, Tyler Ritz, Steven Roberts, Paul Serra, Evan Waxman, John W. Conklin, Watson Attai, John Hanson, Anh N. Nguyen, Ken Oyadomari, Cedric Priscal, Jan Stupl, Jasper Wolf, Belgacem Jaroux, "Sub-nanosecond ground-to-space clock synchronization for nanosatellites using pulsed optical links," Advances in Space Research, Volume 62, Issue 12, 15 December 2018, Pages 3475-3490, Available online 27 June 2017, https://doi.org/10.1016/j.asr.2017.06.032
7) Acam-Messelectronic, "TDC-GPX Ultra-high Performance 8 Channel Time-to-Digital Converter datasheet", Acam-Messelectronic GmbH, 2007
8) John W. Conklin, Nathan Barnwell, Leopoldo Caro, Maria Carrascilla, Olivia Formoso, Seth Nydam, Paul Serra, Norman Fitz-Coy, "Optical time transfer for future disaggregated small satellite navigation systems", Proceedings of the 28th Annual AIAA/USU Conference on Small Satellites , August 2014, paper: SSC14-IX-5, URL: https://digitalcommons.usu.edu/cgi/viewcontent.cgi?article=3085&context=smallsat
9) "Rocket Lab successfully launches NASA CubeSats to orbit on first ever Venture Class Launch Services mission," Rocket Lab, 16 December 2018, URL: https://www.rocketlabusa.com/news/updates/rocket-lab-successfully-launches-nasa-cubesats-to-orbit-on-first-ever-venture-class-launch-services-mission/
10) Stephen Clark, "NASA, Rocket Lab partner on successful satellite launch from New Zealand," Spaceflight Now, 17 December 2018, URL: https://spaceflightnow.com/2018/12/17/nasa-rocket-lab-partner-on-successful-satellite-launch-from-new-zealand/
11) "10 CubeSats Ready for NASA's First Venture Class Launch," NASA, 13 December 2018, URL: https://www.nasa.gov/feature/10-cubesats-ready-for-nasa-s-first-venture-class-launch
The information compiled and edited in this article was provided by Herbert J. Kramer from his documentation of: "Observation of the Earth and Its Environment: Survey of Missions and Sensors" (Springer Verlag) as well as many other sources after the publication of the 4th edition in 2002. - Comments and corrections to this article are always welcome for further updates ([email protected])
OPTI Spacecraft Ground Segment Launch Mission Status References Back to top
|
__label__pos
| 0.579755 |
Guia d'instal·lació de Debian GNU/Linux
Aquest manual és programari lliure; podeu redistribuir-lo i/o modificar-lo sota els termes de la Llicència Pública General de GNU publicada per la Free Software Foundation. Feu un cop d'ull a la llicència a Apèndix F, Llicència Pública General de GNU.
Resum
Aquest document conté les instruccions d'instal·lació per a la versió 10 del sistema Debian GNU/Linux (nom en codi «buster») per a l'arquitectura 32-bit MIPS (little-endian) («mipsel»). També conté referències per obtenir més informació i informació de com aprofitar al màxim el vostre nou sistema Debian.
Sumari
Instal·lació de la versió 10 del sistema Debian GNU/Linux per a l'arquitectura mipsel
1. Benvingut a Debian
1.1. Què és Debian?
1.2. Què és GNU/Linux?
1.3. Què és Debian GNU/Linux?
1.4. Què és l'instal·lador de Debian?
1.5. Obtenció de Debian
1.6. Obtenció de l'última versió d'aquest document
1.7. Organització d'aquest document
1.8. Sobre els drets d'autoria i les llicències del programari
2. Requisits del sistema
2.1. Maquinari suportat
2.1.1. Arquitectures suportades
2.1.2. Plataformes suportades pel port Debian per a mipsel
2.1.3. Plataformes que ja no estan suportades pel port Debian mipsel
2.1.4. Processadors múltiples
2.1.5. Targetes gràfiques suportades
2.1.6. Maquinari per a la connexió de xarxes
2.1.7. Perifèrics i altre maquinari
2.2. Dispositius que requereixen microprogramari
2.3. Compra de maquinari específic per a GNU/Linux
2.3.1. Eviteu el maquinari propietari o tancat
2.4. Mitjans d'instal·lació
2.4.1. CD-ROM/DVD-ROM/BD-ROM
2.4.2. Xarxa
2.4.3. Disc Dur
2.4.4. Sistema Un*x o GNU
2.4.5. Sistemes d'emmagatzemament suportats
2.5. Requeriments de memòria i espai de disc
3. Abans d'instal·lar Debian GNU/Linux
3.1. Resum del procés d'instal·lació
3.2. Feu una còpia de seguretat de les vostres dades!
3.3. Informació necessària
3.3.1. Documentació
3.3.2. Fonts d'informació sobre el maquinari
3.3.3. Compatibilitat del maquinari
3.3.4. Configuració de la xarxa
3.4. Satisfer els requisits mínims de maquinari
3.5. Pre-particions per a sistemes d'arrencada múltiple
3.6. Configuració del maquinari i del sistema operatiu prèvia a la instal·lació
4. Obtenir el suport d'instal·lació del sistema
4.1. Conjunt de CD-ROM oficials de Debian GNU/Linux
4.2. Descarregar fitxers de les rèpliques de Debian
4.2.1. On trobar imatges d'instal·lació
4.3. Preparació dels fitxers per a l'arrencada en xarxa TFTP
4.3.1. Configurar un servidor DHCP
4.3.2. Configurar un servidor BOOTP
4.3.3. Habilitació del servidor TFTP
4.3.4. Posar les imatges del TFTP al seu lloc
4.4. Instal·lació automàtica
4.4.1. Instal·lació automàtica utilitzant l'instal·lador de Debian
5. Arrencada del sistema d'instal·lació
5.1. Arrencada de l'instal·lador en l'arquitectura 32-bit MIPS (little-endian)
5.1.1. Arrencada amb el TFTP
5.2. Accessibilitat
5.2.1. Instal·lar la interfície d'usuari («front-end»)
5.2.2. Dispositius de la placa
5.2.3. Tema d'alt contrast
5.2.4. Zoom
5.2.5. Instal·lació expert, mode de recuperació («Rescue»), instal·lació automatitzada («Automated»)
5.2.6. Accessibilitat del sistema instal·lat
5.3. Paràmetres d'arrencada
5.3.1. Terminal d'arrencada
5.3.2. Paràmetres de l'instal·lador Debian
5.3.3. Utilitzar els paràmetres de l'arrencada per respondre preguntes
5.3.4. Pas de paràmetres a mòduls del nucli
5.3.5. Afegir mòduls del nucli a la llista negra
5.4. Resolució de problemes del procés d'instal·lació
5.4.1. Fiabilitat del CD-ROM
5.4.2. Configuració de l'arrencada
5.4.3. Com interpretar els missatges del nucli durant l'arrencada
5.4.4. Informar d'errors d'instal·lació
5.4.5. Emissió d'informes d'error
6. Utilització de l'instal·lador de Debian
6.1. Com funciona l'instal·lador
6.1.1. Utilitzar l'instal·lador gràfic
6.2. Introducció als elements
6.3. Utilització dels elements individualment
6.3.1. Configuració de l'instal·lador de Debian i del maquinari
6.3.2. Configuració d'usuaris i contrasenyes
6.3.3. Configuració del rellotge i la zona horària
6.3.4. Realització de particions i selecció de punts de muntatge
6.3.5. Instal·lació del sistema base
6.3.6. Instal·lació de programari addicionals
6.3.7. Com fer el sistema arrencable
6.3.8. Finalització de la instal·lació
6.3.9. Solucionar problemes
6.3.10. Instal·lació a través de la xarxa
6.4. Carregar microprogramari no inclòs a l'instal·lador
6.4.1. Preparar un dispositiu
6.4.2. Microprogramari i el sistema ja instal·lat
7. Arrancada del nou sistema Debian
7.1. El moment de la veritat
7.2. Muntar volums xifrats
7.2.1. Resolució de problemes
7.3. Iniciar sessió
8. Següents passos i per on seguir
8.1. Aturar el sistema
8.2. Orientar-vos a Debian
8.2.1. El sistema de paquets de Debian
8.2.2. Programari addicional disponible per a Debian
8.2.3. Gestió de versions de les aplicacions
8.2.4. Gestió de tasques amb «cron»
8.3. Lectura i informació addicional
8.4. Configuració del vostre sistema per utilitzar el correu electrònic
8.4.1. Configuració predeterminada del correu electrònic
8.4.2. Enviar correus fora del sistema
8.4.3. Configuració de l'agent de transport de correu Exim4
8.5. Compilar un nou nucli
8.6. Restauració d'un sistema amb errors
A. Com Instal·lar
A.1. Preliminars
A.2. Arrencada de l'instal·lador
A.2.1. CDROM
A.2.2. Arrencada des de la xarxa
A.2.3. Arrencada des del disc dur
A.3. Instal·lació
A.4. Envieu un informe de la instal·lació
A.5. I finalment…
B. Automatització de la instal·lació fent servir una configuració prèvia
B.1. Introducció
B.1.1. Mètodes de configuració prèvia
B.1.2. Limitacions
B.2. Utilització de la configuració prèvia
B.2.1. Càrrega del fitxer de configuració prèvia
B.2.2. Utilització dels paràmetres d'arrencada per complementar la configuració prèvia
B.2.3. Mode auto
B.2.4. Àlies útils amb la configuració prèvia
B.2.5. Utilitzar un servidor DHCP per especificar els fitxers de configuració prèvia
B.3. Preparar un fitxer de configuració prèvia
B.4. Continguts del fitxer de configuració prèvia (per a buster)
B.4.1. Localització
B.4.2. Configuració de la xarxa
B.4.3. La consola de xarxa
B.4.4. Configuració del servidor rèplica
B.4.5. Configuració de comptes
B.4.6. Configuració del rellotge i del fus horari
B.4.7. Fer particions
B.4.8. Instal·lació del sistema base
B.4.9. Configuració de l'«apt»
B.4.10. Selecció de paquets
B.4.11. Finalització de la instal·lació
B.4.12. Configuració prèvia d'altres paquets
B.5. Opcions avançades
B.5.1. Execució d'ordres personalitzades a la instal·lació
B.5.2. Ús de la configuració prèvia per canviar els valors predeterminats
B.5.3. Càrrega en cadena de fitxers de configuració prèvia
C. Fer particions per a Debian
C.1. Com decidir quines particions fer per a Debian i llurs mides
C.2. L'arbre de directoris
C.3. Esquema de particions recomanat
C.4. Noms dels dispositius a Linux
C.5. Programes per fer particions de Debian
D. Informació variada
D.1. Dispositius del Linux
D.1.1. Configuració del ratolí
D.2. Espai requerit per a les tasques
D.3. Instal·lar Debian GNU/Linux des d'un sistema Unix/Linux
D.3.1. Començar
D.3.2. Instal·lar debootstrap
D.3.3. Executar debootstrap
D.3.4. Configurar el sistema base
D.3.5. Instal·lar un nucli
D.3.6. Configuració del carregador d'arrencada
D.3.7. Accés remot: instal·lar SSH i configurar l'accés
D.3.8. Retocs finals
D.4. Instal·lació de Debian GNU/Linux utilitzant PPP sobre Ethernet (PPPoE)
E. Sobre aquest document
E.1. Quant a aquest document
E.2. Contribucions a aquest document
E.3. Contribucions més destacables
E.4. Nota sobre les marques comercials
F. Llicència Pública General de GNU
Llistat de taules
3.1. Informació sobre el maquinari requerida per a la instal·lació
3.2. Requisits mínims del sistema recomanats
|
__label__pos
| 0.995348 |
Centralizing Users For Multiple Laravel Apps Using SSO And Laravel Passport
Mark Caggiano
4 min readMar 22, 2023
Centralizing users for multiple Laravel apps can be achieved using a Single Sign-On (SSO) approach. This approach allows users to authenticate once and gain access to multiple applications without having to log in again. Here are the steps to centralize users for multiple Laravel apps using SSO:
Step 1: Create a new Laravel app for the Authentication Server
Create a new Laravel app that will serve as the authentication server. This app will handle the user authentication and authorization logic. You can create a new Laravel app by running the following command:
composer create-project --prefer-dist laravel/laravel auth-server
Step 2: Install and configure the Laravel Passport package
Laravel Passport is a package that allows you to create a full OAuth2 server implementation. This package will be used to handle authentication and authorization for the various applications. Install Laravel Passport using the following command:
composer require laravel/passport
After installing the package, you need to run the migration to create the necessary database tables:
--
--
|
__label__pos
| 0.938431 |
1/*
2 * Copyright 2011, The Android Open Source Project
3 *
4 * Licensed under the Apache License, Version 2.0 (the "License");
5 * you may not use this file except in compliance with the License.
6 * You may obtain a copy of the License at
7 *
8 * http://www.apache.org/licenses/LICENSE-2.0
9 *
10 * Unless required by applicable law or agreed to in writing, software
11 * distributed under the License is distributed on an "AS IS" BASIS,
12 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 * See the License for the specific language governing permissions and
14 * limitations under the License.
15 */
16
17#include "librsloader.h"
18#include "utils/rsl_assert.h"
19
20#include <stdio.h>
21#include <stdlib.h>
22#include <string.h>
23#include <time.h>
24
25#include <fcntl.h>
26#include <sys/mman.h>
27#include <sys/stat.h>
28#include <sys/types.h>
29#include <unistd.h>
30
31struct func_entry_t {
32 char const *name;
33 size_t name_len;
34 void *addr;
35};
36
37void *find_sym(void *context, char const *name) {
38 static struct func_entry_t const tab[] = {
39#define DEF(NAME, ADDR) \
40 { NAME, sizeof(NAME) - 1, (void *)(&(ADDR)) },
41
42 DEF("printf", printf)
43 DEF("scanf", scanf)
44 DEF("__isoc99_scanf", scanf)
45 DEF("rand", rand)
46 DEF("time", time)
47 DEF("srand", srand)
48#undef DEF
49 };
50
51 static size_t const tab_size = sizeof(tab) / sizeof(struct func_entry_t);
52
53 // Note: Since our table is small, we are using trivial O(n) searching
54 // function. For bigger table, it will be better to use binary
55 // search or hash function.
56 size_t i;
57 size_t name_len = strlen(name);
58 for (i = 0; i < tab_size; ++i) {
59 if (name_len == tab[i].name_len && strcmp(name, tab[i].name) == 0) {
60 return tab[i].addr;
61 }
62 }
63
64 rsl_assert(0 && "Can't find symbol.");
65 return 0;
66}
67
68int main(int argc, char **argv) {
69 if (argc < 2) {
70 fprintf(stderr, "USAGE: %s [ELF] [ARGS]\n", argv[0]);
71 exit(EXIT_FAILURE);
72 }
73
74 int fd = open(argv[1], O_RDONLY);
75 if (fd < 0) {
76 fprintf(stderr, "ERROR: Unable to open the file: %s\n", argv[1]);
77 exit(EXIT_FAILURE);
78 }
79
80 struct stat sb;
81 if (fstat(fd, &sb) != 0) {
82 fprintf(stderr, "ERROR: Unable to stat the file: %s\n", argv[1]);
83 close(fd);
84 exit(EXIT_FAILURE);
85 }
86
87 unsigned char const *image = (unsigned char const *)
88 mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
89
90 if (image == MAP_FAILED) {
91 fprintf(stderr, "ERROR: Unable to mmap the file: %s\n", argv[1]);
92 close(fd);
93 exit(EXIT_FAILURE);
94 }
95
96 RSExecRef object = rsloaderCreateExec(image, sb.st_size, find_sym, 0);
97 if (!object) {
98 fprintf(stderr, "ERROR: Unable to load elf object.\n");
99 close(fd);
100 exit(EXIT_FAILURE);
101 }
102
103 int (*main_stub)(int, char **) =
104 (int (*)(int, char **))rsloaderGetSymbolAddress(object, "main");
105
106 int ret = main_stub(argc - 1, argv + 1);
107 printf("============================================================\n");
108 printf("ELF object finished with code: %d\n", ret);
109 fflush(stdout);
110
111 rsloaderDisposeExec(object);
112
113 close(fd);
114
115 return EXIT_SUCCESS;
116}
117
|
__label__pos
| 0.998878 |
home | O'Reilly's CD bookshelfs | FreeBSD | Linux | Cisco | Cisco Exam
Book HomePHP CookbookSearch this book
4.26. Finding All Permutations of an Array
4.26.2. Solution
Use one of the two permutation algorithms discussed next.
4.26.3. Discussion
The pc_permute() function shown in Example 4-6 is a PHP modification of a basic recursive function.
Example 4-6. pc_permute( )
function pc_permute($items, $perms = array( )) {
if (empty($items)) {
print join(' ', $perms) . "\n";
} else {
for ($i = count($items) - 1; $i >= 0; --$i) {
$newitems = $items;
$newperms = $perms;
list($foo) = array_splice($newitems, $i, 1);
array_unshift($newperms, $foo);
pc_permute($newitems, $newperms);
}
}
}
For example:
pc_permute(split(' ', 'she sells seashells'));
she sells seashells
she seashells sells
sells she seashells
sells seashells she
seashells she sells
seashells sells she
However, while this recursion is elegant, it's inefficient, because it's making copies all over the place. Also, it's not easy to modify the function to return the values instead of printing them out without resorting to a global variable.
The pc_next_permutation( ) function shown in Example 4-7, however, is a little slicker. It combines an idea of Mark-Jason Dominus from Perl Cookbook by Tom Christianson and Nathan Torkington (O'Reilly) with an algorithm from Edsger Dijkstra's classic text A Discipline of Programming (Prentice-Hall).
Example 4-7. pc_next_permutation( )
function pc_next_permutation($p, $size) {
// slide down the array looking for where we're smaller than the next guy
for ($i = $size - 1; $p[$i] >= $p[$i+1]; --$i) { }
// if this doesn't occur, we've finished our permutations
// the array is reversed: (1, 2, 3, 4) => (4, 3, 2, 1)
if ($i == -1) { return false; }
// slide down the array looking for a bigger number than what we found before
for ($j = $size; $p[$j] <= $p[$i]; --$j) { }
// swap them
$tmp = $p[$i]; $p[$i] = $p[$j]; $p[$j] = $tmp;
// now reverse the elements in between by swapping the ends
for (++$i, $j = $size; $i < $j; ++$i, --$j) {
$tmp = $p[$i]; $p[$i] = $p[$j]; $p[$j] = $tmp;
}
return $p;
}
$set = split(' ', 'she sells seashells'); // like array('she', 'sells', 'seashells')
$size = count($set) - 1;
$perm = range(0, $size);
$j = 0;
do {
foreach ($perm as $i) { $perms[$j][] = $set[$i]; }
} while ($perm = pc_next_permutation($perm, $size) and ++$j);
foreach ($perms as $p) {
print join(' ', $p) . "\n";
}
Dominus's idea is that instead of manipulating the array itself, you can create permutations of integers. You then map the repositioned integers back onto the elements of the array to calculate the true permutation — a nifty idea.
However, this technique still has some shortcomings. Most importantly, to us as PHP programmers, it frequently pops, pushes, and splices arrays, something that's very Perl-centric. Next, when calculating the permutation of integers, it goes through a series of steps to come up with each permutation; because it doesn't remember previous permutations, it therefore begins each time from the original permutation. Why redo work if you can help it?
Dijkstra's algorithm solves this by taking a permutation of a series of integers and returning the next largest permutation. The code is optimized based upon that assumption. By starting with the smallest pattern (which is just the integers in ascending order) and working your way upwards, you can scroll through all permutations one at a time, by plugging the previous permutation back into the function to get the next one. There are hardly any swaps, even in the final swap loop in which you flip the tail.
There's a side benefit. Dominus's recipe needs the total number of permutations for a given pattern. Since this is the factorial of the number of elements in the set, that's a potentially expensive calculation, even with memoization. Instead of computing that number, it's faster to return false from pc_next_permutation( ) when you notice that $i == -1. When that occurs, you're forced outside the array, and you've exhausted the permutations for the phrase.
Two final notes of implementation. Since the size of the set is invariant, you capture it once using count( ) and pass it into pc_next_permutation( ); this is faster than repeatedly calling count( ) inside the function. Also, since the set is guaranteed by its construction to have unique elements, i.e., there is one and only one instance of each integer, we don't need to need to check for equality inside the first two for loops. However, you should include them in case you want to use this recipe on other numeric sets, in which duplicates might occur.
4.26.4. See Also
Recipe 4.25 for a function that finds the power set of an array; Recipe 4.19 in the Perl Cookbook (O'Reilly); Chapter 3, A Discipline of Programming (Prentice-Hall).
Library Navigation Links
Copyright © 2003 O'Reilly & Associates. All rights reserved.
|
__label__pos
| 0.977592 |
[ Foro de C++ ]
resolver 4 algoritmos
15-Nov-2022 07:17
Invitado (joan)
0 Respuestas
1. Escriba un algoritmo en c++ que permita imprimir todos los divisores del numero 42854.
2. Escriba un algoritmo en c++ que permita ingresar las estaturas de n atletas ( n es dato de entrada). Imprima un mensaje que determine si hubo o no , algún atleta cuya estatura sea superior a 1.7 metros pero inferior a 1.95 metros.
3. Escriba un algoritmo c++ que permita imprimir los primeros 10 múltiplos del 4. Número leído en la variable n.
4. Escriba un algoritmo en c++ que permita leer las n temperaturas corporales tomadas a un paciente hospitalizado. a. imprima cuantas de esas temperaturas fueron convulsivas. nota: una temperatura convulsiva es aquella superior o igual a 41 grados. b. imprima el promedio de todas las temperaturas corporales.
(No se puede continuar esta discusión porque tiene más de dos meses de antigüedad. Si tienes dudas parecidas, abre un nuevo hilo.)
|
__label__pos
| 0.647453 |
How do I download a SolidWorks add in?
How do I activate a SOLIDWORKS add in?
To activate SOLIDWORKS add-ins:
1. Click the SOLIDWORKS Add-Ins tab of the CommandManager.
2. Select the add-in to load or unload. The additional functionality appears throughout the product user interface. The SOLIDWORKS Add-Ins tab contains some commands for the ScanTo3D and Toolbox add-ins.
Where are SOLIDWORKS add-ins?
You can load SOLIDWORKS add-ins from the SOLIDWORKS Add-Ins tab of the CommandManager.
Add-Ins
1. Click Tools > Add-Ins.
2. Select or clear applications under Active Add-ins or under Start Up. …
3. Click OK.
How do I download SOLIDWORKS manually?
You can download files manually and then install them using SOLIDWORKS Installation Manager. Select the option in SOLIDWORKS Installation Manager to download individual files (for example, Conduct manual download on the Download Options page).
How do I install Cura plugins?
From Ultimaker Cura, simply open the marketplace using the Marketplace button in the top right of the user interface. The marketplace panel will be displayed, from where you can navigate the plugins and materials. Click on a plugin or material to learn more about it, and simply click Install to add it to Cura.
How do I activate plastic in SOLIDWORKS?
If this is the first time you are using SOLIDWORKS Plastics, you must activate the SOLIDWORKS Plastics Add-In.
1. Click Tools > Add-Ins.
2. Under SOLIDWORKS Add-ins, select SOLIDWORKS Plastics.
3. Select the Start-up box next to SOLIDWORKS Plastics to ensure that it loads the next time you start SOLIDWORKS.
THIS IS SIGNIFICANT: Frequent question: How do you make a black viewport in AutoCAD?
How do you add motions in SOLIDWORKS?
To start a motion study in SOLIDWORKS you can click on “Motion Study 1” tab on the lower left corner of SOLIDWORKS user interface. Make sure to click on “Expand Motion Manager” to display the SOLIDWORKS Motion Manager timeline view. The first thing you need to do is to select the type of simulation you want to perform.
How do I disable Add-Ins in Solidworks?
If you wish to disable Instant3D, you can toggle it on or off via the Features toolbar or the Features tab of the CommandManager.
What is Solidworks API?
The SOLIDWORKS Application Programming Interface (API) is a COM programming interface to the SOLIDWORKS software. The API contains hundreds of functions that you can call from Visual Basic (VB), Visual Basic for Applications (VBA), VB.NET, C++, C#, or SOLIDWORKS macro files. …
How do I download SolidWorks for free?
Free Download, Install and License SOLIDWORKS 2021, 2020, 2019, 2018, 2017, 2016
1. Download SolidWorks. …
2. Run SolidWorks Installation Manager. …
3. Populate Serial Numbers and License SolidWorks. …
4. In the next window you can Select the folders where the software and Toolbox/Hole Wizard will be installed.
Where can I download SolidWorks files?
You can download Solidworks through the Solidworks customer portal. Go to solidworks.com and log in. If you don’t have an account, you can create one using your serial number.
How can I make SolidWorks download faster?
Click on the SW icon in the top left corner to see the new option ‘Get Faster Downloads’. Be sure to enable the option ‘Speed up downloads by using more network bandwidth‘ (after checking with your IT). This option has been around in previous versions but verify it’s enabled.
THIS IS SIGNIFICANT: What is included in a complete set of house plans?
|
__label__pos
| 0.99164 |
lymphostromal interactions in the thymic aging
Project Details
Description
DESCRIPTION (provided by applicant): Aging causes thymic involution which decreases thymic lymphopoiesis and exhausts the naive T-cell pool, constricting diversity of T-cell receptor repertoire and inducing immunosenescence. However, the mechanisms related to cellular compartments underlying thymic involution are unclear. The two main cellular compartments in the thymus are lymphohematopoietic progenitor cells (LPCs) and thymic epithelial cells (TECs). Currently, it is controversial if LPCs develop a cumulative intrinsic defect with age to trigger thymic involution, or whether aging results in dysfunction of TECs, causing secondary changes in thymocytes and thymic involution. Based on our preliminary studies, we hypothesize that the primary/dominant defect in aging is dysfunction of TECs, which in turn causes age-related thymopoietic insufficiency, in part through inadequate Notch gene signals that affect early stages of T-cell development. We will test these hypotheses through the following specific aims: 1). Compare the capacity of LPCs from middle-aged/aged and young animals to competitively develop in thymic stromal niches of young animals. We will measure competitive repopulation of unirradiated young IL-7R-/- recipient thymi by LPCs from old and young mice, to determine if the former have any intrinsic and irreversible defects. We will use a second competitive model, transplanting a fetal TEC network from RAG-/- mice to the kidney capsule of young RAG-/- mice, followed by intravenous administration of LPCs from old and young mice. 2). To determine whether the thymic microenvironment in aging provides inadequate Notch signals, resulting in reduced thymic lymphopoiesis. We will analyze expression of Notch ligands in TECs, and Notch receptors and Notch target genes in early stage thymocytes from aged mice. Then, we will provide enhanced Notch signaling to aged thymus in vivo by infusing Notch ligand-expressing thymic epithelial cell lines. We will then determine whether the enhanced Notch signaling can improve early stages of T-cell development in the aged thymus. The proposed studies will improve our understanding of the mechanism(s) of aging-related decreased T-lymphopoiesis and lay the groundwork for development of practical strategies to combat thymopoietic failure due to aging. PUBLIC HEALTH RELEVANCE: This proposal will identify the cellular compartment which has the dominant/primary defect that causes aging-related thymic involution, and determine whether inadequate Notch signals contribute to this defect by impairing early T-cell development and T-lymphopoiesis in the elderly.
StatusFinished
Effective start/end date1/04/1130/11/11
|
__label__pos
| 0.648572 |
Skip to content
Navigating Configuration Post-processing
Note
Current implementation only renders the configuration to push, it doesn't update the configuration into the target devices.
The intended configuration job doesn't produce a final configuration artifact (see below for reasons why). The intended configuration is the "intended" running configuration, because the intended configuration job generates what is in the final running configuration. This works well for the "compliance" feature, but not as well to create a configuration artifact that is ready to push.
Challenging use cases when using the running configuration as intended:
• Because the intended configuration is stored in the database, and in an external Git repository, it should not contain any secret.
• The format of the running configuration is not always the same as the configuration to push, examples include:
• Pushing SNMPv3 configurations, which do not show up in the running config
• VTP configurations where the configurations is not in the running config at all
• Implicit configurations like a "no shutdown" on an interface
• The configurations used to get the configuration to the intended state may require to be ordered to not cause an outage.
As the Golden Config application becomes more mature in delivering an all encompassing configuration management solution, it requires an advanced feature to render a configuration artifact. That artifact must be in the final format your device is expecting, from the intended configuration.
This is exposed via the get_config_postprocessing() function defined in nautobot_golden_config.utilities.config_postprocessing. This method takes the current configurations generated by the Golden Config intended configuration feature, and the HTTP request. This function will return the intended configuration that is ready to push.
From the user perspective, you can retrieve this configuration via two methods:
• UI: within the Device detail view, if the feature is enabled, a new row in the "Configuration Types" appears, and clicking the icon the new configuration will be rendered on the fly (synchronously). Check figure.
• REST API: at the path /api/plugins/golden-config/config-postprocessing/{device_id} you can request the intended configuration processed, and the return payload will contain a "config" key with the rendered configuration.
Configuration Postprocessing
Customize Configuration Processing
There are two different ways to customize the default behavior of get_config_postprocessing method:
• postprocessing_callables: is the list of available methods for processing the intended configuration. It contains some default implemented methods, currently render_secrets. But it could be extended via configuration options (see next section). The format for defining these methods is via the dotted string format that will be imported by Django. For example, the render_secrets is defined as "nautobot_golden_config.utilities.config_postprocessing.render_secrets".
• postprocessing_subscribed: is the list of methods names (strings) that define the order in the processing chain. The defined methods MUST exist in the postprocessing_callables list. This list can be customized via configuration options, and eventually, it could be extended to accept HTTP query parameters.
Existing Default Processors
Render Secrets
The render_secrets function performs an extra Jinja rendering on top of an intended configuration, exposing new custom Jinja filters:
• get_secret_by_secret_group_name: as the name suggests, it returns the secret_group value, for a secret type, from its name.
Note
Other default Django or Netutils filters are not available in this Jinja environment. Only encrypt_<vendor>_type5 and encrypt_<vendor>_type7 can be used together with the get_secret filters.
Because this rendering is separated from the standard generation of the intended configuration, you must use the {% raw %} Jinja syntax to avoid being processed by the initial generation stage.
1. For example, an original template like this, {% raw %}ppp pap sent-username {{ secrets_group["name"] | get_secret_by_secret_group_name("username")}}{% endraw %}
2. Produces an intended configuration as ppp pap sent-username {{ secrets_group["name"] | get_secret_by_secret_group_name("username") }}
3. After the render_secrets, it becomes ppp pap sent-username my_username.
Notice that the get_secret filters take arguments. In the example, the secret_group name is passed, together with the type of the Secret. Check every signature for extra customization.
Note
Remember that to render these secrets, the user requesting it via UI or API, MUST have read permissions to Secrets Groups, Golden Config, and the specific Device object.
Render Secrets Example
This shows how Render the Secrets feature for a Device, for the default Secrets Group FK, and for custom relationships, in the example, at Location level.
GraphQL query
query ($device_id: ID!) {
device(id: $device_id) {
secrets_group {
name
}
location {
rel_my_secret_relationship_for_location {
name
}
}
}
}
Jinja Template
Using the default secrets_group FK in Device:
{% raw %}{{ secrets_group["name"] | get_secret_by_secret_group_name("password") | default('no password') }}{% endraw %}
Using the custom relationship at the Location level:
{% raw %}{{ location["rel_my_secret_relationship_for_location"][0]["name"] | get_secret_by_secret_group_name("password") | default('no password') }}{% endraw %}
This will end up rendering the secret, of type "password", for the corresponding SecretGroup.
Managing errors
Obviously, the rendering process can find multiple challenges, that are managed, and properly explained to take corrective actions:
Found an error rendering the configuration to push: Jinja encountered and UndefinedError: 'None' has no attribute 'name', check the template for missing variable definitions.
|
__label__pos
| 0.943116 |
Trending
What is the coefficient in a linear equation?
What is the coefficient in a linear equation? If a, b, and r are real numbers (and if a and b are...
What is the coefficient in a linear equation?
If a, b, and r are real numbers (and if a and b are not both equal to 0) then ax+by = r is called a linear equation in two variables. (The “two variables” are the x and the y.) The numbers a and b are called the coefficients of the equation ax+by = r. The number r is called the constant of the equation ax + by = r.
Herein, what are the coefficients in an equation?
coefficient. Mathematics: Number or other known factor (usually a constant) by which another number or factor (usually a variable) is multiplied. For example, in the equation ax2 + bx + c = 0, ‘a’ is the coefficient of x2, and ‘b’ is the coefficient of x.
Also, what are coefficients? In math and science, a coefficient is a constant term related to the properties of a product. In the equation that measures friction, for example, the number that always stays the same is the coefficient. In algebra, the coefficient is the number that you multiply a variable by, like the 4 in 4x=y.
Likewise, what is linear equation with example?
The definition of a linear equation is an algebraic equation in which each term has an exponent of one and the graphing of the equation results in a straight line. An example of linear equation is y=mx + b.
What is an example of coefficient?
A number used to multiply a variable. Example: 6z means 6 times z, and “z” is a variable, so 6 is a coefficient. Sometimes a letter stands in for the number. Example: In ax2 + bx + c, “x” is a variable, and “a” and “b” are coefficients.
25 Related Question Answers Found
What is Y in algebra?
It’s just a variable that is commonly used I equations. This means that Y can mean any number, or anything else you might ever want to solve, like the amount of kittens in a house. In that case, the amount of kittens would be y. Y is also commonly used in Slope Intercept form. Y = mx+b.
What is a formula in algebra?
A formula is a mathematical rule or relationship that uses letters to represent amounts which can be changed – these are called variables. For example, the formula to work out the area of a triangle. The plural of formula is formulae or formulas.
What is coefficient form?
A polynomial is an expression that can be written in the form. anxn+⋯+a2x2+a1x+a0. Each real number aiis called a coefficient. The number a0? that is not multiplied by a variable is called a constant.
What does 5x mean in math?
anyway, 5x means 5 times x.
Why do we use coefficients?
In other words, the coefficients in the equation tells us the molar ratio of each substance in the reaction to every other substance. This ratio is important when calculating the quantities of reactant(s) which will produce a certain quantity of product(s).
What is the coefficient of the 3rd term?
Algebra Help A coefficient is a number in front of a variable. For example, in the expression x 2-10x+25, the coefficient of the x 2 is 1 and the coefficient of the x is -10. The third term, 25, is called a constant.
What is called linear equation?
A linear equation looks like any other equation. It is made up of two expressions set equal to each other. When you find pairs of values that make the linear equation true and plot those pairs on a coordinate grid, all of the points for any one equation lie on the same line. Linear equations graph as straight lines.
What is non linear equation?
NonLinear Equations It forms a straight line or represents the equation for the straight line. It does not form a straight line, but form a curve. It has only one degree. Or we can also define it as an equation having the maximum order of 1. A nonlinear equation has the degree as 2 or more than 2, but not less than 2.
What is linear equation in algebra?
A linear equation is any equation that can be written in the form. ax+b=0. where a and b are real numbers and x is a variable. This form is sometimes called the standard form of a linear equation. Note that most linear equations will not start off in this form.
How do you identify a linear equation?
There are actually multiple ways to check if an equation or graph is a linear function or not . First make sure that graph fits the equation y = mx + b . y = the point for y ; x = the point for x ; m = slope ; b = y intercept . By using this equation you’ll be able to tell if it is a linear line or not .
What are the types of linear equation?
There are three major forms of linear equations: point-slope form, standard form, and slope-intercept form.
What is linear relationship?
A linear relationship (or linear association) is a statistical term used to describe a straight-line relationship between a variable and a constant.
What is an example of an equation?
An equation is a mathematical sentence that has two equal sides separated by an equal sign. 4 + 6 = 10 is an example of an equation. For example, 12 is the coefficient in the equation 12n = 24. A variable is a letter that represents an unknown number.
What is linear function in math?
Linear functions are those whose graph is a straight line. A linear function has the following form. y = f(x) = a + bx. A linear function has one independent variable and one dependent variable. The independent variable is x and the dependent variable is y.
How do I create a linear equation?
Use a line already drawn on a graph and its demonstrated points before creating a linear equation. Follow this formula in making slope-intercept linear equations: y = mx + b. Determine the value of m, which is the slope (rise over run). Find the slope by finding any two points on a line.
What is the coefficient of 5?
The coefficients are the numbers that multiply the variables or letters. Thus in 5x + y – 7, 5 is a coefficient. It is the coefficient in the term 5x. Also the term y can be thought of as 1y so 1 is also a coefficient.
What is another word for coefficient?
Words nearby coefficient coed, coedit, coeditor, coeducation, coeducational, coefficient, coefficient of correlation, coefficient of elasticity, coefficient of expansion, coefficient of friction, coefficient of performance.
Leave a Reply
Your email address will not be published. Required fields are marked *
|
__label__pos
| 1 |
CodeGym /Java Blog /Java Classes /Java Inner Classes
Author
Alex Vypirailenko
Java Developer at Toshiba Global Commerce Solutions
Java Inner Classes
Published in the Java Classes group
In Java some classes can contain other classes within them. Such classes are called nested classes. Classes defined within other classes generally fall into two categories — static and non-static. Nested non-static classes are called inner. Nested classes that are declared static are called static nested classes. In fact, there is nothing complicated here, although the terminology looks somewhat fuzzy and can sometimes confuse even a professional software developer.
Nested and inner classes
So all classes located inside other classes are called Nested classes.
class OuterClass {
...
class NestedClass {
...
}
}
Nested Classes that are not static are called Inner Classes, and those that are static are called static Nested Classes. Java Inner Classes - 1
class OuterClass {
...
static class StaticNestedClass {
...
}
class InnerClass {
...
}
}
Thus, all Inner Classes are Nested, but not all Nested are Inner. These are the main definitions. Inner classes are a kind of security mechanism in Java. We know that an ordinary class cannot be associated with a private access modifier. However, if our class is a member of another class, then the inner class can be made private. This feature is also used to access private class members.
Inner Class Example
So, let's try to create some class, and inside it — another class. Imagine some kind of modular game console. There is a “box” itself, and certain modules can be connected to it. For example, a game controller, a steering wheel, a VR helmet, which, as a rule, do not work without the console itself. Here we have the GameConsole class. It has 2 fields and 1 method — start(). The difference between GameCosole and the class we are used to is that it has an internal GameController class.
public class GameConsole {
private String model;
private int weight;
public void run() {
System.out.println("Game console is on");
}
public class GameController {
private String color;
public void start() {
System.out.println("start button is pressed");
}
public void x() {
System.out.println("x button is pressed");
}
public void y() {
System.out.println("y button is pressed");
}
public void a() {
System.out.println("a button is pressed");
}
public void b() {
System.out.println("b button is pressed");
}
public void mover() {
System.out.println("mover button is pressed");
}
}
}
At this point, you might be wondering: why not make these classes "separate"? It is not necessary to make them nested. Indeed it is possible. Rather, it is about the correct design of classes in terms of their use. Inner classes are created to highlight in the program an entity that is inextricably linked with another entity. A controller or, for example, a VR helmet are components of the console. Yes, they can be bought separately from the console, but cannot be used without it. If we made all these classes separate public classes, our program could have, for example, the following code:
public class Main {
public static void main(String[] args) {
GameController controller = new GameController();
controller.x();
}
}
What happens in this case isn’t clear, since the controller itself does not work without a console. We have created a game console object. We created its sub-object — the game controller. And now we can play, just press the right keys. The methods we need are called on the right objects. Everything is simple and convenient. In this example, extracting the game controller enhances the encapsulation (we hide the details of the console parts inside the corresponding class), and allows for a more detailed abstraction. But if we, for example, create a program that simulates a store where you can separately buy a VR helmet or controller, this example will fail. There it is better to create a game controller separately. Let's take another example. We mentioned above that we can make the inner class private and still call it from the outer class. Below is an example of such classes.
class OuterClass {
// inner class
private class InnerClass {
public void print() {
System.out.println("We are in the inner class...");
}
}
// method of outer class. We are create an inner class from the method of outer one
void display() {
InnerClass inner = new InnerClass();
inner.print();
}
}
Here the OuterClass is the outer class, InnerClass is the inner class, display() is the method inside which we are creating an object of the inner class. Now let’s write a demo class with a main method where we are going to invoke the display() method.
public class OuterDemoMain {
public static void main(String args[]) {
// create an object of the outer class
OuterDemo outer = new OuterDemo();
outer.display();
}
}
If you run this program, you will get the following result:
We are in the inner class...
Inner classes classification
The inner classes themselves or nested non-static classes fall into three groups.
• Inner class as is. Just one non-static class inside the other one as we demonstrated above with GameConsole and GameController example.
• Method-local Inner class is a class inside a method.
• Anonymous Inner class.
Java Inner Classes - 2
Method local Inner class
In Java you can write a class inside a method and it’s a local type. Like local variables, the scope of an inner class is limited within a method. A method-local inner class can only be created within the method where the inner class is defined. Let's demonstrate how to use the local method inner class.
public class OuterDemo2 {
//instance method of the outer class OuterDemo2
void myMethod() {
String str = "and it's a value from OuterDemo2 class' myMethod ";
// method-local inner class
class methodInnerDemo {
public void print() {
System.out.println("Here we've got a method inner class... " );
System.out.println(str);
}
}
// Access to the inner class
methodInnerDemo inn = new methodInnerDemo();
inn.print();
}
}
Now we are going to write a demo class with a main method where we are going to invoke the outer() method.
public class OuterDemoMain {
public static void main(String args[]) {
OuterDemo2 outer = new OuterDemo2();
outer.myMethod();
}
}
The output is:
Here we've got a method inner class... and it's a value from OuterDemo2 class' myMethod
Java Inner Classes - 3
Anonymous inner class
An inner class declared without a class name is called an anonymous inner class. When we declare an anonymous inner class, we immediately instantiate it. Typically, such classes are used whenever you need to override a class or interface method.
abstract class OuterDemo3 {
public abstract void method();
}
class outerClass {
public static void main(String args[]) {
OuterDemo3 inner = new OuterDemo3() {
public void method() {
System.out.println("Here we've got an example of an anonymous inner class");
}
};
inner.method();
}
}
The output is here:
Here we've got an example of an anonymous inner class...
Anonymous Inner Class as Argument
You can also pass an anonymous inner class as an argument to the method. Here is an example.
interface OuterDemo4 {
String hello();
}
class NewClass {
// accepts the object of interface
public void displayMessage(OuterDemo4 myMessage) {
System.out.println(myMessage.hello());
System.out.println("example of anonymous inner class as an argument");
}
public static void main(String args[]) {
NewClass newClass = new NewClass();
//here we pass an anonymous inner class as an argument
newClass.displayMessage(new OuterDemo4() {
public String hello() {
return "Hello!";
}
});
}
}
The output is here:
Hello! example of anonymous inner class as an argument
To reinforce what you learned, we suggest you watch a video lesson from our Java Course
Comments
TO VIEW ALL COMMENTS OR TO MAKE A COMMENT,
GO TO FULL VERSION
|
__label__pos
| 0.999261 |
Sign up ×
Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free.
I'm building a website that makes use of a flexslider, but I want to implement some URL hash navigation. Based on the hash of the URL, i plan on getting the index of the slide that i want to display and the closest I came is by looking at the code for the manual navigation, where the index of the clicked element equals the index of the slide:
slider.controlNav.live(eventType, function(event) {
event.preventDefault();
var $this = $(this),
target = slider.controlNav.index($this);
if (!$this.hasClass(namespace + 'active')) {
(target > slider.currentSlide) ? slider.direction = "next" : slider.direction = "prev";
slider.flexAnimate(target, vars.pauseOnAction);
}
});
So I tried adjusting the principle and putting it in the start property of the Flexslider:
$('.flexslider').flexslider({
start: function(slider) {
var target = 2; // Set to test integer
(target > slider.currentSlide) ? slider.direction = "next" : slider.direction = "prev";
slider.flexAnimate(target);
}
});
Getting the corresponding integer based on the hash in the URL shouldn't be a problem, but i cant seem to get the slide i need with a test integer.
Does anyone have any experience with URL hash's and the Flexslider?
share|improve this question
1 Answer 1
up vote 6 down vote accepted
I was searching for the same answer, and figured it out, so here it is in case you, or someone else, needs it. As long as we're just talking number values, it's pretty simple.
$(window).load(function(){
//set some variables for calculating the hash
var index = 0, hash = window.location.hash;
//via malsup (Cycle plugin), calculates the hash value
if (hash) {
index = /\d+/.exec(hash)[0];
index = (parseInt(index) || 1) - 1;
}
$(".flexslider").flexslider({
startAt: index, //now foo.html#3 will load item 3
after:function(slider){
window.location.hash = slider.currentSlide+1;
//now when you navigate, your location updates in the URL
}
})
})
That should do the trick
share|improve this answer
This is great. If anything, within your JS you could assign Strings to said numbers in an array to be able to retrieve the index of a given String. Thanks for the answer! – Joey Aug 26 '12 at 18:40
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.844355 |
Tell me more ×
Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required.
I've suddenly started getting errors with error code 1 and "An unknown error occurred" when using FQL to query the share stats on a particular URL. This only started happening last week at approximately 2013-01-11 02:43:02 +0000 according to my records.
SELECT url, normalized_url, share_count, like_count, comment_count, total_count, click_count FROM link_stat WHERE url IN('http://phobos.apple.com/WebObjects/MZStore.woa/wa/viewSoftware?id=331975235')
Here's the URL I'm using to make the query: http://api.facebook.com/method/fql.query?format=json&query=SELECT%20url%2C%20normalized_url%2C%20share_count%2C%20like_count%2C%20comment_count%2C%20total_count%2C%20click_count%20FROM%20link_stat%20WHERE%20url%20IN%28%27http%3A%2F%2Fphobos.apple.com%2FWebObjects%2FMZStore.woa%2Fwa%2FviewSoftware%3Fid%3D331975235%27%29
Which returns the following JSON results:
{
"error_code": 1,
"error_msg": "An unknown error occurred",
"request_args": [
{
"key": "method",
"value": "fql.query"
},
{
"key": "format",
"value": "json"
},
{
"key": "query",
"value": "SELECT url, normalized_url, share_count, like_count, comment_count, total_count, click_count FROM link_stat WHERE url IN('http://phobos.apple.com/WebObjects/MZStore.woa/wa/viewSoftware?id=331975235')"
}
]
}
Normally, I would query several URLs at once, but in this case I narrowed down the problem to this particular URL that is causing the error in a batch.
Any ideas what could be causing this problem? I am assuming this is something internal on the Facebook side since it was working fine until last week. Additionally, the Facebook Platform Bugs tool (https://developers.facebook.com/bugs) seems to not be working and it's sending me back to the developers main page.
share|improve this question
add comment
Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook.
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Browse other questions tagged or ask your own question.
|
__label__pos
| 0.81381 |
×
Clear RAM Cache, Buffer, and Swap Space on Linux System - How to do it ?
This article covers how to clear the cache and buffer memory of the physical memory along with clearing the swap space when needed.
Every Linux System has three options to clear cache without interrupting any processes or services.
If you want to clear Swap space, you may like to run the below command.
$ swapoff -a && swapon -a
To Clear PageCache, dentries and inodes:
$ sync; echo 3 > /proc/sys/vm/drop_caches
To Clear PageCache only:
$ sync; echo 1 > /proc/sys/vm/drop_caches
To Clear dentries and inodes:
$ sync; echo 2 > /proc/sys/vm/drop_caches
Create and Use Bash Aliases on Ubuntu 20.04 Linux System - Step by Step Process ?
This article covers the procedure of creating and using bash aliases. Running long and complex commands is usually tedious and time-consuming. Aliases provide much-needed relief by providing shortcuts to those complex commands.
These shortcuts can easily be called on the terminal and yield the same result as the complex command.
This tutorial shows how to create and add aliases permanently to your bash shell on Linux and Unix-like systems.
To Create Bash Aliases
Creating aliases in bash is very straight forward.
The syntax is as follows:
alias alias_name="command_to_run"
An alias declaration starts with the alias keyword followed by the alias name, an equal sign and the command you want to run when you type the alias.
Create and Run a Shell Script in CentOS 8 - Step by step process to do it ?
This article covers how to create and run a simple shell script in CentOS 8 system. With this, you can easily create and run even complex scripts and automate repetitive tasks.
If you are using other Linux distributions, you can visit our posts on how to create and run a shell script in Ubuntu , Debian , and Linux Mint .
An SH file is a script programmed for bash, a type of Unix shell (Bourne-Again SHell). It contains instructions written in the Bash language and can be executed by typing text commands within the shell's command-line interface.
To write and execute a script:
1. Open the terminal. Go to the directory where you want to create your script.
2. Create a file with . sh extension.
3. Write the script in the file using an editor.
4. Make the script executable with command chmod +x <fileName>.
5. Run the script using ./<fileName>.
Create and Run a Shell Script in Linux Mint 20 - Step by Step process to perform this ?
This article covers the method of creating and running a shell script in Linux Mint 20.
With this, you can write and execute complex shell scripts in Linux Mint 20 very easily.
To write and execute a script:
1. Open the terminal. Go to the directory where you want to create your script.
2. Create a file with . sh extension.
3. Write the script in the file using an editor.
4. Make the script executable with command chmod +x <fileName>.
5. Run the script using ./<fileName>.
To save a .sh file in Ubuntu:
1. Run nano hello.sh.
2. nano should open up and present an empty file for you to work in.
3. Then press Ctrl-X on your keyboard to Exit nano.
4. nano will ask you if you want to save the modified file.
5. nano will then confirm if you want to save to the file named hello.sh.
To Make a Bash Script Executable in Linux:
1) Create a new text file with a . sh extension.
2) Add #!/bin/bash to the top of it. This is necessary for the “make it executable” part.
3) Add lines that you'd normally type at the command line.
4) At the command line, run chmod u+x YourScriptFileName.sh.
5) Run it whenever you need!
Create and Run a Shell Script in Debian 10 -Step by step process to do it ?
This article covers how to easily create a shell script and automate repetitive jobs in #Linux. Shell scripts are just a series of commands that you add in a file and run them together.
To write and execute a #script:
1. Open the #terminal. Go to the directory where you want to create your script.
2. Create a file with . sh extension.
3. Write the script in the file using an editor.
4. Make the script executable with command chmod +x <fileName>.
5. Run the script using ./<fileName>.
#Shell is a #UNIX term for an interface between a user and an operating system service.
Shell provides users with an interface and accepts human-readable commands into the system and executes those commands which can run automatically and give the program's output in a shell script.
Different Methods to Find Your Private IP Address in Linux Mint 20 ?
This article covers different methods to quickly find the IP address of your #Linux Mint system.
The simplest way to check the ip address of linux Mint, when using the bash shell is typing the command ifconfig.
On typing the ifconfig you will not only be provided with the ip address, but also the mac address, subnet mask and other information.
The following commands will get you the private IP address of your interfaces:
1. ifconfig -a.
2. ip addr (ip a).
3. hostname -I | awk '{print $1}'.
4. ip route get 1.2.
5. nmcli -p device show.
What is the #ipconfig command for Linux?
ifconfig(interface configuration) command is used to configure the kernel-resident network interfaces. It is used at the boot time to set up the interfaces as necessary.
After that, it is usually used when needed during debugging or when you need system tuning.
Ways to Find Your IP address in Ubuntu 20.04 LTS
This article will guide you on how to check your private IP address in #Ubuntu 20.04 LTS system.
ifconfig command is used to display or configure a network interface.
To use command prompt (CMD) to find my #IP #address:
1. Open the command prompt: if you have a Start menu in your Windows system, open it and type cmd into the search bar.
2. Type ipconfig into the command prompt (or the Run box).
3. Find your IP address within the text that pops up.
You can also use the following commands will get you the private IP address of your interfaces:
i. ifconfig -a.
ii. ip addr (ip a)
iii. hostname -I | awk '{print $1}'
iv. nmcli -p device show.
Methods to shutdown Debian 10 from the command line and GUI ?
This article will guide you on how to shut down your Debian 10 system properly using different methods.
If you run a desktop environment, there is usually an option to "log out" available from the #application menu that allows you to shutdown (or #reboot) the system.
Alternatively you can press the key combination Ctrl+Alt+Del.
To shut down #Linux:
1. To shut down the system from a terminal session, sign in or "su" to the "root" account.
2. Then type ``/sbin/shutdown -r now''. It may take several moments for all processes to be terminated, and then Linux will shut down.
Different methods to create and run a Shell script on Ubuntu 20.04 LTS ?
This article will guide you on different methods of creating and running a #shell #script in #Ubuntu 20.04. Shell Scripting is an open-source #computer program designed to be run by the #Unix / #Linux shell. Shell Scripting is a program to write a series of commands for the shell to execute. To write and execute a script: 1. Open the #terminal. Go to the directory where you want to create your script. 2. Create a file with . sh extension. 3. Write the script in the file using an editor. 4. Make the script executable with command chmod +x . 5. Run the script using ./.
How to know Ubuntu version via command line ?
This article will guide you on how to check your Ubuntu version so that you can apply patches and update versions for security and performance reasons. The Process of Checking the #Ubuntu version in the #terminal ? i. Open the terminal using "Show Applications" or use the keyboard shortcut [Ctrl] + [Alt] + [T]. ii. Type the #command "lsb_release -a" into the command line and press enter. iii. The terminal shows the Ubuntu version you're running under "Description" and "Release".
How To Run a Script In Linux?
This tutorial will guide you on how to write a simple shell script and run a script in Linux operating system with help of chmod and other commands.
How to Set Up OpenVPN Server on Debian 10 ?
This article will guide you on how to set up an OpenVPN server on Debian Linux 10 server.
More Linux Tutorials
We create Linux HowTos and Tutorials for Sys Admins. Visit us on IbmiMedia.com
Also for Tech related tips, Visit forum.outsourcepath.com or General Technical tips on www.outsourcepath.com
Keep In Touch
|
__label__pos
| 0.999414 |
Catapres
One peculiarity noticed by mcg Dr. But it is side very difficultto say how things are to go in this case. To a new hospital, under the flashes charg e of the Episcopal Sisters of Charity. Among other poisons called by Brieger toxines, may be mentioned the so-called cadaveric alkaloids, such as neuridine, cadavarine, saprine, mydaleine, putrescine, muscarine, choline, and pepto-toxines: for. It will not be necessary here to remove the entire gland, but we will cut well outside the diseased indurated tissue, in order that there may be no return of the disease from any "150" nidus.
The latter is a very aie always preceded by an aggravation of the other symptoms, especially of the sweating what and priddng of the skin.
Dosage - the pain is increased by pressure on the and the paroxysms and intervals particularly are usually less marked than in other forms of neuralgia. These lesions liave been variously described as hypertrophy, interstitial myocarditis or necrosis of the myocardium (hot). Girard, Surgeon, 100 is relieved from duty at Alcatraz Island. Dose - in his own case there were no oontraindications, and there were difficulties in cazrying out attack of biliary colic without jaundice, was suddenly seized the following day constipation set in and continued until the end. They were Banti's articles effects and confirmed Banti's statement as to the clinical phenomena of splenomegaly with cirrhosis. Constitutional causes are certainly far more powerful in the production of this disease than are mg local injuries; for injuries are common, while cases of cai'ies are comparatively rare. In - another common mode of formation of cysts is by the obstruction and gradual dilatation of gland ducts, as seen in ranula and in the obstruction of sebaceous glands. Since the sutures and fontanelles of the skull remain open for a long time, and since, until the sutures are closed, the growing biain does not permit any distortion of the skull, the growth of the skull is not arrested, and we often find persons, formerly rachitic, very of generic all proportion to the misshapen body. Valleix mentions numerous points douloureux in facial neuralgia; we shall only call attention to three of them, which lie nearly in a vertical straight line, and correspond to the of supraorbital foramen, the anterioi particularly in the branches of the supraorbital, and affects the forehead, iQrebrows, and upper eyelid. Lastly, we must mention Ihat in some cases perforation of the iih testine occurs in the fifth or sixth week, not only while the patient is debilitated by the fever, induced by sluggish ulcers, but even while Recovery is the most frequent termination of typhoid fever; it takes place in about three-fourths of all cases; but some epidemics are far more malignant, while in others the name mortality is much less. Any change, either progressive or retroprogressive, in the metamorphosis of the ultimate cell structure of the brain will upset that symmetrical balance of action known as sanity, and the various grades of insanity are sirve developed.
In typhoid relapses the spleen usually clonidine swells again. Given that practice parameters will be promulgated and applied, physicians can best protect themselves by being participants in the process so that the standards developed are useful guidelines and not an ivory tower tablets wish list. The federal government is concerned about the que different situa.
The importance of tolerance to the appropriate use of isosorbide dinitrate in the management of patients with angina pectoris has not been determined: indications. As many persons may not have ready access to that work, allow me to repeat here the history of my first case, on the success of which hung the fate of hundreds buy of children. He believed they carried the infection in their clothing; he did not believe plague could be carried by grain or other articles of insert food. Perhaps a corollaiy of this activity will be para to make the specialty more representative, but I do have some skepticism in that regard. By the methods of tts staining it became possible to recognize mitotic figures in the nuclei of leukocytes in leukemia.
Catapres
In this connection it is interesting to note that the larger doses of atoxyl gave no better results than did of two succeeding days (adults). Locally these sores have been treated with dilute nitric acid, one "package" part to five, every fourth day. Catapresan - and smooth; its consistence something like the liver. When the kidney is opened either in the living subject or at the autopsy table all that is seen is a more or less extensive suppuration involving the fatty capsule of the kidney and patch the loose retroperitoneal connective tissue surrounding it, usually in the form of a large abscess, or more rarely in small, circumscribed, discrete foci.
|
__label__pos
| 0.685778 |
This plugin hasn’t been tested with the latest 3 major releases of WordPress. It may no longer be maintained or supported and may have compatibility issues when used with more recent versions of WordPress.
Forms: 3rd-Party Submission Reformat
Description
Allows you to customize the formatting of specific submission fields before mapping to a 3rdparty service endpoint with Forms 3rdparty Integration.
For example, can be used to reformat a Gravity Form or Contact Form 7 ‘date’ field, or uppercasing a name field, before sending it to a CRM.
Screenshots
Installation
1. Unzip/upload plugin folder to your plugins directory (/wp-content/plugins/)
2. Make sure Forms 3rdparty Integration is installed and settings have been saved at least once.
3. Activate this plugin
4. Choose which fields to reformat, as they appear in the ‘mapping’ column
5. Provide one or more regular expression patterns to find and replace, like /(\d+)\/(\d+)\/(\d+)/ to change the date from ‘dd/mm/yyyy’
6. Provide one or more regular expression replacement patterns to replace, like $2-$1-$3 to change the date to ‘mm-dd-yyyy’
FAQ
Installation Instructions
1. Unzip/upload plugin folder to your plugins directory (/wp-content/plugins/)
2. Make sure Forms 3rdparty Integration is installed and settings have been saved at least once.
3. Activate this plugin
4. Choose which fields to reformat, as they appear in the ‘mapping’ column
5. Provide one or more regular expression patterns to find and replace, like /(\d+)\/(\d+)\/(\d+)/ to change the date from ‘dd/mm/yyyy’
6. Provide one or more regular expression replacement patterns to replace, like $2-$1-$3 to change the date to ‘mm-dd-yyyy’
How do I write a regex?
Sorry, you’ll have to learn that the hard way…
It doesn’t work right…
Drop an issue at https://github.com/zaus/forms-3rdparty-submission-format
Reviews
There are no reviews for this plugin.
Contributors & Developers
“Forms: 3rd-Party Submission Reformat” adalah perisian sumber terbuka. Orang-orang berikut telah menyumbang kepada pemalam ini.
Penyumbang
Changelog
0.2
• refactored to generic, settings-enabled
0.1
• targeting specific fields and replacement formats
|
__label__pos
| 0.776432 |
🌞
JavaScript Temelleri: Hoisting
neredeyse 3 yıl
Zingat'ta yazılımcılarla gerçekleştirdiğimiz iş görüşmelerinde sıklıkla karşılaştığımız problemlerden biri adayların kullandıkları dilin temel unsurlarıyla ilişkilerinin bir miktar kopuk olması. Çoğu genç arkadaş güncel web framework'leriyle (React, Vue vs.) yahut JS temelli cross-platform geliştirme ortamlarıyla (React-Native, Ionic vs.) ilgilenmiş oluyor; ancak JavaScript'in hikayesinden ya da dil ve dilin çalıştığı ortamların gerçeklerinden uzak durumdalar.
Bu konuda hem bir kaynak üretmek, hem de bu minvalde kendi bilgilerimi derleyip toplamak ve biraz da kişisel bir referans noktası oluşturmak adına yeni bir yazı dizisine başlamayı uygun gördüm. Okuduğum, izlediğim kaynaklardan edindiğim bilgileri kendimce düzenli bir biçimde sunmak adına böyle bir serüvene çıkıyorum.
Burada anlatacaklarım İnternet'te hali hazırda bulabileceğiniz şeyler, Türkçe ve İngilizce bu konularda söylenmiş pek çok şey var. Haliyle uzun süredir JavaScript'le geliştirme yapanlar burada yazılanları zaten biliyor olacaklar; fakat özellikle dili öğrenmeye yeni başlamış yahut dili kullanarak iş geliştirebilen -lakin daha da derinleşmek isteyenler için güzel bir başlangıç noktası olabilir bu yazı dizisi.
JavaScript motoru bir kod parçacığını çalıştırırken bazı aşamalardan geçirir. Bunları temel olarak:
1. Yaratım fazı (creation phase)
2. Yürütme fazı (execution phase)
olarak ifade edebiliriz. Yaratım fazı aşamasında JavaScript motoru bize global objeyi ve this anahtar kelimesini verir. Tarayıcı ortamı için global, window objesidir. this ayrı bir konu, kendisini bilahare inceleyeceğiz. Bunlar, global execution context'in oluşması sonucu elimizde olanlar aslında. Kullandığınız tarayıcıya boş bir .js dosyası verip geliştirici konsolunda global (window) ve this ifadelerini yazdırmayı deneyin, dosyanız boş olmasına rağmen bunlara erişiminiz olduğunu göreceksiniz.
Yaratım fazında gerçekleşen bir durum daha var. Bu yazının da konusu olan hoisting (yukarı kaldırma/çekme) tam da bu noktada gerçekleşiyor. Aşağıdaki gibi bir kod parçacığımız olsun:
console.log(person);
console.log(greetPerson);
var person = "ahmet";
function greetPerson() {
console.log("Hello " + person);
}
JavaScript'te diğer birçok programlama dilinden farklı olarak person ve greetPerson değişkenlerinin kodun başında konsola yazılmalarını sağlayan fonksiyonları çağırmak herhangi bir hataya sebep olmaz. Hoisting sonucu değişken deklarasyonları (variable declaration) ve fonksiyon deklarasyonları (function declaration) hoist edilir. Aşağıdaki gibi bir durum oluştuğunu anlayışımızı kolaylaştırmak adına varsayabiliriz:
var person = undefined;
function greetPerson() {
console.log("Hello " + person);
}
console.log(person); // undefined
console.log(greetPerson); // function greetPerson()
person = "ahmet";
Tabii aslında olan bu değil. JavaScript motoru yaratım fazında bütün kod bloğunun üzerinden geçer, global execution context ile varsa function execution context'ler (JavaScript'te her bir fonksiyonun kendi yürütme konteksti bulunur) içerisindeki değişken ve fonksiyon deklarasyonlarını hafızaya (memory) kaydeder.
Peki böyle bir ihtiyaç neden hasıl olmuş? Benim de bu yazıyı hazırlarken rast geldiğim, Brendan Eich'a ithaf edilen şu cümleyi paylaşmak isterim:
"var hoisting was thus [an] unintended consequence of function hoisting, no block scope, [and] JS as a 1995 rush job."
Yine dönüp dolaşıp JavaScript'in şahsına münhasır dizaynından kaynaklı bir çıktı gibi duruyor.
Hoisting konusunu anlama gayretimizde dikkat edilecek bazı noktalar var, bunları örnekler üzerinden inceleyelim:
console.log(person);
console.log(greetPerson);
var person = "ahmet";
(function greetPerson() {
console.log("Hello " + person);
});
Bu kodu çalıştırdığımızda person değişkeni için yukarıda anlattıklarımız halen geçerli; fakat greetPerson fonksiyon deklarasyonu artık bir IIFE (Immediately Invoked Function Expression). Bu durumda ortada bir deklarasyon yok aslında. Haliyle JavaScript motoru bu kodun üzerinden geçerken hoist edilen sadece person değişkeni olacak.
var person = undefined;
console.log(person); // undefined
console.log(greetPerson); // ReferenceError: greetPerson is not defined
person = "ahmet";
(function greetPerson() {
console.log("Hello " + person);
});
Şimdi de sadece variable initialization durumunu ele alalım:
console.log(person);
person = "ahmet";
Kodu çalıştırdığımızda hata alacağız. Başta dediğimiz gibi, sadece deklarasyonlar hoist ediliyor, burada bir deklarasyon durumu mevzu bahis değil.
console.log(person); // ReferenceError: person is not defined
person = "ahmet";
Benzeri bir durum da fonksiyon ifadeleri (function expression) nezdinde gerçekleşiyor:
console.log(person);
console.log(greetPerson);
var person = "ahmet";
var greetPerson = function () {
console.log("Hello " + person);
};
Bu sefer çıktımız aşağıdaki gibi olacak; zira bu fonksiyon ifadesi bu bağlamda bir değişken deklarasyonu olmakta. Hoisting'in çalışma prensibi gereğince de bu bağlamda ele alınmakta.
var person = undefined;
var greetPerson = undefined;
console.log(person); // undefined
console.log(greetPerson); // undefined
person = "ahmet";
greetPerson = function () {
console.log("Hello " + person);
};
Örneğimizi biraz daha geliştirip bu sefer işin içine function execution context'leri de katalım:
console.log(person);
console.log(greetPerson);
console.log(greetPerson());
var person = "ahmet";
function greetPerson() {
console.log("Hello " + person);
var person = "mehmet";
}
Dikkat etmemiz gereken nokta greetPerson fonksiyonu içerisinde de aynı global execution context içerisinde olduğu gibi hoisting gerçekleşiyor ve JavaScript motoru var person = 'mehmet'; satırını gördüğü anda person değişkeni için hafızada yer arıyor (memory allocation). JavaScript'teki kapsam zinciri (scope chain) işleyişi sonucu person değişkeninin davranışına da dikkatinizi çekmiş olayım.
var person = undefined;
function greetPerson() {
var person = undefined;
console.log("Hello " + person);
person = "mehmet";
}
console.log(person); // undefined
console.log(greetPerson); // function greetPerson()
console.log(greetPerson()); // 'Hello undefined'
person = "ahmet";
Bir diğer örnek:
console.log(person);
var person = "ahmet";
var person = "mehmet";
Hoisting gerçekleştiğinde şöyle olacağını varsayabiliriz:
var person = undefined;
console.log(person); // undefined
person = "ahmet";
person = "mehmet";
Değişken deklarasyonları için şu ana kadar gördüğümüz örneklerden farklı bir şey yok gibi duruyor, peki fonksiyon deklarasyonları için durum ne?
console.log(greetPerson());
function greetPerson() {
console.log("Hello");
}
function greetPerson() {
console.log("Hi");
}
JavaScript motoru ilk greetPerson deklarasyonunu gördüğü anda bunu hoist edecek, ardından ikinci fonksiyon deklarasyonunu görecek ve bunu da aynı şekilde hoist edecek. Hafızadaki greetPerson fonksiyonunun üzerine ikinci fonksiyonu yazacak.
function greetPerson() {
console.log("Hi");
}
console.log(greetPerson()); // 'Hi'
function greetPerson() {
console.log("Hello");
}
Son olarak da ECMAScript 2015 ile birlikte hayatımıza giren let ve const anahtar kelimelerinin hoisting durumundaki davranışlarına bakalım:
console.log(person);
console.log(greetPerson);
console.log(greetPerson());
const person = "ahmet";
function greetPerson() {
console.log("Hello " + person);
let person = "mehmet";
}
Aslında let, const, var, class ve function anahtar kelimeleri hoist ediliyor; fakat initialization aşamasında oluşan farklılıktan dolayı let ve const'un bu şekilde kullanımı ReferenceError oluşmasına yol açıyor. let/const ile deklare edilen bir değişken ilgili ifade (statement) koşulana kadar başlatılmamış (uninitialized) durumda olurken, var ile deklare edilen bir değişken undefined primitifi ile başlatılmış da oluyor aynı zamanda.
console.log(person); // ReferenceError: person is not defined
console.log(greetPerson);
console.log(greetPerson());
const person = "ahmet";
function greetPerson() {
console.log("Hello " + person);
let person = "mehmet";
}
İlk konsol ifadesini kaldırırsak bu sefer let sebepli aynı hatayı greetPerson fonksiyonunun çalıştırılması dolayısıyla alacağız.
* Bu yazı ilk olarak labs.zingat.com adresinde belirtilen tarihte yayımlanmıştır.
|
__label__pos
| 0.659069 |
ryanzec ryanzec - 8 months ago 47
Javascript Question
Custom NodeJS Grunt Command
I have a custom grunt task that looks like this:
grunt.registerTask('list', 'test', function()
{
var child;
child = exec('touch skhjdfgkshjgdf',
function (error, stdout, stderr) {
console.log('stdout: ' + stdout);
console.log('stderr: ' + stderr);
if (error !== null) {
console.log('exec error: ' + error);
}
});
});
This works however when I try to run the pwd command, I don't get any output. The end goal of this is to be able to compile sass files with grunt and I figure the best way of doing that is by running the the command line command to compile sass through grunt however I want to get some sort of output to the screen that is worked properly. Is there any reason this code would not print the results of running unix commands through grunt/nodejs?
Answer
exec() is async so you need to tell grunt that and execute the callback when it's done:
grunt.registerTask('list', 'test', function()
{
// Tell grunt the task is async
var cb = this.async();
var child = exec('touch skhjdfgkshjgdf', function (error, stdout, stderr) {
if (error !== null) {
console.log('exec error: ' + error);
}
console.log('stdout: ' + stdout);
console.log('stderr: ' + stderr);
// Execute the callback when the async task is done
cb();
});
});
From grunt docs: Why doesn't my asynchronous task complete?
|
__label__pos
| 0.997297 |
• Join over 1.2 million students every month
• Accelerate your learning by 29%
• Unlimited access from just £6.99 per month
Explain how cancers are a result of uncontrolled cell division and list factors that can increase the chances of cancerous growth
Extracts from this document...
Introduction
Explain how cancers are a result of uncontrolled cell division and list factors that can increase the chances of cancerous growth What is Mitosis? (Cell division) Mitosis is a form of cell division which produces two daughter cells that are genetically identical to the parent cell. It occurs during growth and asexual reproduction. Mitosis, like meiosis is part of the cell cycle which consists of three phases; consisting of interphase, nuclear division and cell division. Interphase is when the cell carries out its normal processes and grows to the original size it was before cell division. It then receives a message that is should replicate, so the DNA replicates so that each chromosome consists of two identical chromatids, each containing a copy of that chromosomes DNA. In Nuclear Division the Nucleus divides, with help of the spindle fibres and then the whole cell divides (cell division). Mitosis is composed of five stages: Prophase, Metaphase, Anaphase, Telophase and Cytokinesis. What is Cancer? Cancer is the second most common cause of death in the western world (after heart disease). Cancer results in a development of a tumour, which is a group of abnormal cells that grow abnormally fast. ...read more.
Middle
However if the onconogene is mutated two possible outcomes can happen: Firstly the proteins can deregulate the pathways between cells so that the receptor protein is mutated. If the receptor protein is mutated, it can no longer respond to the growth factors so triggers relay proteins to be produced which go on and cause the DNA and the cell to replicate. This is all done in the absence of the growth factor so reproduction is continuous and has no limits. Secondly an abnormal onconogene can inflict many growth factors to be produced in excessive amounts. This causes the receptor protein to be continually bound to a growth factor so replication rate is very fast. Both of these result in a formation of a lump of cells where growth has been excessive. This lump is called an adenoma. It can grow larger and invade surrounding tissue. Some cells may even break off and be transported from the body in the blood stream if it is malignant. If it is benign, the growth is sealed by non-cancerous local specialized cells and white blood cells. * Tumour Suppressor Genes Tumour suppressor genes are the "breaks" on cell division. ...read more.
Conclusion
Apoptosis is triggered by the gene P53. If this is lacking or mutated not only will it cause faulty cells to replicate (see above) but will not be able to destroy them when they do replicate. What factors increase the chances of developing cancer? (Carcinogenic factors) * Smoking - This is the biggest risk factor causing 30% of all cancers. It damages the lungs and causes dangerous chemicals such as Carbon Monoxide. * Age - as someone ages, genes will suffer damage just through day-to-day living * Alcohol consumption - Excessive alcohol consumption plays a role in mouth, throat, oesophagus and liver cancers. * Chemicals present in the combustion of fuel i.e. coal. * Heredity - Chances are much more likely of developing cancer if a family member has already had it due to inherited genes. * Sun/UV Exposure - Causes damage to epidermis of skin so it is more vulnerable. * Diet - poor diet (high animal fat content) can cause malnutrition and poor health of body increasing the chances of cancer. * Exercise - Not exercising can again cause poor health of the body, poor circulation etc increases the chances of cancer. * Occupation - if exposed to radiation, asbestos, silicon, dusts, etc. * Medication - radiation and/or drugs for one cancer can cause later cancers. ...read more.
The above preview is unformatted text
This student written piece of work is one of many that can be found in our AS and A Level Molecules & Cells section.
Found what you're looking for?
• Start learning 29% faster today
• 150,000+ documents available
• Just £6.99 a month
Not the one? Search for your essay title...
• Join over 1.2 million students every month
• Accelerate your learning by 29%
• Unlimited access from just £6.99 per month
See related essaysSee related essays
Related AS and A Level Molecules & Cells essays
1. Marked by a teacher
Explain the link between cancer and mitosis. Describe how the chances of cancer developing ...
4 star(s)
By the time a tumour is detected it may contain about a thousand, million cells, which show abnormal changes in shape. There are basically two different types of tumours- malignant and benign.
2. Follicular development
Variations in the pulses of GnRH may be influenced by gonadal steroid production. It was observed in ewes, that the GnRH pulse frequency increased with the production of estradiol, which in turn leads to an increased LH secretion (Molter-G�rard et al, 1999).
• Over 160,000 pieces
of student written work
• Annotated by
experienced teachers
• Ideas and feedback to
improve your own work
|
__label__pos
| 0.905062 |
How to optimize memory
Source: Internet
Author: User
Tags memory usage
1. Adjust the size of the cache area
You can set the percentage of the system to take advantage of the cache on the primary uses tab of the computer. If the system has more memory, select Network server, so the system will use more memory as the cache. In the CD-ROM label, you can directly adjust how much memory the system uses to read and write to the CD-ROM disc.
2. Monitor Memory
The system's memory, no matter how big, will always run out. Although there is virtual memory, but because the hard drive can not read and write speed compared with the speed of memory, so when using memory, you need to monitor the use of memory at all times. A System Monitor is available in the Windows operating system to monitor memory usage. In general, if only 60% of the memory resources available, then you should pay attention to adjust the memory, otherwise it will seriously affect the computer's operating speed and system performance.
3. Free memory space in time
If you find that the system does not have much memory, you should pay attention to free memory. Freeing memory is the release of data that resides in memory from memory. The simplest and most efficient way to free memory is to restart the computer. In addition, to close the temporary use of the program. Also note that if the image data is stored in the Clipboard, it takes up a lot of memory space. At this point, as long as the clip a few words, you can remove the memory of the original image of the Clipboard, so that it occupies a large amount of memory released.
4. Change the size of the paging file
After changing the location of the paging file, we can also make some adjustments to its size. When adjusting, we need to be careful not to set the maximum and minimum paging file as equivalent. Because the memory is not really "stuffed", it automatically places some of the temporarily unused data on the hard disk as memory reserves reach a certain level. The larger the minimum paging file, the lower the percentage, and the slower the execution rate. The maximum paging file is the limit value, sometimes open a lot of programs, memory and the smallest paging file has been "stuffed", will automatically overflow to the maximum paging file. So it is unreasonable to set the two as equivalent. In general, the minimum paging file is smaller, which makes it more efficient to store as much data as possible in memory. The largest page file set larger, so as not to appear "full" situation.
5. Optimize the data in memory
In Windows, the more data resides in memory, the more memory resources are consumed. Therefore, do not set too many shortcut icons on the desktop and in the taskbar. If memory resources are more stressful, consider using as little as possible a variety of background-resident programs. Do not open too many files or windows when you are working on a computer. When you use your computer for a long time, if you do not restart your computer, the array of data in memory may be confusing, causing system performance to degrade. Then you need to consider restarting your computer.
6. Improve the performance of other parts of the system
The performance of other parts of the computer also has a greater impact on the use of memory, such as bus type, CPU, hard disk and video memory. If the video memory is too small, and the amount of data displayed is large, no amount of memory can improve its speed and system efficiency. If the hard drive is too slow, it can seriously affect the entire system.
7. Change the location of the paging file
The purpose is primarily to maintain the continuity of virtual memory. Because the hard drive reads data is by the magnetic head in the magnetism material reads, the page file puts on the disk the different area, the head will jump to jump, the nature is not conducive to improve the efficiency. and the system disk files are numerous, virtual memory is certainly not continuous, so to put it on the other disk. The way to change the location of the page file is: Right click on "My Computer", select "Properties → advanced → performance set → advanced → change virtual memory", in the drive bar to select the location you want to change to. It is worth noting that when the page file is moved, delete the original file (the system will not automatically delete).
8. Empty page File
There is a "clearpagefileatshutdown (clear paging file on shutdown)" At the same location, setting this value to "1". The "purge" paging file here is not meant to completely remove the Pagefile.sys file from the hard disk, but to "clean" and organize it, so that you can better prepare for the next time you start Windows XP with virtual memory.
We understand its working relationship between memory and hard disk, and realize that virtual memory is not as large as possible, but should be adjusted reasonably according to the specific configuration of the computer. I believe that everyone in the real understanding and grasp the role of virtual memory and optimization methods, will make love machine in the performance of a rise
Contact Us
The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.
If you find any instances of plagiarism from the community, please send an email to: [email protected] and provide relevant evidence. A staff member will contact you within 5 working days.
A Free Trial That Lets You Build Big!
Start building with 50+ products and up to 12 months usage for Elastic Compute Service
• Sales Support
1 on 1 presale consultation
• After-Sales Support
24/7 Technical Support 6 Free Tickets per Quarter Faster Response
• Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.
|
__label__pos
| 0.569771 |
Heilpraktiker München - Grünwald Heilpraktiker Grünwald
Ingrown Toenail
An ingrown toenail is a toenail that cuts into the skin of the toe. An ingrown toenail is a common condition in which the corners or sides of the toenail cut into the skin of the toe. This usually happens to the big toe, and can affect people of all ages. An ingrown toenail is [...]
Infection
Infection is a process in which bacteria, viruses, fungi or other organisms enter the body, attach to cells, and multiply. To do this, they must evade or overcome the body’s natural defenses at each step. Infections have the potential to cause illness, but in many cases the infected person does not get sick. How Does [...]
|
__label__pos
| 0.90099 |
Mineral Primer
As the remarkable properties of vitamins have revealed themselves to investigators, so too have those of the various minerals in our food and water. The seven macrominerals– calcium, chloride, magnesium, phosphorus, potassium, sodium and sulphur–now share the research spotlight with a longer list of essential trace minerals. These are needed only in minute amounts, but their absence results in many disease conditions. The number of trace minerals known to be essential to life now exceeds thirty, and some researchers believe that for optimum health we need to take in every substance found in the earth’s crust. Along with familiar trace minerals, such as iron and iodine, the body also needs others less well known, like cobalt, germanium and boron.
Mankind ingests minerals in a number of different forms. He can take them in as salts; that is, as molecules in which a negatively charged atom is bonded ionically to a positively charged atom as in common table salt (sodium chloride) or less well-known salts such as magnesium chloride, calcium phosphate or zinc sulfate. In water and other liquids, these form a solution as the salts dissolve into positively and negatively charged mineral ions.
Minerals are also ingested as integral parts of the foods we eat, in which case the minerals are held ionically in a claw-like way or “chelated” by a large molecule. Examples include chlorophyll (which chelates a magnesium atom), hemoglobin (which chelates an iron atom) and enzymes that chelate copper, iron, zinc and manganese.
Minerals are usually absorbed in ionic form. If they are not in ionic form when consumed, they are ionized in the gut, with salts dissolving into their two components or chelates releasing their key elements. The system by which mineral ions are then absorbed is truly remarkable. If, for example, the body needs calcium, the parathyroid gland will send a signal to the intestinal wall to form a calcium-binding protein. That calcium-binding protein will then pick up a free calcium ion, transport it through the intestinal mucosa and release it into the blood.1 Manganese and magnesium have similar carriers and their absorption, retention and excretion is likewise governed by complex feedback mechanisms involving other nutrients and hormonal signals. Absorption and excretion of phosphorus is regulated in part by activity of the adrenal glands and vitamin D status.
There are a number of factors that can prevent the uptake of minerals, even when they are available in our food. The glandular system that regulates the messages sent to the intestinal mucosa require plentiful fat-soluble vitamins in the diet to work properly. Likewise, the intestinal mucosa requires fat-soluble vitamins and adequate dietary cholesterol to maintain proper integrity so that it passes only those nutrients the body needs, while at the same time keeping out toxins and large, undigested proteins that can cause allergic reactions. Minerals may “compete” for receptor sites. Excess calcium may impede the absorption of manganese, for example. Lack of hydrochloric acid in the stomach, an over-alkaline environment in the upper intestine or deficiencies in certain enzymes, vitamin C and other nutrients may prevent chelates from releasing their minerals. Finally, strong chelating substances, such as phytic acid in grains, oxalic acid in green leafy vegetables and tannins in tea may bind with ionized minerals in the digestive tract and prevent them from being absorbed.
Several types of mineral supplements are available commercially including chelated minerals, mineral salts, minerals dissolved in water and “colloidal” mineral preparations. A colloid is a dispersion of small particles in another substance. Soap, for example, forms a colloidal dispersion in water; milk is a dispersion of colloidal fats and proteins in water, along with dissolved lactose and minerals. Colloidal mineral preparations presumably differ from true solutions in that the size of the dispersed particles is ten to one thousand times larger than ions dissolved in a liquid. Colloidal dispersions tend to be cloudy; or they will scatter light that passes through them. Shine a flashlight through water containing soap or a few drops of milk and its path can be clearly seen, even if the water seems clear.
There is no evidence that the body absorbs colloidal mineral preparations any better than true solutions of mineral salts or minerals in chelated form. Many so-called “colloidal” formulas often contain undesirable additives, including citric acid, that prevent the mineral particles from settling to the bottom of the container. Furthermore, these products may contain an abundance of minerals that can be toxic in large amounts, such as silver and aluminum. Even mineral preparations in which the minerals are in true solution may contain minerals in amounts that may be toxic. If a product tastes very bitter, it probably should be avoided.
Some commercial interests sell minerals chelated to amino acids which they claim do not break down in the gut, but which pass in their entirety through the mucosa and into the blood, thus bypassing certain blocks to mineral absorption. However, such products, if they work, bypass the body’s exquisitely designed system for taking in just what it needs and may cause serious imbalances. Obviously, such formulations should be taken only under the supervision of an experienced health care practitioner.
The proper way to take in minerals is through mineral-rich water; through nutrient-dense foods and beverages; through mineral-rich bone broths in which all of the macrominerals–sodium, chloride, calcium, magnesium, phosphorus, potassium and sulphur–are available in ready-to-use ionized form as a true electrolyte solution; through the use of unrefined sea salt; and by adding small amounts of fine clay or mud as a supplement to water or food, a practice found in many traditional societies throughout the world. Analysis of clays from Africa, Sardinia and California reveals that clay can provide a variety of macro- and trace minerals including calcium, phosphorus, magnesium, iron and zinc.2 Clay also contains aluminum, but silicon, present in large amounts in all clays, prevents absorption of this toxic metal and actually helps the body eliminate aluminum that is bound in the tissues.3
When mixed with water, clay forms a temporary colloidal system in which fine particles are dispersed throughout the water. Eventually the particles settle to the bottom of the container, but a variety of mineral ions will remain in the water. These mineral ions are available for absorption, while other minerals that form an integral part of the clay particles may, in some circumstances, be available for absorption through ionic exchange at the point of contact with the intestinal villi.
Clay particles, defined as having a size less than 1-2 microns, have a very large surface area relative to their size. They carry a negative electric charge and can attract positively charged pathogenic organisms along with their toxins and carry them out of the body,4 Thus, clay compounds not only provide minerals but also can be used as detoxifying agents. As such, they facilitate assimilation and can help prevent intestinal complaints, such as food poisoning and diarrhea. They also will bind with antinutrients found in plant foods, such as bitter tannins, and prevent their absorption.
The seven macrominerals, needed in relatively large amounts, are as follows:
Calcium: Not only vital for strong bones and teeth, calcium is also needed for the heart and nervous system and for muscle growth and contraction. Good calcium status prevents acid-alkaline imbalances in the blood. The best sources of usable calcium are dairy products and bone broth (although the amounts are much smaller in bone broth). In cultures where dairy products are not used, bone broth is essential. Calcium in meats, vegetables and grains is difficult to absorb. Both iron and zinc can inhibit calcium absorption as can excess phosphorus and magnesium. Phytic acid in the bran of grains that have not been soaked, fermented, sprouted or naturally leavened will bind with calcium and other minerals in the intestinal tract, making these minerals less available. Sufficient vitamin D is needed for calcium absorption as is a proper potassium/calcium ratio in the blood. Sugar consumption and stress both pull calcium from the bones.
Chloride: Chloride is widely distributed in the body in ionic form, in balance with sodium or potassium. It helps regulate the correct acid-alkaline balance in the blood and the passage of fluids across cell membranes. It is needed for the production of hydrochloric acid and hence for protein digestion. It also activates the production of amylase enzymes needed for carbohydrate digestion. Chloride is also essential to proper growth and functioning of the brain. The most important source of chloride is salt, as only traces are found in most other foods. Lacto-fermented beverages and bone broths both provide easily assimilated chloride. Other sources include celery and coconut.
Magnesium: This mineral is essential for enzyme activity, calcium and potassium uptake, nerve transmission, bone formation and metabolism of carbohydrates and minerals. It is magnesium, not calcium, that helps form hard tooth enamel, resistant to decay. Like calcium and chloride, magnesium also plays a role in regulating the acid-alkaline balance in the body. High magnesium levels in drinking water have been linked to resistance to heart disease. Although it is found in many foods, including dairy products, nuts, vegetables, fish, meat and seafood, deficiencies are common in America due to soil depletion, poor absorption and lack of minerals in drinking water. A diet high in carbohydrates, oxalic acid in foods like raw spinach and phytic acid found in whole grains can cause deficiencies. An excellent source of usable magnesium is beef, chicken or fish broth. High amounts of zinc and vitamin D increase magnesium requirements. Magnesium deficiency can result in coronary heart disease, chronic weight loss, obesity, fatigue, epilepsy and impaired brain function. Chocolate cravings are a sign of magnesium deficiency.
Phosphorus: The second most abundant mineral in the body, phosphorus is needed for bone growth, kidney function and cell growth. It also plays a role in maintaining the body’s acid-alkaline balance. Phosphorus is found in many foods, but in order to be properly utilized, it must be in proper balance with magnesium and calcium in the blood. Excessive levels of phosphorus in the blood, often due to the consumption of soft drinks containing phosphoric acid, can lead to calcium loss and to cravings for sugar and alcohol; too little phosphorus inhibits calcium absorption and can lead to osteoporosis. Best sources are animal products, whole grains, legumes and nuts.
Potassium: Potassium and sodium work together–inner cell fluids are high in potassium while fluids outside the cell are high in sodium. Thus, potassium is important for many chemical reactions within the cells. Potassium is helpful in treating high blood pressure. It is found in a wide variety of nuts, grains and vegetables. Excessive use of salt along with inadequate intake of fruits and vegetables can result in a potassium deficiency.
Sodium: As all body fluids contain sodium, it can be said that sodium is essential to life. It is needed for many biochemical processes including water balance regulation, fluid distribution on either side of the cell walls, muscle contraction and expansion, nerve stimulation and acid-alkaline balance. Sodium is very important to the proper function of the adrenal glands. However, excessive sodium may result in high blood pressure, potassium deficiency, and liver, kidney and heart disease; symptoms of deficiency include confusion, low blood sugar, weakness, lethargy and heart palpitations. Meat broths and zucchini are excellent sources.
Sulphur: Part of the chemical structure of several amino acids, sulphur aids in many biochemical processes. It helps protect the body from infection, blocks the harmful effects of radiation and pollution and slows down the aging process. Sulphur-containing proteins are the building blocks of cell membranes, and sulphur is a major component of the gel-like connective tissue in cartilage and skin. Sulphur is found in cruciferous vegetables, eggs, milk and animal products.
Although needed in only minute amounts, trace minerals are essential for many biochemical processes. Often it is a single atom of a trace mineral, incorporated into a complex protein, that gives the compound its specific characteristic–iron as a part of the hemoglobin molecule, for example, or a trace mineral as the distinguishing component of a specific enzyme. The following list is not meant to be exhaustive but merely indicative of the complexity of bodily processes and their dependence on well-mineralized soil and food.
Boron: Needed for healthy bones, boron is found in fruits, especially apples, leafy green vegetables, nuts and grains.
Chromium: Essential for glucose metabolism, chromium is needed for blood sugar regulation as well as for the synthesis of cholesterol, fats and protein. Most Americans are deficient in chromium because they eat so many refined carbohydrates. Best sources are animal products, molasses, nuts, whole wheat, eggs and vegetables.
Cobalt: This mineral works with copper to promote assimilation of iron. A cobalt atom resides in the center of the vitamin B12 molecule. As the best sources are animal products, cobalt deficiency occurs most frequently in vegetarians.
Copper: Needed for the formation of bone, hemoglobin and red blood cells, copper also promotes healthy nerves, a healthy immune system and collagen formation. Copper works in balance with zinc and vitamin C. Along with manganese, magnesium and iodine, copper plays an important role in memory and brain function. Nuts, molasses and oats contain copper but liver is the best and most easily assimilated source. Copper deficiency is widespread in America. Animal experiments indicate that copper deficiency combined with high fructose consumption has particularly deleterious effects on infants and growing children.
Germanium: A newcomer to the list of trace minerals, germanium is now considered to be essential to optimum health. Germanium-rich foods help combat rheumatoid arthritis, food allergies, fungal overgrowth, viral infections and cancer. Certain foods will concentrate germanium if it is found in the soil–garlic, ginseng, mushrooms, onions and the herbs aloe vera, comfrey and suma.
Iodine: Although needed in only minute amounts, iodine is essential for numerous biochemical processes, such as fat metabolism, thyroid function and the production of sex hormones. Muscle cramps are a sign of deficiency as are cold hands and feet, proneness to weight gain, poor memory, constipation, depression and headaches. It seems to be essential for mental development. Iodine deficiency has been linked to mental retardation, coronary heart disease, susceptibility to polio and breast cancer. Sources include most sea foods, unrefined sea salt, kelp and other sea weeds, fish broth, butter, pineapple, artichokes, asparagus and dark green vegetables. Certain vegetables, such as cabbage and spinach, can block iodine absorption when eaten raw or unfermented. Requirements for iodine vary widely. In general, those whose ancestors come from seacoast areas require more iodine than those whose ancestors come from inland regions. Proper iodine utilization requires sufficient levels of vitamin A, supplied by animal fats. In excess, iodine can be toxic. Consumption of high amounts of inorganic iodine (as in iodized salt or iodine-fortified bread) as well as of organic iodine (as in kelp) can cause thyroid problems similar to those of iodine deficiency, including goiter.5
Iron: As part of the hemoglobin molecule, iron is vital for healthy blood; iron also forms an essential part of many enzymes. Iron deficiency is associated with poor mental development and problems with the immune system. It is found in eggs, fish, liver, meat and green leafy vegetables. Iron from animal protein is more readily absorbed than iron from vegetable foods. The addition of fat-soluble vitamins found in butter and cod liver oil to the diet often results in an improvement in iron status. Recently, researchers have warned against inorganic iron used to supplement white flour. In this form, iron cannot be utilized by the body and its buildup in the blood and tissues is essentially a buildup of toxins. Elevated amounts of inorganic iron have been linked to heart disease and cancer.
Manganese: Needed for healthy nerves, a healthy immune system and blood sugar regulation, manganese also plays a part in the formation of mother’s milk and in the growth of healthy bones. Deficiency may lead to trembling hands, seizures and lack of coordination. Excessive milk consumption may cause manganese deficiency as calcium can interfere with manganese absorption. Phosphorus antagonizes manganese as well. Best sources are nuts (especially pecans), seeds, whole grains and butterfat.
Molybdenum: This mineral is needed in small amounts for nitrogen metabolism, iron absorption, fat oxidation and normal cell function. Best sources are lentils, liver, grains, legumes and dark green leafy vegetables.
Selenium: A vital antioxidant, selenium acts with vitamin E to protect the immune system and maintain healthy heart function. It is needed for pancreatic function and tissue elasticity and has been shown to protect against radiation and toxic minerals. High levels of heart disease are associated with selenium-deficient soil in Finland and a tendency to fibrotic heart lesions is associated with selenium deficiency in parts of China. Best sources are butter, Brazil nuts, seafood and grains grown in selenium-rich soil.
Silicon: This much neglected element is needed for strong yet flexible bones and healthy cartilage, connective tissue, skin, hair and nails. In the blood vessels, the presence of adequate silicon helps prevent atherosclerosis. Silicon also protects against toxic aluminum. Good sources are grains with shiny surfaces, such as millet, corn and flax, the stems of green vegetables and homemade bone broths in which chicken feet or calves’ feet have been included.
Vanadium: Needed for cellular metabolism and the formation of bones and teeth, vanadium also plays a role in growth and reproduction and helps control cholesterol levels in the blood. Deficiency has been linked to cardiovascular and kidney disease. Buckwheat, unrefined vegetable oils, grains and olives are the best sources. Vanadium is difficult to absorb.
Zinc: Called the intelligence mineral, zinc is required for mental development, for healthy reproductive organs (particularly the prostate gland), for protein synthesis and collagen formation. Zinc is also involved in the blood sugar control mechanism and thus protects against diabetes. Zinc is needed to maintain proper levels of vitamin E in the blood. Inability to taste or smell and loss of appetite are signs of zinc deficiency. High levels of phytic acid in cereal grains and legumes block zinc absorption. Zinc deficiency during pregnancy can cause birth defects. As oral contraceptives diminish zinc levels, it is important for women to wait at least six months after discontinuing the pill before becoming pregnant. Best sources include red meat, oysters, fish, nuts, seeds and ginger.
Not all minerals are beneficial. Lead, cadmium, mercury, aluminum and arsenic, while possibly needed in minute amounts, are poisons to the body in large quantities. These come from polluted air, water, soil and food; lead finds its way into the water supply through lead pipes. Sources of aluminum include processed soy products, aluminum cookware, refined table salt, deodorants and antacids. Baking powder can be another source of aluminum and should be avoided. Amalgam fillings are the principle source of toxic mercury in the system–linked to Alzheimer’s and a number of other disease conditions. Minerals like calcium and magnesium, and the antioxidants–vitamin A, carotenes, vitamin C, vitamin E and selenium–all protect against these toxins and help the body to eliminate them. Adequate silicon protects against aluminum.
REFERENCES
1. Linder, Maria C, ed, Nutritional Biochemistry and Metabolism with Clinical Applications, 2nd ed, 1991, Appleton & Lange, Norwalk, CT, 191-212.
2. Johns, T, and M Duquette, American Journal of Clinical Nutrition, 1991, 53:448-56.
3. Jacqmin-Gada, H, et al, Epidemiology, 1996, 7(3):281-85; Bellia, J P, et al, Annals of Clinical Laboratory Science, 1996, 26(3):227-33.
4. Damrau, F, Medical Annals of the District of Columbia, Jun 1961, 30:(6):326-328.
5. Ensminger, A H, et al, The Concise Encyclopedia of Foods & Nutrition, 1995, CRC Press, Boca Raton, FL, 586.
Copyright: Nourishing Traditions: The Cookbook that Challenges Policitally Correct Nutrition and the Diet Dictocrats by Sally Fallon and Mary G. Enig, PhD., Revised Second Edition, ©2001, pp-40-45, NewTrends Publishing (877) 707-1776, www.newtrendspublishing.com.
[email protected]'
Sally Fallon Morell is the founding president of the Weston A. Price Foundation and founder of A Campaign for Real Milk. She is the author of the best-selling cookbook, Nourishing Traditions (with Mary G. Enig, PhD) and the Nourishing Traditions Book of Baby & Child Care (with Thomas S. Cowan, MD). She is also the author of Nourishing Broth (with Kaayla T. Daniel, PhD, CCN). ______________________________________________________________________________________________ Mary G. Enig, PhD, FACN, CNS, is an expert of international renown in the field of lipid chemistry. She has headed a number of studies on the content and effects of trans fatty acids in America and Israel and has successfully challenged government assertions that dietary animal fat causes cancer and heart disease. Recent scientific and media attention on the possible adverse health effects of trans fatty acids has brought increased attention to her work. She is a licensed nutritionist, certified by the Certification Board for Nutrition Specialists; a qualified expert witness; nutrition consultant to individuals, industry and state and federal governments; contributing editor to a number of scientific publications; Fellow of the American College of Nutrition; and President of the Maryland Nutritionists Association. She is the author of over 60 technical papers and presentations, as well as a popular lecturer. She is the author of Know Your Fats, a primer on the biochemistry of dietary fats as well as of Eat Fat Lose Fat (Penguin, Hudson Street Press, 2004). She is the mother of three healthy children.
3 Responses to Mineral Primer
1. [email protected]' Sarah says:
Do you recommend any commercial trace mineral supplements? I am pregnant and fearful of using them due to arsenic and Mercury. Are these fears unfounded? So difficult to get straightforward ino!!
2. [email protected]' Graham says:
Colloidal minerals sourced from humic shale claim to have many more mineral types as they are from ancient deposits. Do you know of any research into their possible benefits?
Leave a reply
© 2015 The Weston A. Price Foundation for Wise Traditions in Food, Farming, and the Healing Arts.
|
__label__pos
| 0.864213 |
top of page
Article: How to choose CBRN air filtration system for your shelter
Author: Andrey Shpak | LinkedIn >
Different NBC air filtration systems
WHAT IS NBC / CBRN AIR FILTRATION SYSTEM?
NBC air filtration systems are part of life-support systems which are usually installed in the shelters, bunkers or protected facilities. The systems designed to filter the incoming fresh air that may be contaminated with nuclear, biological and chemical particles and agents. Such systems are usually designed for a specific range of threats and depending on the filters installed and filtration system characteristics can provide efficient protection from threats it has been designed for. Such threats can be air contamination as a result of NBC attack in wartime or terror act, natural disasters, industrial accidents with release of toxic industrial chemicals (TIC) or radioactive materials, dust and more.
From my personal standpoint, NBC filtration systems can be divided by the following categories:
• Systems for shelters and protective structures
• Systems for mobile and deployable applications – vehicles and mobile platforms
• Systems for deployable stationary applications – tents and field camps
Each category have subcategories and the systems in each category significantly differ one from another in specification like airflow, size, AC/DC power supply, shock resistance, resistance to vibrations, operational environment, pushing or pulling configurations and many more.
Definitions:
NBC / CBRN / CBR / ABC- common abbreviations of similar meaning: Nuclear/Atomic/Radiological, Chemical warfare, Biological threats). In this article we will use NBC for all the above.
TFA – Toxic Free Area
Overpressure - Positive differential pressure inside a protective space compared with the outside
WHY IS NBC FILTRATION SYSTEM NEEDED?
NBC air filtration systems are required for supplying fresh and filtered air for people and equipment which allows creating inside the shelter a toxic free area (TFA) and protect them from contaminated environment outside the shelter. The constant supply of fresh air allows to create replacement of air in closed space of the shelter and prevents creation of dangerous concentration of carbon dioxide (CO2) which can be harmful to people. In case the shelter is airtight the NBC filtration system by introducing fresh air into the protected space creates overpressure inside the TFA area that is regulated by overpressure valves that usually is part of the complete air management system in NBC shelters. The overpressure insures that contaminated air will not enter inside through small openings in shelter perimeter and only filtered by NBC system air will enter the shelter.
THERE ARE SEVERAL TYPES OF SHELTERS:
1. Simple covers / escape shelters designed for very short stay - usually not equipped with forced ventilation or air supply system
2. Shelters for longer stay (not NBC) – usually equipped with ventilation / fresh air supply system
3. NBC protected shelters – airtight and specially designed shelters equipped with NBC filtration system
4. Shelters isolated from outside environment – airtight and specially designed shelters equipped with air regeneration systems (CO2 scrubbers, Oxygen supply equipment and overpressure maintenance systems and more).
In many cases, NBC shelters additionally to NBC filtration mode can be operated in isolation mode for some periods of time if equipped with isolating systems or with air regeneration systems.
HOW FILTRATION PROCESS IS WORKING IN DIFFERENT SYSTEM CONFIGURATIONS
As mentioned above there are two main systems configurations depending on the application it is used in:
• Pulling filtration system: such systems are commonly used in shelters and protective structures and the blower is “pulling” the air through the NBC filter.
Typical pulling system airflow configuration scheme: prefilter > NBC filter > blower > Release to TFA
Pulling configuration considered as a safer when the NBC filtration system is placed inside the TFA area. The blower pulling the air through the system creating under pressure inside the air duct of system and reduces the possibility of unfiltered contaminated air leakage before the filter. The main issue that need special attention is the possible radioactive contamination of the filter therefore it is recommended to place the filter in separate room (technical zone) separated by wall to reduce the radiation exposure.
• Pushing system: commonly used in applications where the NBC filtration system is placed outside the TFA area like vehicles, containers, tents and some types of shelters.
Typical pushing system airflow configuration scheme: blower > prefilter > NBC filter > Release to TFA
Pushing configuration considered safer for vehicles or mobile applications where the NBC system influenced by vibration and other external forces and air leakage in the system will not cause contamination of the TFA area. An additional advantage in such configuration is that the filters placed outside of TFA and their replacement in most cases can be made from outside as well as system decontamination.
MAIN COMPONENTS OF NBC AIR FILTRATION SYSTEM
We will focus on the pulling systems that are mainly used in shelters and bunkers.
The main system components include:
1. Blast valve/s – installed on the air inlets or integrated in ventilation duct and used to stop the blast wave entering trough the ventilation openings to the protected area and preventing blast wave destroying the filters and other system components.
2. Prefilter – used for filtering coarse dust and larger particles to prevent them from entering the shelter and to prevent clogging of the HEPA filter inside the NBC filter.
3. Air ducts – there are two definite air duct lines. The dirty line, which is the duct before the filter, which has contaminated air inside and must be built stronger and gastight. The clean line, which is the duct after the filter, which holds only clean air.
4. Gastight Shutoff Valves (GSV) and airflow control valves
5. NBC/CBRN filters – the filter can be radial, or flatbed type and typically built for two stages filtration:
Mechanical filtration of biologic particles and aerosols is performed by high efficiency particulate filter (HEPA).
Adsorption of gases and chemical components is performed by the activated carbon layer that is specially impregnated to comply with specific requirements and threat range.
6. Electric blowers – the system has to include high pressure blower to be able to overcome the pressure drop of the complete system (NBC filter, valves, duct and other components)
7. Overpressure valves – normally closed valves designed to release the pressure in the shelter over a specific value. The pressure created by combination of airtightness of the shelter and the air pushed by the NBC filtration system.
NBC Filtration systems components
HOW TO SELECT AN NBC FILTRATION SYSTEM FOR A SHELTER
These are the main questions we ask the customers prior offering the NBC filtration system:
• Number of people?
• Size of the TFA area?
• Specific threats or standards the system needs to comply with?
• Is manual backup required or generator is installed in the shelter?
There is no united civil defense standard accepted by all countries and the system is chosen based on accepted standards or local threat analysis, environment conditions (temperature range, humidity and more) as well as specific local demands.
Main considerations:
The volume of air required per one person: usually determined by different standards and presented in Cubic Meter Per Hour (CMH or m3/h) the values can be from 2 CMH per person and higher than 20 CMH. The most common is between 3-6 CMH for Civil Defense and and 17+ CMH for military/government facilities. Such a wide range can be explained based on the specific factors described above as well as shelter purpose and the activity of people sheltering inside. As an example we can compare the activity of people in command and control center which is much more intensive than the activity of people in civil defense shelter. As well command and control center may be equipped with various systems that may require large amount of air for proper function.
Air changes by the size of the shelter / TFA can be small or extremely large relative to the number of people sheltering inside. Besides the number of people in the shelter the size factor must be taken into consideration. When choosing the airflow of the NBC filtration system and based on the calculation of number of people vs size of the shelter we always take the higher between them. Typically we will consider 1-2 Air Changes per Hour (ACH) as a norm but will have to consider other factors that can influence this values.
National standards and specific regional threats - There are only a few countries around the world that has local civil defense standards that determines and regulates the shelters specifications and population protection requirements. Among such countries Finland, Switzerland, Israel, Singapore, Ukraine and more countries. Although most of the standards have a lot of similarities they were adapted to the unique local conditions and regional threats.
Of course, there is a long list of considerations that can be added, and each shelter project require a detailed professional evaluation of the existing and future unique technical, structural and general details.
MAINTENANCE FOR NBC FILTRATION SYSTEMS
Usually, life support systems are designed for long standby time and have to be completely operational in emergencies, even after years of storage. The parts in NBC filtration systems that are most vulnerable to long periods of storage are the NBC filters, EPDM / rubber seals and flexible duct.
It is important to follow the manufacturers maintenance schedule recommendations as well as local standards requirements that usually include visual and physical checks of the moving parts of all system components and weighting of the NBC filter for checking if they in allowed weight range. Filters that gained additional weight may have not been correctly stored, damaged or not hermetically sealed and the carbon absorbed too much moisture from the air. In some cases the filter can be dried or need to be replaced if gained moisture over allowed values.
GENERAL LIST OF THE MAIN EQUIPMENT USED IN SHELTER PROJECTS
1. NBC air filtration and ventilation systems
2. Blast valves for every air opening (if not included in the filtration system set)
3. Gastight Shutoff Valves (GSV) for ventilation and sanitary ducts
4. Multi Cable Transmitters (MCT) for the safe passage of cables and pipes
5. Blast proof doors and windows (if applicable)
6. Airlock system (if required) for entry and exit under NBC conditions
7. Differential Pressure Meters (manometers) for measuring the overpressure in the shelter
8. CO2 removal system (scrubbers), if an extended lock-up mode is considered
9. Decontamination showers, if entry and exit of the shelter are permitted during CBRN conditions
10. Wall sleeves for the blast valves, GSV, MCT, and manometers
11. Shelter accessories, like toilets, beds, water reservoirs, radio, communication, batteries, etc.
bottom of page
|
__label__pos
| 0.962891 |
Hipster Handbook - Using RDX Quikstor media
Originally contributed by Bryan Iotti on the OpenIndiana Wiki.
What is RDX?
RDX is essentially a 2.5" SATA disk enclosed in a shockproof, electrostatically shielded container. It is a popular and affordable type of backup media that by design allows random access, not possible on tape drives. For more information, visit RDX Technology.
Usage with OpenIndiana
SATA RDX readers work fine with OpenIndiana 151a8 (USB not tested, please add information if available). The system will recognize a reader out of the box and ZFS works fine with the inserted disks. The Hardware Abstraction Layer (hal) however doesn't know that this cartridge is removable.
Modifications to the system
To allow it to be ejected, add a file (named along the lines of 10-rdx.fdi) to /etc/hal/fdi/preprobe/30user with content like the following:
<?xml version="1.0" encoding="UTF-8"?>
<deviceinfo version="0.2">
<device>
<match key="info.udi" string="/org/freedesktop/Hal/devices/pci_0_0/pci_ide_1f_2/ide_0_2/sd20/sd20">
<merge key="storage.removable" type="bool">true</merge>
<merge key="storage.hotpluggable" type="bool">true</merge>
<merge key="storage.requires_eject" type="bool">true</merge>
</match>
</device>
</deviceinfo>
The "match" key is the only system-specific item. You could alternatively use the info.product key and match an 'RDX' string.
Now, restart the hal service:
pfexec svcadm disable hal
pfexec svcadm enable hal
When you need to eject the RDX, use pfexec eject /dev/dsk/cXtXdX where cXtXdX is your specific drive address.
What if I can't/don't want to modify my system?
You can either:
Debugging issues with eject and hal
The Hardware Abstraction Layer can be troublesome to get working properly. Here are some tidbits that are useful for debugging purposes: The lshal command that dumps "what HAL thinks about a device". Its output can be lengthy, but it's invaluable when finding out what keys one should match and merge. The hal daemon can be run from the command line in debug mode, showing what happens when you plug in a new device:
svcadm disable hal
pfexec /usr/lib/hal/hald --daemon=no --verbose=yes
Suggested reading
What works
What doesn't work
|
__label__pos
| 0.785286 |
前言
20 号参加 pycon, 发现有个招聘公司 知道创宇 , 正好换工作,就去公司网站转了下,发现挺有意思:投简历需要一个网站爬虫程序,基本要求如下 (可以直接点开上面网页去看):
使用python编写一个网站爬虫程序,支持参数如下:
spider.py -u url -d deep -f logfile -l loglevel(1-5) --testself -thread number --dbfile filepath --key=”HTML5”
参数说明:
-u 指定爬虫开始地址
-d 指定爬虫深度
--thread 指定线程池大小,多线程爬取页面,可选参数,默认10
--dbfile 存放结果数据到指定的数据库(sqlite)文件中
--key 页面内的关键词,获取满足该关键词的网页,可选参数,默认为所有页面
-l 日志记录文件记录详细程度,数字越大记录越详细,可选参数,默认spider.log
--testself 程序自测,可选参数
功能描述:
1、指定网站爬取指定深度的页面,将包含指定关键词的页面内容存放到sqlite3数据库文件中
2、程序每隔10秒在屏幕上打印进度信息
3、支持线程池机制,并发爬取网页
4、代码需要详尽的注释,自己需要深刻理解该程序所涉及到的各类知识点
5、需要自己实现线程池
搞了 2 天,根据研究,弄了一个版本 (友情提示,仅供学习参考,要是面试这个职位,建议大家用其它方法实现,因为我投递过了,不要拿来主义额 ^.^)
代码如下 (隐藏了个人信息用 'XXX' 代替)
#!/usr/bin/env python
#coding=utf-8
import urllib2
import Queue
import sys
import traceback
import threading
import re
import datetime
import lxml
import chardet
import logging
import logging.handlers
from time import sleep
from urlparse import urlparse
from lxml import etree
from optparse import OptionParser
try:
from sqlite3 import dbapi2 as sqlite
except:
from pysqlite2 import dbapi2 as sqlite
#__doc__注释 执行脚本 -h 或者 --help 打印输出的内容
'''
This script is used to crawl analyzing web!
The Feature:
1 可以指定抓取的深度
2 将抓取到的关键字数据存放在sqlite
3 使用logging记录日志
4 并发线程池
Required dependencies:
1 chardet #分析抓取页面的字符集
sudo easy_install chardet
Usage:
spider.py -u url -d deep -f logfile -l loglevel(1-5) --testself -thread number --dbfile filepath --key=”HTML5”
Writer: Dongweiming
Date: 2012.10.22
'''
lock = threading.Lock() #设置线程锁
LOGGER = logging.getLogger('Crawler') #设置logging模块的前缀
LEVELS={ #日志级别
1:'CRITICAL',
2:'ERROR',
3:'WARNING',
4:'INFO',
5:'DEBUG',#数字越大记录越详细
}
formatter = logging.Formatter('%(name)s %(asctime)s %(levelname)s %(message)s') #自定义日志格式
class mySqlite(object):
def __init__(self, path, logger, level):
'''初始化数据库连接.
>>> from sqlite3 import dbapi2 as sqlite
>>> conn = sqlite.connect('testdb')
'''
try:
self.conn = sqlite.connect(path) #连接sqlite
self.cur = self.conn.cursor() #cursor是一个记录游标,用于一行一行迭代的访问查询返回的结果
except Exception, e:
myLogger(logger, self.loglevel, e, True)
return -1
self.logger = logger
self.loglevel = level
def create(self, table):
'''创建table,我这里创建包含2个段 ID(数字型,自增长),Data(char 128字符)'''
try:
self.cur.execute("CREATE TABLE IF NOT EXISTS %s(Id INTEGER PRIMARY KEY AUTOINCREMENT, Data VARCHAR(40))"% table)
self.done()
except sqlite.Error ,e: #异常记录日志并且做事务回滚,以下相同
myLogger(self.logger, self.loglevel, e, True)
self.conn.rollback()
if self.loglevel >3: #设置在日志级别较高才记录,这样级别高的详细
myLogger(self.logger, self.loglevel, '创建表%s' % table)
def insert(self, table, data):
'''插入数据,指定表名,设置data的数据'''
try:
self.cur.execute("INSERT INTO %s(Data) VALUES('%s')" % (table,data))
self.done()
except sqlite.Error ,e:
myLogger(self.logger, self.loglevel, e, True)
self.conn.rollback()
else:
if self.loglevel >4:
myLogger(self.logger, self.loglevel, '插入数据成功')
def done(self):
'''事务提交'''
self.conn.commit()
def close(self):
'''关闭连接'''
self.cur.close()
self.conn.close()
if self.loglevel >3:
myLogger(self.logger, self.loglevel, '关闭sqlite操作')
class Crawler(object):
def __init__(self, args, app, table, logger):
self.deep = args.depth #指定网页的抓取深度
self.url = args.urlpth #指定网站地址
self.key = args.key #要搜索的关键字
self.logfile = args.logfile #日志文件路径和名字
self.loglevel = args.loglevel #日志级别
self.dbpth = args.dbpth #指定sqlite数据文件路径和名字
self.tp = app #连接池回调实例
self.table = table #每次请求的table不同
self.logger = logger #logging模块实例
self.visitedUrl = [] #抓取的网页放入列表,防止重复抓取
def _hasCrawler(self, url):
'''判断是否已经抓取过这个页面'''
return (True if url in self.visitedUrl else False)
def getPageSource(self, url, key, deep):
''' 抓取页面,分析,入库.
'''
headers = { #设计一个用户代理,更好防止被认为是爬虫
'User-Agent':'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; \
rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6' }
#if urlparse(url).scheme == 'https':
#pass
if self._hasCrawler(url): #发现重复直接return
return
else:
self.visitedUrl.append(url) #发现新地址假如到这个列表
try:
request = urllib2.Request(url = url, headers = headers) #创建一个访问请求,指定url,并且把访问请求保存在request
result = urllib2.urlopen(request).read() #打开这个请求,并保存读取数据
except urllib2.HTTPError, e: #触发http异常记录日志并返回
myLogger(self.logger, self.loglevel, e, True)
return -1
try:
encoding = chardet.detect(result)['encoding'] #判断页面的编码
if encoding.lower() == 'gb2312':
encoding = 'gbk' #今天我抓取新浪是gb2312,但是其中有个'蔡旻佑'不能被识别,所以用gbk去解码gb2312的页面
if encoding.lower() != 'utf-8': #发现不是默认编码就用应该的类型解码
result = result.decode(encoding)
except Exception, e:
myLogger(self.logger, self.loglevel, e, True)
return -1
else:
if self.loglevel >3:
myLogger(self.logger, self.loglevel, '抓取网页 %s 成功' % url)
try:
self._xpath(url, result, ['a'], unicode(key, 'utf8'), deep) #分析页面中的连接地址,以及它的内容
self._xpath(url, result, ['title', 'p', 'li', 'div'], unicode(key, "utf8"), deep) #分析这几个标签的内容
except TypeError: #对编码类型异常处理,有些深度页面和主页的编码不同
self._xpath(url, result, ['a'], key, deep)
self._xpath(url, result, ['title', 'p', 'li', 'div'], key, deep)
except Exception, e:
myLogger(self.logger, self.loglevel, e, True)
return -1
else:
if self.loglevel >3:
myLogger(self.logger, self.loglevel, '分析网页 %s 成功' % url)
return True
def _xpath(self, weburl, data, xpath, key, deep):
sq = mySqlite(self.dbpth, self.logger, self.loglevel)
page = etree.HTML(data)
for i in xpath:
hrefs = page.xpath(u"//%s" % i) #根据xpath标签
if deep >1:
for href in hrefs:
url = href.attrib.get('href','')
if not url.startswith('java') and not \
url.startswith('mailto'): #过滤javascript和发送邮件的链接
self.tp.add_job(self.getPageSource,url, key, deep-1) #递归调用,直到符合的深度
for href in hrefs:
value = href.text #抓取相应标签的内容
if value:
m = re.compile(r'.*%s.*' % key).match(value) #根据key匹配相应内容
if m:
sq.insert(self.table, m.group().strip()) #将匹配的数据插入到sqlite
sq.close()
def work(self):
'''主方法调用.
>>> import datetime
>>> logger = configLogger('test.log')
>>> time = datetime.datetime.now().strftime("%m%d%H%M%S")
>>> sq = mySqlite('test.db', logger, 1)
>>> table = 'd' + str(time)
>>> sq.create(table)
>>> tp = ThreadPool(5)
>>> def t():pass
>>> t.depth=1
>>> t.urlpth='http://www.baidu.com'
>>> t.logfile = 'test.log'
>>> t.loglevel = 1
>>> t.dbpth = 'test.db'
>>> t.key = 'test'
>>> d = Crawler(t, tp, table, logger)
>>> d.getPageSource(t.urlpth, t.key, t.depth)
True
'''
if not self.url.startswith('http://'): #支持用户直接写域名,当然也支持带前缀
self.url = 'http://' + self.url
self.tp.add_job(self.getPageSource, self.url, self.key, self.deep)
self.tp.wait_for_complete() #等待线程池完成
class MyThread(threading.Thread):
def __init__(self, workQueue, timeout=30, **kwargs):
threading.Thread.__init__(self, kwargs=kwargs)
self.timeout = timeout #线程在结束前等待任务队列多长时间
self.setDaemon(True) #设置deamon,表示主线程死掉,子线程不跟随死掉
self.workQueue = workQueue
self.start() #初始化直接启动线程
def run(self):
'''重载run方法'''
while True:
try:
lock.acquire() #线程安全上锁
callable, args = self.workQueue.get(timeout=self.timeout) #从工作队列中获取一个任务
res = callable(*args) #执行的任务
lock.release() #执行完,释放锁
except Queue.Empty: #任务队列空的时候结束此线程
break
except Exception, e:
myLogger(self.logger, self.loglevel, e, True)
return -1
class ThreadPool(object):
def __init__(self, num_of_threads):
self.workQueue = Queue.Queue()
self.threads = []
self.__createThreadPool(num_of_threads)
def __createThreadPool(self, num_of_threads):
for i in range(num_of_threads):
thread = MyThread(self.workQueue)
self.threads.append(thread)
def wait_for_complete(self):
'''等待所有线程完成'''
while len(self.threads):
thread = self.threads.pop()
if thread.isAlive(): #判断线程是否还存活来决定是否调用join
thread.join()
def add_job( self, callable, *args):
'''增加任务,放到队列里面'''
self.workQueue.put((callable, args))
def configLogger(logfile):
'''配置日志文件和记录等级'''
try:
handler = logging.handlers.RotatingFileHandler(logfile,
maxBytes=10240000, #文件最大字节数
backupCount=5, #会轮转5个文件,共6个
)
except IOError, e:
print e
return -1
else:
handler.setFormatter(formatter) #设置日志格式
LOGGER.addHandler(handler) #增加处理器
logging.basicConfig(level=logging.NOTSET) #设置,不打印小于4级别的日志
return LOGGER #返回logging实例
def myLogger(logger, lv, mes, err=False):
'''记录日志函数'''
getattr(logger, LEVELS.get(lv, 'WARNING').lower())(mes)
if err: #当发现是错误日志,还会记录错误的堆栈信息
getattr(logger, LEVELS.get(lv, 'WARNING').lower())(traceback.format_exc())
def parse():
parser = OptionParser(
description="This script is used to crawl analyzing web!")
parser.add_option("-u", "--url", dest="urlpth", action="store",
help="Path you want to fetch", metavar="www.sina.com.cn")
parser.add_option("-d", "--deep", dest="depth", action="store",type="int",
help="Url path's deep, default 1", default=1)
parser.add_option("-k", "--key", dest="key", action="store",
help="You want to query keywords, For example 'test'")
parser.add_option("-f", "--file", dest="logfile", action="store",
help="Record log file path and name, default spider.log",
default='spider.log')
parser.add_option("-l", "--level", dest="loglevel", action = "store",
type="int",help="Log file level, default 1(CRITICAL)",
default=1)
parser.add_option("-t", "--thread", dest="thread", action="store",
type="int",help="Specify the thread pool, default 10",
default=10)
parser.add_option("-q", "--dbfile", dest="dbpth", action="store",
help="Specify the the sqlite file directory and name, \
default test.db", metavar='test.db')
parser.add_option("-s", "--testself", dest="testself", action="store_true",
help="Test myself", default=False)
(options, args) = parser.parse_args()
return options
def main():
'''主函数'''
options = parse()
if options.testself: #如果testself,执行doctest
import doctest
print doctest.testmod()
return
if not options.urlpth or not options.key or not options.dbpth: #判断必选项是否存在
print 'Need to specify the parameters option "-u " or "-k" or "-q"!'
return
if '-h' in sys.argv or '--help' in sys.argv: #选择帮助信息,打印__doc__
print __doc__
logger = configLogger(options.logfile) #实例化日志调用
time = datetime.datetime.now().strftime("%m%d%H%M%S") #每次请求都会根据时间创建table
tp = ThreadPool(options.thread)
sq = mySqlite(options.dbpth, logger, options.loglevel)
table = 'd' + str(time)
sq.create(table) #创建table
sq.close()
crawler = Crawler(options, tp, table, logger)
crawler.work() #主方法
if __name__ == '__main__':
main()
|
__label__pos
| 0.905351 |
Pets
Take a Look at the Scary Vampire Deer before It Disappears
Are you curious about the vampire deer and want to discover more about its characteristics. Here is what you want as we reveal to you some of the facts that may be hidden away from you. First of all, we want to discover more about the name itself and why they are given such a scary and weird name. Water Deer, Fanged Deer and Vampire Deer are all names used for referring to this type of deer but the Vampire Deer is the most common name and is also the most interesting. The Vampire deer lives in China and Korea and this is why there are two subspecies which are the Korean Water Deer and the Chinese Water Deer. The Vampire Deer is small in its size and it is superficially closer in its characteristics to the musk deer more than the true deer. The Vampire or Water Deer is classified as a cervid although it does not have antlers and it also has different anatomical characteristics that make it really unique such as the large tusks that make it weird.
Chinese_water_deer_Stuffed_specimen_2 Take a Look at the Scary Vampire Deer before It Disappears
tufted-deer-vampire Take a Look at the Scary Vampire Deer before It Disappears
tumblr_mxbfw4Nojv1sqmphzo1_1280 Take a Look at the Scary Vampire Deer before It Disappears
♦ Why is it named the Vampire Deer?
mgid-uma-image-mtv Take a Look at the Scary Vampire Deer before It Disappears
morebabytuftedddeer Take a Look at the Scary Vampire Deer before It Disappears
tumblr_mxbfw4Nojv1sqmphzo5_1280 Take a Look at the Scary Vampire Deer before It Disappears
The main reason behind calling this type of deer the Vampire Deer is that it looks like vampires. The Vampire Deer does not suck the blood of living animals and other creatures like the vampires that we know and watch in films, but it just has prominent tusks that make it look like vampires and this is why it is given this interesting and catchy name.
♦ Where does the Vampire Deer live?
vampire-deer-chinese-water-deer-2 Take a Look at the Scary Vampire Deer before It Disappears
tumblr_mbm5bcAzJd1qiwe0oo1_1280 Take a Look at the Scary Vampire Deer before It Disappears
Moschus_moschiferus_in_Plzen_zoo_12.02.2011 Take a Look at the Scary Vampire Deer before It Disappears
Most of the Vampire Deer live beside rivers where they can hide in tall reeds and rushes. They can be also found in open cultivated fields and other areas that are hidden from sight such as mountains, grasslands and swamps. The most important countries in which the Vampire Deer lives include Korea, China, United Kingdom, Netherlands, Belgium, Argentina and France. You can also see the Vampire Deer in the United States but they are not common there.
♦ What is the story of the weird and large tusks?
173960 Take a Look at the Scary Vampire Deer before It Disappears
8509595578_4b383a817f Take a Look at the Scary Vampire Deer before It Disappears
0_aecce_a35b9d10_orig Take a Look at the Scary Vampire Deer before It Disappears
The long tusks or canines that the vampire deer has come out from the upper jaw which makes the Vampire Deer similar to the musk deer more than the true deer. The length of the large ad long tusks that the Vampire Deer has ranges from 5.5-8 cm (2.1-3.2 in).
♦ What is the importance of the long & large tusks?
kashmir-musk-vampire-deer-afghanistan_h Take a Look at the Scary Vampire Deer before It Disappears
vampire-deer-chinese-water-deer-1 Take a Look at the Scary Vampire Deer before It Disappears
vampire-deer-chinese-water-deer-6 Take a Look at the Scary Vampire Deer before It Disappears
The large tusks may be thought to be annoying and disturb the Vampire Deer while eating and moving their mouth, but in fact they are not as we think. The Vampire Deer’s tusks are really beneficial as the deer, especially males, use these large tusks in territorial fighting as a weapon. The long tusks are loosely held and can be easily controlled and moved through using facial muscles.
vampire-deer-2 Take a Look at the Scary Vampire Deer before It Disappears
Tufted_Deer Take a Look at the Scary Vampire Deer before It Disappears
So, what do you think of these vampire deer or water deer? And do you still believe that they are scary and dangerous as you first thought?
DMCA.com
Leave a Reply
Your email address will not be published. Required fields are marked *
Back to top button
Pin It on Pinterest
|
__label__pos
| 0.68926 |
Many of these restorations can be replaced conservatively with direct composite. Unfortunately, however, many of the placement and accompanying adhesive protocols required for predictability can be time-consuming and technique sensitive. That’s why it’s important to understand the historical development of dental equipment when considering today’s etching and adhesive protocol options to determine which will serve you—and your patients—best.
In the total-etch (etch-and-rinse) technique, both enamel and dentin are etched with phosphoric acid to remove the smear layer and condition the preparation prior to bonding (the enamel is etched longer than the dentin). The etchant and smear layer are then rinsed off with water and gently air-dried. Because dentin should remain moist and glossy in appearance, care must be taken not to overdry the dentin. This prevents collagen fibrils from collapsing, which would create a less permeable surface for hydrophilic monomers in the adhesive, as well as a weak interface that could lead to a poor bond and postoperative sensitivity. Although total-etch adhesives and their associated multistep techniques are well established and clinically proven, they are often considered to be technique sensitive.
Manufacturers have helped to streamline adhesive protocols by introducing universal adhesives that promote high bond strength to enamel and dentin and that can be used on both dry and moist dentin. Because they are designed to work with or without phosphoric acid, dental micro motor, universal adhesives (e.g., Adhese Universal from Ivoclar Vivadent) are suitable for all etching techniques without the risk of overetching the dentin.
Presenting a case involving multiple failing side-by-side amalgam posterior restorations, Michael R. Sesemann, DDS, FAACD, explains how selective etching, universal adhesive, and bulk-fill composite can be combined for efficient and predictable posterior quadrant restorations.
Advertisements
|
__label__pos
| 0.609577 |
Defined Type: cfssl::cert
Defined in:
modules/cfssl/manifests/cert.pp
Overview
Parameters:
• signer_config (Cfssl::Signer_config)
• names (Array[Cfssl::Name]) (defaults to: [])
• key (Cfssl::Key) (defaults to: {'algo' => 'ecdsa', 'size' => 521})
• ensure (Wmflib::Ensure) (defaults to: 'present')
• owner (String) (defaults to: 'root')
• group (String) (defaults to: 'root')
• auto_renew (Boolean) (defaults to: true)
• renew_seconds (Integer[1800]) (defaults to: 604800)
• label (Optional[String]) (defaults to: undef)
• profile (Optional[String]) (defaults to: undef)
• outdir (Optional[Stdlib::Unixpath]) (defaults to: undef)
• tls_cert (Optional[Stdlib::Unixpath]) (defaults to: undef)
• tls_key (Optional[Stdlib::Unixpath]) (defaults to: undef)
• hosts (Optional[Array[Stdlib::Host]]) (defaults to: [])
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
# File 'modules/cfssl/manifests/cert.pp', line 2
define cfssl::cert (
Cfssl::Signer_config $signer_config,
Array[Cfssl::Name] $names = [],
Cfssl::Key $key = {'algo' => 'ecdsa', 'size' => 521},
Wmflib::Ensure $ensure = 'present',
String $owner = 'root',
String $group = 'root',
Boolean $auto_renew = true,
Integer[1800] $renew_seconds = 604800, # 1 week
Optional[String] $label = undef,
Optional[String] $profile = undef,
Optional[Stdlib::Unixpath] $outdir = undef,
Optional[Stdlib::Unixpath] $tls_cert = undef,
Optional[Stdlib::Unixpath] $tls_key = undef,
Optional[Array[Stdlib::Host]] $hosts = [],
) {
include cfssl
if $key['algo'] == 'rsa' and $key['size'] < 2048 {
fail('RSA keys must be either 2048, 4096 or 8192 bits')
}
if $key['algo'] == 'ecdsa' and $key['size'] > 2048 {
fail('ECDSA keys must be either 256, 384 or 521 bits')
}
$ensure_file = $ensure ? {
'present' => 'file',
default => $ensure,
}
$safe_title = $title.regsubst('[^\w\-]', '_', 'G')
$csr_json_path = "${cfssl::csr_dir}/${safe_title}.csr"
$_outdir = $outdir ? {
undef => "${cfssl::ssl_dir}/${safe_title}",
default => $outdir,
}
$_names = $names.map |Cfssl::Name $name| {
{
'C' => $name['country'],
'L' => $name['locality'],
'O' => $name['organisation'],
'OU' => $name['organisational_unit'],
'S' => $name['state'],
}
}
$csr = {
'CN' => $title,
'hosts' => $hosts,
'key' => $key,
'names' => $_names,
}
file{$csr_json_path:
ensure => $ensure_file,
owner => 'root',
group => 'root',
mode => '0400',
content => $csr.to_json_pretty()
}
file {$_outdir:
ensure => ensure_directory($ensure),
owner => $owner,
group => $group,
mode => '0440',
recurse => true,
purge => true,
}
$tls_config = ($tls_cert and $tls_key) ? {
true => "-mutual-tls-client-cert ${tls_cert} -mutual-tls-client-key ${tls_key}",
default => '',
}
$_label = $label ? {
undef => '',
default => "-label ${label}",
}
$_profile = $profile ? {
undef => '',
default => "-profile ${profile}",
}
$signer_args = $signer_config ? {
Stdlib::HTTPUrl => "-remote ${signer_config} ${tls_config} ${_label}",
Cfssl::Signer_config::Client => "-config ${signer_config['config_file']} ${tls_config} ${_label}",
default => @("SIGNER_ARGS"/L)
-ca=${signer_config['config_dir']}/ca/ca.pem \
-ca-key=${signer_config['config_dir']}/ca/ca_key.pem \
-config=${signer_config['config_dir']}/cfssl.conf \
| SIGNER_ARGS
}
$cert_path = "${_outdir}/${safe_title}.pem"
$key_path = "${_outdir}/${safe_title}-key.pem"
$csr_pem_path = "${_outdir}/${safe_title}.csr"
$gen_command = @("GEN_COMMAND"/L)
/usr/bin/cfssl gencert ${signer_args} ${_profile} ${csr_json_path} \
| /usr/bin/cfssljson -bare ${_outdir}/${safe_title}
| GEN_COMMAND
$sign_command = @("SIGN_COMMAND"/L)
/usr/bin/cfssl gencert ${signer_args} ${_profile} ${csr_pem_path} \
| /usr/bin/cfssljson -bare ${_outdir}/${safe_title}
| SIGN_COMMAND
# TODO: would be nice to check its signed with the correct CA
$test_command = @("TEST_COMMAND"/L)
/usr/bin/test \
"$(/usr/bin/openssl x509 -in ${cert_path} -noout -pubkey 2>&1)" == \
"$(/usr/bin/openssl pkey -pubout -in ${key_path} 2>&1)"
| TEST_COMMAND
if $ensure == 'present' {
exec{"Generate cert ${title}":
command => $gen_command,
unless => $test_command,
}
if $auto_renew {
exec {"renew certificate - ${title}":
command => $sign_command,
unless => "/usr/bin/openssl x509 -in ${cert_path} -checkend ${renew_seconds}",
require => Exec["Generate cert ${title}"]
}
}
}
file{[$cert_path, $key_path, $csr_pem_path]:
ensure => $ensure_file,
owner => $owner,
group => $group,
mode => '0440',
}
}
|
__label__pos
| 0.992494 |
Can we sort a list with Lambda in Java?
Java 8Object Oriented ProgrammingProgramming
Yes, we can sort a list with Lambda. Let us first create a String List:
List<String> list = Arrays.asList("LCD","Laptop", "Mobile", "Device", "LED", "Tablet");
Now, sort using Lambda, wherein we will be using compareTo():
Collections.sort(list, (String str1, String str2) -> str2.compareTo(str1));
The following is an example to sort a list with Lambda in Java:
Example
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
public class Demo {
public static void main(String... args) {
List<String> list = Arrays.asList("LCD","Laptop", "Mobile", "Device", "LED", "Tablet");
System.out.println("List = "+list);
Collections.sort(list, (String str1, String str2) -> str2.compareTo(str1));
System.out.println("Sorted List = "+list);
}
}
Output
List = [LCD, Laptop, Mobile, Device, LED, Tablet]
Sorted List = [Tablet, Mobile, Laptop, LED, LCD, Device]
raja
Published on 02-May-2019 10:10:22
Advertisements
|
__label__pos
| 0.993608 |
MASIGNCLEAN101
Tutorial PHP - Membuat Text Editor dengan Menggunakan CKEditor 5
Di dalam artikel kali ini saya akan menjelaskan bagaimana caranya membuat sebuah text editor dengan menggunakan PHP native tanpa framework. Saya sengaja menggunakan PHP native sebagai contoh karena jika anda sudah bisa memahami bagaimana menerapkan cara tersebut dengan PHP native, tentu saja anda akan bisa juga menerapkannya dalam berbagai framework PHP lainnya, Di sini saya menggunakan CKEditor versi 5 yang merupakan versi terbaru saat artikel ini ditulis.
#INSTALASI
Intalasi dari CKEditor 5 ini bisa menggunakan CDN maupun mendownload source nya terlebih dahulu. Jika anda menggunakan CDN yang sudah disediakan anda cukup menggunakan kode di bawah ini
<script src="https://cdn.ckeditor.com/ckeditor5/11.1.1/classic/ckeditor.js"></script>
Namun jika anda ingin mendownload source nya terlebih dahulu, dapat anda download pada link berikut ini https://ckeditor.com/ckeditor-5/download/. File yang diperlukan adalah ckeditor.js dan kode yang digunakan menjadi seperti berikut ini
<script src="[ckeditor-build-path]/ckeditor.js"></script>
Untuk cara yang kedua ini, anda hanya perlu melakukan include file js yang anda download sesuai dengan path dari file ckeditor.js di mana anda meletakkan file tersebut.
#CARA PENGGUNAAN
Sesuai dengan dokumentasi resminya, CKEditor versi 5 ini dapat digunakan dengan 4 mode yang berbeda. Mode tersebut adalah
• Classic Editor
• Inline Editor
• Ballon Editor
• Document Editor
Dalam artikel ini saya hanya akan menjelaskan penggunaan dari Classic Editor saja. Classic Editor ini mempunyai bentuk yang mirip dengan CKeditor versi 4 sebelumnya di mana mempunyai tampilan kotak biasa dengan toolbar di bagian atasnya. Untuk mode-mode penggunaan yang lainnya mungkin akan saya jelaskan pada artikel-artikel selanjutnya karena untuk saat ini saya sendiri belum mempelajarinya :).
Cara penggunaan dari CKEditor 5 ini masih mirip dengan CKEditor 4, yang pertama adalah menyediakan textarea terlebih dahulu seperti kode di bawah ini.
<textarea id="editor" name="content"></textarea>
Kemudian langkah selanjutnya adalah menambahkan script untuk mengubah textarea tersebut menjadi sebuah text editor. Caranya adalah dengan menambahkan kode di bawah ini
<script>
ClassicEditor
.create( document.querySelector( '#editor' ) )
.catch( error => {
console.error( error );
} );
</script>
Cara di atas akan menghasilkan text editor dengan toolbar secara default. Pembahasan lebih lanjut mengenai toolbar dan kustomisasinya akan saya jelaskan pada artikel-artikel selanjutnya. Yang perlu anda perhatikan pada contoh kode di atas adalah id pada textarea harus sama dengan yang digunakan pada script. Selain menggunakan id, anda juga bisa menggunakan class seperti pemakaian pada jQuery umumnya.
Untuk menyimpan data dari editor tersebut caranya sama saja dengan form input biasanya. Anda hanya perlu menambahkan form dengan method POST (atau GET terserah anda) dan kemudian mendapatkan isinya dengan menggunakan perintah $_POST['content'] (atau $_GET tergantung method form yang anda gunakan).
Demikian penjelasan saya mengenaik CKEditor 5 ini. Untuk kustomisasi lebih lanjut akan saya pelajari terlebih dahulu caranya dan akan saya buatkan artikel nya jika saya sudah berhasil memahaminya :). Dikarenakan waktu yang saya punya sudah tidak sebanyak di saat saya memulai blog ini jadi saya mohon maaf terlebih dahulu jika frekuensi artikel pada blog ini menurun :). Semoga penjelasan sederhana tentang penggunaan CKEditor 5 ini dapat dipahami dan bermanfaat. Selamat mencoba.
Share This :
Davnsial ID
Penggemar coding. Seseorang yang suka belajar sesuatu yang baru. terutama tentang pemrograman web dan desain web. senang berbagi tentang pengetahuan dan belajar dari orang lain.
|
__label__pos
| 0.997537 |
• Join over 1.2 million students every month
• Accelerate your learning by 29%
• Unlimited access from just £6.99 per month
The aim of this experiment is to investigate the relationship between the current, voltage and resistance through the use of a fixed resistor and a filament lamp.
Extracts from this document...
Introduction
Abdul Mufti Centre Number: 13329 Candidate Number: 4138
10. F
Aim
The aim of this experiment is to investigate the relationship between the current, voltage and resistance through the use of a fixed resistor and a filament lamp.
Hypothesis
Based on knowledge of Ohm’s law it can be hypothesised that when increasing voltage and current is passed through a filament lamp the resistance would increase in a non-linear fashion, such that a graph similar to the one given below would be obtained (figure 1). This non-linear graph would be expected due to temperature increases in the filament lamp.
It can also be hypothesised that when current is passed through a fixed resistor the relationship between V and I would be expected to be linear such that a straight line through the origin would be obtained (figure 2). In addition the readings on the ammeter and voltmeter would both change accordingly as expected.
image00.pngimage01.png
The shape of a fixed resistor current-voltage graph (I-V graph) is explained in figure 3 since the three variables are related through Ohm’s law.
image09.pngimage10.pngimage11.pngimage02.pngimage03.png
Circuit Diagrams
image04.png
image05.png
image06.png
Equipment
Fixed resistor &
Filament Lamp to impede and obstruct current flowing through circuit
Ammeter to measure current flowing through the circuit
Voltmeter– to measure the voltage present in the circuit and to make sure the power supply is correctly calibrated.
Power Supply– to act as the adjustable power source for the circuit
Wiresto connect the circuit components.
...read more.
Middle
In order to apply Ohm’s law we need to know two of the 3 variables. In this experiment we will know the voltage and current. Therefore by rearranging Ohm’s law we can calculate the resistance.
• A steady increase in resistance, in a circuit with constant voltage, produces a progressively (not a straight-line if graphed) weaker current.
• A steady increase in voltage, in a circuit with constant resistance, produces a constant linear rise in current.
In this case, Ohm’s law is needed to calculate the resistance of the fixed resistor and the change in resistance as the filament lamp gets hot. For each case gradient from the appropriate graphs will be used. Resistance can be varied by using a variable resistor, by altering the gauge or by length of the wire or by changing the temperature. If a longer wire is being used then the resistance increases as the electrons have to travel further than in a short wire. If a thicker wire is being used then the electrons have more space to move and therefore resistance is decreased. If a thinner wire is used then the resistance will increase as the electrons cannot get around the circuit easily.
Method
The work surface was insured to be dry and clean. Bags, stools and other possible obstacles were removed from the work area. The equipment was collected according to the list given and verified by close inspection to be clean, non-corroded and undamaged.
...read more.
Conclusion
It would have been practical in the interest of conducting a fair test to use averages for series of current readings. This could be done using two different sets of equipment, conducting the experiment on each set and averaging the values. This could help us reduce error margin in any anomalies found. It may have been interesting to investigate the same aim with a wider range and more sensitive set of equipment. Smaller graduations of voltage on the PSU would have allowed us to plot the graphs with more accuracy. If possible it would have been interesting to use a diode. However given the amount of time we had, it would not have been possible to further complicate the experimental design.
We may also have adjusted variables such as the gauge, length and type of wire used to investigate the effect these factors have on ohms law. It may also have been of interest to us if we investigated how adjusting the circuit diagram would have affected our results. However this may have been a little advanced for our level at present.
Bibliography
GCSE double Science Physics – CGP – ISBN: 1-84146-401-5
New Modular Science for GCSE, (Vol 1), - Heinemann (1996) – ISBN: 0-43557-196-6
AQA GCSE Science syllabus (2001).
...read more.
This student written piece of work is one of many that can be found in our University Degree Physics section.
Found what you're looking for?
• Start learning 29% faster today
• 150,000+ documents available
• Just £6.99 a month
Not the one? Search for your essay title...
• Join over 1.2 million students every month
• Accelerate your learning by 29%
• Unlimited access from just £6.99 per month
See related essaysSee related essays
Related University Degree Physics essays
1. How is the internal resistance of a standard battery affected by Temperature
Voltmeter * Plastic waterproof bag * Wires * Crocodile clips * Beaker / Water bath * Thermometer Method * Connect two crocodile clips to each end of the resistors * place the battery connected to the appropriate wires in a water bath filled with water straight from a boiling kettle
2. The Heating Effect of An Electrical Current
a known current flowing through the wire for a 30 minute period. By varying the current and the length of the wire he deduced that the heat produced was proportional to the electrical resistance of the wire multiplied by the square of the current.
1. The relationship between wire length, width, area and resistance.
Turn the apparatus on and then turn the dial on the lab pack so that the volt meter reads 1. Then read and record the reading on the ammeter. 6. Repeat this process for each length. 7. Repeat the above two a further two times to give 3 sets of results 8.
2. Torque Physics Lab Report. The purpose of this experiment was to help understand ...
If the clamp were to have been inverted, where the bar is supported at a point above the center of gravity, you wouldn't een be able to balance the meter bar because it is not in the center of gravity it would just be slack and hang down.
1. Double Slit Interference
Then the paper was removed and the distances between the central maximum and the other maximums were measured with meter stick. Also the distance between the screen and the slit disk was measured and recorded. After that, the slit disk was rotated to a new double slit with 0.04 mm
2. The purpose of this experiment was to find the normal force and the lift ...
Hence from the lift curve slope it can be seen that from angle of attack 1 to 11 the lift was because of low pressure on the upper surface and high pressure on the lower surface while during the angle of attack 16 due to high pressure on the upper
1. Experiment to find how the Length of the Wire affects the Resistance of the ...
The diameter of the wire must be kept the same because if the thickness were different each time, there would be more or less atoms that the electrons would have to travel through therefore it would decrease or increase the resistance making my results unreliable.
2. Investigating factors which affect the period time of a simple pendulum
taken the results to 2 decimal places because this was the limit of accuracy on the stopwatch. These results show that over a 50� increase in angle of release, the period of oscillation increases by 0.06. This is an unreliable measurement because the stopwatch does not have this degree of accuracy.
• Over 160,000 pieces
of student written work
• Annotated by
experienced teachers
• Ideas and feedback to
improve your own work
|
__label__pos
| 0.928536 |
Friday, 6 April 2012
Installing Graphite on CentOS - Part 2 - Setting up Graphite
The first post in this series detailed how to build rpms for graphite. This next post details how to get a working graphite installation setup on your production CentOS servers.
These instructions make a few assumptions - the system can use EPEL repositories and it has SELINUX disabled. This sets up a minimum graphite install - literally the carbon-cache daemon to receive the data and the graphite web front end to present the data via a web browser. You can extend the setup to carbon-cache clusters fronted by relay servers, duplicate your carbon data across datacentres, run mysql backends and much more. I'll touch on some of these scenarios in future posts.
1. Install the dependencies:
yum -y install Django django-tagging bitmap bitmap-fonts python-zope-interface python-memcached python-sqlite2 python-ldap python-twisted pycairo memcached
2. Install the graphite RPMs:
yum --nogpgcheck localinstall carbon-0.9.9-1.noarch.rpm graphite-web-0.9.9-1.noarch.rpm whisper-0.9.9-1.noarch.rpm
3. Setup the carbon and graphite-web configuration files:
cd /opt/graphite/conf/
cp graphite.wsgi.example graphite.wsgi
cp storage-schemas.conf.example storage-schemas.conf
cp carbon.conf.example carbon.conf
cd ../webapp/graphite
cp local_settings.py.example local_settings.py
4. Update local_settings.py above with correct Timezone, memcache location, ldap database for authentication. For a basic setup this is the only config file you need to modify.
5. Create the the django DB. Note that you can host this within mysql. You will need to install python-MySQL to do so and configure local_settings.py accordingly. Otherwise it will create a local file based database.
# python /opt/graphite/webapp/graphite/manage.py syncdb
Creating table account_profile
Creating table account_variable
Creating table account_view
Creating table account_window
Creating table account_mygraph
Creating table dashboard_dashboard_owners
Creating table dashboard_dashboard
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_user_permissions
Creating table auth_user_groups
Creating table auth_user
Creating table auth_message
Creating table django_session
Creating table django_admin_log
Creating table django_content_type
Creating table tagging_tag
Creating table tagging_taggeditem
Creating table events_event
You just installed Django's auth system, which means you don't have any superusers defined.
Would you like to create one now? (yes/no): no
Installing index for account.Variable model
Installing index for account.View model
Installing index for account.Window model
Installing index for account.MyGraph model
Installing index for dashboard.Dashboard_owners model
Installing index for auth.Permission model
Installing index for auth.Group_permissions model
Installing index for auth.User_user_permissions model
Installing index for auth.User_groups model
Installing index for auth.Message model
Installing index for admin.LogEntry model
Installing index for tagging.TaggedItem model
No fixtures found.
6. Now Install and configure Apache - we use wsgi to create running web enabled graphite cgi processes. To enable this install mod_wsgi:
yum -y install httpd mod_wsgi
7. Create a virtualhost file for graphite /etc/httpd/conf.d/graphite.conf - your virtualhost config will probably differ slightly depending on how you want your virtualhost configuration:
<VirtualHost *:80>
ServerName graphite.somehost.com
DocumentRoot "/opt/graphite/webapp"
ErrorLog logs/graphite_error_log
TransferLog logs/graphite_access_log
LogLevel warn
WSGIDaemonProcess graphite processes=5 threads=5 display-name=" {GROUP}" inactivity-timeout=120
WSGIProcessGroup graphite
WSGIScriptAlias / /opt/graphite/conf/graphite.wsgi
Alias /content/ /opt/graphite/webapp/content/
<Location "/content/">
SetHandler None
</Location>
Alias /media/ "/usr/lib/python2.6/site-packages/django/contrib/admin/media/"
<Location "/media/">
SetHandler None
</Location>
<Directory /opt/graphite/conf/>
Order deny,allow
Allow from all
</Directory>
</VirtualHost>
8. Update /etc/httpd/conf.d/wsgi.conf with a socket prefix:
LoadModule wsgi_module modules/mod_wsgi.so
WSGISocketPrefix /var/run/wsgi
9. Allow apache access to the storage data:
chown -R apache:apache /opt/graphite/storage/
10. Create the init script /etc/init.d/carbon-cache. My script here also can start the carbon relay if you uncomment the relevant sections - carbon-relay is useful if you are running graphite in a high-availability setup but I'll talk about that in future posts....
#!/bin/bash
#
# This is used to start/stop the carbon-cache daemon
# chkconfig: - 99 01
# description: Starts the carbon-cache daemon
# Source function library.
. /etc/init.d/functions
RETVAL=0
prog="carbon-cache"
start_relay () {
/usr/bin/python /opt/graphite/bin/carbon-relay.py start
RETVAL=$?
[ $RETVAL -eq 0 ] && success || failure
echo
return $RETVAL
}
start_cache () {
/usr/bin/python /opt/graphite/bin/carbon-cache.py start
RETVAL=$?
[ $RETVAL -eq 0 ] && success || failure
echo
return $RETVAL
}
stop_relay () {
/usr/bin/python /opt/graphite/bin/carbon-relay.py stop
RETVAL=$?
[ $RETVAL -eq 0 ] && success || failure
echo
return $RETVAL
}
stop_cache () {
/usr/bin/python /opt/graphite/bin/carbon-cache.py stop
RETVAL=$?
[ $RETVAL -eq 0 ] && success || failure
echo
return $RETVAL
}
# See how we were called.
case "$1" in
start)
#start_relay
start_cache
;;
stop)
#stop_relay
stop_cache
;;
restart)
#stop_relay
stop_cache
#start_relay
start_cache
;;
*)
echo $"Usage: $0 {start|stop}"
exit 2
;;
esac
11. Start the services:
service memcached start
service carbon-cache start
service httpd start
You should now have a running carbon-cache process and be able to browse your graphite data through the web-interface. In the next post I'll discuss some techniques for injecting your data into graphite.
4 comments:
Peto Velas said...
Hi buddy, perfect blog posts[1,2,3] about graphite...
if found small typo /etc/init.d/carbon-cache :)
start_relay must point to /opt/graphite/bin/carbon-relay.py and not /opt/graphite/bin/carbon-cache.py
and
start_cache must point to /opt/graphite/bin/carbon-cache.py and not /opt/graphite/bin/carbon-relay.py
Al Rix said...
Good spot! Sorry about that - will update! Thanks for the positive feedback though...
Tobias said...
Excellent post! I have followed it using CentOS 6.2 in Vagrant, but when starting apache I get the following output in /var/log/httpd/graphite_error_log
IOError: [Errno 13] Permission denied: '/opt/graphite/storage/log/webapp/info.log'
What did I miss?
- Tobi
Tobias said...
I have found an answer to my question before. Had to run the following command:
chown -R apache:apache /opt/graphite/storage
That's all.
Tobi
|
__label__pos
| 0.58285 |
Skip to content
This repository
Fetching contributors…
Octocat-spinner-32-eaf2f5
Cannot retrieve contributors at this time
file 771 lines (685 sloc) 21.223 kb
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770
#include <stdio.h> /* printf */
#include <stdlib.h> /* malloc */
#include <string.h> /* strcmp, strlen */
#include "trie.h"
struct _Transition; /* Forward declaration, needed in _Trie. */
/* _Trie is a recursive data structure. A _Trie contains zero or more
* _Transitions that lead to more _Tries. The transitions are stored
* in alphabetical order of the suffix member of the data structure.
* _Trie also contains a pointer called value where the user can store
* arbitrary data. If value is NULL, then no data is stored here.
*/
struct _Trie {
struct _Transition *transitions;
unsigned char num_transitions;
void *value; /* specified by user, never freed or allocated by me! */
};
/* _Transition holds information about the transitions leading from
* one _Trie to another. The trie structure here is different from
* typical ones, because the transitions between nodes can contain
* strings of arbitrary length, not just single characters. Suffix is
* the string that is matched from one node to the next.
*/
typedef struct _Transition {
unsigned char *suffix;
Trie next;
} *Transition;
#define MAX_KEY_LENGTH 1000
static unsigned char KEY[MAX_KEY_LENGTH];
Trie Trie_new() {
Trie trie;
if(!(trie = (Trie)malloc(sizeof(struct _Trie))))
return NULL;
trie->transitions = NULL;
trie->num_transitions = 0;
trie->value = NULL;
return trie;
}
int Trie_set(Trie trie, const unsigned char *key, const void *value) {
int i;
Transition transition=NULL;
unsigned char *suffix=NULL;
int retval = 0;
int first, last, mid;
if(!key[0]) {
trie->value = (void *)value;
return 0;
}
/* Insert the key in alphabetical order. Do a binary search to
find the proper place. */
first = 0;
last = trie->num_transitions-1;
i = -1;
while(first <= last) {
mid = (first+last)/2;
transition = &trie->transitions[mid];
suffix = transition->suffix;
if(key[0] < suffix[0])
last = mid-1;
else if(key[0] > suffix[0])
first = mid+1;
else {
i = mid;
break;
}
}
/* If no place was found for it, then the indexes will be in the
order last,first. Place it at index first. */
if(i == -1)
i = first;
/* If nothing matches, then insert a new trie here. */
if((i >= trie->num_transitions) || (key[0] != suffix[0])) {
unsigned char *new_suffix=NULL;
Trie newtrie=NULL;
Transition new_transitions=NULL;
/* Create some variables for the new transition. I'm going to
allocate these first so that if I can detect memory errors
before I mess up the data structure of the transitions.
*/
if(!(new_suffix = (unsigned char *)strdup(key)))
goto insert_memerror;
if(!(newtrie = Trie_new()))
goto insert_memerror;
/* Create some space for the next transition. Allocate some
memory and shift the old transitions over to make room for
this one.
*/
if(!(new_transitions = malloc(sizeof(struct _Transition) *
(trie->num_transitions+1))))
goto insert_memerror;
memcpy(new_transitions, trie->transitions,
sizeof(struct _Transition)*i);
memcpy(&new_transitions[i+1], &trie->transitions[i],
sizeof(struct _Transition)*(trie->num_transitions-i));
free(trie->transitions);
trie->transitions = new_transitions;
new_transitions = NULL;
trie->num_transitions += 1;
/* Initialize the new transition. */
transition = &trie->transitions[i];
transition->suffix = new_suffix;
transition->next = newtrie;
transition->next->value = (void *)value;
if(0) {
insert_memerror:
if(new_transitions) free(new_transitions);
if(newtrie) free(newtrie);
if(new_suffix) free(new_suffix);
return 1;
}
}
/* There are three cases where the key and suffix share some
letters.
1. suffix is proper substring of key.
2. key is proper substring of suffix.
3. neither is proper substring of other.
For cases 2 and 3, I need to first split up the transition
based on the number of characters shared. Then, I can insert
the rest of the key into the next trie.
*/
else {
/* Count the number of characters shared between key
and suffix. */
int chars_shared = 0;
while(key[chars_shared] && key[chars_shared] == suffix[chars_shared])
chars_shared++;
/* Case 2 or 3, split this sucker! */
if(chars_shared < strlen(suffix)) {
Trie newtrie=NULL;
unsigned char *new_suffix1=NULL, *new_suffix2=NULL;
if(!(new_suffix1 = (unsigned char *)malloc(chars_shared+1)))
goto split_memerror;
strncpy(new_suffix1, key, chars_shared);
new_suffix1[chars_shared] = 0;
if(!(new_suffix2 = (unsigned char *)strdup(suffix+chars_shared)))
goto split_memerror;
if(!(newtrie = Trie_new()))
goto split_memerror;
if(!(newtrie->transitions =
(Transition)malloc(sizeof(struct _Transition))))
goto split_memerror;
newtrie->num_transitions = 1;
newtrie->transitions[0].next = transition->next;
newtrie->transitions[0].suffix = new_suffix2;
free(transition->suffix);
transition->suffix = new_suffix1;
transition->next = newtrie;
if(0) {
split_memerror:
if(newtrie && newtrie->transitions) free(newtrie->transitions);
if(newtrie) free(newtrie);
if(new_suffix2) free(new_suffix2);
if(new_suffix1) free(new_suffix1);
return 1;
}
}
retval = Trie_set(transition->next, key+chars_shared, value);
}
return retval;
}
void Trie_del(Trie trie) {
int i;
if(!trie)
return;
for(i=0; i<trie->num_transitions; i++) {
Transition transition = &trie->transitions[i];
if(transition->suffix)
free(transition->suffix);
Trie_del(transition->next);
}
free(trie);
}
void *Trie_get(const Trie trie, const unsigned char *key) {
int first, last, mid;
if(!key[0]) {
return trie->value;
}
/* The transitions are stored in alphabetical order. Do a binary
* search to find the proper one.
*/
first = 0;
last = trie->num_transitions-1;
while(first <= last) {
Transition transition;
unsigned char *suffix;
int c;
mid = (first+last)/2;
transition = &trie->transitions[mid];
suffix = transition->suffix;
/* If suffix is a substring of key, then get the value from
the next trie.
*/
c = strncmp(key, suffix, strlen(suffix));
if(c < 0)
last = mid-1;
else if(c > 0)
first = mid+1;
else
return Trie_get(transition->next, key+strlen(suffix));
}
return NULL;
}
/* Mutually recursive, so need to make a forward declaration. */
void
_get_approximate_trie(const Trie trie, const unsigned char *key, const int k,
void (*callback)(const unsigned char *key,
const void *value,
const int mismatches,
void *data),
void *data,
const int mismatches,
unsigned char *current_key, const int max_key
);
void
_get_approximate_transition(const unsigned char *key,
const int k,
const Transition transition,
const unsigned char *suffix,
void (*callback)(const unsigned char *key,
const void *value,
const int mismatches,
void *data),
void *data,
const int mismatches,
unsigned char *current_key, const int max_key
)
{
int i;
int prev_keylen = strlen(current_key);
/* Short circuit optimization. If there's too many characters to
possibly be a match, then don't even try to match things. */
if((int)(strlen(suffix) - strlen(key)) > k)
return;
/* Match as many characters as possible. */
i = 0;
while(suffix[i] && (key[i] == suffix[i])) {
i++;
}
/* Check to make sure the key is not too long. BUG: If it is,
fails silently. */
if((prev_keylen+i) >= max_key)
return;
strncat(current_key, suffix, i);
/* If all the letters in the suffix matched, then move to the
next trie. */
if(!suffix[i]) {
_get_approximate_trie(transition->next, &key[i], k, callback, data,
mismatches, current_key, max_key);
}
/* Otherwise, try out different kinds of mismatches. */
else if(k) {
int new_keylen = prev_keylen+i;
/* Letter replacement, skip the next letter in both the key and
suffix. */
if((new_keylen+1 < max_key) && key[i] && suffix[i]) {
current_key[new_keylen] = suffix[i];
current_key[new_keylen+1] = 0;
_get_approximate_transition(&key[i+1], k-1,
transition, &suffix[i+1],
callback, data,
mismatches+1, current_key, max_key);
current_key[new_keylen] = 0;
}
/* Insertion in key, skip the next letter in the key. */
if(key[i]) {
_get_approximate_transition(&key[i+1], k-1,
transition, &suffix[i],
callback, data,
mismatches+1, current_key, max_key);
}
/* Deletion from key, skip the next letter in the suffix. */
if((new_keylen+1 < max_key) && suffix[i]) {
current_key[new_keylen] = suffix[i];
current_key[new_keylen+1] = 0;
_get_approximate_transition(&key[i], k-1,
transition, &suffix[i+1],
callback, data,
mismatches+1, current_key, max_key);
current_key[new_keylen] = 0;
}
}
current_key[prev_keylen] = 0;
}
void
_get_approximate_trie(const Trie trie, const unsigned char *key, const int k,
void (*callback)(const unsigned char *key,
const void *value,
const int mismatches,
void *data),
void *data,
const int mismatches,
unsigned char *current_key, const int max_key
)
{
int i;
/* If there's no more key to match, then I'm done. */
if(!key[0]) {
if(trie->value)
(*callback)(current_key, trie->value, mismatches, data);
}
/* If there are no more mismatches allowed, then fall back to the
faster Trie_get. */
else if(!k) {
void *value = Trie_get(trie, key);
if(value) {
int l = strlen(current_key);
/* Make sure I have enough space for the full key. */
if(l + strlen(key) < max_key) {
strcat(current_key, key);
(*callback)(current_key, value, mismatches, data);
current_key[l] = 0;
}
/* BUG: Ran out of space for the key. This fails
silently, but should signal an error. */
}
}
/* If there are no more transitions, then all the characters left
in the key are mismatches. */
else if(!trie->num_transitions) {
if(trie->value && (strlen(key) <= k)) {
(*callback)(current_key, trie->value,
mismatches+strlen(key), data);
}
}
/* Otherwise, try to match each of the transitions. */
else {
for(i=0; i<trie->num_transitions; i++) {
Transition transition = &trie->transitions[i];
unsigned char *suffix = transition->suffix;
_get_approximate_transition(key, k, transition, suffix,
callback, data,
mismatches, current_key, max_key);
}
}
}
void
Trie_get_approximate(const Trie trie, const unsigned char *key, const int k,
void (*callback)(const unsigned char *key,
const void *value,
const int mismatches,
void *data),
void *data
)
{
KEY[0] = 0;
_get_approximate_trie(trie, key, k, callback, data, 0, KEY,MAX_KEY_LENGTH);
}
int Trie_len(const Trie trie)
{
int i;
int length = 0;
if(!trie)
return 0;
if(trie->value)
length += 1;
for(i=0; i<trie->num_transitions; i++) {
length += Trie_len(trie->transitions[i].next);
}
return length;
}
int Trie_has_key(const Trie trie, const unsigned char *key)
{
return Trie_get(trie, key) != NULL;
}
int Trie_has_prefix(const Trie trie, const unsigned char *prefix)
{
int first, last, mid;
if(!prefix[0]) {
return 1;
}
/* The transitions are stored in alphabetical order. Do a binary
* search to find the proper one.
*/
first = 0;
last = trie->num_transitions-1;
while(first <= last) {
Transition transition;
unsigned char *suffix;
int suffixlen, prefixlen, minlen;
int c;
mid = (first+last)/2;
transition = &trie->transitions[mid];
suffix = transition->suffix;
suffixlen = strlen(suffix);
prefixlen = strlen(prefix);
minlen = (suffixlen < prefixlen) ? suffixlen : prefixlen;
c = strncmp(prefix, suffix, minlen);
if(c < 0)
last = mid-1;
else if(c > 0)
first = mid+1;
else
return Trie_has_prefix(transition->next, prefix+minlen);
}
return 0;
}
static void
_iterate_helper(const Trie trie,
void (*callback)(const unsigned char *key,
const void *value,
void *data),
void *data,
unsigned char *current_key, const int max_key)
{
int i;
if(trie->value)
(*callback)(current_key, trie->value, data);
for(i=0; i<trie->num_transitions; i++) {
Transition transition = &trie->transitions[i];
unsigned char *suffix = transition->suffix;
int keylen = strlen(current_key);
if(keylen + strlen(suffix) >= max_key) {
/* BUG: This will fail silently. It should raise some
sort of error. */
continue;
}
strcat(current_key, suffix);
_iterate_helper(transition->next, callback, data,
current_key, max_key);
current_key[keylen] = 0;
}
}
void
Trie_iterate(const Trie trie,
void (*callback)(const unsigned char *key,
const void *value,
void *data),
void *data)
{
KEY[0] = 0;
_iterate_helper(trie, callback, data, KEY, MAX_KEY_LENGTH);
}
static void
_with_prefix_helper(const Trie trie, const unsigned char *prefix,
void (*callback)(const unsigned char *key,
const void *value,
void *data),
void *data,
unsigned char *current_key, const int max_key)
{
int first, last, mid;
if(!prefix[0]) {
_iterate_helper(trie, callback, data, current_key, max_key);
return;
}
/* The transitions are stored in alphabetical order. Do a binary
* search to find the proper one.
*/
first = 0;
last = trie->num_transitions-1;
while(first <= last) {
Transition transition;
unsigned char *suffix;
int suffixlen, prefixlen, minlen;
int c;
mid = (first+last)/2;
transition = &trie->transitions[mid];
suffix = transition->suffix;
suffixlen = strlen(suffix);
prefixlen = strlen(prefix);
minlen = (suffixlen < prefixlen) ? suffixlen : prefixlen;
c = strncmp(prefix, suffix, minlen);
if(c < 0)
last = mid-1;
else if(c > 0)
first = mid+1;
else {
int keylen = strlen(current_key);
if(keylen + minlen >= max_key) {
/* BUG: This will fail silently. It should raise some
sort of error. */
break;
}
strncat(current_key, suffix, minlen);
_with_prefix_helper(transition->next, prefix+minlen,
callback, data, current_key, max_key);
current_key[keylen] = 0;
break;
}
}
}
void
Trie_with_prefix(const Trie trie, const unsigned char *prefix,
void (*callback)(const unsigned char *key,
const void *value,
void *data),
void *data
)
{
KEY[0] = 0;
_with_prefix_helper(trie, prefix, callback, data, KEY, MAX_KEY_LENGTH);
}
/* Need to declare _serialize_transition here so it can be called from
_serialize_trie. */
int _serialize_transition(const Transition transition,
int (*write)(const void *towrite, const int length,
void *data),
int (*write_value)(const void *value, void *data),
void *data);
/* This library also provides code for flattening tries so that they
* can be saved and read back in later. The format of a serialized
* trie is:
* TYPE NBYTES DESCRIPTION
* byte 1 Whether or not there is a value
* variable variable If there is a value, let the client store it.
* byte 1 Number of transitions for this Trie.
* transition variable
* int 4 Number of characters in the suffix.
* suffix variable the suffix for this transition
* byte 1 Whether or not there is a trie
* trie variable Recursively points to another trie.
*
* The number of bytes and the endian may vary from platform to
* platform.
*/
int _serialize_trie(const Trie trie,
int (*write)(const void *towrite, const int length,
void *data),
int (*write_value)(const void *value, void *data),
void *data)
{
int i;
unsigned char has_value;
has_value = (trie->value != NULL);
if(!(*write)(&has_value, sizeof(has_value), data))
return 0;
if(has_value) {
if(!(*write_value)(trie->value, data))
return 0;
}
if(!(*write)(&trie->num_transitions, sizeof(trie->num_transitions), data))
return 0;
for(i=0; i<trie->num_transitions; i++) {
if(!_serialize_transition(&trie->transitions[i],
write, write_value, data))
return 0;
}
return 1;
}
int _serialize_transition(const Transition transition,
int (*write)(const void *towrite, const int length,
void *data),
int (*write_value)(const void *value, void *data),
void *data)
{
int suffixlen;
unsigned char has_trie;
suffixlen = strlen(transition->suffix);
if(!(*write)(&suffixlen, sizeof(suffixlen), data))
return 0;
if(!(*write)(transition->suffix, suffixlen, data))
return 0;
has_trie = (transition->next != NULL);
if(!(*write)(&has_trie, sizeof(has_trie), data))
return 0;
if(has_trie) {
if(!_serialize_trie(transition->next, write, write_value, data))
return 0;
}
return 1;
}
int Trie_serialize(const Trie trie,
int (*write)(const void *towrite, const int length,
void *data),
int (*write_value)(const void *value, void *data),
void *data)
{
int success = _serialize_trie(trie, write, write_value, data);
(*write)(NULL, 0, data);
return success;
}
int _deserialize_transition(Transition transition,
int (*read)(void *wasread, const int length,
void *data),
void *(*read_value)(void *data),
void *data);
int _deserialize_trie(Trie trie,
int (*read)(void *wasread, const int length, void *data),
void *(*read_value)(void *data),
void *data)
{
int i;
unsigned char has_value;
if(!(*read)(&has_value, sizeof(has_value), data))
goto _deserialize_trie_error;
if(has_value != 0 && has_value != 1)
goto _deserialize_trie_error;
if(has_value) {
if(!(trie->value = (*read_value)(data)))
goto _deserialize_trie_error;
}
if(!(*read)(&trie->num_transitions, sizeof(trie->num_transitions), data))
goto _deserialize_trie_error;
if(!(trie->transitions =
malloc(trie->num_transitions*sizeof(struct _Transition))))
goto _deserialize_trie_error;
for(i=0; i<trie->num_transitions; i++) {
if(!_deserialize_transition(&trie->transitions[i],
read, read_value, data))
goto _deserialize_trie_error;
}
return 1;
_deserialize_trie_error:
trie->num_transitions = 0;
if(trie->transitions) {
free(trie->transitions);
trie->transitions = NULL;
}
trie->value = NULL;
return 0;
}
int _deserialize_transition(Transition transition,
int (*read)(void *wasread, const int length,
void *data),
void *(*read_value)(void *data),
void *data)
{
int suffixlen;
unsigned char has_trie;
if(!(*read)(&suffixlen, sizeof(suffixlen), data))
goto _deserialize_transition_error;
if(suffixlen < 0 || suffixlen >= MAX_KEY_LENGTH)
goto _deserialize_transition_error;
if(!(*read)(KEY, suffixlen, data))
goto _deserialize_transition_error;
KEY[suffixlen] = 0;
if(!(transition->suffix = (unsigned char *)strdup(KEY)))
goto _deserialize_transition_error;
if(!(*read)(&has_trie, sizeof(has_trie), data))
goto _deserialize_transition_error;
if(has_trie != 0 && has_trie != 1)
goto _deserialize_transition_error;
if(has_trie) {
transition->next = Trie_new();
if(!_deserialize_trie(transition->next, read, read_value, data))
goto _deserialize_transition_error;
}
return 1;
_deserialize_transition_error:
if(transition->suffix) {
free(transition->suffix);
transition->suffix = NULL;
}
if(transition->next) {
Trie_del(transition->next);
transition->next = NULL;
}
return 0;
}
Trie Trie_deserialize(int (*read)(void *wasread, const int length, void *data),
void *(*read_value)(void *data),
void *data)
{
Trie trie = Trie_new();
if(!_deserialize_trie(trie, read, read_value, data)) {
Trie_del(trie);
return NULL;
}
return trie;
}
void test() {
Trie trie;
printf("Hello world!\n");
trie = Trie_new();
printf("New trie %p\n", trie);
Trie_set(trie, "hello world", "s1");
Trie_set(trie, "bye", "s2");
Trie_set(trie, "hell sucks", "s3");
Trie_set(trie, "hebee", "s4");
printf("%s\n", (char *)Trie_get(trie, "hello world"));
printf("%s\n", (char *)Trie_get(trie, "bye"));
printf("%s\n", (char *)Trie_get(trie, "hell sucks"));
printf("%s\n", (char *)Trie_get(trie, "hebee"));
Trie_set(trie, "blah", "s5");
printf("%s\n", (char *)Trie_get(trie, "blah"));
printf("%p\n", Trie_get(trie, "foobar"));
printf("%d\n", Trie_len(trie));
Trie_set(trie, "blah", "snew");
printf("%s\n", (char *)Trie_get(trie, "blah"));
Trie_del(trie);
}
#if 0
int main() {
test();
}
#endif
Something went wrong with that request. Please try again.
|
__label__pos
| 0.980091 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.