Idea: Fixed sized traits

This is an idea I've been thinking about for some time now, and I thought I'd share it. This is my first post here, so I hope I'm doing it right. :slight_smile:

Idea

In short, the idea is to have traits which impose a fixed size on their implementors, for example via a special `Fixed` trait. Consequences of this would be: Sized trait objects by default, no need for indirection, matching on trait objects, and so on.

Traits and Enums

I got this idea by looking into subtype/inclusion polymorphism in Rust. The way I see it, there are two main ways to do "subtyping" (some might not like this term, but I think it fits here).

The first is traits, with trait objects as the polymorphic types and the implementors as subtypes. Example:

    trait MyTrait {
        fn method(&self) -> String;
    }

    struct Variant1;
    struct Variant2;

    impl MyTrait for Variant1 {
        fn method(&self) -> String {/*impl*/}
    }
    impl MyTrait for Variant2 {
        fn method(&self) -> String {/*impl*/}
    }

The second is enums, with instances of the enum as the polymorphic types and its variants as subtypes. Example:

    enum MyEnum {
        Variant1,
        Variant2
    }

    impl MyEnum {
        fn method(&self) -> String {
            match self {
                Variant1 => /*impl*/,
                Variant2 => /*impl*/
            }
        }
    }

Both methods have advantages and disadvantages, but there is always a kind of "cost" (not necessarily a runtime cost). For traits, the main cost is the need for indirection, while for enums the main cost is that all variants have the same size (including padding); there are other costs too (see the comparison below).

Fixed Sized Traits

Fixed sized traits are an attempt to combine traits and enums to offer a different "cost" option. I see these fixed sized traits working in two possible ways: using a vtable or using a tag.

The first makes them very similar to normal traits; the only difference is the Sized property. This trades the trait indirection cost for the enum fixed-size cost. Example:

    trait MyFixedTrait : Fixed<4u> { // arbitrary size
        fn method(&self) -> String;
    }

    struct Variant1;
    struct Variant2;

    impl MyFixedTrait for Variant1 {
        fn method(&self) -> String {/*impl*/}
    }
    impl MyFixedTrait for Variant2 {
        fn method(&self) -> String {/*impl*/}
    }

The second option would be to use a tag instead. This makes them similar to enums, so I'll call them enum traits. For a tag to work, the user would need to specify which variants are possible, so a new (type) declaration would be needed that specifies the fixed trait and the variants. Example:

    enum trait MyEnumTrait for MyFixedTrait {
        Variant1,
        Variant2
    }

Comparison

                              Trait   Fixed sized trait   Enum trait   Enum
    Sized                     No      Yes                 Yes          Yes
    Extend with new variants  Yes     Yes                 No           No
    Use variants separately   Yes     Yes                 Yes          No
    Automatic polymorphism*   Yes     Yes                 Yes          No
    Pattern matching          No      No                  Yes          Yes
    Variants differ by        vtable  vtable              tag          tag

* By "automatic polymorphism" I mean the fact that traits seemingly call the correct function, while enums need pattern matching.

Use cases

Both fixed sized traits and enum traits can be used wherever traits or enums would be used. For example, if indirection is too big a cost but you still want extensibility, you could use a fixed sized trait. Or if you would otherwise use an enum, but there are so many variants that match statements become unreadable, you could use enum traits.

Conclusion

For subtype/inclusion polymorphism there is always a cost, so why not give more options to choose the right cost for your problem? Fixed sized traits and enum traits give two new options with different costs.
Could you describe a concrete use case where fixed size traits or enum traits would be more desirable than regular traits or enums? I genuinely can't think of a single even hypothetical case when you'd want either of them.

See also: Sealed Traits.

FYI this exists: https://docs.rs/enum_dispatch/0.3.0/enum_dispatch/ It addresses most of these needs in a straightforward way.

"Enum trait" seems similar to the idea of enum variant types: https://github.com/rust-lang/rfcs/pull/2593.

@RustyYato When looking for prior art I came across sealed traits, but I thought they were more about sealing a trait inside a crate than about removing indirection. I read the proposal again, and I see now that it is a lot more similar to my idea than I thought. However, the key difference is that sealed traits try to know the size by sealing the trait in a crate, while my idea is to let the user decide the size. You would then be able to implement the trait from another crate.

@tkaitchuck I think enum_dispatch is exactly what I meant with the enum trait, so thank you for mentioning it!

@pcpthm Enum variant types are similar to my enum trait, but the thing is that for my enum trait all variants implement the same trait, which causes the enum itself to implement this trait. But, as mentioned by @tkaitchuck, you can do this with some macro magic.

> I genuinely can't think of a single even hypothetical case when you'd want either of them.

When choosing between traits and enums, you choose between flexibility and performance. The idea is that there can be something in between. Both sealed traits and enum_dispatch, mentioned in this thread, have the same idea: they talk about improving performance for traits.

We should, in my opinion, leave dyn Trait unsized but instead use small box optimizations, although our current smallbox crate wastes one usize unnecessarily. You instead want a SmallBox<T, Size> type that determines whether its internal Size holds T, or whether the first usize contains a Box, by invoking mem::size_of_val with the vtable provided by dyn Trait and ptr::null, as in https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=9f175095c2f072317c31d3e5c68e291b It works because mem::size_of_val never dereferences its data pointer. All dyn Traits record their size in their vtable. We cannot create dyn Traits from unsized types, but even those keep their size in the fat pointer. We'd make alloc an optional crate feature, so this SmallBox crate still works without std or even alloc. At this point, you could define an error type SmallBox<dyn Error, usize> that exists without std or alloc, provided your Error trait avoids std too.

Thanks Jeff! The smallbox crate looks awesome!

As I said, the smallbox crate was designed to support slice-like DSTs where the fat pointer encodes the size, as well as sized types, so the crate wastes an entire usize because it gets this wrong. You want a smallbox designed for trait-object-style DSTs where the fat pointer encodes the type. We should ideally do a smallbox designed for perhaps only slice-style DSTs too, but I'm not sure exactly how yet.

I also think a smallbox type should work with and without alloc / an allocator. I'll try to find some time to adapt the smallbox crate in the next few weeks. If I manage, then I'll post back here for help testing it. :slight_smile:
Problem 2.1: Insertion sort on small arrays in merge sort

Although merge sort runs in $\Theta(n\lg{n})$ worst-case time and insertion sort runs in $\Theta(n^2)$ worst-case time, the constant factors in insertion sort can make it faster in practice for small problem sizes on many machines. Thus, it makes sense to coarsen the leaves of the recursion by using insertion sort within merge sort when subproblems become sufficiently small. Consider a modification to merge sort in which $n/k$ sublists of length $k$ are sorted using insertion sort and then merged using the standard merging mechanism, where $k$ is a value to be determined.

1. Show that insertion sort can sort the $n/k$ sublists, each of length $k$, in $\Theta(nk)$ worst-case time.
2. Show how to merge the sublists in $\Theta(n\lg(n/k))$ worst-case time.
3. Given that the modified algorithm runs in $\Theta(nk + n\lg(n/k))$ worst-case time, what is the largest value of $k$ as a function of $n$ for which the modified algorithm has the same running time as standard merge sort, in terms of $\Theta$-notation?
4. How should we choose $k$ in practice?

1. Sorting sublists

This is simple enough. We know that sorting each list takes $ak^2 + bk + c$ for some constants $a$, $b$ and $c$. We have $n/k$ such lists, thus:

$$\frac{n}{k}(ak^2 + bk + c) = ank + bn + \frac{cn}{k} = \Theta(nk)$$

2. Merging sublists

This is a bit trickier. Merging $a$ sublists of length $k$ each takes:

$$T(a) = \begin{cases} 0 & \text{if } a = 1, \\ 2T(a/2) + ak & \text{if } a = 2^p, \ p > 0. \end{cases}$$

This makes sense, since merging one sublist is trivial, and merging $a$ sublists means dividing them into two groups of $a/2$ lists, merging each group recursively, and then combining the results in $ak$ steps, since we have two arrays, each of length $\frac{a}{2}k$.

I don't know the master theorem yet, but it seems to me that the recurrence solves to $ak\lg{a}$. Let's try to prove this by induction.

Base. Simple as ever:

$$T(1) = 1k\lg1 = k \cdot 0 = 0$$

Step. We assume that $T(a) = ak\lg{a}$ and we calculate $T(2a)$:

$$\begin{align} T(2a) &= 2T(a) + 2ak = 2(T(a) + ak) = 2(ak\lg{a} + ak) = \\ &= 2ak(\lg{a} + 1) = 2ak(\lg{a} + \lg{2}) = 2ak\lg(2a) \end{align}$$

This proves it. Now if we substitute the number of sublists $n/k$ for $a$:

$$T(n/k) = \frac{n}{k}k\lg{\frac{n}{k}} = n\lg(n/k)$$

While this is exact only when $n/k$ is a power of 2, it tells us that the overall time complexity of the merge is $\Theta(n\lg(n/k))$.

3. The largest value of k

The largest value is $k = \lg{n}$. If we substitute, we get:

$$\Theta(n\lg{n} + n\lg{\frac{n}{\lg{n}}}) = \Theta(n\lg{n})$$

If $k = f(n) > \lg{n}$, the complexity will be $\Theta(nf(n))$, which is a larger running time than merge sort's.

4. The value of k in practice

It's all constant factors, so we just figure out at what input size insertion sort beats merge sort, exactly as we did in exercise 1.2.2, and pick that number for $k$.

Runtime comparison

I implemented this in C and in Python. I added selection sort for completeness' sake in the C version. I ran two variants, depending on whether merge() allocates its arrays on the stack or on the heap (stack won't work for huge arrays).
Here are the results:

    STACK ALLOCATION
    ================
    merge-sort      = 0.173352
    mixed-insertion = 0.150485
    mixed-selection = 0.165806

    HEAP ALLOCATION
    ===============
    merge-sort      = 1.731111
    mixed-insertion = 0.903480
    mixed-selection = 1.017437

Here are the results I got from Python:

    merge-sort = 2.6207s
    mixed-sort = 1.4959s

I can safely conclude that this approach is faster.

C runner output

    merge-sort      = 0.161617
    merge-insertion = 0.073177
    merge-selection = 0.081895

Python runner output

    merge-sort = 0.1017s
    mixed-sort = 0.0604s

C code

    #include <stdlib.h>
    #include <string.h>

    #define INSERTION_SORT_THRESHOLD 20
    #define SELECTION_SORT_THRESHOLD 15

    void merge(int A[], int p, int q, int r) {
        int i, j, k;
        int n1 = q - p + 1;
        int n2 = r - q;

    #ifdef MERGE_HEAP_ALLOCATION
        int *L = calloc(n1, sizeof(int));
        int *R = calloc(n2, sizeof(int));
    #else
        int L[n1];
        int R[n2];
    #endif

        memcpy(L, A + p, n1 * sizeof(int));
        memcpy(R, A + q + 1, n2 * sizeof(int));

        for (i = 0, j = 0, k = p; k <= r; k++) {
            if (i == n1) {
                A[k] = R[j++];
            } else if (j == n2) {
                A[k] = L[i++];
            } else if (L[i] <= R[j]) {
                A[k] = L[i++];
            } else {
                A[k] = R[j++];
            }
        }

    #ifdef MERGE_HEAP_ALLOCATION
        free(L);
        free(R);
    #endif
    }

    void merge_sort(int A[], int p, int r) {
        if (p < r) {
            int q = (p + r) / 2;
            merge_sort(A, p, q);
            merge_sort(A, q + 1, r);
            merge(A, p, q, r);
        }
    }

    void insertion_sort(int A[], int p, int r) {
        int i, j, key;
        for (j = p + 1; j <= r; j++) {
            key = A[j];
            i = j - 1;
            while (i >= p && A[i] > key) {
                A[i + 1] = A[i];
                i = i - 1;
            }
            A[i + 1] = key;
        }
    }

    void selection_sort(int A[], int p, int r) {
        int min, temp;
        for (int i = p; i < r; i++) {
            min = i;
            for (int j = i + 1; j <= r; j++)
                if (A[j] < A[min])
                    min = j;
            temp = A[i];
            A[i] = A[min];
            A[min] = temp;
        }
    }

    void mixed_sort_insertion(int A[], int p, int r) {
        if (p >= r) return;

        if (r - p < INSERTION_SORT_THRESHOLD) {
            insertion_sort(A, p, r);
        } else {
            int q = (p + r) / 2;
            mixed_sort_insertion(A, p, q);
            mixed_sort_insertion(A, q + 1, r);
            merge(A, p, q, r);
        }
    }

    void mixed_sort_selection(int A[], int p, int r) {
        if (p >= r) return;

        if (r - p < SELECTION_SORT_THRESHOLD) {
            selection_sort(A, p, r);
        } else {
            int q = (p + r) / 2;
            mixed_sort_selection(A, p, q);
            mixed_sort_selection(A, q + 1, r);
            merge(A, p, q, r);
        }
    }

Python code

    from itertools import repeat

    def insertion_sort(A, p, r):
        for j in range(p + 1, r + 1):
            key = A[j]
            i = j - 1
            while i >= p and A[i] > key:
                A[i + 1] = A[i]
                i = i - 1
            A[i + 1] = key

    def merge(A, p, q, r):
        n1 = q - p + 1
        n2 = r - q

        L = list(repeat(None, n1))
        R = list(repeat(None, n2))

        for i in range(n1):
            L[i] = A[p + i]
        for j in range(n2):
            R[j] = A[q + j + 1]

        i = 0
        j = 0
        for k in range(p, r + 1):
            if i == n1:
                A[k] = R[j]
                j += 1
            elif j == n2:
                A[k] = L[i]
                i += 1
            elif L[i] <= R[j]:
                A[k] = L[i]
                i += 1
            else:
                A[k] = R[j]
                j += 1

    def merge_sort(A, p, r):
        if p < r:
            q = (p + r) // 2
            merge_sort(A, p, q)
            merge_sort(A, q + 1, r)
            merge(A, p, q, r)

    def mixed_sort(A, p, r):
        if p >= r:
            return
        if r - p < 20:
            insertion_sort(A, p, r)
        else:
            q = (p + r) // 2
            mixed_sort(A, p, q)
            mixed_sort(A, q + 1, r)
            merge(A, p, q, r)
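For reference, here is a minimal timing harness of the kind that could have produced the Python numbers above. This is my sketch; the original post does not show its runner. It assumes the merge_sort and mixed_sort functions from the Python code above are already in scope:

```python
import random
import time

def bench(sort_fn, data):
    """Time sort_fn on a fresh copy of data and sanity-check the result."""
    arr = data[:]
    start = time.perf_counter()
    sort_fn(arr, 0, len(arr) - 1)
    elapsed = time.perf_counter() - start
    assert arr == sorted(data)   # the sort must actually be correct
    return elapsed

data = random.sample(range(1_000_000), 100_000)
print(f"merge-sort = {bench(merge_sort, data):.4f}s")
print(f"mixed-sort = {bench(mixed_sort, data):.4f}s")
```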
Steps in ANOVA (Applied Statistics)

The three steps which constitute the analysis of variance are as follows:

1. Determine an estimate of the population variance from the variance that exists among the sample means.
2. Determine an estimate of the population variance from the variance that exists within the samples.
3. Accept the null hypothesis if the above two estimates are equal; otherwise, reject the null hypothesis.

If the two estimates are equal or approximately so, their ratio is exactly or nearly 1. This ratio is referred to as the F ratio.
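In standard notation (my addition; the original page stops at the verbal description), the ratio built from steps 1 and 2 is

$$F = \frac{\text{estimate of population variance from the variation among the sample means}}{\text{estimate of population variance from the variation within the samples}},$$

and a value near 1 supports the null hypothesis.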
Security Considerations in the Use of AI/ML

The world of Artificial Intelligence (AI) seems to be exploding with the release of ChatGPT. But as soon as the chatbot came into the hands of the public, people started finding exploitable issues at worst and weird interactions at best, including ways to use it to write malware designed to evade Endpoint Detection and Response (EDR) tools.

What is AI?

Very simply, Artificial Intelligence (per Wikipedia) is intelligence demonstrated by machines. Technically, it is a set of algorithms that can do things a human does by making inferences, similar to humans, on the basis of data historically provided as "reference" for making decisions. This reference data is called training data. The data used to test the effectiveness of the algorithm at arriving at decisions on the basis of that reference is called test data. Any good machine learning course teaches how to design data sets, how much data to use for training and how much for testing, and metrics of performance, but that is not relevant to our discussion here. What is important is that it is the data you provide that controls the decision-making in an artificially intelligent algorithm. This is a key difference from typical algorithms (where the code is more or less static and makes decisions based on certain states in the program): an artificially intelligent system can arrive at different decisions depending on how one decides to "train" the algorithms.

What is ML?

Machine Learning (ML) is a subset of Artificial Intelligence (AI) in which the artificially intelligent algorithms evolve their decision-making on the basis of data that has been processed and tagged as training data. ML systems have been used for classifying spam and for anomaly detection in computer security. These systems tend to use statistical inference to establish a baseline and highlight situations where the input data does not fall within the norm. When operational data is used to train an ML-based system, one has to be careful not to incrementally alter the baselines of what is normal and what is not. Such "tilting" may happen over time, and it is important to protect against drift in such systems. Some drift is OK, but "bad drift" is not, and it is hard to predict. For example, suppose you classify some data inaccurately and accidentally (or maliciously) end up using it to train your ML models: if it inherently alters the behavior of the ML model, then the model becomes unreliable.

What is Adversarial AI?

Adversarial Artificial Intelligence (AI) threats are ones where malicious actors design the inputs to make models predict erroneously. There are a couple of different types of attack here: a poisoning attack (where you train models with bad data controlled by adversaries) and an evasion attack (where you make the artificial intelligence system make a bad inference with a security implication). The way to understand these attacks is that a poisoning attack is basically "garbage in, garbage out," but with really "special" garbage: garbage that changes the behavior of the algorithm so that it returns an incorrect result when it has to make a decision. Inferential (evasion) attacks are different in that the decision is wrong because the input is crafted to appear differently to the ML algorithm than it does to humans: e.g., Gaussian noise being classified as a human, or a fingerprint being matched incorrectly.
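To make the evasion idea concrete, here is a small illustrative sketch of my own (not from the original post): a fast-gradient-sign-style perturbation against a toy logistic-regression "classifier." The weights and input are made up stand-ins for a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)      # hypothetical trained weights
b = 0.0
x = 0.5 * np.sign(w)         # an input the model scores as clearly malicious

def predict(x):
    """Return P(malicious) under the logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# For a linear model, the gradient of the score with respect to the input is
# just w, so an attacker shifts each feature against the gradient's sign
# (the fast-gradient-sign idea) to drive the score down.
eps = 1.0
x_adv = x - eps * np.sign(w)

print(f"original score:    {predict(x):.4f}")      # close to 1
print(f"adversarial score: {predict(x_adv):.4f}")  # close to 0
```

The perturbed input is still close to the original in every feature, yet the model's verdict flips, which is exactly the "appears differently to the algorithm than to humans" failure described above.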
Can we attack these systems in other ways?

In a paper presented by Google, researchers created a tool (TensorFuzz) with which they were able to find several varieties of bugs in Deep Neural Networks (DNNs). So typical software attack techniques do work against deep neural networks too. Fuzzing has been used for decades and has been causing faults in code forever. At its core, fuzzing is simple: send garbage input that causes a failure in the program. It is just that the failures in a DNN are different, and you want to ensure that the software relying on the DNN to make a decision handles such failures appropriately, with secure defaults, so that they do not cause a security failure.

Protection mechanisms

There are a few simple ways to look at ML systems and their security. Microsoft released an excellent how-to on threat modeling ML systems. Additionally, using adversarial training data is imperative to ensure that artificially intelligent algorithms perform as you expect them to in the presence of adversarial data. When you rely on ML-based systems, it is all the more important that you test them appropriately and keep testing against baselines. Unfortunately, for Deep Neural Networks, transparency of decision-making continues to be an issue and needs AI/ML researchers to establish appropriate transparency measures.

What to do when things go wrong?

I blogged earlier about blameless post-mortems and how one gets to the point of being able to do them: by having operational rigor and observability. This is more of a lessons-learned post about what you do, and what you don't do, when things go wrong.

Focusing on the Who?

A lot of the time, focusing on "who reported the issue?" is focusing on the wrong thing. Whether a report comes from a penetration test, a bug bounty researcher, or an internal security engineering resource, you need to make sure the impact and likelihood are clearly understood. Sometimes customers (who pay, or intend to pay, for your service) report problems; those are obviously more important.

Focusing on the How?

How a security issue gets reported is important. For example, you might learn about a security issue via a bug report (1), via your own telemetry (2), or on Twitter! There is a potential for legal ramifications in each of these cases, and the risks might be different. When things become public without your knowledge, where you were not notified and the information is now public, you have a role to play in instilling confidence in your current customers. The best approach here tends to be sticking to facts without any speculation. If you are working on an incident, say so. Don't say "we are the most secure" when you are the subject of a breach discussion, especially because you already have data showing that you are not as secure as claimed. Identification and containment of the security issue are the top priorities: do not pull the resources doing that work away to make public relations look good, because doing that will eventually make public relations bad! Involve lawyers in your communications process and mark communications with the right legal tags ("attorney-client privileged material") so that if litigation happens you can clearly demarcate evidence that can or cannot be part of discovery.

Focusing on the What?

"What" needs to be done has to be made clear with the help of an incident manager.
The incident manager is the person who is the most well-read subject matter expert and who leads the response process. Having this single-threaded ownership of the incident is incredibly important. The role of the incident manager is to ensure they have all the information they need to make decisions. This also streamlines public relations, legal needs, and incident cleanup (eradication and recovery), and it helps with swift and focused decision-making. Depending on impact, this can sometimes be crisis management; otherwise it can be just another day in the security operations office. The key trait here is focus and goal-based decision-making. Adrenaline can run high and tempers can flare; that typically happens when you are unprepared to handle security incidents. The tempers and nervousness can be avoided by being proactive: doing tabletop exercises and incident dry-runs, and having good runbooks. But all practice games do is prepare you for the real thing, and the real thing is how you handle a true incident. Use the help of key stakeholders to arrive at the best decisions. There are often situations where no answer looks good, and that is where customer focus comes in: if you focus on the well-being of customers, you will rarely go wrong.

Focusing on the Why?

Capture incident response logs in tickets and communications so that the timeline and actions get captured properly, with documentation. After recovery is completed, do a blameless post-mortem of how you got there. Put the agreed-upon corrective actions on an agreed timeline and don't waver; this is part of the operational rigor one needs in order to really prevent incidents from happening in the future. Typically, issues happen because something was not prioritized as it should have been. Reprioritize so that you can reassess. Sometimes the size of the incident makes your reprioritization feel almost coerced. It is OK to be coerced in that direction: you will find that the coercion is simply an acceleration of actions you should have taken earlier. No one is perfect; just come out of it better!

Focusing on the Where?

Where you discuss the issue is important. When sizable incidents happen, discuss them openly with the business leaders so that full awareness and feedback are provided in "powerful forums." This obviously does not mean that you break your attorney-client privilege; it just means discussing with the highest leaders in a manner where action items, impact, and post-mortem results are provided. This enables the business to become resilient over time and to develop confidence in the security teams. If you need to do public releases, then ensure that lawyers, security SMEs, and business leaders all read them before release. Never let a "left hand doesn't know what the right hand is doing" situation occur. This instills customer confidence in your process.

Conclusion

This was just an attempt to pen down my thoughts as they appeared in my brain. I am sure I forgot a lot, such as the stress of handling incidents, avoiding knee-jerk reactions, and so on, but these are the most important things that I felt were necessary to share. Remember, incident handling gets better with practice; you want the practice to be done in practice games, not in the Olympics! :)

Nessus: Migrating Users to a New Install

I had to wipe my existing OS and reinstall Nessus on the new BT5R3 image. However, I still wanted all my previous scan data and users to be unaffected on the new OS.
So how did I do that? Here’s how: Take a backup and restore the following folders on the new install: 1. Users Folder (/opt/nessus/var/nessus/users) 2. Master.key (/opt/nessus/var/nessus/master.key) 3. Policies.db (/opt/nessus/var/nessus/policies.db) If you do get an error after this follow these steps to get rid of errors and just reactivate the nessus feed as follows: 1. service nessusd stop 2. /opt/nessus/sbin/nessus-fix –reset 3. /opt/nessus/bin/nessus-fetch –register [activation code] 4. /opt/nessus/sbin/nessusd -R 5. service nessusd start 0 John Jay College of Criminal Justice - I will be speaking in Prof. Sengupta’s class at John Jay College of Criminal Justice at the City University of New York on Oct 28, 2010.  The topic of discussion will where does Digital Forensics fit in the big picture of organizations.  The talk will introduce the students to a variety of topics including choosing a career as a digital forensics investigator, their duties as an investigator, being successful as an investigator, case studies and real-life problems faced by the computer forensic investigators. 0 The Next Hope - This was my first hope conference (The Next HOPE Conference)despite being in New York City for more than half a decade. Always it seemed that work would send me out of town just before the con. However, this time around I had the good fortune of being in the city during the conference. There were a few good talks some of which were not so technical but kindled the questions for privacy fanatics. The talks I attended included Alessio Pennasilico’s talk about DDoS attack on Bakeca.it, Modern Crimeware and Tools talk by Alexander Heid, Steven Rambam’s talk on Privacy is Dead, Blaze Mouse Cheswick et. al’s talk which was abstract but awesome. I did attend a few more talks and it was fun. All in all a great conference.
1. Topic of Lesson Plan: Partial Sums

2. Lesson Content: Partial sums, place value, expanded form

3. Goals and Expectations:
1. Students will be able to identify the value of a digit in a given number.
2. Students will be able to add two numbers using the partial sums technique.

4. Objectives:
1. Students will write numbers out in expanded form to demonstrate an understanding of the value of each digit in a given number.
2. Students will use this knowledge to add numbers that have already been broken up into expanded form.

5. Materials and Aids: Notebooks, pencils

6. Methods Used and Procedures:

A. Introduction
1. Suppose you were out shopping with no paper, pen, or calculator and needed to add two 2-digit numbers together. How would you do it?
2. Explain to students that it would be easier to do mentally using a technique called partial sums.

B. Development
1. Review expanded form with students.
2. Provide a 2-digit addition problem for students and ask them to break each number into expanded form, then add the expanded-form numbers (a worked example follows this plan).
3. Point out to students how much easier it is to add numbers that are broken up into expanded form. This is called partial sums.

C. Practice
1. Provide examples of other 2-digit addition problems.
2. Ask students to come up and place each number in expanded form, then add them.
3. Prompt students to think about how they might try this activity mentally.

D. Independent Practice
1. Provide examples for students.
2. Allow students to work with partners and take turns solving problems using partial sums.

E. Accommodations for Students
1. Remind students of the importance of writing each number out in expanded form.
2. Remind students to count out and identify the place value of each digit while writing numbers in expanded form.

F. Checking for Understanding
1. Write problems on the board and allow students an opportunity to answer them mentally using partial sums.

G. Closure
1. Remind students how helpful this strategy can be in real-life situations, as well as in helping them work faster and more efficiently on exams.

7. Evaluation:
1. Content covered on test and quiz

Lesson Plans from (www.AGradeMath.com)
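Worked example for step B (my addition, for concreteness): to add 47 + 35, write each number in expanded form: 47 = 40 + 7 and 35 = 30 + 5. Add the tens: 40 + 30 = 70. Add the ones: 7 + 5 = 12. Then combine the partial sums: 70 + 12 = 82.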
Let $\mathcal{M}^0 = \mathcal{M}^0 ( G; I , \Lambda ;P)$ be a Rees matrix semigroup ($G$ a group, $I$, $\Lambda$ non-empty sets, $P=(p_{\lambda i})$ a $\Lambda \times I$ matrix over $G \cup \{0\}$ such that every row/column of $P$ contains at least one non-zero entry), where $P$ does not contain any $0$'s. Show that the idempotents of $\mathcal{M}^0$ form a subsemigroup of $S \iff \forall \; i,j\in I$ and $\lambda , \mu \in \Lambda$ we have $p_{\mu i} p_{\lambda i}^{-1} p_{\lambda j} p_{\mu j}^{-1} = 1_G$.

Comment (Boris Novikov): First write the general form of an idempotent of a Rees matrix semigroup.

Answer:

I suppose that $S = \mathcal{M}^0$ in your question. Further, if you don't mind, I will use the notation $J$ instead of $\Lambda$ and use only Latin letters for the indexes, just to get a slightly more homogeneous notation. Thus you have a Rees matrix semigroup $\mathcal{M}(G, I, J, P)$ (the zero is no longer needed in your case).

Let $s = (i, g, j)$ be an element of $S$. Then $s^2 = (i, gp_{j,i}g, j)$, and hence $s$ is idempotent if and only if $g = p_{j,i}^{-1}$.

Let now $e = (i, p_{j,i}^{-1}, j)$ and $f = (\ell, p_{m,\ell}^{-1}, m)$ be two idempotents. Then $ef = (i, p_{j,i}^{-1}p_{j,\ell} p_{m,\ell}^{-1}, m)$ is idempotent if and only if $p_{j,i}^{-1}p_{j,\ell} p_{m,\ell}^{-1} = p_{m,i}^{-1}$, or equivalently $p_{m,i}p_{j,i}^{-1}p_{j,\ell} p_{m,\ell}^{-1} = 1$, which is exactly your condition.

Comment (Jyrki Lahtonen): IMHO your deleted answer with an example of a monoid having a lot of elements with one-sided inverses has salvageable material. Please consider editing and undeleting it. AFAICT the "pure" powers of a single generator have one-sided inverses.
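As a quick sanity check of the equivalence (my addition, not part of the thread), the claim can be brute-forced for a small concrete case in Python, taking $G = \mathbb{Z}_4$ written additively (so $g^{-1}$ becomes $-g \bmod 4$) and $I = J = \{0, 1\}$:

```python
from itertools import product

n = 4            # G = Z_4, written additively
IDX = (0, 1)     # index sets I and J
G = range(n)

def mul(s, t, P):
    # Rees matrix product: (i, g, j)(k, h, m) = (i, g + p_{j,k} + h, m)
    (i, g, j), (k, h, m) = s, t
    return (i, (g + P[j][k] + h) % n, m)

for entries in product(G, repeat=4):
    P = (entries[0:2], entries[2:4])      # sandwich matrix, P[j][i] = p_{j,i}
    elems = [(i, g, j) for i in IDX for g in G for j in IDX]
    idem = {s for s in elems if mul(s, s, P) == s}
    closed = all(mul(e, f, P) in idem for e in idem for f in idem)
    # additive form of p_{m,i} p_{j,i}^{-1} p_{j,l} p_{m,l}^{-1} = 1:
    cond = all((P[m][i] - P[j][i] + P[j][l] - P[m][l]) % n == 0
               for i in IDX for l in IDX for j in IDX for m in IDX)
    assert closed == cond                 # the equivalence proved above

print("verified for all 2x2 sandwich matrices over Z_4")
```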
How to Flush DNS Cache on a Mac

How to clear the DNS cache on your Mac.

What to Know
• Type Terminal into Spotlight, or navigate to Go > Utilities > Terminal.
• In the Terminal window, enter the command: sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder

This article explains how to flush the DNS cache on a Mac.

How Do I Reset My DNS on a Mac?

If you're experiencing connectivity issues, you may be able to fix them by resetting the local record of domain name server (DNS) information stored on your Mac. This information may be outdated or corrupt, preventing websites from loading and slowing down your connection. To reset the DNS cache on a Mac, you need to enter a Terminal command. Here's how to flush your DNS cache on a Mac:

1. Type Command+Space to open Spotlight.
2. Type Terminal, and select Terminal from the search results. (You can also access Terminal by navigating to Go > Utilities > Terminal.)
3. Enter this command into the Terminal window, then press Enter: sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder (This command only works in macOS El Capitan and newer. If you have an older version of macOS, check the next section for the correct command.)
4. Type your password, and press Enter again. (The password will not appear in Terminal as you type it. Just type the password and press Enter.)
5. Your DNS cache will be reset, but there will be no message to that effect in the Terminal. When a new line appears, it indicates the command has been carried out.

How to Flush DNS in Older Versions of macOS

Older versions of macOS use different Terminal commands to flush the DNS. However, you start by opening a Terminal window regardless of which macOS version you're using. Here are the commands to flush DNS in each version of macOS:

• El Capitan and newer: sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
• Yosemite: sudo killall -HUP mDNSResponder
• Lion, Mountain Lion, and Mavericks: sudo dscacheutil -flushcache
• Snow Leopard: sudo lookupd -flushcache
• Tiger: lookupd -flushcache

What Does Flushing a DNS Do?

Whenever you try to access a website over the internet, you connect to a DNS server, which tells your web browser where to go. The DNS server maintains a directory of websites and IP addresses, which allows it to look at the website address, find the corresponding IP, and provide it to your web browser. That information is then stored on your Mac in a DNS cache. When you try to access a website you've been to recently, your Mac uses its DNS cache instead of checking with an actual DNS server. That saves time, so the website loads faster: the web browser doesn't have to go through the extra step of communicating with a remote DNS server, which means less time between entering a website address and the website loading.

If the local DNS cache is corrupt or outdated, it's kind of like trying to use an old phone book, or an address book someone has vandalized. Your web browser checks the cache to find an IP address for the website you're trying to visit, and it finds either the wrong address or an unusable address. That can slow the process down or prevent websites, or specific website elements like videos, from loading. When you flush your DNS cache, you instruct your Mac to delete its local DNS records.
That forces your web browser to check with an actual DNS server the next time you try to access a website. You should always flush your DNS cache after changing the DNS servers on your Mac. It can also be helpful if you're having connectivity problems.

FAQ

• How do I check the DNS cache on a Mac? Open the built-in Console log-viewer app on your Mac and type any:mdnsresponder into the search bar. Then, launch Terminal, type in sudo killall -INFO mDNSResponder, and press Enter or Return. Back in the Console app, you can view a list of cached DNS records.
• How do I clear the DNS cache on Windows 10? To clear the DNS cache on Windows 10, open the Run dialog box, type in ipconfig /flushdns, and click OK. You can also use the same command in the Windows command prompt if you want more information on the process.
• What is DNS cache poisoning? DNS cache poisoning, also known as DNS spoofing, is when someone deliberately enters false or incorrect information into a DNS cache. After the false information is input, future DNS queries will return incorrect responses and direct users to the wrong websites.
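If you flush the cache often, you can wrap the commands in a small script. Here is a rough Python sketch (my addition, not from the article); the command strings are exactly the ones listed above, and running it will still prompt for your sudo password:

```python
import subprocess

# Flush commands from the list above, keyed by macOS era.
FLUSH_COMMANDS = {
    "el_capitan_and_newer": "sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder",
    "yosemite": "sudo killall -HUP mDNSResponder",
    "lion_to_mavericks": "sudo dscacheutil -flushcache",
}

def flush_dns(era: str = "el_capitan_and_newer") -> None:
    """Run the flush command matching the given macOS era."""
    subprocess.run(FLUSH_COMMANDS[era], shell=True, check=True)

if __name__ == "__main__":
    flush_dns()
```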
Second Order Nonhomogeneous Linear Differential Equations (Physics Forums thread, Jan 2004)

Post #1: Hello, I am having trouble understanding how to solve second order nonhomogeneous linear differential equations. I know how to solve second order homogeneous linear differential equations, but I am not following, in the lecture and in the text, the method of variation of parameters for solving second order nonhomogeneous linear differential equations. For reference, I am using the text Calculus, 4th edition by James Stewart; the chapter is 18 and the section is 2 (page 1170, for anyone familiar with this text). Any help with this method would be appreciated. Thank you.

Post #2: Give us some doubts/problems you are facing and we will explain via an example.

Post #3: Right. Here is an example from the text: Solve the equation $y'' + y = \tan x$, $0 < x < \pi/2$.

Now I know how to find the auxiliary equation $r^2 + 1 = 0$, with roots $+i$ and $-i$. So the solution of $y'' + y = 0$ is $c_1 \sin x + c_2 \cos x$. This is my general solution for the homogeneous equation.

I know that the general solution of a nonhomogeneous linear differential equation of second order is the sum of a particular solution $y_p$ of the nonhomogeneous equation and the general solution of the homogeneous (complementary) equation, $y_c$. So the next step is to find a particular solution of the nonhomogeneous equation. Using variation of parameters, I seek a solution of the form $y_p(x) = u_1(x)\sin x + u_2(x)\cos x$.

I then take the derivative of $y_p$ to get
$$y_p' = (u_1' \sin x + u_2' \cos x) + (u_1 \cos x - u_2 \sin x).$$
I then set $u_1' \sin x + u_2' \cos x = 0$ (I am not entirely clear why I do that), then take the derivative of $y_p'$ to get the second derivative:
$$y_p'' = u_1' \cos x - u_2' \sin x - u_1 \sin x - u_2 \cos x.$$
For $y_p$ to be a solution we must have (substituting $y_p$ and $y_p''$ into the equation to be solved)
$$y_p'' + y_p = u_1' \cos x - u_2' \sin x = \tan x.$$

From here I am not sure what to do. The text says to solve the two equations $u_1' \sin x + u_2' \cos x = 0$ and $u_1' \cos x - u_2' \sin x = \tan x$, but I am not sure how to do that, nor why I am doing it. I am having trouble following the text because it skips a few "elementary steps" (which would of course make things clearer for me), and the theory before it is brief and not entirely clear.

Would this be sufficient for now? I can elaborate some more. Detailed explanations would be appreciated; no elementary steps should be assumed or skipped, since sometimes it is the elementary steps that cause me trouble. Any help is appreciated. Thank you.

Post #4: It's been ten years since I've done this, but I'll give it a shot. First of all, I think you are free to set $u_1' \sin x + u_2' \cos x$ equal to anything, as it is an assumption, and a useful assumption that leads to solutions. I'm not sure about the deeper reasons behind this assumption yet; it probably has to do with linear algebra and "linear independence," so you can't know much more than "it works" yet. But it gives you two equations in two unknowns, $u_1$ and $u_2$: a system of first order differential equations, a reduction from one second order equation.
Though now there are two equations. Let's look at it this way (and I'm going to assume you did all the derivatives right, because I didn't check them):
$$u_1' \sin x + u_2' \cos x = 0$$
$$u_1' \cos x - u_2' \sin x = \tan x.$$
It's the same idea as a system of linear equations: pick an equation, solve for a variable, then substitute back. Our approach will be the same, except that we'll solve for $u_1'$ and $u_2'$ first and then integrate. The final step is to say that $y_p = u_1 \sin x + u_2 \cos x$ is a particular solution.

The first equation lets you solve for $u_1'$ in terms of $u_2'$: from $u_1' \sin x = -u_2' \cos x$, divide by $\sin x$ to get $u_1' = -u_2' \cot x$ (where $\cot = \cos/\sin$). Now substitute into the equation $u_1' \cos x - u_2' \sin x = \tan x$ to get:
$$(-u_2' \cot x) \cos x - u_2' \sin x = \tan x.$$
Now what do we have to do? Solve for $u_2'$. Let's get rid of the minus signs and factor out $u_2'$:
$$u_2' (\cot x \cos x + \sin x) = -\tan x,$$
so
$$u_2' = \frac{-\tan x}{\cot x \cos x + \sin x}.$$
You may have the feeling that we should simplify this. One thing that often works for trig is changing everything to sines and cosines:
$$u_2' = \frac{-\sin x/\cos x}{(\cos x/\sin x)\cos x + \sin x}.$$
There are a number of ways you can handle this, but what smells best to me is to multiply by $\sin x/\sin x$:
$$u_2' = \frac{-\sin^2 x/\cos x}{\cos^2 x + \sin^2 x}.$$
Now $\cos^2 x + \sin^2 x = 1$ by the Pythagorean identity, so we have
$$u_2' = -\frac{\sin^2 x}{\cos x}.$$
Remember that $u_1' = -u_2' \cot x$, so $u_1' = -\left(-\sin^2 x/\cos x\right)\cot x$; since $\cot = \cos/\sin$, we get $u_1' = \sin x$.

At this point, we must solve for $u_1$ and $u_2$ by integrating. $u_1 = -\cos x$, but $u_2$ is harder; I'm going to let you look that one up (you may want to convert the integrand to $-\sin x \tan x$). Assuming you get an integral for $u_2$, let's just call it $u_2$ and use $u_1 = -\cos x$. Then
$$y_p = -\cos x \sin x + u_2 \cos x$$
is a particular solution, and the general solution is $y = y_p + c_1 \sin x + c_2 \cos x$. Since that integral is not nice, I suspect a calculation is wrong somewhere in what you did; but if it's all correct, this is the solution I get with Mathematica's DSolve command, which took about 0.01 seconds.

Let $F(y) = y'' + y$. The ultimate goal is to find a $y$ such that $F(y) = \tan x$. But look at our solution, which has the form $y_p + c_1 \sin x + c_2 \cos x$. $F$ distributes over addition, so $F(y_p + c_1 \sin x + c_2 \cos x) = F(y_p) + F(c_1 \sin x + c_2 \cos x) = \tan x + 0$. This generalizes to whenever $F$ distributes over addition: find a particular solution and add it to the homogeneous solution "space." That looks like a coset to me, which means there's some kind of modular structure involved, unless I'm off my rocker.

Post #5 (HallsofIvy): Remember that there are an infinite number of solutions to the entire equation and you are seeking only one of them. You are free to put restrictions on your "search" as you please. That's why you can do it. You want to do it because it removes all first derivatives from the equation, so when you differentiate again to get the second derivative, your formula contains only first derivatives. Because you cleverly chose to use solutions of the homogeneous equation as coefficients, when you substitute into the original equation, all terms that do not involve the derivative of $u_1$ or $u_2$ give 0 (think about that: not differentiating them, they might as well be constants, and constants times solutions of the homogeneous equation give 0). The point is that you now have two equations,
$$u_1' \sin x + u_2' \cos x = 0 \quad\text{and}\quad u_1' \cos x - u_2' \sin x = \tan x,$$
to solve for $u_1'$ and $u_2'$. The "why" should be obvious: if you know them, you will be able to substitute into your formula for the solution of the equation!
The "how" is also (relatively) easy. Although these are derivatives, since the equations involve only first derivatives you can treat them as two linear algebraic equations, and solve them exactly the way you would any pair of linear equations. For example, multiply the first equation by $\sin x$ and the second equation by $\cos x$ and add: the $u_2'$ terms cancel out, leaving $u_1' = \sin x$. Now integrate: $u_1(x) = -\cos x$. That was easy! If you multiply the first equation by $\cos x$ and the second by $\sin x$ and subtract, the $u_1'$ terms cancel out, leaving $u_2' = -\tan x \sin x = -\sin^2 x/\cos x$. That's a little harder to integrate, but still possible. Once you have found $u_1$ and $u_2$ (since you are seeking only a single solution, you can ignore the constants of integration), put them back into $y(x) = u_1 \sin x + u_2 \cos x$ to get the specific solution you need to add to the solution of the homogeneous equation.

Post #6: Many thanks go out to pheonixtoth and HallsofIvy for their replies. Thanks to pheonixtoth for showing the step-by-step procedure. I tried to get fancy and solve the system of two equations using matrix calculations and got into trouble. I then tried to solve it the conventional way, but I was still making tiny mistakes which prevented me from solving for $u_1$ and $u_2$. As a result I couldn't follow the example in the text, since I wasn't performing the correct steps. The step-by-step procedure allowed me to see where I was making my mistake.

Thanks go out to HallsofIvy for explaining the "whys" of each step. I followed the example in the text once again while carefully reading the explanations, and was able to follow the text more closely this time. Some of the explanations cleared up questions that I had but could not articulate, while other explanations allowed me to catch points in the text that I kept missing. HallsofIvy's explanations, coupled with pheonixtoth's calculations, enabled me to see why and how things were being done in the text.

Sometimes it takes me time to get back to my post. This is because I have other homework to do and other classes to attend. But I always appreciate the efforts of others here in the Physics Forums who take the time to explain things to me, however simple they might be. Cheers, pheonixtoth and HallsofIvy. You both saved me a lot of frustration.
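Editorial addendum (not part of the original thread): the whole computation can be checked symbolically with SymPy, which is a quick way to confirm both the general solution and the particular solution built above:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Let SymPy solve y'' + y = tan(x) directly.
print(sp.dsolve(sp.Eq(y(x).diff(x, 2) + y(x), sp.tan(x)), y(x)))

# Rebuild the thread's particular solution: u1' = sin(x), u2' = -sin(x)*tan(x).
u1 = sp.integrate(sp.sin(x), x)                 # -cos(x)
u2 = sp.integrate(-sp.sin(x) * sp.tan(x), x)    # the "not nice" integral
y_p = u1 * sp.sin(x) + u2 * sp.cos(x)

# The residual y_p'' + y_p - tan(x) should simplify to 0.
print(sp.simplify(y_p.diff(x, 2) + y_p - sp.tan(x)))
```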
Posts filed under: Q and A

Note that the SAT doesn't test properties of even and odd like this, although the old SAT (pre-2016) used to. The product of two numbers will be even if and only if one or both of the numbers being multiplied is even. Therefore, you know any card showing an odd number must have an even number on the other side.

A semicircle is exactly half a circle, so take the formula for circumference (C = 2πr) and divide by 2. Since r = 2, you end up with (2π(2))/2 = 2π. That covers the curved part. You know the straight part has a length of 4 because the radius of the semicircle is 2. 4 + 2π is your answer.

d = 30 + 2(40 – s)

The machine begins the day with $30 inside, so that's the "30 +" part. Easy enough. The variable s is defined as how many sodas the machine has in it, but what we really care about is how many sodas are sold. We know the machine begins the day with 40, so 40 – s should give us the number of sodas sold. (When s = 40, no sodas have been sold; when s = 35, 5 sodas have been sold…) For each soda that's sold, the machine should have $2 more, so that's why "2(40 – s)" is in there.

Trigonometry does the trick here. Picture the line making a 42° angle with the positive x-axis, with a dotted vertical segment dropped to make a neat little right triangle. Remember that slope is rise over run: how high the line climbs divided by how far it travels right. In this case, the dotted segment labeled a is the rise and the bottom of the triangle labeled b is the run. And luckily for us, the tangent function calculates that a/b ratio! Remember your SOH-CAH-TOA: Tangent = Opposite/Adjacent. Just use your calculator to evaluate tan 42°. You'll get 0.90.

You can make two equations here. First, you know the total number of marbles is 103, so:

r + b = 103

The second equation is more complicated, so let's do it in parts. First, he gives away 15 red marbles, so he should have r – 15 left. He gives away 2/5 of his blue marbles, so he should have b – (2/5)b = (3/5)b left. So the ratio of red marbles he has left to blue marbles he has left (which the question tells us is 3/7) should be:

(r – 15)/((3/5)b) = 3/7

The question asks how many blue marbles he had originally, so let's substitute and solve for b. First get r by itself in the first equation:

r = 103 – b

Now substitute that into the second equation and solve:

(103 – b – 15)/((3/5)b) = 3/7
7(88 – b) = (9/5)b
616 – 7b = (9/5)b
616 = (44/5)b
b = 70

This question comes from my own book, so my tips on how to deal with these can be found in the same chapter. The main key to getting it right is making sure you translate the words into math correctly. Note that although the question tells you that Tariq makes brownies and Penelope makes cookies, in the end it only asks about "treats," so we can lump cookies and brownies together. Tariq makes 30 treats per hour and Penelope makes 48 treats per hour. Together, then, they make 78 treats per hour. We know they both worked for the same number of hours.

The other key to getting this right is keeping track of the units of the numbers you know. In this case, we have treats and hours for units. We know the total number of treats, and we know the rate of treats per hour. We want the number of hours. How do we set up the equation we need to solve? We need to divide the total number of treats, 312, by the number of treats they made per hour, 78: 312 ÷ 78 = 4 hours.

One gallon of honey weighs approximately 12 lbs.
If one gallon of honey is mixed with 5 gallons of water to make tea, how many ounces of honey will be in each 8-fluid-ounce cup of tea? Choices are 1, 2, 3, or 4. (Answer: 2.) Given: 16 oz = 1 lb, 128 fl oz = 1 gallon.

Wow, that's going to be some really sweet honey-water tea. You're making 6 gallons of "tea" (1 gallon honey + 5 gallons water = 6 gallons). Converting that to fl oz, you have, in total, (128 fl oz/gallon)(6 gallons) = 768 fl oz. Therefore, a cup with 8 fl oz is 8/768 = 1/96 of the mixture. By weight in oz, the 12 lbs of honey you put in the mixture is (16 oz/lb)(12 lbs) = 192 oz. How many oz of honey will be in 1/96 of the mixture if 192 oz of honey are in the whole mixture? (192 oz)(1/96) = 2 oz.

PWN the SAT Parabolas drill explanation, p. 325 #10: The final way to solve: if we are seeking x = y, since the point is (a, a), why can you set f(x) = 0? You start out with the original equation in vertex form, making y = a and x = a, but halfway through you change to y = 0 (while x is still = a). How can we be solving the equation when we no longer have a for both x and y?

For everyone else's context, here's the problem: Now, be careful! I am not changing to y = 0 in that algebraic solution in the back of the book; I am subtracting a from both sides! Note how the a term on the right changes from –22a to –23a.

Draw this out. Start with the two points you're given. Now remember that the shape is a rectangle, and that you're told that point B is on the x-axis. The only way that happens is if B is at (5, 0). Point D, by the same logic, must be at (–3, 2). Now draw the rectangle and measure the lengths. The long sides have length 8, and the short sides have length 2. Therefore, the perimeter is 8 + 2 + 8 + 2 = 20.

Start with the second equation, which tells you that t = 4. If t = 4, then you can rewrite the first equation as follows (and solve):

4u – u = 18
3u = 18
u = 6

Pilar is a salesperson at a car company. Each car costs at least $15,000. For each car she sells, she gets a 6% commission on the amount by which the selling price exceeds $10,000. If Pilar sells a car at d dollars, which function gives her commission in dollars on the sale?

A) C(d) = 0.06(d – 10000)
B) C(d) = 0.06(d – 15000)
C) C(d) = 0.06(10000 – d)

Plugging in might help you think about this in a more concrete way. From what the question says, if Pilar sells a car for $18,000, for example, then we'd expect her to earn commission on $8,000 (the amount of the car's price above $10,000). A 6% commission on $8,000 is 0.06 × $8,000 = $480. Which of the answer choices, when you plug in $18,000 for d, gives you $480? Choice A is the only one that works.

The other way to think through this is to notice that all the choices have the same 0.06 in the beginning, so the 6% part of the problem is taken care of. Our job is to figure out which of the choices has the right thing in the parentheses. Which of those things will provide the amount that d, the selling price, exceeds $10,000? Translating the words into math, "the amount d exceeds $10,000" can be written as: d – 10,000.

Researchers in Australia ran an experiment to determine whether the color of a coffee mug affects how people rate the flavor intensity of the coffee. Volunteers were randomly assigned to taste coffee in mugs: some white and some clear. Since the same type of coffee was used in both, the researchers concluded that the flavor-intensity rating was significantly higher for those who drank coffee from a clear mug.
What can be concluded?

A) Color caused the difference, and this can be generalized to all coffee drinkers.
B) Same as A, but it cannot be generalized to all coffee drinkers.

Volunteers are not a random sample, so the results cannot be generalized to all coffee drinkers. There may be something different about people who would volunteer for a coffee-drinking study. For example, people who would volunteer for such a study might be more likely to drink a lot of coffee, and thus consider themselves able to discern subtle differences in taste.

Think of it this way: the g function is doing SOME AS-YET-UNKNOWN THINGS to (–x + 7) to turn it into (2x + 1). Of the simple mathematical operations probably at play here (addition, subtraction, multiplication, division), what could be going on? First, the only way you go from –x to 2x is to multiply by –2. So let's see what happens if we just multiply f(x) by –2.

–2(–x + 7) = 2x – 14

OK, so the first part's good now, but how can we turn –14 into +1? Well, we don't want to multiply or divide again because that would screw up the 2x we just nailed down, so why don't we try adding 15?

2x – 14 + 15 = 2x + 1

Combine the two operations we just did (multiply by –2, add 15) and you have the g function. The function g will multiply its argument by –2, then add 15. Mathematically, we can write that like this: g(x) = –2x + 15.

Now, start from the top and make sure we're right.

g(f(x)) = g(–x + 7)    <– substitute (–x + 7) for f(x)
= –2(–x + 7) + 15      <– apply the g function to (–x + 7)
= 2x – 14 + 15
= 2x + 1

It works! Now all we need to do is calculate g(2): g(2) = –2(2) + 15 = 11.

I have a question on practice question 7 in "Circles, Radians, and a Little More Trigonometry." I solved it a different way, but I'm not sure if I was just lucky to get the correct answer. Basically, I figured that, because one radian is when the arc and radius are the same length, radians are like proportions. So if arc RQ were equal to 6, it would be 6/6, or one radian. So then I divided π by 6 and concluded that's how many radians it was. Does that actually work? Or was I just lucky?

Yes, that 100% works. Nice thinking!

Practice test 8, Calculator #13. First, you can plug in on this one, so if you feel rusty on your exponent rules at all, that's a good move, especially on the calculator section. Say, for example, that you plug in 4 for a. Just enter it all into your calculator (you may need to be careful with parentheses in the exponent depending on the kind of calculator you have):

\begin{align*}4^{-\frac{1}{2}}&=x\\0.5&=x\end{align*}

Now that you know x, plug 0.5 into each answer choice to see which one gives you 4.

A) \sqrt{0.5}\approx 0.707
B) -\sqrt{0.5}\approx -0.707
C) \frac{1}{0.5^2}=4
D) -\frac{1}{0.5^2}=-4

Obviously, C must be the answer.

To solve this algebraically, first start by squaring both sides. Raising a power to a power is the same as multiplying the powers, so that'll get rid of the 1/2 on the left:

\begin{align*}\left(a^{-\frac{1}{2}}\right)^2&=x^2\\a^{-1}&=x^2\end{align*}

Now raise both sides to the –1 power to get a truly alone. Remember that a negative exponent is the same as 1 over the positive exponent, so you can transform the right-hand side from x^{-2} to \frac{1}{x^2} to finish the problem.

\begin{align*}\left(a^{-1}\right)^{-1}&=\left(x^2\right)^{-1}\\a&=x^{-2}\\a&=\frac{1}{x^2}\end{align*}
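As a quick symbolic check of that last bit of algebra (my addition, not part of the original post), SymPy reaches the same answer:

```python
import sympy as sp

a, x = sp.symbols('a x', positive=True)

# Solve a**(-1/2) = x for a; we expect a = 1/x**2.
print(sp.solve(sp.Eq(a**sp.Rational(-1, 2), x), a))   # [x**(-2)]
```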
MultiLanguage Applications, 12 Oct 2003

Describes how to make your applications support multiple languages without rewriting code.

Introduction

In an Internet-linked world, it is not too hard for our applications to travel, maybe to Madagascar or somewhere equally far away. Besides, as time passes, software teams are becoming as heterogeneous as our potential clients. In this article I briefly describe the basics of handling globalization in our applications. Let's understand globalization as allowing an application to behave correctly under any culture defined in the operating system. In general, as in Microsoft(R) Windows(R), we can set some of these aspects by choosing a culture, using the Language option in the Control Panel. This culture determines the way numbers are displayed on screen, the date format, the format for negative numbers, the language of some pop-up messages, etc.

In general, culture identification has been standardized in such a way that a single string establishes each culture unambiguously. This string is formed by two components separated by a hyphen. The first of them identifies the base language for a given culture and the second one specifies the possible culture variations derived from applying that language in a given region or country. Thus, some valid culture name examples are:

• "es-CO" - For the culture determined by Spanish in Colombia
• "fr-FR" - For the culture determined by French in France
• "fr-CA" - For the culture determined by French in Canada
• "es" - For the culture determined by Spanish in general

Three important details:

1. Some culture names are described by only one component... this is no more than a generalization of a given name.
2. As you can see, the name of the region or country is always written in CAPS and is composed (in general) of the first two letters of the region or country.
3. When there is no explicit specification of a culture inside an application (a .NET application), the default culture for it will be "" (the empty string, the invariant culture), which behaves essentially like "en".

Background

On the .NET Framework there are two special objects which allow us to globalize our applications:

1. ResourceManager (System.Resources.ResourceManager)
2. CultureInfo (System.Globalization.CultureInfo)

The main idea is that with the CultureInfo object we get access to all the information and methods related to cultures in a system, and with the ResourceManager we get access to the resources related to a specific culture. How is this? For instance, if we are going to make our application available in English as well as in Spanish, it doesn't mean that we have to write two different code bases. The idea is to have two resource files (one for each language, containing the equivalent strings).

Using the code

Let's get into the code by creating a globalized HelloWorld which can also divide two numbers (this is in order to see how multi-language exception handling would work). So, we want our program to detect the actual machine configuration (especially the culture settings) and allow the language to be changed at run time. This is: if Windows(r) is configured as Spanish, the program will start in Spanish; nevertheless, at any time we could change the language to English.
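To make the naming and fallback rules above concrete, here is a minimal C# sketch (mine, not from the article's sample project) that builds a few CultureInfo objects and prints how each one resolves:

using System;
using System.Globalization;

class CultureNamesDemo
{
    static void Main()
    {
        // Specific cultures combine a language with a region; a bare
        // language code ("es") is the neutral parent culture.
        foreach (string name in new[] { "es-CO", "fr-FR", "fr-CA", "es" })
        {
            CultureInfo ci = new CultureInfo(name);
            Console.WriteLine("{0} -> {1} (parent: '{2}')",
                ci.Name, ci.EnglishName, ci.Parent.Name);
        }

        // The invariant culture ("") is the ultimate fallback.
        Console.WriteLine("Invariant culture name: '{0}'",
            CultureInfo.InvariantCulture.Name);
    }
}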
OK, now we start by creating a Windows Application project and adding two textboxes and two command buttons to the main form. The textboxes are where the user types the two numbers for the program to divide (in order to generate a multi-language exception when we try to divide by zero). The first button brings up the Hello World message box and the second one performs the division of the numbers.

Now, we're going to add the two resource files we'll use: one for the culture "es-CO" and the other one for the default culture, English. The default resource file will be called myRes.resx. The file for the "es-CO" culture MUST be called like this: <default file name>.<culture>.resx. In this case: myRes.es-CO.resx. In this way, if we were to include a resource file for "fr-CA", we would have to add the myRes.fr-CA.resx file.

Once the files have been added to the project, the relevant columns are name and value. For each string we need in our program, we have to add a dictionary pair name-value. For instance, we want the buttons' text properties to change, as well as the error messages, etc. The important thing is that the same resource names have to be defined in all the files: the same names (though different values) are included in both tables... the values for comment, type and mimetype aren't relevant for this application.

Now, we can add a culture field to our form, in order to handle it easily; but first, let's include the appropriate namespaces to work comfortably:

using System.Resources;
using System.Globalization;

After this, we can add the field I just mentioned, just after the declaration of the visual controls (buttons, textboxes, etc.):

private CultureInfo culture;

In order to get the initial machine culture configuration just at the beginning of the application, we use the following statement when the form is created:

culture = CultureInfo.CurrentCulture;

You might want to change the language of the application at runtime... so let's add some more controls. In this way, when a radio button is checked, the culture must be changed. For example, if the Spanish flag is checked, the code would be like this:

culture = CultureInfo.CreateSpecificCulture("es-CO");

Now... how to change the captions and messages?

ResourceManager rm = new ResourceManager("HelloWorldGlobed.myRes", typeof(Form1).Assembly);
string btnHello = rm.GetString("btnHello", culture);

We create a ResourceManager object whose constructor receives in this example two parameters: the base name for the resource file (i.e. the namespace of the project containing the resource file plus the root of the file name), and the assembly from which the resource is called; it is easy to identify it by using the reserved word typeof and specifying the name of the form and its Assembly property. Then, we can use the rm object to get the required string by telling it the name of the string resource (as we named it in the table), and the culture for which the ResourceManager has to look (if our culture object is actually "es-CO", rm will search in the file myRes.es-CO.resx; otherwise, it will search in myRes.resx).
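Putting those pieces together, a handler for the radio buttons might look roughly like this. This is only a sketch: the control and resource names (rbSpanish, rbEnglish, btnHello, btnDivide) are my assumptions, not necessarily the names used in the article's download.

private void rbSpanish_CheckedChanged(object sender, EventArgs e)
{
    culture = CultureInfo.CreateSpecificCulture("es-CO");
    ApplyCulture();
}

private void rbEnglish_CheckedChanged(object sender, EventArgs e)
{
    culture = CultureInfo.CreateSpecificCulture("en");
    ApplyCulture();
}

private void ApplyCulture()
{
    ResourceManager rm = new ResourceManager(
        "HelloWorldGlobed.myRes", typeof(Form1).Assembly);

    // Re-read every localizable string under the newly selected culture.
    btnHello.Text  = rm.GetString("btnHello", culture);
    btnDivide.Text = rm.GetString("btnDivide", culture);
}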
• When you create resource files, building your application produces a DLL called <projectname>.resources.dll for each non-default resource file, stored in the build output folder (Debug or Release) inside a subfolder named after the culture identifier (in this example a folder called es-CO was automatically created inside the Release folder, with a file inside it called HelloWorldGlobed.resources.dll). So don't forget to include that folder when deploying your application!
• Globalization is not only for Windows applications... I have created Web Services and ASPX pages with this support; but in ASPX pages you have to be careful and ingenious with state management in order to communicate the language selection between pages (using query strings) or between roundtrips of the same page (using view state, because security in this case is not an issue, so something like a session variable would be a waste).

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here.

Article Copyright 2003 by War Nov
Array.prototype.forEach() The forEach() method executes a provided function once for each array element. Syntax // Arrow function forEach((element) => { ... } ) forEach((element, index) => { ... } ) forEach((element, index, array) => { ... } ) // Callback function forEach(callbackFn) forEach(callbackFn, thisArg) // Inline callback function forEach(function callbackFn(element) { ... }) forEach(function callbackFn(element, index) { ... }) forEach(function callbackFn(element, index, array){ ... }) forEach(function callbackFn(element, index, array) { ... }, thisArg) Parameters callbackFn Function to execute on each element. It accepts between one and three arguments: element The current element being processed in the array. index Optional The index of element in the array. array Optional The array forEach() was called upon. thisArg Optional Value to use as this when executing callbackFn. Return value undefined. Description forEach() calls a provided callbackFn function once for each element in an array in ascending index order. It is not invoked for index properties that have been deleted or are uninitialized. (For sparse arrays, see example below.) callbackFn is invoked with three arguments: 1. the value of the element 2. the index of the element 3. the Array object being traversed If a thisArg parameter is provided to forEach(), it will be used as callback's this value. The thisArg value ultimately observable by callbackFn is determined according to the usual rules for determining the this seen by a function. The range of elements processed by forEach() is set before the first invocation of callbackFn. Elements which are assigned to indexes already visited, or to indexes outside the range, will not be visited by callbackFn. If existing elements of the array are changed or deleted, their value as passed to callbackFn will be the value at the time forEach() visits them; elements that are deleted before being visited are not visited. If elements that are already visited are removed (e.g. using shift()) during the iteration, later elements will be skipped. (See this example, below.) Warning: Concurrent modification of the kind described in the previous paragraph frequently leads to hard-to-understand code and is generally to be avoided (except in special cases). forEach() executes the callbackFn function once for each array element; unlike map() or reduce() it always returns the value undefined and is not chainable. The typical use case is to execute side effects at the end of a chain. forEach() does not mutate the array on which it is called. (However, callbackFn may do so) Note: There is no way to stop or break a forEach() loop other than by throwing an exception. If you need such behavior, the forEach() method is the wrong tool. Early termination may be accomplished with: Array methods: every(), some(), find(), and findIndex() test the array elements with a predicate returning a truthy value to determine if further iteration is required. Note: forEach expects a synchronous function. forEach does not wait for promises. Make sure you are aware of the implications while using promises (or async functions) as forEach callback. 
let ratings = [5, 4, 5]; let sum = 0; let sumFunction = async function (a, b) { return a + b } ratings.forEach(async function(rating) { sum = await sumFunction(sum, rating) }) console.log(sum) // Naively expected output: 14 // Actual output: 0 Polyfill forEach() was added to the ECMA-262 standard in the 5th edition, and it may not be present in all implementations of the standard. You can work around this by inserting the following code at the beginning of your scripts, allowing use of forEach() in implementations which do not natively support it. This algorithm is exactly the one specified in ECMA-262, 5th edition, assuming Object and TypeError have their original values and that fun.call evaluates to the original value of Function.prototype.call(). // Production steps of ECMA-262, Edition 5, 15.4.4.18 // Reference: https://es5.github.io/#x15.4.4.18 if (!Array.prototype['forEach']) { Array.prototype.forEach = function(callback, thisArg) { if (this == null) { throw new TypeError('Array.prototype.forEach called on null or undefined'); } var T, k; // 1. Let O be the result of calling toObject() passing the // |this| value as the argument. var O = Object(this); // 2. Let lenValue be the result of calling the Get() internal // method of O with the argument "length". // 3. Let len be toUint32(lenValue). var len = O.length >>> 0; // 4. If isCallable(callback) is false, throw a TypeError exception. // See: https://es5.github.com/#x9.11 if (typeof callback !== "function") { throw new TypeError(callback + ' is not a function'); } // 5. If thisArg was supplied, let T be thisArg; else let // T be undefined. if (arguments.length > 1) { T = thisArg; } // 6. Let k be 0 k = 0; // 7. Repeat, while k < len while (k < len) { var kValue; // a. Let Pk be ToString(k). // This is implicit for LHS operands of the in operator // b. Let kPresent be the result of calling the HasProperty // internal method of O with argument Pk. // This step can be combined with c // c. If kPresent is true, then if (k in O) { // i. Let kValue be the result of calling the Get internal // method of O with argument Pk. kValue = O[k]; // ii. Call the Call internal method of callback with T as // the this value and argument list containing kValue, k, and O. callback.call(T, kValue, k, O); } // d. Increase k by 1. k++; } // 8. return undefined }; } Examples No operation for uninitialized values (sparse arrays) const arraySparse = [1,3,,7] let numCallbackRuns = 0 arraySparse.forEach(function(element) { console.log(element) numCallbackRuns++ }) console.log("numCallbackRuns: ", numCallbackRuns) // 1 // 3 // 7 // numCallbackRuns: 3 // comment: as you can see the missing value between 3 and 7 didn't invoke callback function. Converting a for loop to forEach const items = ['item1', 'item2', 'item3'] const copyItems = [] // before for (let i = 0; i < items.length; i++) { copyItems.push(items[i]) } // after items.forEach(function(item){ copyItems.push(item) }) Printing the contents of an array Note: In order to display the content of an array in the console, you can use console.table(), which prints a formatted version of the array. The following example illustrates an alternative approach, using forEach(). The following code logs a line for each element in an array: function logArrayElements(element, index, array) { console.log('a[' + index + '] = ' + element) } // Notice that index 2 is skipped, since there is no item at // that position in the array... 
[2, 5, , 9].forEach(logArrayElements)
// logs:
// a[0] = 2
// a[1] = 5
// a[3] = 9

Using thisArg

The following (contrived) example updates an object's properties from each entry in the array:

function Counter() {
  this.sum = 0
  this.count = 0
}
Counter.prototype.add = function(array) {
  array.forEach(function countEntry(entry) {
    this.sum += entry
    ++this.count
  }, this)
}

const obj = new Counter()
obj.add([2, 5, 9])
obj.count // 3
obj.sum // 16

Since the thisArg parameter (this) is provided to forEach(), it is passed to callback each time it's invoked. The callback uses it as its this value.

Note: If passing the callback function used an arrow function expression, the thisArg parameter could be omitted, since all arrow functions lexically bind the this value.

An object copy function

The following code creates a copy of a given object. There are different ways to create a copy of an object. The following is just one way and is presented to explain how Array.prototype.forEach() works by using ECMAScript 5 Object.* meta property functions.

function copy(obj) {
  const copy = Object.create(Object.getPrototypeOf(obj))
  const propNames = Object.getOwnPropertyNames(obj)
  propNames.forEach(function(name) {
    const desc = Object.getOwnPropertyDescriptor(obj, name)
    Object.defineProperty(copy, name, desc)
  })
  return copy
}

const obj1 = { a: 1, b: 2 }
const obj2 = copy(obj1) // obj2 looks like obj1 now

Modifying the array during iteration

The following example logs one, two, four. When the entry containing the value two is reached, the first entry of the whole array is shifted off—resulting in all remaining entries moving up one position. Because element four is now at an earlier position in the array, three will be skipped. forEach() does not make a copy of the array before iterating.

let words = ['one', 'two', 'three', 'four']
words.forEach(function(word) {
  console.log(word)
  if (word === 'two') {
    words.shift() //'one' will delete from array
  }
})
// one
// two
// four

console.log(words); //['two', 'three', 'four']

Flatten an array

The following example is only here for learning purposes. If you want to flatten an array using built-in methods you can use Array.prototype.flat().

function flatten(arr) {
  const result = []
  arr.forEach(function(i) {
    if (Array.isArray(i)) {
      result.push(...flatten(i))
    } else {
      result.push(i)
    }
  })
  return result
}

// Usage
const nested = [1, 2, 3, [4, 5, [6, 7], 8, 9]]
flatten(nested) // [1, 2, 3, 4, 5, 6, 7, 8, 9]

Specifications

ECMAScript Language Specification (ECMAScript), # sec-array.prototype.foreach
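Since, as noted above, a forEach() loop cannot be stopped early, here is a brief sketch of the usual substitution with some(), whose iteration ends as soon as the callback returns a truthy value:

const numbers = [1, 2, 3, 4, 5]

// some() short-circuits on the first truthy return value,
// giving the early exit that forEach() cannot provide.
numbers.some(function (n) {
  console.log(n)
  return n >= 3 // stop once we have seen 3
})
// logs: 1, 2, 3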
Some Reporting tricks with class-transformations

Every time I used to see some duplication of code, I used to move that code to a new method. With Tapestry, you begin to think differently. Now every time I see duplication, my first thought is "Can I create a worker for it?" In my current project, I am using a few new ones, so I thought why not share them with you.

@ReportException

My onSuccess methods are usually of the form:

@InjectComponent
private Form form;

@OnEvent(EventConstants.SUCCESS)
void success() {
    try {
        do_my_stuff();
    } catch(MyException ex) {
        form.recordError(ex.getMessage());
    }
}

Here is how I wanted my code to look:

@ReportException
@OnEvent(EventConstants.SUCCESS)
void success() {
    do_my_stuff();
}

So for the implementation, we first need an annotation:

@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface ReportException {
}

and then a worker:

public class ReportExceptionWorker implements ComponentClassTransformWorker2 {

    private Environment environment;

    public ReportExceptionWorker(Environment environment) {
        this.environment = environment;
    }

    @Override
    public void transform(PlasticClass plasticClass, TransformationSupport support,
            MutableComponentModel model) {
        for(PlasticMethod method : plasticClass.getMethodsWithAnnotation(ReportException.class)) {
            method.addAdvice(new ReportExceptionAdvice());
        }
    }

    private class ReportExceptionAdvice implements MethodAdvice {
        @Override
        public void advise(MethodInvocation invocation) {
            ValidationTracker tracker = environment.peek(ValidationTracker.class);
            if(tracker != null) {
                try {
                    invocation.proceed();
                } catch(MyException ex) {
                    tracker.recordError(ex.getMessage());
                    ex.printStackTrace();
                }
            }
        }
    }
}

Here the advice gets the ValidationTracker from the Environment service and does the exception handling and reporting for us. Finally, the contribution in the Module:

public static void contributeComponentClassTransformWorker(
        OrderedConfiguration<ComponentClassTransformWorker2> configuration) {
    configuration.addInstance("ReportExceptionWorker", ReportExceptionWorker.class);
}

@ReportExceptionAsAlert

Another closely related use case is when you have an Ajax call which may result in an exception, but you just want to show the user a simple alert with the message.

The annotation:

@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface ReportExceptionAsAlert {
}

The worker:

public class ReportExceptionAsAlertWorker implements ComponentClassTransformWorker2 {

    private final AjaxResponseRenderer ajaxResponseRenderer;
    private final Request request;

    public ReportExceptionAsAlertWorker(
            AjaxResponseRenderer ajaxResponseRenderer, Request request) {
        this.ajaxResponseRenderer = ajaxResponseRenderer;
        this.request = request;
    }

    @Override
    public void transform(PlasticClass plasticClass, TransformationSupport support,
            MutableComponentModel model) {
        for(PlasticMethod method : plasticClass.getMethodsWithAnnotation(
                ReportExceptionAsAlert.class)) {
            method.addAdvice(new ReportExceptionAsAlertAdvice());
        }
    }

    private class ReportExceptionAsAlertAdvice implements MethodAdvice {
        @Override
        public void advise(MethodInvocation invocation) {
            if(request.isXHR()) {
                try {
                    invocation.proceed();
                } catch(final MyException ex) {
                    ajaxResponseRenderer.addCallback(new JavaScriptCallback() {
                        @Override
                        public void run(JavaScriptSupport javascriptSupport) {
                            javascriptSupport.addScript("alert('%s');", ex.getMessage());
                        }
                    });
                    ex.printStackTrace();
                }
            }
        }
    }
}

2 thoughts on "Some Reporting tricks with class-transformations"
Howard M. Lewis Ship, February 3, 2012:

A few notes. Firstly, did you test ReportExceptionAsAlertWorker? It looks to me like it prevents the underlying method from being invoked UNLESS it's an Ajax request; is that what you wanted? Secondly, rather than using alert('foo')... why not use the AlertManager service instead? You'll get a much better result on the client side. Finally, you are creating new advice for each advised method... but the advice is stateless, so there's no reason you can't have a final variable in your worker holding shared advice for all methods.

tawus, February 4, 2012:

Yes, in my use case if the request is not Ajax it should be ignored. The reason is to prevent a non-Ajax submission in case JavaScript is disabled, as the JavaScript confirmation is required. The reason for using alert() is to keep the UI consistent with the existing applications, although I would have preferred AlertManager or jGrowl. Regarding a single instance for an advice, it is a pure MISTAKE. Will fix it in the post. As always, thank you very much for your comments.
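For reference, the AlertManager variant Howard suggests might look roughly like this. This is a sketch, not the author's code: AlertManager, Duration and Severity are Tapestry 5.3 tapestry-core types, and the shared advice instance addresses his last point as well.

public class ReportExceptionAsAlertWorker implements ComponentClassTransformWorker2 {

    private final AlertManager alertManager;
    private final Request request;

    // The advice is stateless, so one shared instance is enough.
    private final MethodAdvice advice = new MethodAdvice() {
        @Override
        public void advise(MethodInvocation invocation) {
            if(request.isXHR()) {
                try {
                    invocation.proceed();
                } catch(MyException ex) {
                    // Rendered as a styled client-side alert by Tapestry,
                    // instead of a raw JavaScript alert().
                    alertManager.alert(Duration.SINGLE, Severity.ERROR,
                            ex.getMessage());
                }
            }
        }
    };

    public ReportExceptionAsAlertWorker(AlertManager alertManager, Request request) {
        this.alertManager = alertManager;
        this.request = request;
    }

    @Override
    public void transform(PlasticClass plasticClass, TransformationSupport support,
            MutableComponentModel model) {
        for(PlasticMethod method : plasticClass.getMethodsWithAnnotation(
                ReportExceptionAsAlert.class)) {
            method.addAdvice(advice);
        }
    }
}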
Re^5: Why is a hash the default "object" in oo perl?
by crabbdean (Pilgrim) on Jul 19, 2004 at 23:38 UTC
in reply to Re^4: Why is a hash the default "object" in oo perl?
in thread Why is a hash the default "object" in oo perl?

I recently just started using Devel::Peek to look at my data structures. I love it. After reading the module doco you get an idea of the output from Devel::Peek and the structure of what you are seeing. It shows a wealth of data about your objects, hashes, variables, their types, reference counts, length, size, addresses and so on. It will even show what class your objects are blessed into. Very cool!

Note: It will only show the first 4 elements of a structure unless you specify for it to show more by passing it an additional integer, e.g. Devel::Peek::Dump(\%hash, 8).

For example this is a 4-element dump of my %hash:

SV = RV(0x1a7ba4c) at 0x1bd1e98      <<--## a scalar that is a ref
  REFCNT = 1                         <<--## its ref count
  FLAGS = (TEMP,ROK)
  RV = 0x1b5e014                     <<--## the ref value
  SV = PVHV(0x1b25384) at 0x1b5e014  <<--## the scalar ref points to a hash
    REFCNT = 2                       <<--## the hash's ref count
    FLAGS = (PADBUSY,PADMY,SHAREKEYS)
    IV = 1
    NV = 0
    ARRAY = 0x1a49e84 (0:7, 1:1) hash quality = 100.0%
    KEYS = 1
    FILL = 1
    MAX = 7
    RITER = -1
    EITER = 0x0
    Elt ".*" HASH = 0xaeb62926       <<--## first element of hash
    SV = RV(0x1a7ba3c) at 0x1bd1e8c  <<--## is a scalar ref
      REFCNT = 1
      FLAGS = (ROK)
      RV = 0x1bce2c4
      SV = PVAV(0x1bcb7c4) at 0x1bce2c4  <<--## which points to an array with 5 references
        REFCNT = 5
        FLAGS = ()
        IV = 0
        NV = 0
        ARRAY = 0x1a49634
        FILL = 1
        MAX = 1
        ARYLEN = 0x0
        FLAGS = (REAL)
        Elt No. 0                    <<--## element zero of that array
        SV = PV(0x1b3dbf0) at 0x1bd8098
          REFCNT = 1
          FLAGS = (POK,pPOK)
          PV = 0x1a4980c "return_me"\0   <<--## its value
          CUR = 9
          LEN = 10
        Elt No. 1                    <<--## ... and so on
        SV = PV(0x1a6097c) at 0x1bd80b0
          REFCNT = 1
          FLAGS = (POK,pPOK)
          PV = 0x1a49834 "fullfile"\0
          CUR = 8

Anyway, it blows my hair back :-)

Dean
The Funkster of Mirth
Programming these days takes more than a lone avenger with a compiler. - sam
RFC1149: A Standard for the Transmission of IP Datagrams on Avian Carriers

Replies are listed 'Best First'.

Re^6: Why is a hash the default "object" in oo perl?
by diotalevi (Canon) on Jul 19, 2004 at 23:47 UTC

Devel::Peek is geared to people debugging perl internals - this isn't the tool you'd normally use when just trying to discover the structure of something. Data::Dumper is the built-in natural for that. demerphq's Data::Dump::Streamer is a separate install but it gets even closer to the actual structure.
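For anyone trying this at home, a minimal sketch of the call under discussion (note the capital D: the module exports Dump, and the optional second argument raises the per-aggregate element limit):

use Devel::Peek;

my %hash = ( ".*" => [ "return_me", "fullfile" ] );

# Dump the hash through a reference; show up to 8 elements
# per aggregate instead of the default 4.
Dump(\%hash, 8);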
I have a table:

+----+--------+----------+
| id | doc_id | next_req |
+----+--------+----------+
| 1  | 1      | 4        |
| 2  | 1      | 3        |
| 3  | 1      | 0        |
| 4  | 1      | 2        |
+----+--------+----------+

id is an auto-increment primary key. next_req represents the order of the records (next_req = id of the next record). How can I build a SQL query to get the records in this order:

+----+--------+----------+
| id | doc_id | next_req |
+----+--------+----------+
| 1  | 1      | 4        |
| 4  | 1      | 2        |
| 2  | 1      | 3        |
| 3  | 1      | 0        |
+----+--------+----------+

Explanation: record 1 with id=1 and next_req=4 means the next must be record 4 with id=4 and next_req=2; record 4 with id=4 and next_req=2 means the next must be record 2 with id=2 and next_req=3; record 2 with id=2 and next_req=3 means the next must be record 3 with id=3 and next_req=0; and record 3 with id=3 and next_req=0 means that this is the last record. I need to store the order of records in the table. It's important to me.

Comments:
– So by what is it ordered? random? – s.lenders, Dec 21 '12
– I can suggest you read this article. – Teneff, Dec 21 '12
– Your rows are essentially nodes of a linked list. SQL doesn't support this kind of data structure well at all. – Bohemian, Dec 21 '12
– possible duplicate of Fetching linked list in MySQL database – raina77ow, Dec 21 '12

3 Answers

If you can, change your table format. Rather than naming the next record, mark the records in order so you can use a natural SQL sort:

+----+--------+------+
| id | doc_id | sort |
+----+--------+------+
| 1  | 1      | 1    |
| 4  | 1      | 2    |
| 2  | 1      | 3    |
| 3  | 1      | 4    |
+----+--------+------+

Then you can even cluster-index on (doc_id, sort) if you need to for performance. And honestly, if you need to re-order rows, it is not any more work than the linked list you were working with.

I am able to give you a solution in Oracle:

select id, doc_id, next_req
from table2
start with id = (select id from table2
                 where rowid = (select min(rowid) from table2))
connect by prior next_req = id

fiddle_demo

I'd suggest modifying your table and adding another column, OrderNumber, so eventually it would be easy to order by this column. There may be problems with this approach, though:

1) You have an existing table and need to set the OrderNumber column values. I guess this part is easy: you can simply set initial zero values and add a CURSOR, for example, moving through your records and incrementing the order number value.

2) When a new row appears in your table, you have to modify your OrderNumber, but here it depends on your particular situation. If you only need to add items to the end of the list then you can set your new value as MAX + 1. In another situation you may try writing a TRIGGER on inserting new items and calling steps similar to point 1). This may cause a very bad hit on performance, so you have to carefully investigate your architecture and maybe modify this unusual construction.
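For completeness: on engines with recursive CTEs (MySQL 8+, PostgreSQL, SQL Server) the chain can be walked directly. A sketch, assuming the table is named t and the head of the chain (id = 1 here) is known:

WITH RECURSIVE ordered AS (
    SELECT id, doc_id, next_req, 1 AS pos
    FROM t
    WHERE id = 1                     -- head of the chain, assumed known
    UNION ALL
    SELECT t.id, t.doc_id, t.next_req, o.pos + 1
    FROM t
    JOIN ordered AS o ON t.id = o.next_req
)
SELECT id, doc_id, next_req
FROM ordered
ORDER BY pos;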
Version: 3.0.0

Process Control

Detailed Description

The functions in this section are used to launch or terminate other processes.

Classes

struct wxExecuteEnv — This structure can optionally be passed to wxExecute() to specify additional options to use for the child process.

Enumerations

enum { wxEXEC_ASYNC = 0, wxEXEC_SYNC = 1, wxEXEC_SHOW_CONSOLE = 2, wxEXEC_MAKE_GROUP_LEADER = 4, wxEXEC_NODISABLE = 8, wxEXEC_NOEVENTS = 16, wxEXEC_HIDE_CONSOLE = 32, wxEXEC_BLOCK = wxEXEC_SYNC | wxEXEC_NOEVENTS } — Bit flags that can be used with wxExecute().

Functions

void wxExit() — Exits the application after calling wxApp::OnExit.
long wxExecute(const wxString &command, int flags = wxEXEC_ASYNC, wxProcess *callback = NULL, const wxExecuteEnv *env = NULL) — Executes another program in Unix or Windows.
long wxExecute(char **argv, int flags = wxEXEC_ASYNC, wxProcess *callback = NULL, const wxExecuteEnv *env = NULL) — This is an overloaded version of wxExecute(const wxString&,int,wxProcess*); please see its documentation for general information.
long wxExecute(wchar_t **argv, int flags = wxEXEC_ASYNC, wxProcess *callback = NULL, const wxExecuteEnv *env = NULL)
long wxExecute(const wxString &command, wxArrayString &output, int flags = 0, const wxExecuteEnv *env = NULL) — This is an overloaded version of wxExecute(const wxString&,int,wxProcess*); please see its documentation for general information.
long wxExecute(const wxString &command, wxArrayString &output, wxArrayString &errors, int flags = 0, const wxExecuteEnv *env = NULL) — This is an overloaded version of wxExecute(const wxString&,int,wxProcess*); please see its documentation for general information.
unsigned long wxGetProcessId() — Returns the number uniquely identifying the current process in the system.
int wxKill(long pid, wxSignal sig = wxSIGTERM, wxKillError *rc = NULL, int flags = wxKILL_NOCHILDREN) — Equivalent to the Unix kill function: send the given signal sig to the process with PID pid.
bool wxShell(const wxString &command = wxEmptyString) — Executes a command in an interactive shell window.
bool wxShutdown(int flags = wxSHUTDOWN_POWEROFF) — This function shuts down or reboots the computer depending on the value of the flags.

Enumeration Type Documentation

anonymous enum

Bit flags that can be used with wxExecute().

Enumerator

wxEXEC_ASYNC — Execute the process asynchronously. Notice that, due to its value, this is the default.
wxEXEC_SYNC — Execute the process synchronously.
wxEXEC_SHOW_CONSOLE — Always show the child process console under MSW. The child console is hidden by default if the child IO is redirected; this flag allows you to change this and show it nevertheless. This flag is ignored under the other platforms.
wxEXEC_MAKE_GROUP_LEADER — Make the new process a group leader. Under Unix, if the process is the group leader then passing wxKILL_CHILDREN to wxKill() kills all children as well as pid. Under MSW, applies only to console applications and is only supported under the NT family (i.e. not under Windows 9x). It corresponds to the native CREATE_NEW_PROCESS_GROUP and, in particular, ensures that Ctrl-Break signals will be sent to all children of this process as well as to the process itself. Support for this flag under MSW was added in version 2.9.4 of wxWidgets.
wxEXEC_NODISABLE — Don't disable the program UI while running the child synchronously.
By default synchronous execution disables all program windows to prevent the user from interacting with the program while the child process is running; you can use this flag to prevent this from happening. This flag can only be used with wxEXEC_SYNC.

wxEXEC_NOEVENTS — Don't dispatch events while the child process is executed. By default, the event loop is run while waiting for synchronous execution to complete, and this flag can be used to simply block the main process until the child process finishes. This flag can only be used with wxEXEC_SYNC.
wxEXEC_HIDE_CONSOLE — Hide the child process console under MSW. Under MSW, hide the console of the child process if it has one, even if its IO is not redirected. This flag is ignored under the other platforms.
wxEXEC_BLOCK — Convenient synonym for flags giving system()-like behaviour.

Function Documentation

long wxExecute(const wxString &command, int flags = wxEXEC_ASYNC, wxProcess *callback = NULL, const wxExecuteEnv *env = NULL)

Executes another program in Unix or Windows.

In the overloaded versions of this function, if the flags parameter contains the wxEXEC_ASYNC flag (the default), flow of control immediately returns. If it contains wxEXEC_SYNC, the current application waits until the other program has terminated.

In the case of synchronous execution, the return value is the exit code of the process (which terminates by the moment the function returns) and will be -1 if the process couldn't be started and typically 0 if the process terminated successfully. Also, while waiting for the process to terminate, wxExecute() will call wxYield(). Because of this, by default this function disables all application windows to avoid unexpected reentrancies which could result from the user's interaction with the program while the child process is running. If you are sure that it is safe to not disable the program windows, you may pass the wxEXEC_NODISABLE flag to prevent this automatic disabling from happening.

For asynchronous execution, however, the return value is the process id and a zero value indicates that the command could not be executed. As an added complication, the return value of -1 in this case indicates that we didn't launch a new process, but connected to the running one (this can only happen when using DDE under Windows for command execution). In particular, in this case only, the calling code will not get the notification about process termination.

If callback isn't NULL and if execution is asynchronous, wxProcess::OnTerminate() will be called when the process finishes. Specifying this parameter also allows you to redirect the standard input and/or output of the process being launched by calling wxProcess::Redirect().

Under Windows, when launching a console process its console is shown by default but hidden if its IO is redirected. Both of these default behaviours may be overridden: if wxEXEC_HIDE_CONSOLE is specified, the console will never be shown. If wxEXEC_SHOW_CONSOLE is used, the console will be shown even if the child process IO is redirected. Neither of these flags affects non-console Windows applications or does anything under the other systems.

Under Unix the flag wxEXEC_MAKE_GROUP_LEADER may be used to ensure that the new process is a group leader (this will create a new session if needed). Calling wxKill() passing wxKILL_CHILDREN will kill this process as well as all of its children (except those which have started their own session).
Under MSW, this flag can be used with console processes only and corresponds to the native CREATE_NEW_PROCESS_GROUP flag.

The wxEXEC_NOEVENTS flag prevents processing of any events from taking place while the child process is running. It should only be used for very short-lived processes, as otherwise the application windows risk becoming unresponsive from the user's point of view. As this flag only makes sense with wxEXEC_SYNC, wxEXEC_BLOCK, equal to the sum of both of these flags, is provided as a convenience.

Note: Currently wxExecute() can only be used from the main thread; calling this function from another thread will result in an assert failure in a debug build and won't work.

Parameters

command — The command to execute and any parameters to pass to it as a single string, i.e. "emacs file.txt".
flags — Must include either wxEXEC_ASYNC or wxEXEC_SYNC and can also include wxEXEC_SHOW_CONSOLE, wxEXEC_HIDE_CONSOLE, wxEXEC_MAKE_GROUP_LEADER (in either case) or wxEXEC_NODISABLE and wxEXEC_NOEVENTS or wxEXEC_BLOCK, which is equal to their combination, in the wxEXEC_SYNC case.
callback — An optional pointer to wxProcess.
env — An optional pointer to additional parameters for the child process, such as its initial working directory and environment variables.
This parameter is available in wxWidgets 2.9.2 and later only.

See Also: wxShell(), wxProcess, External Program Execution Sample, wxLaunchDefaultApplication(), wxLaunchDefaultBrowser()

Include file: #include <wx/utils.h>

wxPerl Note: In wxPerl this function is called Wx::ExecuteCommand.

long wxExecute(char **argv, int flags = wxEXEC_ASYNC, wxProcess *callback = NULL, const wxExecuteEnv *env = NULL)

This is an overloaded version of wxExecute(const wxString&,int,wxProcess*); please see its documentation for general information. This version takes an array of values: a command, any number of arguments, terminated by NULL.

Parameters

argv — The command to execute should be the first element of this array; any additional ones are the command parameters, and the array must be terminated with a NULL pointer.
flags — Same as for the wxExecute(const wxString&,int,wxProcess*) overload.
callback — An optional pointer to wxProcess.
env — An optional pointer to additional parameters for the child process, such as its initial working directory and environment variables. This parameter is available in wxWidgets 2.9.2 and later only.

See Also: wxShell(), wxProcess, External Program Execution Sample, wxLaunchDefaultApplication(), wxLaunchDefaultBrowser()

Include file: #include <wx/utils.h>

wxPerl Note: In wxPerl this function is called Wx::ExecuteArgs.

long wxExecute(wchar_t **argv, int flags = wxEXEC_ASYNC, wxProcess *callback = NULL, const wxExecuteEnv *env = NULL)

long wxExecute(const wxString &command, wxArrayString &output, int flags = 0, const wxExecuteEnv *env = NULL)

This is an overloaded version of wxExecute(const wxString&,int,wxProcess*); please see its documentation for general information. This version can be used to execute a process (always synchronously; the contents of flags is or'd with wxEXEC_SYNC) and capture its output in the array output.

Parameters

command — The command to execute and any parameters to pass to it as a single string.
output — The string array where the stdout of the executed process is saved.
flags — Combination of flags to which wxEXEC_SYNC is always implicitly added.
env — An optional pointer to additional parameters for the child process, such as its initial working directory and environment variables. This parameter is available in wxWidgets 2.9.2 and later only.

See Also: wxShell(), wxProcess, External Program Execution Sample, wxLaunchDefaultApplication(), wxLaunchDefaultBrowser()

Include file: #include <wx/utils.h>

wxPerl Note: This function is called Wx::ExecuteStdout: it only takes the command argument, and returns a 2-element list (status, output), where output is an array reference.

long wxExecute(const wxString &command, wxArrayString &output, wxArrayString &errors, int flags = 0, const wxExecuteEnv *env = NULL)

This is an overloaded version of wxExecute(const wxString&,int,wxProcess*); please see its documentation for general information. This version additionally captures the messages from standard error output in the errors array. As with the above overload capturing standard output only, execution is always synchronous.

Parameters

command — The command to execute and any parameters to pass to it as a single string.
output — The string array where the stdout of the executed process is saved.
errors — The string array where the stderr of the executed process is saved.
flags — Combination of flags to which wxEXEC_SYNC is always implicitly added.
env — An optional pointer to additional parameters for the child process, such as its initial working directory and environment variables. This parameter is available in wxWidgets 2.9.2 and later only.

See Also: wxShell(), wxProcess, External Program Execution Sample, wxLaunchDefaultApplication(), wxLaunchDefaultBrowser()

Include file: #include <wx/utils.h>

wxPerl Note: This function is called Wx::ExecuteStdoutStderr: it only takes the command argument, and returns a 3-element list (status, output, errors), where output and errors are array references.

void wxExit()

Exits the application after calling wxApp::OnExit. Should only be used in an emergency: normally the top-level frame should be deleted (after deleting all other frames) to terminate the application. See wxCloseEvent and wxApp.

Include file: #include <wx/app.h>

unsigned long wxGetProcessId()

Returns the number uniquely identifying the current process in the system. If an error occurs, 0 is returned.

Include file: #include <wx/utils.h>

int wxKill(long pid, wxSignal sig = wxSIGTERM, wxKillError *rc = NULL, int flags = wxKILL_NOCHILDREN)

Equivalent to the Unix kill function: send the given signal sig to the process with PID pid. The valid signal values are:

{
    wxSIGNONE = 0, // verify if the process exists under Unix
    wxSIGKILL,     // forcefully kill, dangerous!
    wxSIGTERM      // terminate the process gently
};

wxSIGNONE, wxSIGKILL and wxSIGTERM have the same meaning under both Unix and Windows, but all the other signals are equivalent to wxSIGTERM under Windows. Moreover, under Windows, wxSIGTERM is implemented by posting a message to the application window, so it only works if the application does have windows. If it doesn't, as is notably always the case for console applications, you need to use wxSIGKILL to actually kill the process. Of course, this doesn't allow the process to shut down gracefully and so should be avoided if possible.

Returns 0 on success, -1 on failure.
If the rc parameter is not NULL, it will be filled with a value from the wxKillError enum:

{
    wxKILL_OK,            // no error
    wxKILL_BAD_SIGNAL,    // no such signal
    wxKILL_ACCESS_DENIED, // permission denied
    wxKILL_NO_PROCESS,    // no such process
    wxKILL_ERROR          // another, unspecified error
};

The flags parameter can be wxKILL_NOCHILDREN (the default), or wxKILL_CHILDREN, in which case the child processes of this process will be killed too. Note that under Unix, for wxKILL_CHILDREN to work you should have created the process by passing wxEXEC_MAKE_GROUP_LEADER to wxExecute().

See Also: wxProcess::Kill(), wxProcess::Exists(), External Program Execution Sample

Include file: #include <wx/utils.h>

bool wxShell(const wxString &command = wxEmptyString)

Executes a command in an interactive shell window. If no command is specified, then just the shell is spawned.

See Also: wxExecute(), External Program Execution Sample

Include file: #include <wx/utils.h>

bool wxShutdown(int flags = wxSHUTDOWN_POWEROFF)

This function shuts down or reboots the computer depending on the value of the flags.

Note: Performing the shutdown requires the corresponding access rights (superuser under Unix, the SE_SHUTDOWN privilege under Windows NT), and this function is only implemented under Unix and MSW.

Parameters

flags — One of wxSHUTDOWN_POWEROFF, wxSHUTDOWN_REBOOT or wxSHUTDOWN_LOGOFF (currently implemented only for MSW), possibly combined with wxSHUTDOWN_FORCE, which forces shutdown under MSW by forcefully terminating all the applications. As doing this can result in data loss, this flag shouldn't be used unless really necessary.

Returns: true on success, false if an error occurred.

Include file: #include <wx/utils.h>
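To tie the two wxExecute() modes together, here is a quick C++ sketch (mine, not part of the reference). The command strings are Windows-specific examples; substitute your own.

#include <wx/utils.h>
#include <wx/arrstr.h>
#include <wx/log.h>

void DemoExecute()
{
    // Asynchronous: returns immediately with the child's PID,
    // or 0 if the command could not be started.
    long pid = wxExecute("notepad.exe", wxEXEC_ASYNC);
    if (pid == 0)
        wxLogError("Could not launch the child process.");

    // Synchronous with captured output: wxEXEC_SYNC is implied by the
    // wxArrayString overload; the return value is the child's exit code.
    wxArrayString output;
    long code = wxExecute("cmd /c dir", output);
    if (code == 0)
    {
        for (size_t i = 0; i < output.GetCount(); ++i)
            wxLogMessage("%s", output[i]);
    }
}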
My task is to prove that gcd(n, n+1) = 1 for all n > 0. It is obvious that 1 is a common divisor of both n and n+1, since $$1 \mid n \rightarrow 1x = n$$ if x = n, and $$1 \mid n+1 \rightarrow 1y = n+1$$ if y = n+1. To prove that 1 is the greatest common divisor, I did as follows: of the integers n and n+1, one must be an even integer and the other an odd integer. Since one of them must be odd, the gcd of the two can't be an even number, because the odd one doesn't have any even divisors. If the gcd of the two were an odd integer greater than 1, for example 3, then $$3 \mid n \rightarrow 3x = n \rightarrow n = 3x$$ and $$3 \mid n+1 \rightarrow 3y = n+1 \rightarrow n = 3y - 1,$$ and there are no integers x and y such that both of these equations would hold at the same time. This happens with every odd integer larger than 1.

Is the part about the odd integers greater than 1 rigorous enough for a proper proof? I don't think it is; how would you prove this more rigorously? If anyone has any other improvement ideas of any kind, they're more than welcome!

Comments:
– Seems unnecessarily complex. If $d|a$ and $d|b$ then $d|(a-b)$. Hence, in your case, $d|(n+1-n)\implies d|1$. Thus the only positive common divisor of $n$ and $n+1$ is 1. – lulu
– $\gcd(a,b)\le|a-b|$ – JMP
– It should be noted that the solution the OP has given really is just a long-winded version of the version @lulu and I have given. – ASKASK
– (obv) for $a\ne b$ – JMP

3 Answers

If you know that $a \mid b$ and $a \mid c$ implies $a \mid b \pm c$, then this theorem is easier than you have laid it out to be. Let $a$ be any common divisor of $n$ and $n+1$. Then $a \mid (n+1)-n$, so $a \mid 1$. Do you think you can take it from here? In regards to the first part, just note that if $ap=b$ and $aq=c$, then $b \pm c = a(p \pm q)$.

"3|n→3x=n→n=3x and 3|n+1→3y=n+1→n=3y−1, and there are no integers x and y such that both of these equations would hold at the same time. This happens with every odd integer larger than 1."

I really hate to say this, but if you can state that there are no integers that solve these, you could just as easily have stated "There are no integers other than 1 that divide both n and n+1" and saved yourself a lot of trouble. In fact, that's the gist of the matter: the only number that divides both n and n+1 is 1. So how do you prove that? If you accept that if a|b and a|c then a|(b - c), it is easy, as a|n+1 and a|n implies a|(n+1 - n) = 1. And if a|1, then a = 1. So gcd(n, n+1) = 1. If you don't know that if a|b and a|c then a|(b - c), well, you know that if a|n then n = a*m for some m. So $\frac{n+1}{a} = m + \frac{1}{a}$. If a > 1 then this is not an integer and $a$ does not divide $n + 1$. So no integer other than 1 divides both n and n + 1. So gcd(n, n+1) = 1.

– Er... no positive integer other than 1 divides both n and n+1... – fleablood

If $n$ is even then $n+1$ is odd, therefore $\gcd(n,n+1)=1$. Analogously, if $n$ is odd then $n+1$ is even, therefore $\gcd(n,n+1)=1$. Consecutive integers are coprime: https://proofwiki.org/wiki/Consecutive_Integers_are_Coprime
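All of these answers lean on the same one-line divisibility argument, which can be written compactly: for any common divisor $d > 0$,

$$d \mid n \ \text{ and } \ d \mid (n+1) \;\Longrightarrow\; d \mid \bigl((n+1) - n\bigr) = 1 \;\Longrightarrow\; d = 1,$$

so the only positive common divisor of $n$ and $n+1$ is $1$, i.e. $\gcd(n, n+1) = 1$.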
I'm graphing a Markov process

mp = DiscreteMarkovProcess[{1, 0, 0}, ({
    {0.6, 0.1, 0.3},
    {0.2, 0.7, 0.1},
    {0.3, 0.3, 0.4}
})];

and would like to have arrows whose thicknesses correspond to the transition probabilities, with arrowheads of a different color in the exact center of each edge. But all my attempts end up a mess.

g = Graph[mp];
Scan[(PropertyValue[{g, #}, EdgeLabels] =
    PropertyValue[{g, #}, "Probability"]) &, EdgeList[g]];
Scan[(PropertyValue[{g, #}, EdgeStyle] =
    Directive[Arrowheads[{{.045, .575}}], GrayLevel[.7],
     Thickness[PropertyValue[{g, #}, "Probability"]/20]]) &, EdgeList[g]];
g

The thick edges leave gaps between their ends and the nodes of the graph, and I can't figure out how to change the color of the arrowheads so that they stand out against the color of the edges. How can I change the color of the arrowheads in my figure? How can I avoid the gaps that appear between nodes and the ends of the edges?

Comments:
– Take a look at EdgeShapeFunction. – wxffles
– Looks intriguing, but I'm not sure where to go with it. It seems to amount to "build it from scratch". – orome

Answer (wxffles):

Using an EdgeShapeFunction seems to do what you want. Adapting from the examples in the help:

ef[pts_List, e_] := {Arrowheads[{{0.1, 0.5,
     Graphics@{Red, Arrowheads[0.5], Arrow[{{0, 0}, {0.5, 0}}]}}}],
   Arrow[pts]}

g = Graph[mp];
Scan[(PropertyValue[{g, #}, EdgeLabels] =
    PropertyValue[{g, #}, "Probability"]) &, EdgeList[g]];
Scan[(PropertyValue[{g, #}, EdgeStyle] =
    Directive[GrayLevel[.7],
     Thickness[PropertyValue[{g, #}, "Probability"]/20]]) &, EdgeList[g]];
Scan[(PropertyValue[{g, #}, EdgeShapeFunction] = ef) &, EdgeList[g]];
g

It's a bit ugly, with mysterious red dots within the arrowheads. But this only reflects how little time I've put into it. With some competence and patience I suspect it could do what you want.

Edit: Something nicer:

ef[pts_List, e_] := {Arrowheads[{{0.02, 0.65,
     Graphics@{Red, EdgeForm[Gray],
       Polygon[{{-1.5, -1}, {1.5, 0}, {-1.5, 1}}]}}}], Arrow[pts]}

Comments:
– Is there a way to keep the arc shapes of the original? – orome
– Not easily as far as I can tell. It's just using the points that it gets passed. I'm not sure what the default EdgeShapeFunctions do to it. – wxffles
– How to access the built-in set of arrowheads is described here. – Alexey Popkov

Answer:

If you don't mind having a Graphics object, you can replace the Arrowheads directives with wxffles's Arrowheads specification and get to keep the arc shapes of the original g:

arrowheads = Arrowheads[{{0.02, 0.65,
     Graphics@{Red, EdgeForm[Gray],
       Polygon[{{-1.5, -1}, {1.5, 0}, {-1.5, 1}}]}}}];
g2 = Show[g] /. TagBox -> (# &) /. _Arrowheads :> arrowheads

If you have to have a Graph object, you can extract the edge primitives from g2 and use them as the EdgeShapeFunction for g:

edgehapefunctions = Function /@
   Cases[g2[[1]], {dirs___, _Arrowheads, _ArrowBox}, {0, Infinity}];
SetProperty[g, EdgeShapeFunction -> Thread[EdgeList[g] -> edgehapefunctions]]
Buffer Overflows

No, I'm not talking about the kind of buffer overflows that viruses can take advantage of to inject malicious code onto other systems, I'm talking about the kind that, if you use Filemon or Regmon, you've probably seen in their traces. If you've never noticed one, fire up one of those two tools and, after collecting a log of system-wide activity, find an example by searching for "buffer overflow". Here's an example of file system buffer overflow errors:

[screenshot: Filemon trace showing BUFFER OVERFLOW results]

Do these errors indicate a problem? No, they are a standard way for the system to indicate that there's more information available than can fit into a requester's output buffer. In other words, the system is telling the caller that if it were to copy all the data requested, it would overflow the buffer. Thus, the error really means that a buffer overflow was avoided, not that one occurred.

Given that a buffer overflow means that a requester didn't receive all the data that they asked for, you'd expect programmers to avoid them, or when they can't, to follow with another request specifying a buffer large enough for the data. However, in the Filemon trace neither case applies. Instead, there are two different requests in a row, each resulting in buffer overflow errors. In the first request the Csrss.exe process, which is the Windows environment subsystem process, queries information about a file system volume, and in the second request it queries information about a particular file. It doesn't follow up with successful requests, but continues with other activity.

The answer to why Csrss.exe doesn't care that its requests result in errors lies in the type of requests it's making. A program that queries volume information using Windows APIs is underneath using the NtQueryVolumeInformationFile API that's exported by Ntdll.dll, the Native API export DLL (you can read more about the Native API here). There are several different classes of information that a program can query. The one that Csrss is asking for in the trace is FileFsVolumeInformation. The Windows Installable File System (IFS) Kit documents that for that class a caller should expect output data to be formatted as a FILE_FS_VOLUME_INFORMATION structure, which looks like this:

```c
typedef struct _FILE_FS_VOLUME_INFORMATION {
    LARGE_INTEGER VolumeCreationTime;
    ULONG VolumeSerialNumber;
    ULONG VolumeLabelLength;
    BOOLEAN SupportsObjects;
    WCHAR VolumeLabel[1];
} FILE_FS_VOLUME_INFORMATION, *PFILE_FS_VOLUME_INFORMATION;
```

Notice that the first four fields in the structure have a fixed length, while the last field, VolumeLabel, has a size that depends on the length of the volume's label string. When a file system driver gets this type of query, it fills in as much information as fits in the caller's buffer and, if the buffer is too small to hold the entire structure, returns a buffer overflow error and the size of the buffer required to hold all the data. I suspect that Csrss is really only interested in the volume creation time and is therefore passing in a buffer only large enough to hold the first part of the structure. The file system driver fills that part in, and because the volume label won't fit in Csrss's buffer, returns an error. However, Csrss has gotten the information it wanted and ignores the error.
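To make the pattern concrete, here is a minimal sketch of the kind of call being made (my illustration, not Sysinternals code; the structure and constant definitions are assumptions standing in for the WDK/IFS Kit headers, and the function is resolved at runtime since it isn't declared in the usual SDK headers):

```c
#include <windows.h>
#include <winternl.h>   /* NTSTATUS, IO_STATUS_BLOCK */
#include <stddef.h>
#include <stdio.h>

/* Normally declared in the IFS Kit headers (reproduced here by hand). */
typedef struct _FILE_FS_VOLUME_INFORMATION {
    LARGE_INTEGER VolumeCreationTime;
    ULONG VolumeSerialNumber;
    ULONG VolumeLabelLength;
    BOOLEAN SupportsObjects;
    WCHAR VolumeLabel[1];
} FILE_FS_VOLUME_INFORMATION;

#define STATUS_BUFFER_OVERFLOW ((NTSTATUS)0x80000005L)
#define FileFsVolumeInformation 1   /* FS_INFORMATION_CLASS value */

typedef NTSTATUS (NTAPI *NtQueryVolumeInformationFile_t)(
    HANDLE FileHandle, PIO_STATUS_BLOCK IoStatusBlock,
    PVOID FsInformation, ULONG Length, ULONG FsInformationClass);

int main(void)
{
    /* Any handle on the volume works; the root directory is convenient. */
    HANDLE h = CreateFileW(L"C:\\", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    NtQueryVolumeInformationFile_t query =
        (NtQueryVolumeInformationFile_t)GetProcAddress(
            GetModuleHandleW(L"ntdll.dll"), "NtQueryVolumeInformationFile");
    if (!query) { CloseHandle(h); return 1; }

    /* Deliberately sized for the fixed fields only: no room for the label. */
    FILE_FS_VOLUME_INFORMATION info;
    IO_STATUS_BLOCK iosb;
    NTSTATUS status = query(h, &iosb, &info,
                            offsetof(FILE_FS_VOLUME_INFORMATION, VolumeLabel),
                            FileFsVolumeInformation);

    /* STATUS_BUFFER_OVERFLOW is only a warning: the fixed fields were filled. */
    if (status == 0 || status == STATUS_BUFFER_OVERFLOW)
        printf("volume serial: %08lx\n", (unsigned long)info.VolumeSerialNumber);

    CloseHandle(h);
    return 0;
}
```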
The second buffer overflow has a similar explanation. Csrss is querying information about a file using the FileAllInformation class of NtQueryInformationFile. The IFS Kit documents the output structure as:

```c
typedef struct _FILE_ALL_INFORMATION {
    FILE_BASIC_INFORMATION BasicInformation;
    FILE_STANDARD_INFORMATION StandardInformation;
    FILE_INTERNAL_INFORMATION InternalInformation;
    FILE_EA_INFORMATION EaInformation;
    FILE_ACCESS_INFORMATION AccessInformation;
    FILE_POSITION_INFORMATION PositionInformation;
    FILE_MODE_INFORMATION ModeInformation;
    FILE_ALIGNMENT_INFORMATION AlignmentInformation;
    FILE_NAME_INFORMATION NameInformation;
} FILE_ALL_INFORMATION, *PFILE_ALL_INFORMATION;
```

Again, the only variable-length field is the last one, which stores the name of the file being queried. If Csrss doesn't care about the name, only the information preceding it in the structure, it can pass a buffer only large enough to hold those fields and ignore the buffer overflow error.

Incidentally, a stack trace of the second buffer overflow reveals this:

[screenshot: stack trace showing the query originating in sxs.dll]

What is the "sxs" module? A look at the sxs DLL in Process Explorer's DLL View of the Csrss process shows this:

[screenshot: Process Explorer DLL view of Csrss with sxs.dll highlighted]

SxS is the "Fusion" DLL, which a little research will show manages the Side-by-Side Assembly storage that allows multiple versions of the same DLLs to exist in harmony on a system. SxS is calling GetFileInformationByHandle, which is a Windows API documented in the Platform SDK. The API takes a file handle as input and returns a buffer formatted as a BY_HANDLE_FILE_INFORMATION structure:

```c
typedef struct _BY_HANDLE_FILE_INFORMATION {
    DWORD dwFileAttributes;
    FILETIME ftCreationTime;
    FILETIME ftLastAccessTime;
    FILETIME ftLastWriteTime;
    DWORD dwVolumeSerialNumber;
    DWORD nFileSizeHigh;
    DWORD nFileSizeLow;
    DWORD nNumberOfLinks;
    DWORD nFileIndexHigh;
    DWORD nFileIndexLow;
} BY_HANDLE_FILE_INFORMATION;
```

All of the information returned in this structure, except for the volume serial number, is also returned in the FILE_ALL_INFORMATION structure. You can therefore probably guess where the call to NtQueryVolumeInformationFile that occurs immediately prior to the NtQueryInformationFile call originates: GetFileInformationByHandle first queries the volume in order to get its serial number.
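The documented Win32 side of this is easy to demo. A minimal sketch (mine, not from the post; the path is just an example):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Any readable file will do. */
    HANDLE h = CreateFileW(L"C:\\Windows\\notepad.exe", GENERIC_READ,
                           FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    BY_HANDLE_FILE_INFORMATION info;
    if (GetFileInformationByHandle(h, &info)) {
        /* Underneath, this issued the two native queries described above,
           both of which may have "failed" with buffer overflow warnings. */
        printf("volume serial: %08lx\n", (unsigned long)info.dwVolumeSerialNumber);
        printf("size: %lu bytes, links: %lu\n",
               (unsigned long)info.nFileSizeLow,
               (unsigned long)info.nNumberOfLinks);
    }
    CloseHandle(h);
    return 0;
}
```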
Our investigation shows that the buffer overflow errors seen in the Filemon trace are errors expected by the GetFileInformationByHandle API, which is simply avoiding the need to allocate buffers large enough to hold information it's not interested in. The bottom line is that buffer overflow errors in a Filemon trace are not an indication that there's a security problem and are usually not due to bad programming. Next time I'll explore buffer overflows in Regmon traces.

Originally posted by Mark Russinovich on 5/17/2005 5:18:00 PM. Migrated from the original Sysinternals.com blog.

Comments:

# re: Buffer Overflows
Many people claim Microsoft applications gain an unfair advantage because they use undocumented (in their opinion, secret) functions of Windows. I know using undocumented functions is bad (in general). If you have time, I'd love to see a treatment affirming or refuting the common belief that Microsoft takes advantage of their OS in ways nobody else can, or point me toward any other treatments others have done on this topic. Thanks for your time!
5/17/2005 7:32:00 PM by John

# re: Buffer Overflows
So, if I've got this straight... the buffer overflow you've just spoken of is the system returning too much data to the requesting application. And this is the opposite of the type of buffer overflow that is a security risk, where the application sends too much data to the system and the data is somehow given over to another handler. Is that about it?
5/17/2005 9:50:00 PM by rivet

# re: Buffer Overflows
It's different because in this case the application is giving the OS the correct size of the buffer, so it will not write too much data. In a real security-risk overrun, the application will usually assume a buffer size that is large enough, which means if it gets sent back too much data, that data will overflow the buffer and overwrite another part of the application. This allows somebody to (for example) overwrite the stack to modify the return address of the current operation, and execute some malicious code.
5/18/2005 2:28:00 AM by Wells

# re: Buffer Overflows
Er, the term "buffer overflow" used in this context is utterly wrong. The error is actually "buffer too small", represented by the constant ERROR_MORE_DATA.
5/18/2005 8:09:00 AM by Jacques Troux

# re: Buffer Overflows
John, Sxs.dll is a system component, much like ntdll/kernel32. There is no conspiracy of Microsoft applications using undocumented APIs. Plus, as Mark said, GetFileInformationByHandle *is* documented.
5/18/2005 2:05:00 PM by Junfeng

# re: Buffer Overflows
I tried CPUMon v3.0, but it crashed my CPU and immediately restarted my PC. Don't know why? (I used an HP PC DX6120: P4 3.0 GHz CPU, 256 MB RAM, 80 GB HDD, Windows XP Pro.) Thank you!
5/25/2005 7:40:00 PM by Doulos B. Warn

# re: Buffer Overflows
In response to what Doulos B. Warn said: the CPUMon page at http://www.sysinternals.com/ntw2k/freeware/cpumon.shtml states (in red): "Note that CPUMon does not work on Pentium 4 CPUs."
5/26/2005 11:00:00 PM by rivet

# re: Buffer Overflows
>States (in red): "Note that CPUMon does not work on Pentium 4 CPUs."
Gotta love HyperThreading, eh?
6/3/2005 9:05:00 AM by Revenant
Geometric invariant theory (or GIT) is a method for constructing quotients by group actions in algebraic geometry, used to construct moduli spaces.

Questions tagged with geometric invariant theory:

- When are GIT quotients projective? Some background on GIT: suppose $G$ is a reductive group acting on a scheme $X$. We often want to understand the quotient $X/G$. For example, $X$ might be some parameter space (like the space of possible ...

- Why can we define the moment map in this way (i.e. why is this form exact)? Given a symplectic manifold $(X, \omega)$ and a group $G$ acting on $X$ preserving the symplectic form, we define the moment map $\mu : X \to \mathfrak{g}^*$ so that $\langle d\mu(v), \xi\rangle = \dots$

- Can a coequalizer of schemes fail to be surjective? Suppose $g,h:Z\to X$ are two morphisms of schemes. Then we say that $f:X\to Y$ is the coequalizer of $g$ and $h$ if the following condition holds: any morphism $t:X\to T$ such that $t\circ g=t\circ h$ ...

- Toric varieties as quotients of affine space. One way to define toric varieties is as quotients of affine $n$-space by the action of some torus. However, this is not strictly true, as we need to throw away "bad points" which ruin this ...

- Understanding the definition of the quotient stack $[X/G]$. I'm trying to understand the definition of the quotient stack $[X/G]$ as defined in Frank Neumann's Algebraic Stacks and Moduli of Vector Bundles. Explicitly, let $G$ be an affine smooth group ...

- A line bundle that does not admit a $G$-linearisation. I have been thinking about quotients lately and pondered the following: let $G$ be a connected linear algebraic group and $X$ a $G$-variety acting via the morphism $\sigma:G\times X\rightarrow X$. ...

- Are orbits of an affine algebraic monoid affine? Let us work over the complex numbers for simplicity. Let $M$ be an affine algebraic monoid and $X$ an affine variety on which $M$ acts regularly, i.e. there is a morphism $\alpha: M\times X\to X$. Let ...

- Why do we study geometric invariant theory? I am trying to learn geometric invariant theory as it was introduced by Mumford, but I do not have a strong motivation, so I want to know the reason for studying geometric invariant theory. I just ...
REPRESENTATION OF SET

A set can be represented in any one of the following three ways or forms:

(i) Descriptive form
(ii) Set-builder form or rule form
(iii) Roster form or tabular form

Let us discuss the above forms of representation of a set in detail.

Descriptive Form

One way to specify a set is to give a verbal description of its elements. This is known as the descriptive form of specification. The description must allow a concise determination of which elements belong to the set and which elements do not.

For example:

(i) The set of all natural numbers.
(ii) The set of all prime numbers less than 100.
(iii) The set of all letters in the English alphabet.

Set-Builder Form or Rule Form

Set-builder notation is a notation for describing a set by indicating the properties that its members must satisfy.

Reading notation: for A = { x : x is a letter in the word "dictionary" }, we read it as "A is the set of all x such that x is a letter in the word dictionary".

For example:

(i) N = { x : x is a natural number }
(ii) P = { x : x is a prime number less than 100 }
(iii) A = { x : x is a letter in the English alphabet }

Roster Form or Tabular Form

Listing the elements of a set inside a pair of braces { } is called the roster form.

For example:

(i) Let A be the set of even natural numbers less than 11. In roster form we write A = {2, 4, 6, 8, 10}.
(ii) A = { x : x is an integer and -1 ≤ x < 5 }. In roster form we write A = {-1, 0, 1, 2, 3, 4}.

[figure: the same set written in descriptive, set-builder and roster form]

Important points

(i) In roster form each element of the set must be listed exactly once. By convention, the elements in a set should not be repeated.

(ii) Let A be the set of letters in the word "coffee", that is, A = {c, o, f, e}. So the following roster forms of the set A are invalid:

{c, o, e} (not all elements are listed)
{c, o, f, f, e} (the element 'f' is listed twice)

(iii) In roster form the elements of a set can be written in any order. The following are all valid roster forms of the set containing the elements 2, 3 and 4:

{2, 3, 4}
{2, 4, 3}
{4, 3, 2}

Each of them represents the same set.

(iv) If there are either infinitely many elements or a large finite number of elements, then three consecutive dots, called an ellipsis, are used to indicate that the pattern of the listed elements continues, as in {5, 6, 7, ...} or {3, 6, 9, 12, 15, ..., 60}.

(v) An ellipsis can be used only if enough information has been given so that one can figure out the entire pattern.
Question (Swift): I want to lock orientation in all ViewControllers to portrait, except on one of them, which should always be in landscapeRight when it is pushed. I've tried many solutions, using extensions for UINavigationController and overriding supportedInterfaceOrientations and shouldAutorotate, but no luck.

Answer: I've found the solution, which for now is working.

1. On every view controller you should put this code for supporting only the desired rotation (landscapeRight in the example):

```swift
override var shouldAutorotate: Bool {
    return true
}

override var supportedInterfaceOrientations: UIInterfaceOrientationMask {
    return UIInterfaceOrientationMask.landscapeRight
}

override var preferredInterfaceOrientationForPresentation: UIInterfaceOrientation {
    return .landscapeRight
}
```

On the others, implement the same methods but with portrait orientation.

2. Create an extension for UINavigationController:

```swift
open override var supportedInterfaceOrientations: UIInterfaceOrientationMask {
    return (self.topViewController?.supportedInterfaceOrientations)!
}

open override var shouldAutorotate: Bool {
    return true
}
```

Note: also enable all desired supported orientations in the project's target. The only controller that I wanted to show in landscape mode I presented modally.
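For completeness, a sketch of the portrait-only counterpart that the answer refers to (my addition; the class name is hypothetical, and every portrait screen would carry these same overrides):

```swift
import UIKit

// Portrait-locked base class: every view controller that must stay portrait
// can subclass this (or copy the three overrides directly).
class PortraitViewController: UIViewController {
    override var shouldAutorotate: Bool {
        return true
    }
    override var supportedInterfaceOrientations: UIInterfaceOrientationMask {
        return .portrait
    }
    override var preferredInterfaceOrientationForPresentation: UIInterfaceOrientation {
        return .portrait
    }
}
```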
When code is suspiciously fast: adventures in dead code elimination

Part of a recent assignment for one of my classes involved calculating the Fibonacci sequence both recursively and iteratively and measuring the speed of each method. (BONUS: For a fun diversion, here is a paper I wrote about using the Golden Ratio, which is closely related to the Fibonacci sequence, as a base for a number system.) In addition, we were supposed to pass the actual calculation as a function pointer argument to a method that measured the execution time.

The task was fairly straightforward, so I fired up Visual Studio 2015 and got to work. I usually target x64 during development (due to some misguided belief that the code will be faster), and when I ran the code in release mode I received the following output as the time needed to calculate the 42nd Fibonacci number:

```
Recursive: 0.977294758 seconds
Iterative: 0.000000310 seconds
```

Since calculating $F_{42}$ through naive recursion requires ~866 million function calls, this pretty much jived with my expectations. I was ready to submit the assignment and close up shop, but I decided it'd be safer to submit the executable as a 32-bit application. I switched over to x86 in Visual Studio, and for good measure ran the program again:

```
Recursive: 0.000000000 seconds
Iterative: 0.000000311 seconds
```

Well then. That was... suspiciously fast. For reference, here is (a stripped-down version of) the code I was using.
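(The original listing was embedded elsewhere and is missing from this copy. The sketch below is my reconstruction from the surrounding description; the function names, the F_42 constant, and the timing shape are assumptions, not the author's exact code.)

```cpp
#include <cassert>
#include <chrono>
#include <cstdint>
#include <iostream>

constexpr std::uint64_t F_42 = 267914296; // the 42nd Fibonacci number

std::uint64_t fib_recursive(std::uint64_t n) {
    return n < 2 ? n : fib_recursive(n - 1) + fib_recursive(n - 2);
}

std::uint64_t fib_iterative(std::uint64_t n) {
    std::uint64_t a = 0, b = 1;
    for (std::uint64_t i = 0; i < n; ++i) {
        std::uint64_t t = a + b;
        a = b;
        b = t;
    }
    return a;
}

// Takes the computation as a function pointer, per the assignment.
double measure_execution_time(std::uint64_t (*fib)(std::uint64_t),
                              std::uint64_t n) {
    auto start = std::chrono::steady_clock::now();
    std::uint64_t result = fib(n);
    auto stop = std::chrono::steady_clock::now();
    assert(result == F_42); // expands to ((void)0) in release builds!
    return std::chrono::duration<double>(stop - start).count();
}

int main() {
    std::cout << "Recursive: " << measure_execution_time(fib_recursive, 42) << "\n";
    std::cout << "Iterative: " << measure_execution_time(fib_iterative, 42) << "\n";
}
```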
In debug mode the code took the expected amount of time; only the release build targeting x86 was exhibiting the seemingly blazingly fast performance. What was happening here? Some constexpr magic resulting in the compiler precomputing the answer? An overly aggressive reordering of the now() calls? To figure out the answer I opened the executable in IDA and started poking around.

[screenshot: start of main() on x86 generated by Visual Studio 2015]

No wonder the code took almost no time: we're simply measuring the time it takes to execute the lea instruction! The next section of code appeared to be the fib_iterative function:

[screenshot: inlined fib_iterative function]

It would appear that a function pointer is no barrier to Visual Studio's inlining; measure_execution_time never explicitly appears as a discrete subroutine. Regardless, the inlined assembly for fib_iterative is about as straightforward as possible. Over on x64 the code appears even simpler (all code was compiled with /O2).

[screenshot: start of main() on x64 generated by Visual Studio 2015]

The function pointer inlining is gone, replaced with the more or less expected code, i.e. load the address of the function into an argument register and then call measure_execution_time.

So what's the deal here? Where the heck did fib_recursive go on x86? I believe what we're seeing is an unexpected application of dead code elimination. On Visual Studio the assert macro is #define assert(expression) ((void)0) in release mode, meaning the check that the return is equal to F_42 turns into nothing! Since the return of fib_recursive (now) isn't used, and the function itself simply does trivial math (besides calling itself), the compiler has decided it serves no purpose. What's interesting is that the compiler did not make the same determination for fib_iterative.

Given the choice between the two, I would have assumed that fib_iterative, with its constant-sized loop, would be easier to analyze than the recursive structure of fib_recursive. What's even weirder is that this only happens on x86, not x64. After modifying the code to display the result of the functions with std::cout, the problem went away.

The moral of the story is that if you're doing some performance unit testing, make sure that your functions touch the outside world in some way; asserts aren't always enough. Otherwise, the compiler may spontaneously decide to eliminate your code altogether (and it may be platform dependent!), giving the illusion of incredible speed and cleverness on your part 🙂
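Here is a sketch of that advice in code (my addition, not the post's): an optimization barrier in the style of Google Benchmark's benchmark::DoNotOptimize, which forces the compiler to treat a value as used.

```cpp
#include <cstdint>

// Prevents the compiler from proving 'value' dead. Suitable for scalar
// results like the Fibonacci value in this post.
template <typename T>
inline void do_not_optimize(T const& value) {
#if defined(_MSC_VER)
    // MSVC: a write through a volatile object is a simple portable fallback.
    volatile T sink = value;
    (void)sink;
#else
    // GCC/Clang: pretend the value escapes into opaque inline assembly.
    asm volatile("" : : "g"(value) : "memory");
#endif
}

// Usage inside the timing harness: 'result' can no longer be eliminated,
// so fib_recursive survives dead code elimination even in release builds.
//   std::uint64_t result = fib(n);
//   do_not_optimize(result);
```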
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

10 thoughts on "When code is suspiciously fast: adventures in dead code elimination"

1. I'd guess the reason for the recursive one being easier to eliminate is that it doesn't have any assignments at all.

2. Something I learned the hard way, too: you always have to make double sure your assertions have no side effects. Your assertion obviously had a side effect, even if it was hard to spot. Good read!

3. Clean, pretty code, can debug and go down to assembly, and your assignment is Fibonacci?? Seriously? Apply for a PhD or something, dude. You are wasting your talent 🙂

4. You don't even need IDA, just set a breakpoint in Visual Studio inside the Fibonacci function and one at the start of main. You'll notice that the fib function never gets executed (and Visual Studio has a nice warning on the breakpoint that it cannot associate that line of code with the assembly code).

5. Since assert() is defined as a macro, and in C/C++ macros are expanded before the compiler gets to see the code, in release mode from the compiler's point of view the output of both fib_recursive and fib_iterative was in fact never used. So what the compiler saw from what you wrote was indeed dead code. As to why it decided to only dead-code-eliminate the recursive version, that is likely a question only MS can answer. Moral: don't use the assert() macro as your only use of a result of a computation. That was never the intent of assert(), and given its implementation as a macro you get unexpected side effects when you use it in that manner.

6. Your comparison is useless. The recursive version of the function uses an exponential algorithm with run time O(2**n) and the iterative version of the function uses a linear algorithm with run time O(n). That explains the difference in your first reported benchmark. The rest of your explanation boils down to "the compiler did what I told it to do", which was different from what you expected it to do.

   - The comparison was NOT useless; the whole point of the exercise was to benchmark the difference in performance of the two implementations. In fact, you contradicted your first sentence by explaining the purpose in your second sentence.

7. Is there a difference between how Visual Studio 2013 and 2015 perform DCE? I have code with a memset(buf, 0, n); at the end of a function, to zeroise a sensitive buffer. It's optimized out when compiled on VS13 but not when compiled on VS15. This is good in principle, but it would be nice to know when security-sensitive information is or is not optimized out, so as not to be caught unaware in another context.
React component communication: parent and child

First, the parent component passes values to the child component

The parent component uses props to pass values down. Define the name and value on the child component element where the parent uses it, and then read it in the child component via this.props.xxx.

Second, the child component passes a value to the parent component

Like Vue, child components pass values to their parent components using props and callback functions.

1. In the child component:

```jsx
import React, { Component } from 'react';

class DateChild extends Component {
  constructor(props, context) {
    super(props, context);
    this.state = {
      val: 'I am the child component value'
    };
  }

  childClick(e) {
    // Child -> parent: call the callback that was passed down through props
    this.props.callback(this.state.val);
  }

  render() {
    return (
      <div onClick={(e) => this.childClick(e)}>Click to pass a value to the parent</div>
    );
  }
}

export default DateChild;
```

Because the value is passed from the child component to the parent component, the click event handler must be defined in the child component, and the function passed down from the parent is invoked through this.props, just as when the parent passes a value down.

2. Then we define the callback in the parent component and hand it to the child through props. It is generally written as follows:

```jsx
import React, { Component } from 'react';
import DateChild from '../component/DateChild.js';

class Date extends Component {
  constructor(props, context) {
    super(props, context);
    this.state = {
      content: 'I am the parent component value'
    };
  }

  callback(mess) {
    // Receive the child's value and store it in the parent's state
    this.setState({
      content: mess
    });
  }

  render() {
    return (
      <div>
        <div>{this.state.content}</div>
        <DateChild callback={this.callback.bind(this)} />
      </div>
    );
  }
}

export default Date;
```

For components that are not in a parent-child relationship, I generally use a globally defined state storage mechanism instead.
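The same two-way flow looks simpler with function components and hooks; this is my addition (the component names are hypothetical), not part of the article's class-based code:

```jsx
import React, { useState } from 'react';

function DateChild({ callback }) {
  const [val] = useState('I am the child component value');
  // Child -> parent: invoke the callback received through props
  return <div onClick={() => callback(val)}>Click to pass a value to the parent</div>;
}

function DateParent() {
  const [content, setContent] = useState('I am the parent component value');
  return (
    <div>
      <div>{content}</div>
      {/* Parent -> child: pass the state setter down as a prop */}
      <DateChild callback={setContent} />
    </div>
  );
}

export default DateParent;
```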
How to use function 'ind2gray'

I'm trying to convert a uint8 image to a grayscale image using the following commands:

```matlab
a = imread('image.png'); % a has dimensions 18x24x3
b = ind2gray(a, map);
```

But I'm getting an error message saying:

Undefined function or variable 'map'

Accepted Answer (Image Analyst, 10 Jul 2016):

You need to have some way to tell the function what color should get mapped to what gray level. The colors are a 2D array with columns for red, green, and blue in the range 0-1, and one row for each gray level or range of gray levels. There are several built-in colormaps to select from; see the help for colormap. For example, you could do

```matlab
rgbImage = ind2rgb(grayImage, jet(256));
colorbar;
```

jet() is a function that builds a colormap of a certain pattern with the number of gray levels you specify. There are others too. Or you could make up your own colormap, completely customized.

Comment (Image Analyst, 10 Jul 2016):

Then image.png is an RGB image, not an indexed image. You have to convert it to a grayscale image somehow. There are a variety of ways to do this and I don't know what method you want to use. For example, one way is to use rgb2gray():

```matlab
indexedImage = rgb2gray(rgbImage);
```

Or you could take just one color channel, such as the green channel, which tends to be the least noisy:

```matlab
indexedImage = rgbImage(:, :, 2);
```
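Tying the thread together, a minimal sketch of the missing piece (my addition, not from the thread): build an index/map pair first, and then ind2gray has the colormap it needs.

```matlab
rgbImage = imread('image.png');               % 18x24x3 uint8, as in the question
[indexedImage, map] = rgb2ind(rgbImage, 256); % indexed image + its colormap
grayImage = ind2gray(indexedImage, map);      % now 'map' is defined
imshow(grayImage);
```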
shapely.get_exterior_ring

get_exterior_ring(geometry, **kwargs)

Returns the exterior ring of a polygon.

Parameters:

- geometry : Geometry or array_like
- **kwargs : see NumPy ufunc docs for other keyword arguments.

Examples

```python
>>> from shapely import Point, Polygon
>>> get_exterior_ring(Polygon([(0, 0), (0, 10), (10, 10), (10, 0), (0, 0)]))
<LINEARRING (0 0, 0 10, 10 10, 10 0, 0 0)>
>>> get_exterior_ring(Point(1, 1)) is None
True
```
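A companion sketch (mine, assuming Shapely 2.x where these vectorized functions live at the top level): iterating a polygon's interior rings alongside its exterior ring uses get_interior_ring and get_num_interior_rings.

```python
import shapely
from shapely import Polygon

# A square with one square hole.
donut = Polygon(
    [(0, 0), (0, 10), (10, 10), (10, 0), (0, 0)],
    holes=[[(2, 2), (2, 4), (4, 4), (4, 2), (2, 2)]],
)

exterior = shapely.get_exterior_ring(donut)
interiors = [
    shapely.get_interior_ring(donut, i)
    for i in range(shapely.get_num_interior_rings(donut))
]
```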
Any way to use MPR functionality without HTTPS?

Apologies for not replying sooner. That nginx file looks fine. However, is it the nginx file for your OHIF docker image/container? If not, then be sure to update the nginx file for OHIF too.

There is a sample nginx config file in .docker\Viewer-v3.x\default.conf.template (I have included it below as well). It is used to create an OHIF docker image/container when following these instructions: Docker | OHIF.

```nginx
server {
    listen ${PORT};
    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
        try_files $uri $uri/ /index.html;
        add_header Cross-Origin-Opener-Policy same-origin;
        add_header Cross-Origin-Embedder-Policy require-corp;
        add_header Cross-Origin-Resource-Policy same-origin;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
    }
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
```

Recall that the Cross-Origin-Opener-Policy and Cross-Origin-Embedder-Policy headers are key.

Hope this helps.
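A quick way to verify the headers are actually being served (my suggestion; the host and port are placeholders for wherever your viewer is exposed):

```sh
# After reloading nginx, check that the cross-origin isolation headers
# MPR depends on come back on the viewer's responses.
curl -sI http://localhost:3000/ | grep -i 'cross-origin'
# Expected output, roughly:
#   cross-origin-opener-policy: same-origin
#   cross-origin-embedder-policy: require-corp
#   cross-origin-resource-policy: same-origin
```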
I am in the following situation. I have two (rather explicit and specific) dg commutative algebras $R$, $S$ over a field of characteristic $0$. In fact, $S$ is an $R$-algebra, in that I have a map $R \to S$. Because I am interested in computing some derived tensor products $S \otimes_R -$, I have worked out a "Koszul" resolution $\tilde S$ of $S$ over $R$. So all seems well.

But! Actually, $S$ and $R$ are both Gerstenhaber algebras (in dg vector spaces; the differential and the Gerstenhaber bracket point in opposite directions), and the map $R \to S$ is a homomorphism of Gerstenhaber algebras. My problem is that I have so far been unsuccessful at giving the resolution $\tilde S$ a Gerstenhaber structure such that the resolved map $R \to \tilde S$ is a homomorphism of Gerstenhaber algebras.

This leads me to two questions. The second question depends on the answer to the first.

Question 1: Does there necessarily exist a resolution of $S$ that computes the derived $S\otimes_R -$ and that is Gerstenhaber in a compatible way?

Question 2, if the answer to 1 is YES: How do I construct it?

Question 2, if the answer to 1 is NO: Certainly my homotopy equivalence $S \leftrightarrow \tilde S$ allows me to move the Gerstenhaber structure on $S$ to something on $\tilde S$. What structure on $\tilde S$ does it move to?

Comments:

- The minimal model for "infinity Gerstenhaber" can be described "explicitly" since the Gerstenhaber operad is Koszul; it is generated by operations that look like symmetric products of free Lie words. In your case, since the commutative product part is formal, some part of this structure will collapse. At the very least, the "C-infinity" part, which corresponds to symmetric products of the one-term Lie word without any brackets, should be the identity in arity 1, the product in arity 2, and then 0 beyond that. I suspect the answer to question 1 is yes, which would make this comment moot. – Gabriel C. Drummond-Cole, Apr 29 2011
- As Gabriel says above, a resolution can be computed. However, that resolution is necessarily large and potentially unwieldy. Fortunately, in many nice examples smaller resolutions exist. For example, if you have a Lie-Rinehart algebra (also called a Lie algebroid), then you may freely generate a Gerstenhaber algebra from it. If you wanted a cofibrant model of this Gerstenhaber algebra, it would be enough to take a cofibrant model of the Lie-Rinehart algebra and then take the Gerstenhaber algebra of that. The resulting resolution would be far smaller. – James Griffin, Apr 29 2011
- Following on from the comment above, does your algebra $S$ have any special properties? Is it in the image of some functor from a different category of algebras? You said that you have a "Koszul" resolution; can you put the Gerstenhaber bracket on those generators? If not, is there another resolution on which you can? – James Griffin, Apr 29 2011

1 Answer

Question 1: Does there necessarily exist a resolution of $S$ that computes the derived $S\otimes_R -$ and that is Gerstenhaber in a compatible way?

Yes. As pointed out in the comments, the category of dg Gerstenhaber algebras admits a model structure in which the weak equivalences are the quasi-isomorphisms, fibrations are degreewise surjections, and cofibrant objects are those dg Gerstenhaber algebras that are free as graded algebras. This actually works with dg algebras over any given operad $\mathcal O$ (in place of Gerstenhaber).
This is proved in Hinich's paper (quoted by the nLab: http://ncatlab.org/nlab/show/model+structure+on+dg-algebras+over+an+operad). Then there is also a natural model structure on the category of dg Gerstenhaber $R$-algebras (there is a more general statement about the existence of a model structure on the category of objects under a given one $X$ in a model category $\mathcal C$).

So, the answer to the title of your question is that you don't "need" to know what a $G_\infty$-algebra is.

Question 2, if the answer to 1 is YES: How do I construct it?

In short: bar-cobar. You can have a look at Homologie et modèle minimal des algèbres de Gerstenhaber in order to see how it works in detail. Btw, the above paper also tells you what the definition of a $G_\infty$-algebra is.

Question 2, if the answer to 1 is NO: Certainly my homotopy equivalence $S\leftrightarrow\widetilde{S}$ allows me to move the Gerstenhaber structure on $S$ to something on $\widetilde{S}$. What structure on $\widetilde{S}$ does it move to?

Even though the answer to Question 1 is YES, there is still something to say here. There is on $\widetilde{S}$ a $G_\infty$-structure. This is "just" the homotopy transfer formula (together with the use of the explicit minimal model for the Gerstenhaber operad). The homotopy transfer for algebras over an operad $\mathcal O$, w.r.t. a cofibrant resolution $\widetilde{\mathcal O}\to\mathcal O$, is proved in appendix A.2 of my paper with Van den Bergh (see also Theorem 10.3.6 in Loday-Vallette's Algebraic Operads for the Koszul case).
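To spell out what "homotopy transfer" starts from (my gloss, not part of the original answer, and only schematic): one fixes contraction data relating $S$ and $\widetilde S$, and the transferred $G_\infty$-operations are then given by sums over trees decorated by the Gerstenhaber operations, with $h$ placed on internal edges and $i$, $p$ at the leaves and root; see the cited references for the precise tree formulas.

```latex
% Contraction (homotopy retract) data for the equivalence S <-> \tilde S:
% i and p are chain maps, h is a degree-1 map.
p \colon \widetilde S \longrightarrow S, \qquad
i \colon S \longrightarrow \widetilde S, \qquad
h \colon \widetilde S \longrightarrow \widetilde S,
\qquad \text{with} \qquad
p \, i = \mathrm{id}_S, \qquad
\mathrm{id}_{\widetilde S} - i \, p = d h + h d .
```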
Debian packages in the Package Registry (FREE)

WARNING: The Debian package registry for GitLab is under development and isn't ready for production use due to limited functionality. This epic details the remaining work and timelines to make it production ready.

NOTE: The Debian registry is not FIPS compliant and is disabled when FIPS mode is enabled.

Publish Debian packages in your project's Package Registry. Then install the packages whenever you need to use them as a dependency. Project and group packages are supported.

For documentation of the specific API endpoints that Debian package manager clients use, see the Debian API documentation.

Enable the Debian API (FREE SELF)

Debian repository support is still a work in progress. It's gated behind a feature flag that's disabled by default. GitLab administrators with access to the GitLab Rails console can opt to enable it.

To enable it:

```ruby
Feature.enable(:debian_packages)
```

To disable it:

```ruby
Feature.disable(:debian_packages)
```

Enable the Debian group API (FREE SELF)

The Debian group repository is also behind a second feature flag that is disabled by default.

To enable it:

```ruby
Feature.enable(:debian_group_packages)
```

To disable it:

```ruby
Feature.disable(:debian_group_packages)
```

Build a Debian package

Creating a Debian package is documented on the Debian Wiki.

Authenticate to the Package Registry

To create a distribution, publish a package, or install a private package, you need one of the following:

Create a Distribution

On the project level, Debian packages are published using Debian distributions. To publish packages on the group level, create a distribution with the same codename.

To create a project-level distribution:

```shell
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/<project_id>/debian_distributions?codename=<codename>"
```

Example response with codename=unstable:

```json
{
  "id": 1,
  "codename": "unstable",
  "suite": null,
  "origin": null,
  "label": null,
  "version": null,
  "description": null,
  "valid_time_duration_seconds": null,
  "components": [
    "main"
  ],
  "architectures": [
    "all",
    "amd64"
  ]
}
```

More information on Debian distribution APIs:

Publish a package

Once built, several files are created:

- .deb files: the binary packages
- .udeb files: lightened .deb files, used for Debian-Installer (if needed)
- .tar.{gz,bz2,xz,...} files: source files
- .dsc file: source metadata, and list of source files (with hashes)
- .buildinfo file: used for reproducible builds (optional)
- .changes file: upload metadata, and list of uploaded files (all the above)

To upload these files, you can use dput-ng >= 1.32 (Debian bullseye):

```shell
cat <<EOF > dput.cf
[gitlab]
method = https
fqdn = <username>:<your_access_token>@gitlab.example.com
incoming = /api/v4/projects/<project_id>/packages/debian
EOF

dput --config=dput.cf --unchecked --no-upload-log gitlab <your_package>.changes
```

Install a package

To install a package:
1. Configure the repository:

   If you are using a private project, add your credentials to your apt configuration:

   ```shell
   echo 'machine gitlab.example.com login <username> password <your_access_token>' \
     | sudo tee /etc/apt/auth.conf.d/gitlab_project.conf
   ```

   Download your distribution key:

   ```shell
   sudo mkdir -p /usr/local/share/keyrings
   curl --header "PRIVATE-TOKEN: <your_access_token>" \
        "https://gitlab.example.com/api/v4/projects/<project_id>/debian_distributions/<codename>/key.asc" \
        | gpg --dearmor \
        | sudo tee /usr/local/share/keyrings/<codename>-archive-keyring.gpg \
        > /dev/null
   ```

   Add your project as a source:

   ```shell
   echo 'deb [ signed-by=/usr/local/share/keyrings/<codename>-archive-keyring.gpg ] https://gitlab.example.com/api/v4/projects/<project_id>/packages/debian <codename> <component1> <component2>' \
     | sudo tee /etc/apt/sources.list.d/gitlab_project.list
   sudo apt-get update
   ```

2. Install the package:

   ```shell
   sudo apt-get -y install -t <codename> <package-name>
   ```

Download a source package

To download a source package:
__label__pos
0.521739
Dancing Pokemon(CJ01) Editorial Can someone provide me with editorial of this problem Thanks in advance The simplest solution would be by using STL-MAPS. We just have to check if the number has occurred before or not. Create a map with an integer and a boolean value. Just insert into the map all the poke-ids with boolean value 1(true). Next just check if the next pokemon is already in the map. If YES, output “YES”, else output “NO” and insert it into the map with true. Complexity:- O(N.Log(N)) Other solution could be simply to use binary trees. you can use set to do this question see my solution here link texthttps://www.codechef.com/viewsolution/11219412 Hello chefs :slight_smile: I’m new to codechef and also the beginner to the programming :slight_smile: My solution to this problem is import java.util.*; public class Main { public static void main(String args[]) { ArrayList <Integer> partyattendersid; partyattendersid = new ArrayList <Integer>(); Scanner scanner = new Scanner(System.in); int T; T = scanner.nextInt(); for(int i=0;i<T;i++) { int N,Q,id; N = scanner.nextInt(); Q = scanner.nextInt(); for(int j=0;j<N;j++) { id = scanner.nextInt(); if(partyattendersid.indexOf(id) != -1); else partyattendersid.add(id); } int qid[] = new int[Q]; for(int j=0;j<Q;j++) qid[j] = scanner.nextInt(); for(int j=0;j<Q;j++) { if(partyattendersid.indexOf(qid[j]) == -1) { System.out.println("NO"); partyattendersid.add(qid[j]); } else System.out.println("YES"); } } } } I have tested my code in my system I’m getting the correct answer, but I submit to codechef I’m getting the wrong answer :frowning: please help me… Any error in my code??? Thanks in advance :slight_smile: please help me to find error https://www.codechef.com/viewsolution/11157449 //
__label__pos
0.841454
数组溢出 为了简要概述 CBMC 的功能,我们从一个小例子开始:缓冲区溢出问题。 缓冲区是连续分配的内存块,由 C 中的数组或指针表示。用 C 编写的程序不提供对缓冲区的自动边界检查,这意味着程序可以意外或恶意地写入缓冲区。 以下示例是一个完全有效的 C 程序(在某种意义上,编译器编译它没有任何错误): 1 2 3 4 5 int main() { int buffer[10]; buffer[20] = 10; } 但是,对分配的内存区域之外的地址的写访问可能导致意外行为。 特别是,可以利用这些错误来覆盖函数的返回地址,从而能够执行任意用户引起的代码。 CBMC 能够检测到此问题,并报告违反了缓冲区的“上限属性”。 CBMC 能够检查这些下限和上限,即使对于具有动态大小的阵列也是如此。 CBMC 安装 CBMC 支持 windows, linux, max os。在有具体的步骤, mac os MacOS 可以点此下载,安装之后,就可以使用 CBMC 了。 Screenshot 2018-07-23 at 23.58.05 CBMC 验证例子 假设我们已经正确安装了 CBMC,跟编译器一样,CBMC 接受 c 程序文件,CBMC 并不是二进制执行,而是符号执行(symbolic execution)。假设我们有以下文件: 1 2 3 4 5 6 // file1.c int puts(const char *s) { } int main(int argc, char **argv) { puts(argv[2]); } 程序中,main 函数访问了参数数组 argv,如果参数数量少于 3,那么 argv[2] 就越界了,我们现在运行 CBMC。 1 cbmc file1.c --show-properties --bounds-check --pointer-check 两个选项 --bounds-check--pointer-check 指示 CBMC 查找与指针和数组边界相关的错误。 CBMC 将打印所检查的性质列表。 请注意,它列出了一个标有“argv中的对象边界”的属性以及有故障的数组访问的位置。 CBMC在很大程度上决定了需要检查哪些性质,这是通过静态分析来实现的,该分析依赖于计算各种抽象域上的不动点。 我们得到以下结果: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 CBMC version 5.9 (cbmc-5.9) 64-bit x86_64 macos Parsing file1.c Converting Type-checking file1 Generating GOTO Program Adding CPROVER library (x86_64) Removal of function pointers and virtual functions Generic Property Instrumentation Running with 8 object bits, 56 offset bits (default) Property main.pointer_dereference.1: file file1.c line 4 function main dereference failure: pointer NULL in argv[(signed long int)2] !(POINTER_OBJECT(argv) == POINTER_OBJECT(((char **)NULL))) Property main.pointer_dereference.2: file file1.c line 4 function main dereference failure: pointer invalid in argv[(signed long int)2] !INVALID-POINTER(argv) Property main.pointer_dereference.3: file file1.c line 4 function main dereference failure: deallocated dynamic object in argv[(signed long int)2] !(POINTER_OBJECT(argv) == POINTER_OBJECT(__CPROVER_deallocated)) Property main.pointer_dereference.4: file file1.c line 4 function main dereference failure: dead object in argv[(signed long int)2] !(POINTER_OBJECT(argv) == POINTER_OBJECT(__CPROVER_dead_object)) Property main.pointer_dereference.5: file file1.c line 4 function main dereference failure: pointer outside dynamic object bounds in argv[(signed long int)2] 16l + POINTER_OFFSET(argv) >= 0l && __CPROVER_malloc_size >= 24ul + (unsigned long int)POINTER_OFFSET(argv) || !(POINTER_OBJECT(argv) == POINTER_OBJECT(__CPROVER_malloc_object)) Property main.pointer_dereference.6: file file1.c line 4 function main dereference failure: pointer outside object bounds in argv[(signed long int)2] 16l + POINTER_OFFSET(argv) >= 0l && OBJECT_SIZE(argv) >= 24ul + (unsigned long int)POINTER_OFFSET(argv) || DYNAMIC_OBJECT(argv) 事实上,这些性质不不一定跟程序的 bugs 对应,他们只是指明了潜在的错误缺陷,因为抽象解释可能是不精确的。所以,还需要进一步的分析。CBMC 提供了符号执行&模拟(symbolic execution&simulation),本质上是将程序转换为公式,这些公式程序的性质。我们用执行以下的命令: 1 cbmc file1.c --show-vcc --bounds-check --pointer-check CBMC 执行符号执行,然后输出 验证条件(Verification Conditions)。VC 是一些逻辑公式,如果逻辑是有判定过程的,那么公式就可以求证是否是 valid 的。这些 VC 通常可以用一些 SAT 或者 SMT 求解。 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 VERIFICATION CONDITIONS: file file1.c line 4 function main dereference failure: pointer outside object bounds in argv[(signed long int)2] {-1} __CPROVER_dead_object#1 == NULL {-2} __CPROVER_deallocated#1 == NULL {-3} __CPROVER_malloc_is_new_array#1 == FALSE {-4} __CPROVER_malloc_object#1 == NULL {-5} __CPROVER_malloc_size#1 == 0ul {-6} __CPROVER_memory_leak#1 == NULL {-7} __CPROVER_next_thread_id#1 == 0ul {-8} 
__CPROVER_pipe_count#1 == 0u {-9} __CPROVER_rounding_mode!0#1 == 0 {-10} __CPROVER_thread_id!0#1 == 0ul {-11} __CPROVER_threads_exited#1 == ARRAY_OF(FALSE) {-12} argc'#0 >= 1 {-13} !(argc'#0 >= 268435457) {-14} argv'#1 == argv'#0 WITH [argc'#0:=((char *)NULL)] {-15} argc!0@1#1 == argc'#0 {-16} argv!0@1#1 == argv' |-------------------------- {1} 8ul * (__CPROVER_size_t)(1 + argc'#0) >= 24ul 我们运行下面的命令,直接求解 VC。 1 cbmc file1.c --bounds-check --pointer-check 执行命令后,得到以下的结果:(CBMC 实际上将 VC 公式转换成 CNF 范式,然后传递给 SAT 求解器。) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 Generated 6 VCC(s), 1 remaining after simplification Passing problem to propositional reduction converting SSA Running propositional reduction Post-processing Solving with MiniSAT 2.2.1 with simplifier 462 variables, 1031 clauses SAT checker: instance is SATISFIABLE Solving with MiniSAT 2.2.1 with simplifier 462 variables, 0 clauses SAT checker inconsistent: instance is UNSATISFIABLE Runtime decision procedure: 0.00586755s ** Results: [main.pointer_dereference.1] dereference failure: pointer NULL in argv[(signed long int)2]: SUCCESS [main.pointer_dereference.2] dereference failure: pointer invalid in argv[(signed long int)2]: SUCCESS [main.pointer_dereference.3] dereference failure: deallocated dynamic object in argv[(signed long int)2]: SUCCESS [main.pointer_dereference.4] dereference failure: dead object in argv[(signed long int)2]: SUCCESS [main.pointer_dereference.5] dereference failure: pointer outside dynamic object bounds in argv[(signed long int)2]: SUCCESS [main.pointer_dereference.6] dereference failure: pointer outside object bounds in argv[(signed long int)2]: FAILURE ** 1 of 6 failed (2 iterations) VERIFICATION FAILED 可以看到,第 6 个 VC 不能求解。我们运行以下命令,CBMC 会给出性质不能满足的 prigram trace。 1 cbmc file1.c --bounds-check --pointer-check --trace 验证入口 上述例子中,程序验证的入口是 main 函数,实际上,CBMC 可以选择任意的函数入口。比如针对以下的程序(file2.c) 1 2 3 4 5 6 7 int array[10]; int sum() { unsigned i, sum; sum=0; for(i=0; i<10; i++) sum+=array[i]; } 我们在执行时,可以选择入口函数: 1 cbmc file2.c --function sum --bounds-check 验证循环 CBMC 是执行有界模型检查(Bounded Model Checking),所有循环必须具有有限的上限运行时限,以保证找到所有错误。 CBMC可以选择展开循环, 例如,考虑程序 binsearch.c 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 int binsearch(int x) { int a[16]; signed low=0, high=16; while (low < high) { signed middle = low + ((high-low) >> 1); if(a[middle] < x) high = middle; else if(a[middle] > x) low = middle + 1; else // a[middle]==x return middle; } return -1; } 这个程序中,循环是不会终止的,所有用 cbmc 检查时,我们必须给出循环次数。 1 cbmc binsearch.c --function binsearch --unwind 6 --bounds-check --unwinding-assertions CBMC 检查选项 CBMC 目前提供了以下的检查选项: • –no-assertions ignore user assertions • –bounds-check add array bounds checks • –div-by-zero-check add division by zero checks • –pointer-check add pointer checks • –signed-overflow-check add arithmetic over- and underflow checks • –unsigned-overflow-check add arithmetic over- and underflow checks • –undefined-shift-check add range checks for shift distances • –nan-check add floating-point NaN checks • –uninitialized-check add checks for uninitialized locals (experimental) • –error-label label check that given label is unreachable http://cprover.diffblue.com/cbmc-user-manual.html
__label__pos
0.78035
Form wizard Django comes with an optional “form wizard” application that splits forms across multiple Web pages. It maintains state in one of the backends so that the full server-side processing can be delayed until the submission of the final form. You might want to use this if you have a lengthy form that would be too unwieldy for display on a single page. The first page might ask the user for core information, the second page might ask for less important information, etc. The term “wizard”, in this context, is explained on Wikipedia. How it works Here’s the basic workflow for how a user would use a wizard: 1. The user visits the first page of the wizard, fills in the form and submits it. 2. The server validates the data. If it’s invalid, the form is displayed again, with error messages. If it’s valid, the server saves the current state of the wizard in the backend and redirects to the next step. 3. Step 1 and 2 repeat, for every subsequent form in the wizard. 4. Once the user has submitted all the forms and all the data has been validated, the wizard processes the data – saving it to the database, sending an email, or whatever the application needs to do. Usage This application handles as much machinery for you as possible. Generally, you just have to do these things: 1. Define a number of Form classes – one per wizard page. 2. Create a WizardView subclass that specifies what to do once all of your forms have been submitted and validated. This also lets you override some of the wizard’s behavior. 3. Create some templates that render the forms. You can define a single, generic template to handle every one of the forms, or you can define a specific template for each form. 4. Add django.contrib.formtools to your INSTALLED_APPS list in your settings file. 5. Point your URLconf at your WizardView as_view() method. Defining Form classes The first step in creating a form wizard is to create the Form classes. These should be standard django.forms.Form classes, covered in the forms documentation. These classes can live anywhere in your codebase, but convention is to put them in a file called forms.py in your application. For example, let’s write a “contact form” wizard, where the first page’s form collects the sender’s email address and subject, and the second page collects the message itself. Here’s what the forms.py might look like: from django import forms class ContactForm1(forms.Form): subject = forms.CharField(max_length=100) sender = forms.EmailField() class ContactForm2(forms.Form): message = forms.CharField(widget=forms.Textarea) Note In order to use FileField in any form, see the section Handling files below to learn more about what to do. Creating a WizardView class The next step is to create a django.contrib.formtools.wizard.views.WizardView subclass. You can also use the SessionWizardView or CookieWizardView classes which preselect the backend used for storing information during execution of the wizard (as their names indicate, server-side sessions and browser cookies respectively). Note To use the SessionWizardView follow the instructions in the sessions documentation on how to enable sessions. We will use the SessionWizardView in all examples but is is completely fine to use the CookieWizardView instead. As with your Form classes, this WizardView class can live anywhere in your codebase, but convention is to put it in views.py. The only requirement on this subclass is that it implement a done() method. 
WizardView.done(form_list) This method specifies what should happen when the data for every form is submitted and validated. This method is passed a list of validated Form instances. In this simplistic example, rather than performing any database operation, the method simply renders a template of the validated data: from django.shortcuts import render_to_response from django.contrib.formtools.wizard.views import SessionWizardView class ContactWizard(SessionWizardView): def done(self, form_list, **kwargs): return render_to_response('done.html', { 'form_data': [form.cleaned_data for form in form_list], }) Note that this method will be called via POST, so it really ought to be a good Web citizen and redirect after processing the data. Here's another example: from django.http import HttpResponseRedirect from django.contrib.formtools.wizard.views import SessionWizardView class ContactWizard(SessionWizardView): def done(self, form_list, **kwargs): do_something_with_the_form_data(form_list) return HttpResponseRedirect('/page-to-redirect-to-when-done/') See the section Advanced WizardView methods below to learn about more WizardView hooks. Creating templates for the forms Next, you'll need to create a template that renders the wizard's forms. By default, every form uses a template called formtools/wizard/wizard_form.html. You can change this template name by overriding either the template_name attribute or the get_template_names() method, which are documented in the TemplateResponseMixin documentation. The latter one allows you to use a different template for each form. This template expects a wizard object that has various items attached to it: • form -- The Form or BaseFormSet instance for the current step (either empty or with errors). • steps -- A helper object to access the various steps related data: • step0 -- The current step (zero-based). • step1 -- The current step (one-based). • count -- The total number of steps. • first -- The first step. • last -- The last step. • current -- The current (or first) step. • next -- The next step. • prev -- The previous step. • index -- The index of the current step. • all -- A list of all steps of the wizard. You can supply additional context variables by using the get_context_data() method of your WizardView subclass. Here's a full example template: {% extends "base.html" %} {% block head %} {{ wizard.form.media }} {% endblock %} {% block content %} <p>Step {{ wizard.steps.step1 }} of {{ wizard.steps.count }}</p> <form action="" method="post">{% csrf_token %} <table> {{ wizard.management_form }} {% if wizard.form.forms %} {{ wizard.form.management_form }} {% for form in wizard.form.forms %} {{ form }} {% endfor %} {% else %} {{ wizard.form }} {% endif %} </table> {% if wizard.steps.prev %} <button name="wizard_goto_step" type="submit" value="{{ wizard.steps.first }}">{% trans "first step" %}</button> <button name="wizard_goto_step" type="submit" value="{{ wizard.steps.prev }}">{% trans "prev step" %}</button> {% endif %} <input type="submit" value="{% trans "submit" %}"/> </form> {% endblock %} Note Note that {{ wizard.management_form }} must be used for the wizard to work properly. Hooking the wizard into a URLconf Finally, we need to specify which forms to use in the wizard, and then deploy the new WizardView object at an URL in the urls.py. 
The wizard's as_view() method takes a list of your Form classes as an argument during instantiation: from django.conf.urls import patterns from myapp.forms import ContactForm1, ContactForm2 from myapp.views import ContactWizard urlpatterns = patterns('', (r'^contact/$', ContactWizard.as_view([ContactForm1, ContactForm2])), ) Advanced WizardView methods class WizardView Aside from the done() method, WizardView offers a few advanced method hooks that let you customize how your wizard works. Some of these methods take an argument step, which is a zero-based counter as string representing the current step of the wizard. (E.g., the first form is '0' and the second form is '1') WizardView.get_form_prefix(step) Given the step, returns a form prefix to use. By default, this simply uses the step itself. For more, see the form prefix documentation. WizardView.get_form_initial(step) Returns a dictionary which will be passed as the initial argument when instantiating the Form instance for step step. If no initial data was provided while initializing the form wizard, an empty dictionary should be returned. The default implementation: def get_form_initial(self, step): return self.initial_dict.get(step, {}) WizardView.get_form_kwargs(step) Returns a dictionary which will be used as the keyword arguments when instantiating the form instance on given step. The default implementation: def get_form_kwargs(self, step): return {} WizardView.get_form_instance(step) This method will be called only if a ModelForm is used as the form for step step. Returns an Model object which will be passed as the instance argument when instantiating the ModelForm for step step. If no instance object was provided while initializing the form wizard, None will be returned. The default implementation: def get_form_instance(self, step): return self.instance_dict.get(step, None) WizardView.get_context_data(form, **kwargs) Returns the template context for a step. You can overwrite this method to add more data for all or some steps. This method returns a dictionary containing the rendered form step. The default template context variables are: • Any extra data the storage backend has stored • form -- form instance of the current step • wizard -- the wizard instance itself Example to add extra variables for a specific step: def get_context_data(self, form, **kwargs): context = super(MyWizard, self).get_context_data(form=form, **kwargs) if self.steps.current == 'my_step_name': context.update({'another_var': True}) return context WizardView.get_prefix(*args, **kwargs) This method returns a prefix for use by the storage backends. Backends use the prefix as a mechanism to allow data to be stored separately for each wizard. This allows wizards to store their data in a single backend without overwriting each other. You can change this method to make the wizard data prefix more unique to, e.g. have multiple instances of one wizard in one session. Default implementation: def get_prefix(self, *args, **kwargs): # use the lowercase underscore version of the class name return normalize_name(self.__class__.__name__) WizardView.get_form(step=None, data=None, files=None) This method constructs the form for a given step. If no step is defined, the current step will be determined automatically. The method gets three arguments: • step -- The step for which the form instance should be generated. 
• data -- Gets passed to the form's data argument.
• files -- Gets passed to the form's files argument.

You can override this method to add extra arguments to the form instance. Example code to add a user attribute to the form on step 2:

def get_form(self, step=None, data=None, files=None):
    form = super(MyWizard, self).get_form(step, data, files)
    if step == '1':
        form.user = self.request.user
    return form

WizardView.process_step(form)

Hook for modifying the wizard's internal state, given a fully validated Form object. The Form is guaranteed to have clean, valid data. This method gives you a way to post-process the form data before the data gets stored within the storage backend. By default it just returns the form.data dictionary. You should not manipulate the data here, but you can use it to do some extra work if needed (e.g. set storage extra data). Note that this method is called every time a page is rendered for all submitted steps. The default implementation:

def process_step(self, form):
    return self.get_form_step_data(form)

WizardView.process_step_files(form)

This method gives you a way to post-process the form files before the files get stored within the storage backend. By default it just returns the form.files dictionary. You should not manipulate the files here, but you can use it to do some extra work if needed (e.g. set storage extra data). Default implementation:

def process_step_files(self, form):
    return self.get_form_step_files(form)

WizardView.render_revalidation_failure(step, form, **kwargs)

When the wizard thinks all steps have passed, it revalidates all forms with the data from the backend storage. If any of the forms don't validate correctly, this method gets called. This method expects two arguments, step and form. The default implementation resets the current step to the first failing form and redirects the user to the invalid form. Default implementation:

def render_revalidation_failure(self, step, form, **kwargs):
    self.storage.current_step = step
    return self.render(form, **kwargs)

WizardView.get_form_step_data(form)

This method fetches the data from the form instance and returns the dictionary. You can use this method to manipulate the values before the data gets stored in the storage backend. Default implementation:

def get_form_step_data(self, form):
    return form.data

WizardView.get_form_step_files(form)

This method returns the form files. You can use this method to manipulate the files before they get stored in the storage backend. Default implementation:

def get_form_step_files(self, form):
    return form.files

WizardView.render(form, **kwargs)

This method gets called after the GET or POST request has been handled. You can hook into this method to, e.g., change the type of the HTTP response. Default implementation:

def render(self, form=None, **kwargs):
    form = form or self.get_form()
    context = self.get_context_data(form=form, **kwargs)
    return self.render_to_response(context)

Providing initial data for the forms

WizardView.initial_dict

Initial data for a wizard's Form objects can be provided using the optional initial_dict keyword argument. This argument should be a dictionary mapping the steps to dictionaries containing the initial data for each step. The dictionary of initial data will be passed along to the constructor of the step's Form:

>>> from myapp.forms import ContactForm1, ContactForm2
>>> from myapp.views import ContactWizard
>>> initial = {
...     '0': {'subject': 'Hello', 'sender': '[email protected]'},
...     '1': {'message': 'Hi there!'}
... }
>>> wiz = ContactWizard.as_view([ContactForm1, ContactForm2], initial_dict=initial)
>>> form1 = wiz.get_form('0')
>>> form2 = wiz.get_form('1')
>>> form1.initial
{'sender': '[email protected]', 'subject': 'Hello'}
>>> form2.initial
{'message': 'Hi there!'}

The initial_dict can also take a list of dictionaries for a specific step if the step is a FormSet.

Handling files

To handle a FileField within any step form of the wizard, you have to add a file_storage to your WizardView subclass. This storage will temporarily store the uploaded files for the wizard. The file_storage attribute should be a Storage subclass.

Warning: remember to take care of removing old files, as the WizardView won't remove any files, whether or not the wizard finishes correctly.

Conditionally view/skip specific steps

WizardView.condition_dict

The as_view() method accepts a condition_dict argument. You can pass a dictionary of boolean values or callables. The key should match the step names (e.g. '0', '1'). If the value for a specific step is callable, it will be called with the WizardView instance as the only argument. If the return value is true, the step's form will be used.

This example provides a contact form including a condition. The condition is used to show a message form only if a checkbox in the first step was checked. The steps are defined in a forms.py file:

from django import forms

class ContactForm1(forms.Form):
    subject = forms.CharField(max_length=100)
    sender = forms.EmailField()
    leave_message = forms.BooleanField(required=False)

class ContactForm2(forms.Form):
    message = forms.CharField(widget=forms.Textarea)

We define our wizard in a views.py:

from django.shortcuts import render_to_response
from django.contrib.formtools.wizard.views import SessionWizardView

def show_message_form_condition(wizard):
    # try to get the cleaned data of step '0' (the first step)
    cleaned_data = wizard.get_cleaned_data_for_step('0') or {}
    # check if the field ``leave_message`` was checked.
    return cleaned_data.get('leave_message', True)

class ContactWizard(SessionWizardView):
    def done(self, form_list, **kwargs):
        return render_to_response('done.html', {
            'form_data': [form.cleaned_data for form in form_list],
        })

We need to add the ContactWizard to our urls.py file:

from django.conf.urls import patterns

from myapp.forms import ContactForm1, ContactForm2
from myapp.views import ContactWizard, show_message_form_condition

contact_forms = [ContactForm1, ContactForm2]

urlpatterns = patterns('',
    (r'^contact/$', ContactWizard.as_view(contact_forms,
        condition_dict={'1': show_message_form_condition}
    )),
)

As you can see, we defined a show_message_form_condition next to our WizardView subclass and added a condition_dict argument to the as_view() method. The key refers to the second wizard step (because of the zero-based step index).

How to work with ModelForm and ModelFormSet

WizardView.instance_dict

WizardView supports ModelForms and ModelFormSets. In addition to initial_dict, the as_view() method takes an instance_dict argument, which should contain model instances for ModelForm steps and querysets for ModelFormSet steps. As with initial_dict, the dictionary keys should equal the step names in the form list; a short sketch is given after the NamedUrlWizardView notes below.

Usage of NamedUrlWizardView

class NamedUrlWizardView

There is a WizardView subclass which adds named-URL support to the wizard. By doing this, you can have a distinct URL for every step. To use the named URLs, you have to change the urls.py.
Below you will see an example of a contact wizard with two steps: step 1 named "contactdata" and step 2 named "leavemessage".

Additionally, you have to pass two more arguments to the as_view() method:

• url_name -- the name of the URL (as provided in the urls.py)
• done_step_name -- the name in the URL for the done step

Example code for the changed urls.py file:

from django.conf.urls import url, patterns

from myapp.forms import ContactForm1, ContactForm2
from myapp.views import ContactWizard

named_contact_forms = (
    ('contactdata', ContactForm1),
    ('leavemessage', ContactForm2),
)

contact_wizard = ContactWizard.as_view(named_contact_forms,
    url_name='contact_step', done_step_name='finished')

urlpatterns = patterns('',
    url(r'^contact/(?P<step>.+)/$', contact_wizard, name='contact_step'),
    url(r'^contact/$', contact_wizard, name='contact'),
)

Advanced NamedUrlWizardView methods

NamedUrlWizardView.get_step_url(step)

This method returns the URL for a specific step. Default implementation:

def get_step_url(self, step):
    return reverse(self.url_name, kwargs={'step': step})
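If your wizard URLs live inside a namespaced URLconf, this hook is the natural place to adjust how step URLs are reversed. A minimal sketch, assuming a 'myapp' URL namespace and the session-backed NamedUrlSessionWizardView subclass (both names are illustrative assumptions, not requirements of the API):

from django.core.urlresolvers import reverse
from django.contrib.formtools.wizard.views import NamedUrlSessionWizardView

class NamespacedContactWizard(NamedUrlSessionWizardView):
    def get_step_url(self, step):
        # reverse the step URL inside the (assumed) 'myapp' namespace
        return reverse('myapp:contact_step', kwargs={'step': step})

And here is the instance_dict sketch promised above, hedged the same way: the UserProfile model, ProfileWizard view and profile forms are invented purely for illustration.

# somewhere in a view; '0' is the first (ModelForm-based) step
profile = UserProfile.objects.get(user=request.user)
wizard_view = ProfileWizard.as_view(
    [ProfileForm1, ProfileForm2],
    instance_dict={'0': profile},  # keys are the step names ('0', '1', ...)
)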
atlas earth scam

Emilis Delgado

Do you get real money from Atlas Earth?

You can acquire "real world" parcels of property on the actual planet if you play the new game experience ATLAS: EARTH, which is designed specifically for mobile devices. The virtual rent that you get from these plots can be cashed out in real life. You have the option of purchasing a piece of the United States that is 30 feet by 30 feet in size.

How long has Atlas Earth been out?

Users will be able to purchase virtual chunks of real estate that, once owned, function as digital assets tied to real locales. Atlas: Earth is scheduled to open on October 18, 2021, and pre-registrations are now being accepted.

How do you make money on the metaverse?

Everyone, from the largest IT companies to the most well-known fashion labels, is placing their bets on the metaverse; however, how can an individual make a profit from it?

• Play games that pay you to do other things, like online shopping or virtual clothing.
• Make virtual reality games and sell them.
• Conduct metaverse events.
• Work as an independent NFT artist.
• Establish yourself as a dealer in prime virtual real estate.

How much does virtual land cost?

According to a report compiled by RepublicRealm, which monitors initiatives related to the metaverse, the average cost of purchasing a plot of land on the four most popular platforms increased by a factor of two, reaching $12,000 during the first half of the previous calendar year. In the same way that position on a map can have a substantial impact on property prices in the real world, the same is true in the metaverse.

Who created Atlas Earth?

The upcoming mobile game Atlas: Earth, which is being developed by Atlas Reality, now features an integrated version of NextNav's vertical positioning service. With it, game developers will be able to pinpoint players in three dimensions, like what floor they're on in a skyscraper.

What is the Next Earth metaverse?

Next Earth is a community in the metaverse that is based on the blockchain and allows people to purchase and sell land in the virtual world while also creating value in the real world. It intends to be one of the most ambitious environmental charity initiatives there is, and it plans to achieve this goal with the assistance of community governance, tokenomics, and DeFi solutions for its entire community.

Can you make money on Upland?

Upland gives its users the opportunity to make money while playing games on the platform. Those days, when you might put your time and money into something but get nothing in return, are long gone. You can earn real money here while still having fun and going on exciting adventures, and all you have to do is play games.

How much does it cost to buy land on Decentraland?

As of the 25th of March, 2021, one MANA was equivalent to $0.85 USD. As of March 2021, the average price of one parcel of LAND in Decentraland was approximately 6,900 MANA, roughly equivalent to $5,800 USD. Land lots can be purchased both on the in-game marketplace and on third-party marketplaces such as OpenSea.

How can I make money online?

• Start your own business as an online freelancer.
• Conduct tests on mobile apps and websites.
• Use the Amazon Mechanical Turk platform to do tasks.
• Take paid surveys to increase your income.
• Become an affiliate and start earning money from your blog.
• Put your products up for sale on Etsy.
• Generate income from advertising on your blog or YouTube channel.
• Become an influential figure on Instagram.
JUNTO Practice: Advent of Code 2019: Day 4 - Secure Container

Discussed on January 09, 2020.

https://adventofcode.com/2019/day/4

Solutions

Oscar Martinez

// Inclusive integer range helper (not a JavaScript builtin).
const range = (start, end) =>
    Array.from({ length: end - start + 1 }, (_, k) => start + k);

const hasDouble = (i) => {
    const iString = i.toString();
    for (let j = 0; j < iString.length - 1; j++) {
        if (iString[j] === iString[j + 1]) {
            return true;
        }
    }
    return false;
};

const neverDecreases = (i) => {
    const iString = i.toString();
    let maxInt = 0;
    for (let j = 0; j < iString.length; j++) {
        const currentInt = parseInt(iString[j], 10);
        if (currentInt < maxInt) {
            return false;
        }
        maxInt = currentInt;
    }
    return true;
};

const inputRange = range(156218, 652527);
const candidates = inputRange.filter(
    (input) => hasDouble(input) && neverDecreases(input)
);

console.log(candidates);
console.log(`${candidates.length} is the final answer.`);

John Lekberg

Part 1

function checkPassword(p) {
    const digits = [...String(p)];
    let same = false;
    for (let i = 0; i < digits.length - 1; i++) {
        const d = digits[i];
        const dd = digits[i + 1];
        if (dd < d) {
            return false;
        }
        if (dd === d) {
            same = true;
        }
    }
    return same;
}

let howMany = 0;
for (let p = 134792; p <= 675810; p++) {
    if (checkPassword(p)) {
        howMany++;
    }
}
console.log(howMany);

Part 2

function checkPassword2(p) {
    const digits = [...String(p)].map(x => +x);
    let same2 = false;
    for (let i = 0; i < digits.length - 1; i++) {
        if (digits[i + 1] < digits[i]) {
            return false;
        }
        if (digits[i] === digits[i + 1]) {
            // Skip pairs that are part of a larger group of matching digits.
            if (i > 0 && digits[i - 1] === digits[i]) {
                continue;
            }
            if (i < 4 && digits[i + 1] === digits[i + 2]) {
                continue;
            }
            same2 = true;
        }
    }
    return same2;
}

let howMany2 = 0;
for (let p = 134792; p <= 675810; p++) {
    if (checkPassword2(p)) {
        howMany2++;
    }
}
console.log(howMany2);
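A quick sanity check for the Part 2 rule, using the three examples given in the puzzle statement:

// Examples from the Day 4, Part 2 puzzle text:
console.log(checkPassword2(112233)); // true  -- every pair stands alone and the digits never decrease
console.log(checkPassword2(123444)); // false -- the only repeated digit is part of the larger group "444"
console.log(checkPassword2(111122)); // true  -- "1111" is too long, but "22" still counts as a pair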
index function

• 指标函数 ("index function")

Usage and examples of "index function":

1. The index functions of the neural decoupler in three system structures are presented.
2. The result of the INDEX function is a reference and is interpreted as such by other formulas.
3. Using the Fourier transform, we expand the transparency index function of a step grating into a Fourier series and derive an expression for the diffraction efficiency of a binary grating with an arbitrary number and height of steps, at any diffraction order.
4. Objective: to apply the hemolytic serum index from the serum index function of the HITACHI 7170 automated analyzer in order to correct for the interference of hemolysis in serum chemistry tests.
5. When the joint trajectory is within the optimum solution domain, a sudden jump of the end-effector will occur due to fluctuation of the index function determined by the exhaustive optimization method.
6. C language source code implementing a database index function; runtime environment: Windows.
Weird Template Usage (working around a lack of virtual static)

Mr Grinch:

I'm working on an engine redesign for Crown and Cutlass. We are working on an event system similar to what is in "Game Coding Complete". The idea is that we will allow listeners to subscribe to be notified of certain event types. When an event gets fired, anything that has subscribed to that event type will get notified.

In "Game Coding Complete" the author describes a system where each event type has a string describing the type. That string is then hashed into an unsigned int for faster comparison. Hashing the string only happens once per type. The event system can also verify that there are no hash collisions. That is nice because you don't have to do string comparisons, but you still have a human-readable event name for debugging. You also do not need a single event type enumeration or anything like that. So anyway, once you have the hash value, you can just look that up in a map to get a list of event listeners.

In "Game Coding Complete" there are 3 classes that make up an event. An EventType class (holds the string and does the hashing), an EventData class (has the event specific data), and a general Event class. The Event class has an EventType and an EventData. For example, if you want to make a "play sound" event, I guess you'd have to write a PlaySoundType that inherits from the EventType to hold the "play sound event" string, and a PlaySoundData class that inherits from the EventData to hold the file name and audio buffers or whatever. Then you'd instantiate an Event that has a pointer to the PlaySoundType and holds a PlaySoundData object.

That seems a little excessive to me. You would have to take extra steps to ensure that you only have a single instance of each EventType created at a time. Doing that with singletons wouldn't be too bad, but it seems like you would have to rewrite the singleton code for every custom event type you wish to create. That's not horrible, but it seems kind of tedious. Alternatively, you could write some kind of event type manager, but that would be more work. I would rather just have a base Event class and inherit my custom event classes directly from that.

My plan was to just add a static name and static hash value to the base Event class and have each sub-class override that. My idea was that each sub-class would have only a single copy of the name string, and the hashing would only have to happen once per sub-class. Unfortunately C++ does not support static virtual members/methods. While I understand the reasons for that, I do think virtual static methods would be a good fit for this problem. Based on a comment from a "Joel On Software" post, we came up with the following solution.

This is the event base class and the weird template class:

#if !defined( _IEVENT_H_ )
#define _IEVENT_H_

#include <string>
#include "Util.h"

class IEvent {
public:
    virtual ~IEvent() {};
    virtual unsigned int vGetType() = 0;
    virtual void vFireEvent() = 0;
};

template<class T>
class EventTmpl: public IEvent {
public:
    static unsigned int sGetType() {
        if (T::sHash == 0) {
            // CHECK THAT sName IS SET!
            T::sHash = HashString(T::sName);
            // CHECK FOR sHash COLLISIONS!
        }
        return T::sHash;
    }
    unsigned int vGetType() { return sGetType(); }
};

#endif

This is an example custom event type:

// Event1.h
#if !defined( _EVENT1_H_ )
#define _EVENT1_H_

#include "IEvent.h"

class Event1: public EventTmpl<Event1> {
public:
    void vFireEvent();

    static unsigned int sHash;
    static std::string sName;
};

#endif

// Event1.cpp
#include <cstdio>
#include "Event1.h"

using namespace std;

string Event1::sName = "EVENT 1";
unsigned int Event1::sHash = 0;

void Event1::vFireEvent() {
    printf("Fire Event1\n");
}

This is another custom event class, just for the example:

// Event2.h
#if !defined( _EVENT2_H_ )
#define _EVENT2_H_

#include "IEvent.h"

// Note: I did this typedef just to try it out
class Event2;
typedef EventTmpl<Event2> Event2Type;

class Event2: public Event2Type {
public:
    void vFireEvent();

    static unsigned int sHash;
    static std::string sName;
};

#endif

// Event2.cpp
#include <cstdio>
#include "Event2.h"

using namespace std;

string Event2::sName = "EVENT TWO";
unsigned int Event2::sHash = 0;

void Event2::vFireEvent() {
    printf("Fire Event2\n");
}

Here is a main function just for a usage example. It demonstrates both how the event system would get the type of an event to send it to all the registered listeners, as well as how a listener would get the type of the custom event class it wants to register for without an actual instance of the class:

#include <cstdio>
#include <string>

#include "Event1.h"
#include "Event2.h"

using namespace std;

int main(int argc, char* argv[]) {
    IEvent* e = new Event1();

    // Event system would use e->vGetType() internally
    printf("%u\n", e->vGetType());
    // Listeners would subscribe to events using T::sGetType()
    printf("%u\n", Event1::sGetType());
    e->vFireEvent();
    delete e;

    e = new Event2();
    printf("%u\n", e->vGetType());
    printf("%u\n", Event2::sGetType());
    e->vFireEvent();
    delete e;

    return 0;
}

First off, I should say that this does work as expected in Borland C++Builder 2006, GCC 3.4, and GCC 4.1. That said, I'm not very familiar with templates. In fact, I am embarrassed to say that this may be the first time I have ever written a template. From an OO point of view, it seems ugly that the parent class should be explicitly aware of the child class like that. However, the parent class only exists for this one reason and you would never use it directly (you'd always use the base Event or child class).

This does get us a few other things, too. It allows us to write the hashing code once and use it for all subclasses. As long as we descend from the template class, we only have to provide the class name and initialize the hash value to 0. We even get compile-time checking that we do that. In a previous revision of this idea the template class actually had the static variables, which seemed functionally the same but gave a linker error instead of a compiler error. Either way the hashing only happens once, since there is only one static var for each custom class. As I noted above, it is nice that a listener can subscribe to an event using the static sGetType, and the event system or even an event handler can figure out the type of an event using the virtual vGetType. And while the template code itself is weird, the implementation of custom event classes is fairly clean.

I do have several questions about this though. First off, is there any way I can automatically initialize the static hash value to 0, so the person creating the custom event doesn't have to do that?
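Thinking out loud: I suspect the linker error in that earlier revision came from leaving out the out-of-line definition of the template's static member. If the template owned sHash itself, with the definition living in the same header, I believe each instantiation would get its own zero-initialized copy, so the derived class would only need to provide sName. A sketch of what I mean (take it as a guess; I have only lightly tested it):

template<class T>
class EventTmpl: public IEvent {
public:
    static unsigned int sGetType() {
        if (sHash == 0) {
            // T still has to provide sName, so we keep the compile-time check
            sHash = HashString(T::sName);
        }
        return sHash;
    }
    unsigned int vGetType() { return sGetType(); }

private:
    static unsigned int sHash; // owned by the template, not by T
};

// Out-of-line definition, still in the header; EventTmpl<Event1>,
// EventTmpl<Event2>, ... each get their own zero-initialized copy.
template<class T>
unsigned int EventTmpl<T>::sHash = 0;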
One other thing that is a little weird is that you have to make the static variables public in the custom event class since the template class needs to use them. I guess I could declare the template class a friend, but that's just one more step you have to remember. Maybe that's not too bad. I could also go back to having the static vars in the template class (as sketched above) and deal with the less clear linker error instead...

What do you think? Is there a better way to solve this problem? Is it worth the template weirdness for the benefits? If you were looking at an engine to use for a game and you saw this going on, would you be able to understand what you needed to do to implement a custom event class? While we are working on this engine for a specific purpose, I would like to make it as easy to understand as we reasonably can. I suspect no matter what we do, we will have to document how to create custom events fairly well. I appreciate any comments you have, thanks.

Nitage:

In C++, static variables and methods are accessed in the following way: type::name. In other words, you access them without requiring an instance of the class in question. If you're planning to access them like that, then you can simply override the statics in each derived class:

class base {
public:
    const static std::string name;
    const static int hash;
};

class derived : public base {
public:
    // the values of these two are distinct from their counterparts in base
    const static std::string name;
    const static int hash;
};

// access them like
int currentHash = derived::hash; // or base::hash

If you wish to access them through instances of the classes involved, then you want to use regular virtual functions with local statics:

class base {
public:
    virtual const std::string& name() = 0;
    virtual const int hash() = 0;
protected:
    int hash(const std::string& in); // calculates the hash value
};

class derived : public base {
public:
    virtual const std::string& name() {
        static std::string Name = "EventName";
        return Name;
    }
    virtual const int hash() {
        static int Hash = hash(name());
        return Hash;
    }
};

Reply:

Quote: Original post by Nitage
    If you wish to access them through instances of the classes involved, then you want to use regular virtual functions with local statics.

This sounds along the lines of what you want. To be more precise, what I think you want is the monostate design pattern, i.e. a class that has member functions (in your case virtual) and only static data. This way you can create as many instances of PlaySoundType as you like; they all refer to the same data (and each instance can be very small, requiring only a vtable pointer).

Mr Grinch:

Thanks for the responses. I think I tend to over-engineer things. That said, either I don't understand how what you are talking about works, or you don't understand what I am going for. For example, let's say I define my base IEvent class like this:

class IEvent {
public:
    virtual ~IEvent() {};
    virtual unsigned int vGetType() = 0;
    virtual void vFireEvent() = 0;

    // Note these two lines, I'll talk about them later
    static unsigned int sHash;
    static std::string sName;
};

Then I define a derived event like this (and if I want to define an Event2, it's identical except for the class name):

class Event1: public IEvent {
public:
    void vFireEvent();
    unsigned int vGetType();
    static unsigned int sGetType();

    static unsigned int sHash;
    static std::string sName;
};

What does that get me? Yes, I do have the virtual vGetType that will allow my event system to figure out the type to do the listener look-up, but the implementation is entirely in the hands of the derived class's writer. For example, there is nothing preventing the derived class writer from just making up a type number and always returning it. Alternatively, the derived class could calculate the hash from the string every time vGetType() is called. Nitage, your example code recalculates the hash every time. Unnecessary hash calculations are one of the things I want to avoid. Only one hash calculation per event type should be necessary.

It is true that I also have the static sGetType that my listeners can use to register for a type of an event without having to have an actual instance of that class on hand. That is, I can still do something like "EventSystem->AddListener(Event1::sGetType(), MyListener)". That's good too. However, that relies on a convention, not something that is enforced by the compiler. There is nothing to make sure the derived event actually has such a method. A non-conventional (i.e. poorly implemented) derived class may force the listener to register for that kind of an event by creating an instance of the class and calling vGetType(). As I expect the programmers using the engine to define many custom events, it would be nice to be able to have something more concrete.

Strictly speaking, I would not say that the derived Event1's versions of sHash and sName override the base IEvent static variables; I would say they hide the base class's static variables. For example, if I have a variable "IEvent *e" and it is actually an instance of the derived Event1, if I access "e->sName" I will actually get IEvent::sName. While I'm not sure why you would access that directly, I really want something that behaves both as virtual and as static.

In the IEvent code, I said to note the two lines where I declare the base class's static members. I'm not sure what they do. Why even have them there? What good do they do? As I already said, I don't want to ever use the data from the base class. Having them there doesn't in any way force you to add the static members to your derived classes. I guess you could say that they remind you to do that, but at the same time you could say they are confusing since there is no indication that you should be overriding those variables...

My first example has all the benefits of this simpler approach, but it also eliminates a lot of the drawbacks. The code to do the hashing is only written once and it works for anything that derives from the template. The hash itself is guaranteed to only happen once per class. The compiler makes sure you have declared the static variables, so you are forced to use them. The drawbacks are that the template code is weird, and that you have to remember to derive from the template-generated class. The template code is only written once and then you don't have to worry about it being weird. It seems like remembering to derive from the template-generated class is a lot easier than everything you have to remember with the simpler approach. Again, maybe I'm missing something, or just trying to make it harder than it needs to be. Am I way off base here, or does the template actually solve some problems?

Reply:

OK, here's a slight variation on Nitage's 2nd example that uses a little templated code to remove the need for defining the hash() function per event.

// Provides an interface for accessing the name and
// hash of the event, as well as the code that does
// the actual calculation of the hash.
class IEvent {
public:
    virtual const std::string& name() = 0;
    virtual const int hash() = 0;
protected:
    static int hash(const std::string& in); // calculates the hash value
};

// An 'in-between' layer to automate the generation
// of the hash() function. The EVENT template parameter
// is the type of the deriving event class. Even though
// it isn't used, it ensures a unique EventBase<> exists
// for each event type, thus a unique static int in the
// hash() function exists for each event type.
template<typename EVENT>
class EventBase : public IEvent {
public:
    virtual const int hash() {
        // this is calculated only ONCE the first
        // time hash() is called per instantiation
        // of EventBase (ie. once per event type)
        static const int hash_ = hash(name());
        return hash_;
    }
};

// Now in our event all we need to do is give
// it a name that will automatically be hashed
// by EventBase<Event1>::hash()
class Event1 : public EventBase<Event1> {
public:
    virtual const std::string& name() {
        static std::string name_ = "EventName";
        return name_;
    }
};

From here you can create as many instances of Event1 as you like and pass it around as an IEvent*; the instance will be very lightweight (containing only the vtable pointer) and will always refer to the correct name and hash for the event. For example you can do the following:

class EventSystem {
public:
    typedef ... ListenerFunc;
    void AddListener(const IEvent& event, ListenerFunc listener);
};

void func(EventSystem& eventSys) {
    //...
    eventSys.AddListener(Event1(), MyListener);
    //...
}
// Copyright 2016 The Tulsi Authors. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

import Cocoa
import TulsiGenerator

final class TulsiProjectDocument: NSDocument, NSWindowDelegate, MessageLogProtocol, OptionsEditorModelProtocol, TulsiGeneratorConfigDocumentDelegate {

  enum DocumentError: Error {
    /// No config exists with the given name.
    case noSuchConfig
    /// The config failed to load with the given debug info.
    case configLoadFailed(String)
    /// The workspace used by the project is invalid due to the given debug info.
    case invalidWorkspace(String)
  }

  /// Prefix used to access the persisted output folder for a given BUILD file path.
  static let ProjectOutputPathKeyPrefix = "projectOutput_"

  /// The subdirectory within a project bundle into which shareable generator configs will be
  /// stored.
  static let ProjectConfigsSubpath = "Configs"

  /// Override for headless project generation (in which messages should not spawn alert dialogs).
  static var showAlertsOnErrors = true

  /// Override to prevent rule entries from being extracted immediately during loading of project
  /// documents. This is only useful if the Bazel binary is expected to be set after the project
  /// document is loaded but before any other actions.
  static var suppressRuleEntryUpdateOnLoad = false

  /// The project model.
  var project: TulsiProject! = nil

  /// Whether or not the document is currently performing a long running operation.
  @objc dynamic var processing: Bool = false

  // The number of tasks that need to complete before processing is finished. May only be mutated on
  // the main queue.
  private var processingTaskCount: Int = 0 {
    didSet {
      assert(processingTaskCount >= 0, "Processing task count may never be negative")
      processing = processingTaskCount > 0
    }
  }

  /// The display names of generator configs associated with this project.
  @objc dynamic var generatorConfigNames = [String]()

  /// Whether or not there are any opened generator config documents associated with this project.
  var hasChildConfigDocuments: Bool {
    return childConfigDocuments.count > 0
  }

  /// Documents controlling the generator configs associated with this project.
  private var childConfigDocuments = NSHashTable<AnyObject>.weakObjects()

  /// One rule per target in the BUILD files associated with this project.
  var ruleInfos: [RuleInfo] {
    return _ruleInfos
  }
  private var _ruleInfos = [RuleInfo]() {
    didSet {
      // Update the associated config documents.
      let childDocuments = childConfigDocuments.allObjects as! [TulsiGeneratorConfigDocument]
      for configDoc in childDocuments {
        configDoc.projectRuleInfos = ruleInfos
      }
    }
  }

  /// The set of Bazel packages associated with this project.
  @objc dynamic var bazelPackages: [String]? {
    set {
      project!.bazelPackages = newValue ?? [String]()
      updateChangeCount(.changeDone)
      updateRuleEntries()
    }
    get {
      return project?.bazelPackages
    }
  }

  /// Location of the bazel binary.
  @objc dynamic var bazelURL: URL?
{ set { project.bazelURL = newValue if newValue != nil && infoExtractor != nil { infoExtractor.bazelURL = newValue! } updateChangeCount(.changeDone) updateRuleEntries() } get { return project?.bazelURL } } /// Binding point for the directory containing the project's WORKSPACE file. @objc dynamic var workspaceRootURL: URL? { return project?.workspaceRootURL } /// URL to the folder into which generator configs are saved. var generatorConfigFolderURL: URL? { return fileURL?.appendingPathComponent(TulsiProjectDocument.ProjectConfigsSubpath) } /// Whether or not the document has finished initializing the info extractor. @objc dynamic var infoExtractorInitialized: Bool = false var infoExtractor: TulsiProjectInfoExtractor! = nil { didSet { infoExtractorInitialized = (infoExtractor != nil) } } private var logEventObserver: NSObjectProtocol! = nil /// Array of user-facing messages, generally output by the Tulsi generator. @objc dynamic var messages = [UIMessage]() var errors = [LogMessage]() lazy var bundleExtension: String = { TulsiProjectDocument.getTulsiBundleExtension() }() static func getTulsiBundleExtension() -> String { let bundle = Bundle(for: self) let documentTypes = bundle.infoDictionary!["CFBundleDocumentTypes"] as! [[String: AnyObject]] let extensions = documentTypes.first!["CFBundleTypeExtensions"] as! [String] return extensions.first! } override init() { super.init() logEventObserver = NotificationCenter.default.addObserver(forName: NSNotification.Name(rawValue: TulsiMessageNotification), object: nil, queue: OperationQueue.main) { [weak self] (notification: Notification) in guard let item = LogMessage(notification: notification) else { if let showModal = notification.userInfo?["displayErrors"] as? Bool, showModal { self?.displayErrorModal() } return } self?.handleLogMessage(item) } } deinit { NotificationCenter.default.removeObserver(logEventObserver) } func clearMessages() { messages.removeAll(keepingCapacity: true) } func addBUILDFileURL(_ buildFile: URL) -> Bool { guard let package = packageForBUILDFile(buildFile) else { return false } bazelPackages!.append(package) return true } func containsBUILDFileURL(_ buildFile: URL) -> Bool { guard let package = packageForBUILDFile(buildFile), let concreteBazelPackages = bazelPackages else { return false } return concreteBazelPackages.contains(package) } func createNewProject(_ projectName: String, workspaceFileURL: URL) { willChangeValue(forKey: "bazelURL") willChangeValue(forKey: "bazelPackages") willChangeValue(forKey: "workspaceRootURL") // Default the bundleURL to a sibling of the selected workspace file. let bundleName = "\(projectName).\(bundleExtension)" let workspaceRootURL = workspaceFileURL.deletingLastPathComponent() let tempProjectBundleURL = workspaceRootURL.appendingPathComponent(bundleName) project = TulsiProject(projectName: projectName, projectBundleURL: tempProjectBundleURL, workspaceRootURL: workspaceRootURL) updateChangeCount(.changeDone) LogMessage.postSyslog("Create project: \(projectName)") didChangeValue(forKey: "bazelURL") didChangeValue(forKey: "bazelPackages") didChangeValue(forKey: "workspaceRootURL") } override func writeSafely(to url: URL, ofType typeName: String, for saveOperation: NSDocument.SaveOperationType) throws { // Ensure that the project's URL is set to the location in which this document is being saved so // that relative paths can be set properly. 
project.projectBundleURL = url try super.writeSafely(to: url, ofType: typeName, for: saveOperation) } override class var autosavesInPlace: Bool { return true } override func prepareSavePanel(_ panel: NSSavePanel) -> Bool { panel.message = NSLocalizedString("Document_SelectTulsiProjectOutputFolderMessage", comment: "Message to show at the top of the Tulsi project save as panel, explaining what to do.") panel.canCreateDirectories = true panel.allowedFileTypes = ["com.google.tulsi.project"] panel.nameFieldStringValue = project.projectBundleURL.lastPathComponent return true } override func fileWrapper(ofType typeName: String) throws -> FileWrapper { let contents = [String: FileWrapper]() let bundleFileWrapper = FileWrapper(directoryWithFileWrappers: contents) bundleFileWrapper.addRegularFile(withContents: try project.save() as Data, preferredFilename: TulsiProject.ProjectFilename) if let perUserData = try project.savePerUserSettings() { bundleFileWrapper.addRegularFile(withContents: perUserData as Data, preferredFilename: TulsiProject.perUserFilename) } let configsFolder: FileWrapper let reachableError: NSErrorPointer = nil if let existingConfigFolderURL = generatorConfigFolderURL, (existingConfigFolderURL as NSURL).checkResourceIsReachableAndReturnError(reachableError) { // Preserve any existing config documents. configsFolder = try FileWrapper(url: existingConfigFolderURL, options: FileWrapper.ReadingOptions()) } else { // Add a placeholder Configs directory. configsFolder = FileWrapper(directoryWithFileWrappers: [:]) } configsFolder.preferredFilename = TulsiProjectDocument.ProjectConfigsSubpath bundleFileWrapper.addFileWrapper(configsFolder) return bundleFileWrapper } override func read(from fileWrapper: FileWrapper, ofType typeName: String) throws { guard let concreteFileURL = fileURL, let projectFileWrapper = fileWrapper.fileWrappers?[TulsiProject.ProjectFilename], let fileContents = projectFileWrapper.regularFileContents else { return } let additionalOptionData: Data? if let perUserDataFileWrapper = fileWrapper.fileWrappers?[TulsiProject.perUserFilename] { additionalOptionData = perUserDataFileWrapper.regularFileContents } else { additionalOptionData = nil } project = try TulsiProject(data: fileContents, projectBundleURL: concreteFileURL, additionalOptionData: additionalOptionData) if let configsDir = fileWrapper.fileWrappers?[TulsiProjectDocument.ProjectConfigsSubpath], let configFileWrappers = configsDir.fileWrappers, configsDir.isDirectory { var configNames = [String]() for (_, fileWrapper) in configFileWrappers { if let filename = fileWrapper.filename, fileWrapper.isRegularFile && TulsiGeneratorConfigDocument.isGeneratorConfigFilename(filename) { let name = (filename as NSString).deletingPathExtension configNames.append(name) } } generatorConfigNames = configNames.sorted() } // Verify that the workspace is a valid one. 
let workspaceFile = project.workspaceRootURL.appendingPathComponent("WORKSPACE", isDirectory: false) var isDirectory = ObjCBool(false) if !FileManager.default.fileExists(atPath: workspaceFile.path, isDirectory: &isDirectory) || isDirectory.boolValue { let fmt = NSLocalizedString("Error_NoWORKSPACEFile", comment: "Error when project does not have a valid Bazel WORKSPACE file at %1$@.") LogMessage.postError(String(format: fmt, workspaceFile.path)) LogMessage.displayPendingErrors() throw DocumentError.invalidWorkspace("Missing WORKSPACE file at \(workspaceFile.path)") } if !TulsiProjectDocument.suppressRuleEntryUpdateOnLoad { updateRuleEntries() } } override func makeWindowControllers() { let storyboard = NSStoryboard(name: "Main", bundle: nil) let windowController = storyboard.instantiateController(withIdentifier: "TulsiProjectDocumentWindow") as! NSWindowController windowController.contentViewController?.representedObject = self addWindowController(windowController) } override func willPresentError(_ error: Error) -> Error { // Track errors shown to the user for bug reporting purposes. LogMessage.postInfo("Presented error: \(error)", context: projectName) return super.willPresentError(error) } /// Tracks the given document as a child of this project. func trackChildConfigDocument(_ document: TulsiGeneratorConfigDocument) { childConfigDocuments.add(document) // Ensure that the child document is aware of the project-level processing tasks. document.addProcessingTaskCount(processingTaskCount) } /// Closes any generator config documents associated with this project. func closeChildConfigDocuments() { let childDocuments = childConfigDocuments.allObjects as! [TulsiGeneratorConfigDocument] for configDoc in childDocuments { configDoc.close() } childConfigDocuments.removeAllObjects() } func deleteConfigsNamed(_ configNamesToRemove: [String]) { let fileManager = FileManager.default var nameToDoc = [String: TulsiGeneratorConfigDocument]() for doc in childConfigDocuments.allObjects as! [TulsiGeneratorConfigDocument] { guard let name = doc.configName else { continue } nameToDoc[name] = doc } var configNames = Set<String>(generatorConfigNames) for name in configNamesToRemove { configNames.remove(name) if let doc = nameToDoc[name] { childConfigDocuments.remove(doc) doc.close() } if let url = urlForConfigNamed(name, sanitized: false) { let errorInfo: String? do { try fileManager.removeItem(at: url) errorInfo = nil } catch let e as NSError { errorInfo = "Unexpected exception \(e.localizedDescription)" } catch { errorInfo = "Unexpected exception" } if let errorInfo = errorInfo { let fmt = NSLocalizedString("Error_ConfigDeleteFailed", comment: "Error when a TulsiGeneratorConfig named %1$@ could not be deleted.") LogMessage.postError(String(format: fmt, name), details: errorInfo) LogMessage.displayPendingErrors() } } } generatorConfigNames = configNames.sorted() } func urlForConfigNamed(_ name: String, sanitized: Bool = true) -> URL? { return TulsiGeneratorConfigDocument.urlForConfigNamed(name, inFolderURL: generatorConfigFolderURL, sanitized: sanitized) } /// Asynchronously loads a previously created config with the given name, invoking the given /// completionHandler on the main thread when the document is fully loaded. func loadConfigDocumentNamed(_ name: String, completionHandler: @escaping ((TulsiGeneratorConfigDocument?) 
-> Void)) throws -> TulsiGeneratorConfigDocument { let doc = try loadSparseConfigDocumentNamed(name) doc.finishLoadingDocument(completionHandler) return doc } /// Sparsely loads a previously created config with the given name. The returned document may have /// unresolved label references. func loadSparseConfigDocumentNamed(_ name: String) throws -> TulsiGeneratorConfigDocument { guard let configURL = urlForConfigNamed(name, sanitized: false) else { throw DocumentError.noSuchConfig } let documentController = NSDocumentController.shared if let configDocument = documentController.document(for: configURL) as? TulsiGeneratorConfigDocument { return configDocument } do { let configDocument = try TulsiGeneratorConfigDocument.makeSparseDocumentWithContentsOfURL(configURL, infoExtractor: infoExtractor, messageLog: self, bazelURL: bazelURL) configDocument.projectRuleInfos = ruleInfos configDocument.delegate = self trackChildConfigDocument(configDocument) return configDocument } catch let e as NSError { throw DocumentError.configLoadFailed("Failed to load config from '\(configURL.path)' with error \(e.localizedDescription)") } catch { throw DocumentError.configLoadFailed("Unexpected exception loading config from '\(configURL.path)'") } } /// Displays a generic critical error message to the user with the given debug message. /// This should be used sparingly and only for messages that would indicate bugs in Tulsi. func generalError(_ debugMessage: String) { let msg = NSLocalizedString("Error_GeneralCriticalFailure", comment: "A general, critical failure without a more fitting descriptive message.") LogMessage.postError(msg, details: debugMessage) } // MARK: - NSUserInterfaceValidations override func validateUserInterfaceItem(_ item: NSValidatedUserInterfaceItem) -> Bool { let itemAction = item.action switch itemAction { case .some(#selector(TulsiProjectDocument.save(_:))): return true case .some(#selector(TulsiProjectDocument.saveAs(_:))): return true case .some(#selector(TulsiProjectDocument.rename(_:))): return true case .some(#selector(TulsiProjectDocument.move(_:))): return true // Unsupported actions. case .some(#selector(TulsiProjectDocument.duplicate(_:))): return false default: Swift.print("Unhandled menu action: \(String(describing: itemAction))") } return false } // MARK: - TulsiGeneratorConfigDocumentDelegate func didNameTulsiGeneratorConfigDocument(_ document: TulsiGeneratorConfigDocument, configName: String) { if !generatorConfigNames.contains(configName) { let configNames = (generatorConfigNames + [configName]).sorted() generatorConfigNames = configNames } } func parentOptionSetForConfigDocument(_: TulsiGeneratorConfigDocument) -> TulsiOptionSet? { return optionSet } // MARK: - OptionsEditorModelProtocol var projectName: String? { guard let concreteProject = project else { return nil } return concreteProject.projectName } var optionSet: TulsiOptionSet? { guard let concreteProject = project else { return nil } return concreteProject.options } var projectValueColumnTitle: String { return NSLocalizedString("OptionsEditor_ColumnTitle_Project", comment: "Title for the options editor column used to edit per-tulsiproj values.") } var defaultValueColumnTitle: String { return NSLocalizedString("OptionsEditor_ColumnTitle_Default", comment: "Title for the options editor column used to display the built-in default values.") } var optionsTargetUIRuleEntries: [UIRuleInfo]? 
{ return nil } // MARK: - Private methods // Idempotent function to gather all error messages that have been logged and create a single // error modal to present to the user. private func displayErrorModal() { guard TulsiProjectDocument.showAlertsOnErrors else { return } var errorMessages = [String]() var details = [String]() for error in errors { errorMessages.append(error.message) if let detail = error.details { details.append(detail) } } errors.removeAll() if !errorMessages.isEmpty { ErrorAlertView.displayModalError(errorMessages.joined(separator: "\n"), details: details.joined(separator: "\n")) } } private func handleLogMessage(_ item: LogMessage) { let fullMessage: String if let details = item.details { fullMessage = "\(item.message) [Details]: \(details)" } else { fullMessage = item.message } switch item.level { case .Error: messages.append(UIMessage(text: fullMessage, type: .error)) errors.append(item) case .Warning: messages.append(UIMessage(text: fullMessage, type: .warning)) case .Info: messages.append(UIMessage(text: fullMessage, type: .info)) case .Syslog: break case .Debug: messages.append(UIMessage(text: fullMessage, type: .debug)) } } private func processingTaskStarted() { Thread.doOnMainQueue() { self.processingTaskCount += 1 let childDocuments = self.childConfigDocuments.allObjects as! [TulsiGeneratorConfigDocument] for configDoc in childDocuments { configDoc.processingTaskStarted() } } } private func processingTaskFinished() { Thread.doOnMainQueue() { self.processingTaskCount -= 1 let childDocuments = self.childConfigDocuments.allObjects as! [TulsiGeneratorConfigDocument] for configDoc in childDocuments { configDoc.processingTaskFinished() } } } private func packageForBUILDFile(_ buildFile: URL) -> String? { let packageURL = buildFile.deletingLastPathComponent() // If the relative path is a child of the workspace root return it. if let relativePath = project.workspaceRelativePathForURL(packageURL), !relativePath.hasPrefix("/") && !relativePath.hasPrefix("..") { return relativePath } return nil } // Fetches target rule entries from the project's BUILD documents. private func updateRuleEntries() { guard let concreteBazelURL = bazelURL else { return } processingTaskStarted() Thread.doOnQOSUserInitiatedThread() { self.infoExtractor = TulsiProjectInfoExtractor(bazelURL: concreteBazelURL, project: self.project) let updatedRuleEntries = self.infoExtractor.extractTargetRules() Thread.doOnMainQueue() { self._ruleInfos = updatedRuleEntries self.processingTaskFinished() } } } } /// Convenience class for displaying an error message with an optional detail accessory view. class ErrorAlertView: NSAlert { @objc dynamic var text = "" static func displayModalError(_ message: String, details: String? = nil) { let alert = ErrorAlertView() alert.messageText = "\(message)\n\nA fatal error occurred. Please check the message window " + "and file a bug if appropriate." alert.alertStyle = .critical if let details = details, !details.isEmpty { alert.text = details var views: NSArray? Bundle.main.loadNibNamed("ErrorAlertDetailView", owner: alert, topLevelObjects: &views) // Note: topLevelObjects will contain the accessory view and an NSApplication object in a // non-deterministic order. if let views = views { let viewsFound = views.filter() { $0 is NSView } as NSArray if let accessoryView = viewsFound.firstObject as? NSScrollView { alert.accessoryView = accessoryView // Fix text color for dark mode. 
// // Text color is set to nil for the text view (which defaults to black) and Interface // Builder can't seem to correctly assign the color, so the text color needs to be // assigned at runtime. for view in accessoryView.subviews { for subview in view.subviews { if let textView = subview as? NSTextView { textView.textColor = NSColor.textColor } } } } else { assertionFailure("Failed to load accessory view for error alert.") } } } alert.runModal() } }
Spotify Free vs. Premium: Should You Pay?

If you're an avid music lover or just looking for the best way to stream your favorite tunes, you've undoubtedly heard of Spotify. With over 345 million users worldwide and an extensive library of more than 70 million songs, Spotify has become a go-to platform for music streaming. The service offers both a free tier and a Premium subscription, leaving many to wonder: is it worth upgrading to Premium? In this article, we'll explore the differences between Spotify Free and Premium and help you decide which option is best for you.

Spotify Free: The Basics

Spotify Free is the no-cost version of the music streaming service. It grants you access to a vast music library, playlists curated by experts, and the ability to create and share your own playlists. However, there are some limitations:

1. Ads: Free users will have to endure advertisements between songs, which can be a minor annoyance.
2. Limited skips: You can only skip six songs per hour, which can be frustrating if you're trying to find the perfect track.
3. No offline listening: Free users cannot download music for offline playback, making it difficult to listen to your favorite songs without an internet connection.
4. Lower sound quality: Spotify Free streams at a lower bitrate, meaning the sound quality is not as high as it could be.
5. Shuffle play: Free users can only listen to playlists and albums in shuffle mode, with no option to choose the order of the tracks.

Spotify Premium: The Perks

Upgrading to Spotify Premium comes with several benefits, including:

1. Ad-free listening: Premium users enjoy uninterrupted music without any advertisements.
2. Unlimited skips: Skip as many songs as you like without any restrictions.
3. Offline listening: Download your favorite songs, albums, and playlists for offline playback - perfect for commutes or when you don't have an internet connection.
4. Higher sound quality: Premium subscribers can listen to music at up to 320 kbps, offering a noticeable improvement in audio quality.
5. On-demand playback: Listen to any song, album, or playlist in any order, without being limited to shuffle play.
6. Cross-device listening: Seamlessly switch between devices and continue listening to your music.

The Decision: Should You Pay for Spotify Premium?

Whether or not you should upgrade to Spotify Premium depends on your preferences and listening habits. Here are a few considerations to help you make the decision:

1. Tolerance for ads: If you find ads between songs annoying or disruptive, upgrading to Premium will provide a more enjoyable, ad-free experience.
2. Listening habits: If you frequently find yourself wanting to skip songs, a Premium subscription may be worth the investment for the unlimited skips.
3. Offline listening: For those who often find themselves without internet access or with limited data, downloading music for offline playback could be a game-changer.
4. Sound quality: Audiophiles and those who appreciate high-quality sound will notice the difference with Spotify Premium's higher bitrate.
5. Playback control: If you like to have control over the order of your music, Premium's on-demand playback will be a significant upgrade from shuffle-only mode.

Conclusion

Ultimately, the decision to upgrade to Spotify Premium comes down to your personal listening preferences and priorities. If the limitations of the free version don't bother you, then it might not be worth upgrading.
However, if you find yourself frustrated by ads, limited skips, or a lack of playback control, it may be worth considering the Premium option. With a monthly subscription fee of $9.99, it's a relatively small investment for an enhanced music streaming experience.
8-28. Ben is designing a logo for the math club t-shirts. He has sketched a possible design on tracing paper, shown below. Explain how he can locate the center of the inscribed circle. What is that point called in relation to the triangle?

Hint: The circle is inscribed in the triangle. Which triangle center is this?
Please confirm your email address in the email we just sent you. How to Do Basic Data Analysis in … Increasing amounts of data are being generated by applications you use (Also known as the "Internet of Things"). Is Apple's Official Magic Keyboard Really Worth $99? Twitter Suspended High-Profile Accounts Connected With Farmers' Protests in India, 4 Ways to Safely Run Suspicious Programs and Applications in Windows, The 10 Best Signal Features You Should Be Using, Apple Wants Developers to Return Their DTK Mac Mini for $200 Credit, 8 Ways to Make Gaming Online Safer For Your Kids, Google Stops Making Games for Stadia, Relying on Third-Party Developers Instead, You Can Now Edit Text and Images Using Acrobat Web, 3 Apps That Will Stop Your Android Phone Overheating, How to Forget a Wi-Fi Network on Windows 10. Gartner predicts that by 2021, 80% of emerging technologies will be developed with AI foundations. Qualitative data analysis is a search for general statements about relationships among Data analysis is used to evaluate data with statistical tools to discover useful information. Modern data dashboards consolidate data from various sources, providing access to a wealth of insights in one centralized location, no matter if you need to monitor recruitment metrics or generate reports that need to be sent across numerous departments. Here we explore the reasons why. Data Analysis is the process of systematically applying statistical and/or logical techniques to describe and illustrate, condense and recap, and evaluate data. Now that we’ve answered the question, ‘what is data analysis?’, considered the different types of analysis methods, it’s time to dig deeper into how to do data analysis by working through these 15 essential elements. One of the techniques of data analysis of the modern age, intelligent alarms provide automated signals based on particular commands or occurrences within a dataset. Data is not just limited to numbers, information can come from text information as well. You do this by processing raw text, making it readable by data analysis tools, and finding results and patterns. Diagnostic Analysis:Diagnostic data analysis aims to d… Excel has many formulas to work with text that can save you time when you go to work with the data. This refers to the method of categorizing verbal or activity data to classify, … Data analytics refers to qualitative and quantitative techniques and processes used to enhance productivity and business gain. Email apps like Outlook or Gmail use this to categorize your emails as "spam" or "not spam". Data analysis for quantitative studies, on the other hand, involves critical analysis and interpretation of figures and numbers, and attempts to find rationale behind the emergence of main findings. You can even find frequencies of words in a document. Qualitative data analysis is an iterative and reflexive process that begins as data are being collected rather than after data collection has ceased (Stake 1995). Our first three methods for upping your analysis game will focus on quantitative data: 1. Data analysis is used by small businesses, retail companies, in medicine, and even in the world of sports. In this post, we outline an 11-step process you can use to set up your company for success - take a look at our list of data analysis questions to make sure you won’t fall into the trap of futile data processing. 
Having bestowed your data analysis techniques and methods with true purpose and defined your mission, you should explore the raw data you've collected from all sources and use your KPIs as a reference for chopping out any information you deem to be useless. Quantitative analysis methods rely on the ability to accurately count and interpret data based on hard facts. Data analysis is a practice in which raw data is ordered and organized so that useful information can be extracted from it. This is one of the primary methods of analyzing data you certainly shouldn't overlook.

Qualitative analysis

"Data analysis is the process of bringing order, structure and meaning to the mass of collected data. It is a messy, ambiguous, time-consuming, creative, and fascinating process." Data analysis is used to evaluate data with statistical tools to discover useful information, and data visualization may also be used to examine the data in graphical format, to obtain additional insight regarding the messages within the data.

Measuring quantitative data

Business intelligence transforms data into intelligence used to make business decisions, and it is used to do a lot of things. Data visualization is the visual representation of data. Data profiling refers to the analysis of information for use in a data warehouse in order to clarify the structure, content, relationships, and derivation rules of the data. The amount of data (referred to as "big data") is pretty massive, and data analysis and interpretation have now taken center stage with the advent of the digital age… the sheer amount of data can be frightening. Reportedly only about half a percent of it is ever analyzed; while that may not seem like much, considering the amount of digital information we have at our fingertips, half a percent still accounts for a huge amount of data.

Data analysis is a big subject and can include some of the following steps, so let's dig a little deeper into some concepts used in data analysis. One dictionary definition reads: data analysis is the reduction and organization of a body of data to produce results that can be interpreted by the researcher; a variety of quantitative and qualitative methods may be used, depending upon the nature of the data to be analyzed and the design of the study. Data is extracted and categorized to identify and analyze behavioral data and patterns, and techniques vary according to organizational requirements. Data analysis is the process of evaluating data using analytical or statistical tools to discover useful information. (By Sandra Durcevic in Data Analysis, Apr 29th 2020.)

There is a large grey area: data analysis is a part of statistical analysis, and statistical analysis is part of data analysis. After giving your data analytics methodology real direction and knowing which questions need answering to extract optimum value from the information available to your organization, you should decide on your most valuable data sources and start collecting your insights, the most fundamental of all data analysis techniques. KPIs are critical to both analysis methods in qualitative and quantitative research. Some of these tools are programming languages like R or Python. If you know why something happened as well as how it happened, you will be able to pinpoint the exact ways of tackling the issue or challenge.
Data analysis in research is an illustrative method of applying the right statistical or logical technique so that the raw research data makes sense. Also, if you can use the predictive aspect of diagnostic analytics to your advantage, you will be able to prevent potential problems or inefficiencies from spiraling out of control, nipping them in the bud. Applied microeconomics uses cross-sectional datasets to analyze labor markets (the labor market being the place where the supply of and the demand for jobs meet, with the workers or labor providing the services that employers demand). Programs like Tableau or Microsoft Power BI give you many visuals that can bring data to life. Big data is invaluable to today's businesses, and by using different methods for data analysis, it's possible to view your data in a way that can help you turn insight into positive action. Sampling is the method for selecting people, events or objects for study in research.

There are BI reporting tools that have predictive analytics options already implemented within them, made user-friendly so that you don't need to calculate anything manually or perform robust and advanced analysis yourself. Yes, good data analytics techniques result in enhanced business intelligence (BI). There are multiple facets and approaches with diverse techniques for the data analysis. Designed to provide direct and actionable answers to specific questions, this is one of the world's most important methods in research, alongside other key organizational functions such as retail analytics. Additionally, you will be able to create a comprehensive analytical report that will skyrocket your analysis processes. The goal is to turn data into business decisions, and a variety of methods are used, including data mining, text analytics, business intelligence, combining data sets, and data visualization.

To help you understand the potential of analysis, the meaning, and how you can use it to enhance your business practices, we will answer a host of important analytical questions; to summarize, here are the top 15 steps for data analysis techniques and methods. "One metric alone doesn't tell you what's happening with your site; as ever, analytics is about taking your data and outside influences and building insights from all of it." - Fiona Roddis. To explain the key differences between qualitative and quantitative data, here's a video for your viewing pleasure. Gaining a better understanding of different techniques for data analysis, and methods in quantitative research as well as qualitative insights, will give your information analyzing efforts a more clearly defined direction, so it's worth taking the time to allow this particular knowledge to sink in. Data analysis is the process of inspecting, cleaning, transforming, and modeling data with the objective of discovering useful information, arriving at conclusions, and supporting decision-making.

Benefits and challenges of data analysis

Data analysis is a proven way for organizations and enterprises to gain the information they need to make better decisions, serve their customers, and increase productivity and revenue. Also known as "T Testing," one analysis method lets you compare the …
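The sentence above is cut off in the source, but the gist of a t-test is comparing the means of two groups; a minimal sketch in Python (SciPy assumed, and the two samples are invented for illustration):

from scipy import stats

group_a = [12.1, 11.8, 12.4, 12.0, 11.9]
group_b = [12.6, 12.9, 12.5, 12.8, 12.7]
# A two-sample t-test: a small p-value suggests the group means really differ.
result = stats.ttest_ind(group_a, group_b)
print(result.statistic, result.pvalue)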
To inspire your efforts and put the importance of big data into context, here are some insights that you should know: facts that will help shape your big data analysis techniques. Data analysis summarizes collected data. Data analysis is a process of inspecting, cleansing, transforming, and modeling data with the goal of discovering useful information, informing conclusions, and supporting decision-making. When it comes to lessons on how to do analysis, drilling down into diagnostic analysis is essential. By drilling down into prescriptive analysis, you will play an active role in the data consumption process by taking well-arranged sets of visual data and using it as a powerful fix to emerging issues in a number of key business areas, including marketing, sales, customer experience, HR, fulfillment, finance, logistics analytics, and others. If you work with the right tools and dashboards, you will be able to present your metrics in a digestible, value-driven format, allowing almost everyone in the organization to connect with and use relevant data to their advantage. By investing in data analyst tools and techniques that will help you extract insight from various word-based data sources, including product reviews, articles, social media communications, and survey responses, you will gain invaluable insights into your audience, as well as their needs, preferences, and pain points.

What is data analysis? To gain a practical understanding, it's vital that you gain a foundational knowledge of the following two areas. If you understand why a trend, pattern, or event happened through data, you will be able to develop an informed projection of how things may unfold in particular areas of the business. Autonomous technologies, such as artificial intelligence (AI) and machine learning (ML), play a significant role in the advancement of understanding how to analyze data more effectively. The process involves looking for patterns—similarities, disparities, trends, and other relationships—and thinking about what these patterns might mean. No matter what your career field, being good at analysis means being able to examine a large volume of data and identify trends in that data. There are differences between qualitative data analysis and quantitative data analysis. Any competent data analyst will have a good grasp of statistical tools, and some statisticians will have some experience with programming languages like R. Businesses can learn customer purchasing habits, or use clustering to find previously unknown groups within the data. Online data visualization is a powerful tool as it lets you tell a story with your metrics, allowing users across the business to extract meaningful insights that aid business evolution, and it covers all the different ways to analyze data. Quantitative data analysis may include the calculation of frequencies of variables and differences between variables. This is one of the most important data analytics techniques as it will shape the very foundations of your success.
According to Shamoo and Resnik (2003), various analytic procedures "provide a way of drawing inductive inferences from data and distinguishing the signal (the phenomenon of interest) from the noise (statistical fluctuations) …" The process involves looking for patterns—similarities, disparities, trends, and other relationships—and thinking about what these patterns might mean. One dictionary definition of data analysis: the process of examining information, especially using a computer, in order to find something out. Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names, and is used in different business, science, and social science domains. Data analysis is a process that relies on methods and techniques to take raw data, mine for insights that are relevant to the business's primary goals, and drill down into this information to transform metrics, facts, and figures into initiatives for improvement. Data mining is a process used by companies to turn raw data into useful information by using software to look for patterns in large batches of data. Read about the latest Quirkos news and developments, as well as articles on qualitative research, analysis and CAQDAS.

A neural network is a branch of machine learning: a form of data-driven analytics that attempts, with minimal intervention, to understand how the human brain would process insights and predict values. By doing so, you will be able to formulate initiatives or launch campaigns ahead of the curve, beating your competitors to the punch. What can you do with this text information? Working with it is also known as text mining. For example, if you're monitoring supply chain KPIs, you could set an intelligent alarm to trigger when invalid or low-quality data appears. Some experts describe it as "taking a peek" at the data to understand more about what it represents and how to apply it.

Regression analysis. Procurement reporting is one of the most effective ways to improve the productivity and performance of your business. Data analysis is going to involve identifying common patterns within the responses and critically analyzing them in order to achieve research aims and objectives. Data analytics is the science of analyzing raw data in order to make conclusions about that information. By doing so, you will make your analytical efforts more accessible, digestible, and universal, empowering more people within your organization to use your discoveries to their actionable advantage. Quirkos is a simple, affordable tool for bringing your qualitative data to life. "It does not proceed in a linear fashion; it is not neat." There are several data analysis methods, including data mining, text analytics, and business intelligence. Data analysis is a primary component of data mining and business intelligence (BI) and is key to gaining the insight that drives business decisions. The world is becoming more and more data-driven, with endless amounts of data available to work with. Correlation analysis is a statistical method that is used to discover whether there is a relationship between two variables or datasets, and how strong that relationship may be.
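A minimal sketch of that correlation idea in Python (pandas assumed; the column names and numbers are invented for illustration):

import pandas as pd

df = pd.DataFrame({
    "ad_spend": [10, 20, 30, 40],
    "revenue":  [110, 190, 310, 420],
})
# Pearson correlation coefficient between the two columns: close to 1.0 means
# a strong positive linear relationship, close to -1.0 strong negative, ~0 none.
print(df["ad_spend"].corr(df["revenue"]))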
The benefits of data analysis are almost too numerous to count, and some of the most rewarding benefits include getting the right information for your business, getting more value out of IT departments, and creating more effective marketing campaigns… Invest ample time in developing a roadmap that will help you store, manage, and handle your data internally, and you will make your analysis techniques all the more fluid and functional: one of the most powerful types of data analysis methods available today. Not only will we explore data analysis methods and techniques, but we'll also look at different types of data analysis while demonstrating how to perform analysis in the real world with a 15-step blueprint for success. These may need to be of a specific size (sometimes determined by a power calculation) or composition. Cross-sectional datasets are used extensively in economics and other social sciences. Neural networks learn from each and every data transaction, meaning that they evolve and advance over time.

Data analysis is the process of interpreting the meaning of the data we have collected, organized, and displayed in the form of a table, bar chart, line graph, or other representation. If you use email, you see another example of data mining to sort your mailbox. Trimming the informational fat is one of the most crucial methods of data analysis as it will allow you to focus your analytical efforts and squeeze every drop of value from the remaining 'lean' information. To help you ask the right things and ensure your data works for you, you have to ask the right data analysis questions. Business intelligence can be used to: make decisions about product placement and pricing; create budgets and forecasts that make more money; and use visual tools such as heat maps, pivot tables, and geographical mapping to find the demand for a certain product. For visualization you might use a data visualization tool like Tableau or Microsoft Power BI, or, for the web, a tool like D3.js built using JavaScript. Statistical data models such as correlation and regression analysis can be used to identify the relations among the data variables. Data mining is a method of data analysis for discovering patterns in large data sets using statistics, artificial intelligence, and machine learning. Finally, we'll write up our analysis of the data. Data analysis is the most crucial part of any research. Last but certainly not least in our advice on how to make data analysis work for your business, we discuss sharing the load. In your organizational or business data analysis, you must begin with the right question(s). Non-probability and probability sampling strategies enable the researcher to target data collection techniques.

Content analysis. Various data analysis techniques are available to understand, interpret, and derive conclusions based on the requirements.
We explain data mining, analytics, and data visualization in simple to understand terms.

Hypothesis testing. Data can also be collected in forms other than numbers, and turned into quantitative data for analysis. You can import email addresses and phone numbers to find patterns. The process of organizing and thinking about data is key to understanding what the data does and does not contain. Exploratory data analysis (EDA) is a term for certain kinds of initial analysis and findings done with data sets, usually early on in an analytical process. There are many ways to analyze data, but one of the most vital aspects of analytical success in a business context is integrating the right decision support software and technology. At present, neural networks and intelligent alarms are driving the autonomous revolution in the world of data-driven analytics, and when it comes to knowing how to make data analysis work, this kind of collaborative approach is essential. Expanding on our previous point, by using technical methods to give your data more shape and meaning, you will be able to provide a platform for wider access to data-driven insights. This is a testament to the ever-growing power and value of autonomous technologies.

Excel isn't meant for data analysis, but it can still handle statistics, and data visualization tools make the job easier. Moreover, these cutting-edge tools offer access to dashboards from a multitude of devices, meaning that everyone within the business can connect with practical insights remotely - and share the load. Data analysis is a process of collecting, transforming, cleaning, and modeling data with the goal of discovering the required information. You have to go beyond just reading and understanding information to make sense of it by highlighting patterns for top decision-makers. Text analytics is the process of finding useful information from text. The results so obtained are communicated, suggesting conclusions, and supporting decision-making. To help you set the best possible KPIs for your initiatives and activities, explore our collection of key performance indicator examples. Program staff are urged to view this Handbook as a beginning resource, and to supplement their knowledge of data analysis procedures and methods over time as part of their on-going professional development. It's a universal language and more important than ever before. Data analysis is a somewhat abstract concept to understand without the help of examples. Profiling helps to not only understand anomalies and assess data quality, but also to discover, register, and assess enterprise metadata. It involves the interpretation of data gathered through the use of analytical and logical reasoning to determine patterns, relationships or trends. Download a free trial of the full software today!
By integrating the right technology for your statistical method data analysis and core data analytics methodology, you'll avoid fragmenting your insights, saving you time and effort while allowing you to enjoy the maximum value from your business's most valuable insights. One of the most pivotal types of analysis is statistics. Once you've set your data sources, started to gather the raw data you consider to offer potential value, and established clear-cut questions you want your insights to answer, you need to set a host of key performance indicators (KPIs) that will help you track, measure, and shape your progress in a number of key areas.

A data analytics methodology you can count on. This centralized mix of information provides a real insight into how people interact with your website, content, and offerings, helping you to identify weaknesses, capitalize on strengths, and make data-driven decisions that can benefit the business exponentially. Questions should be … There are various methods for data analysis, largely based on two core areas: quantitative data analysis methods and data analysis methods in qualitative research. In quantitative data analysis you are expected to turn raw numbers into meaningful data through the application of rational and critical thinking. Once everyone is able to work with a data-driven mindset, you will catalyze the success of your business in ways you never thought possible. The process of presenting data in visual form is known as data visualization. Robust analysis platforms will not only allow you to pull critical data from your most valuable sources while working with dynamic KPIs that will offer you actionable insights; they will also present the information in a digestible, visual, interactive format from one central, live dashboard. By working through this cleansing process in stringent detail, you will be able to extract the data that is truly relevant to your business and use it to develop actionable insights that will propel you forward. In our data-rich age, understanding how to analyze and extract true meaning from the digital insights available to our business is one of the primary drivers of success. It is a more general term than business analytics. (Written by Daniel Turner.)

We've pondered the data analysis meaning and drilled down into the practical applications of data-centric analytics, and one thing is clear: by taking measures to arrange your data and making your metrics work for you, it's possible to transform raw information into action, the kind that will push your business to the next level. So to better illustrate how and why data analysis is important for businesses, here are the 4 types of data analysis and examples of each. Once data is collected and sorted using these tools, the results are interpreted to make decisions. For a look at the power of software for the purpose of analysis, and to enhance your methods of analyzing data, glance over our selection of dashboard examples. This type of data is collected through methods of observation, one-to-one interviews, conducting focus groups, and … Text mining can also collect information from the web, a database or a file system. Next to her field notes or interview transcripts, the qualitative … Data analytics is simply the analysis of data sets to draw conclusions about the information they contain.
If you want to familiarize yourself with Power Query, read our guide to create your first Microsoft Power Query script. Descriptive data analysis looks at past data and tells what happened, while regression studies are excellent tools when you need to look at past data and forecast future trends. We create a colossal 2.5 quintillion bytes of digital data every day, and a large segment of it is text-based. Excel is also popular in the world of data analytics: it isn't meant for data analysis as such, but it can still handle statistics, and you can use an add-in to run Excel statistics. Tracking key performance indicators (KPIs), revenue, and sales leads turns numbers into business decisions, and data visualization makes those findings more understandable, not to mention easier to look at, helping businesses operate on their quantitative data. That is the practical payoff of the 4 types of data analysis and the examples of each given above.
alex lotel (Ranch Hand, since Feb 01, 2008). Recent posts by alex lotel:

why is the output false?? y1 will use the method f from B, because B is the son of A and they both have f; furthermore, the father doesn't have a method f which receives a B type variable. y2.num=10, y1.num=10, and they are equal. 9 years ago

y1 is a mixed variable. to what function should i pass y1: a function f that takes an A variable, or one that takes a B variable? 9 years ago

I understood that we can't assign a non-specific variable to a specific one like e=c, but we can cheat the compiler because one extends the other directly? but what if E extends G and G extends C, i.e. we have one class in the middle: will it still be cheated? my question is, in general, in what cases can we cheat the compiler?? and i got problems with: i thought it would use the b method of B, but it used the b method of G. why?? regarding this line: why did the compiler say that we cannot cast F into G?? 10 years ago

ok, but previously when we did e=(E)c we cheated the compiler; it allowed us to do it, but we failed at run time. so i understand that the rule is that we can only cast between classes that have a direct inheritance relationship. regarding: overriding is when we use some other function instead of the function in class B. in this line we don't override the static function in class B, we try to use it. where am i wrong? and i know that G inherits F which inherits B? 10 years ago

regarding the D and E classes: both inherit class C, and i still can't see why casting won't cheat the compiler? regarding: overriding is when we use some other function instead of the function in class B. in this line we don't override the static function in class B, we try to use it. where am i wrong? and i know that G inherits F which inherits B 10 years ago

this is the tree; how does it help me see why i don't get the proper response? 10 years ago

the first loop switches the i'th member backward, like in insertion sort, till it gets a value smaller than maxvalue, but it always switches with a[i-2], so i can't see the purpose. the second loop does a similar thing but i can't see its general purpose?? 10 years ago

regarding: I thought it would cheat the compiler because of the casting and it would fail at run time, but the compiler said incompatible types. why?? also on this line i thought it would use the b method of B, but it used the b method of G. why?? regarding this line: why did the compiler say that we cannot cast F into G?? 10 years ago

if we add to the class below, will it work or will we get a bug? if it gives a bug, will it be a compile-time problem or a runtime problem? 10 years ago

i am not a native english speaker so please be considerate. So to summarise my cases: regarding x.eat(): if there is no eat method in pie but there is one in food, then it will call the method from food, because it inherits the food class. if there is eat in pie, then it will call the method from pie. regarding y.eat(): if there is no eat method in food, then it won't compile, because y is of type food. if there is no eat in pie but there is one in food, then it will call the method eat in food. correct?
10 years ago

so "y" can use only the methods in pie which are present in food, correct? 10 years ago

so if in both classes i have a (non-static) method "eat" and call it, x will call the method from pie and y will call the method from pie. question regarding case 1: if "food" doesn't have a method "eat", what will happen? and the opposite, case 2: if "food" has a method "eat" but pie doesn't have one, what will happen? and the same as case 2, what will happen? 10 years ago

if i have a food class and a pie class which inherits food, regarding the access level i have: i know that both are reference variables to a pie object, but x is of type pie and y is of type food. what is the difference between the two regarding method access in both classes? 10 years ago

and there is this code, where min returns the minimum of a,b and function "f" returns the shortest path to a leaf. i don't know what function "what" does? 10 years ago

there is class NODE: and there is this code; max(int a, int b) returns the biggest number amongst a,b, and max(int a, int b, int c) returns the biggest number amongst a,b,c. what does function "what" do if "t" is the root of a binary tree? how i tried to solve it: i see that function "f" calculates the longest path to a leaf, but i can't see what function "what" does? 10 years ago
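A minimal, self-contained sketch of the Food/Pie case discussed in this thread (only the class and method names come from the posts; the bodies are my own illustration):

class Food {
    void eat() { System.out.println("eating food"); }
}

class Pie extends Food {
    @Override
    void eat() { System.out.println("eating pie"); }
    void bake() { System.out.println("baking"); }
}

public class Demo {
    public static void main(String[] args) {
        Pie x = new Pie();
        Food y = new Pie();
        x.eat();   // prints "eating pie"
        y.eat();   // prints "eating pie": dynamic dispatch picks the runtime type
        x.bake();  // fine: x's static type is Pie
        // y.bake(); // compile error: Food (y's static type) has no bake()
    }
}

So the reference's static type decides which methods compile, and the object's runtime type decides which override actually runs, which matches the summary in the posts above.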
Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields.

By considering large positive and large negative values of $x$, show that the polynomial $a_{2n+1}x^{2n+1} + a_{2n}x^{2n} + ... + a_{1}x + a_{0}$, where $a_{2n+1} \neq 0$, has at least one zero on the real line. Would the same argument show that a polynomial of even degree must have a real zero?

2  What did you try? Let $p$ be the polynomial: what are $\lim_{x\to -\infty}p(x)$ and $\lim_{x\to +\infty}p(x)$? –  Davide Giraudo Apr 3 '12 at 7:47
4  You are copying a lot of problems word-for-word out of textbooks, which is a bad thing for a number of reasons. One, it shows no effort on your part. Two, it gives no indication of what you know, and where we have to start to give a good answer. Three, it's plagiarism to copy something without citing the source. Four, if this is homework, that's OK, but you should add the homework tag. I'm sure there's a fifth and a sixth, but, please, just get with the program. –  Gerry Myerson Apr 3 '12 at 7:55

1 Answer (accepted)

Let us denote: $$P(x)=a_{2n+1}x^{2n+1} + a_{2n}x^{2n} + ... + a_{1}x + a_{0}\ \ ,\ \ a_{2n+1}\neq 0$$ $P(x)$, being a polynomial, is a continuous function. Let us rewrite $P(x)$ as follows: $$P(x)=x^{2n+1}\left(a_{2n+1}+\frac{a_{2n}}{x}+...+\frac{a_0}{x^{2n+1}}\right)$$ Now, if $|x|$ is big enough, then: $$\text{sgn}\left(a_{2n+1}+\frac{a_{2n}}{x}+...+\frac{a_0}{x^{2n+1}}\right)=\text{sgn}(a_{2n+1})$$ Therefore we can find $t>0$ such that $\text{sgn}(P(t))=\text{sgn}(a_{2n+1})$, and because $2n+1$ is odd we can also find $t'<0$ such that $\text{sgn}(P(t'))=-\text{sgn}(a_{2n+1})$, hence: $$P(t)P(t')<0$$ By the IVT we conclude that there exists a point $c\in (t',t)$ such that $P(c)=0$.
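The final part of the question is not addressed in the answer above, so it is worth spelling out why the same trick fails for even degree. The same factoring gives

$$P(x)=x^{2n}\left(a_{2n}+\frac{a_{2n-1}}{x}+\cdots+\frac{a_0}{x^{2n}}\right)$$

but $x^{2n}>0$ for all large $|x|$ regardless of the sign of $x$, so $\text{sgn}(P(t))=\text{sgn}(P(t'))=\text{sgn}(a_{2n})$ for large $t>0$ and $t'<0$. No sign change is forced, the IVT argument breaks down, and indeed $x^2+1$ has no real zero.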
DIV CSS cross-browser compatibility, fully sorted (IE6 IE7 IE8 IE9 Firefox Chrome)
Published: 2015-05-31

Common CSS compatibility techniques

Try to write your markup in XHTML format. The DOCTYPE affects how CSS is processed, and as a W3C standard, a DOCTYPE declaration must always be included.

1. Vertically centering a div
vertical-align:middle; Increase the line height to the height of the whole div: line-height:200px; then insert the text and it is vertically centered. The drawback is that you must keep the content from wrapping.

2. The doubled-margin problem
A div set to float doubles the margin set on it in IE. This is a bug present in IE6. The solution is to add display:inline; inside this div. For example:
<#div id="imfloat">
The corresponding CSS is:
#imfloat{ float:left; margin:5px; display:inline;}

3. The double distance produced by floats in IE
#box{ float:left; width:100px; margin:0 0 0 100px; //in this situation IE produces a 200px distance display:inline; //makes the float be ignored}
A word here on block versus inline elements: a block element always starts on a new line, and its height, width, line height and margins can all be controlled (block element); an inline element sits on the same line as other elements and cannot be controlled in that way (inline element);
#box{ display:block; //an inline element can be made to behave like a block element display:inline; //achieves the effect of lining up on the same line diplay:table;

4. IE's problem with CSS width and CSS height
IE does not recognize the min- definition, but it effectively treats the normal width and height as if they had min. This is a big problem: if you only use width and height, those values never change in a normal browser, while if you only use min-width and min-height, in IE it is as if no width or height were set at all. This matters, for example, when setting a background image, where the width is important. To solve this problem you can do:
#box{ width: 80px; height: 35px;}html>body #box{ width: auto; height: auto; min-width: 80px; min-height: 35px;}

5. Minimum page width
min-width is an extremely convenient CSS command: it specifies that an element must not become smaller than a certain width, which keeps the layout correct. But IE does not recognize it, and in practice treats width as the minimum width. To make this command work in IE too, you can put a <div> under the <body> tag, assign the div a class, and design the CSS like this:
#container{ min-width: 600px; width:expression_r(document.body.clientWidth < 600? "600px": "auto" );}
The first min-width is normal; the width on the second line uses JavaScript, which only IE recognizes, and which also makes your HTML document less standard. It effectively implements the minimum width through a JavaScript test.

6. The 3-pixel text bug next to a floated DIV in IE
When the left object floats and the right one is positioned using the left margin of the outer padding, text inside the right object sits 3px away from the left edge.
#box{ float:left; width:800px;}
#left{ float:left; width:50%;}
#right{ width:50%;}
*html #left{ margin-right:-3px; //this line is the key}
<div id="box"> <div id="left"></div> <div id="right"></div> </div>

7. IE's hide-and-seek (peekaboo) problem
When divs are used in complicated ways and each column also contains links, DIVs and so on, the hide-and-seek problem easily occurs: some content does not show up, yet when you select that region with the mouse you find the content is in fact on the page. Solution: use the line-height property on #layout, or give #layout a fixed height and width. Keep the page structure as simple as possible.

8. Closing floated divs; clearing floats; self-adapting height
① For example: <#div id="floatA"><#div id="floatB"><#div id="NOTfloatC"> Here we do not want NOTfloatC to keep shifting sideways; we want it to drop down to the next line. (floatA and floatB already have float:left; set.) This code has no problem at all in IE; the problem appears in FF. The reason is that NOTfloatC is not a float tag, so the float tags must be closed. Between <#div class="floatB"> and <#div class="NOTfloatC"> add <#div class="clear">. Be careful about where this div goes: it must be at the same level as the two floated divs, with no nesting relationship, or anomalies occur. And define the clear style as follows:
.clear{ clear:both;}
② Do not give a div that serves as the outer wrapper a fixed height. To let the height adapt automatically, add overflow:hidden; to the wrapper. When a box contains floats, the self-adapting height does not work in IE; in that case IE's private layout property must be triggered (damned IE!), which zoom:1; achieves, and compatibility is reached. For example, a wrapper defined as:
.colwrapper{ overflow:hidden; zoom:1; margin:5px auto;}
③ For layout, the CSS declaration we probably use most is float:left. Sometimes we need a unified background behind n floated columns, for example:
<div id="page"> <div id="left"></div> <div id="center"></div> <div id="right"></div> </div>
Say we want the background of page to be blue, so that all three columns share a blue background; but we will find that as left, center and right stretch downward, page keeps its height unchanged. The cause is that page does not have the float property, and since our page must be centered, it cannot be floated. So we should solve it like this:
<div id="page"> <div id="bg" style="float:left;width:100%"> <div id="left"></div> <div id="center"></div> <div id="right"></div> </div> </div>
That is, embed another float:left div whose width is 100%.
④ Universal float closing (very important!)
For the principle behind clearing CSS floats see [How To Clear Floats Without Structural Markup]. Add the following code to the global CSS and give the div that needs closing class="clearfix"; it works every time.
.clearfix:after { content:"."; display:block; height:0; clear:both; visibility:hidden; } .clearfix { display:inline-block; } .clearfix {display:block;}
Or set it like this: .hackbox{ display:table; //display the object as a block-element-level table}

9. Height that does not adapt
Height failing to adapt means the outer layer's height cannot adjust automatically when the inner object's height changes, especially when the inner object uses margin or padding.
Example:
#box {background-color:#eee; }
#box p {margin-top: 20px;margin-bottom: 20px; text-align:center; }
<div id="box"> <p>content of the p object</p> </div>
Solution: add two empty div objects above and below the P object. CSS code: .1{height:0px;overflow:hidden;} Or add a border property to the DIV.

10. Why does a gap appear under an image in IE6 with div+css
There are many techniques for this bug: change the html layout, or set the img to display:block, or set the vertical-align property to vertical-align:top, bottom, middle or text-bottom; any of these solves it.

11. How to align text with a text input box
Add vertical-align:middle;
<style type="text/css"> <!-- input { width:200px; height:30px; border:1px solid red; vertical-align:middle; } --> </style>

12. Is there a difference between id and class under web standards?
(1) Web standards do not allow duplicate IDs; for example, div id="aa" must not appear twice, whereas a class defined in CSS is a class and can in theory repeat without limit, so anything that needs to be referenced many times can use a class.
(2) Property priority: a CSS ID has higher priority than a class; see the example above.
(3) Convenience for client-side scripts such as JS: if you want a script to operate on some object on the page, you can give it an ID; otherwise you can only find it by traversing the page elements and matching some specific attribute, which wastes time and resources and is nowhere near as simple as an ID.

13. Showing an ellipsis when the content of an LI exceeds its length
This technique applies to the IE and Opera browsers.
<style type="text/css"> <!-- li { width:200px; white-space:nowrap; text-overflow:ellipsis; -o-text-overflow:ellipsis; overflow: hidden; } --> </style>

14. Why can't the scrollbar color be set in IE under web standards
The solution is to replace body with html:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd>
<meta http-equiv="Content-Type" content="text/html; charset=gb2312" />
<style type="text/css"> <!-- html { scrollbar-face-color:#f6f6f6; scrollbar-highlight-color:#fff; scrollbar-shadow-color:#eeeeee; scrollbar-3dlight-color:#eeeeee; scrollbar-arrow-color:#000; scrollbar-track-color:#fff; scrollbar-darkshadow-color:#fff; } --> </style>

15. Why can't I define a container about 1px high
In IE6 this problem is caused by the default line height. There are several solutions, for example: overflow:hidden, zoom:0.08, line-height:1px

16. How to make a layer display above FLASH
The solution is to make the FLASH transparent:
<param name="wmode" value="transparent" />

17. How to vertically center a layer in the browser
Here we use percentage-based absolute positioning together with negative outer margins whose size is the layer's own width and height divided by two:
<style type="text/css"> <!-- div { position:absolute; top:50%; lef:50%; margin:-100px 0 0 -100px; width:200px; height:200px; border:1px solid red; } --> </style>

CSS compatibility and CSS HACK techniques for Firefox and IE

1. Centering a div
A div is already centered when margin-left and margin-right are set to auto; IE is not. IE needs the body set to center: first define text-align: center; on the parent element, which means the content inside the parent element is centered.

2. Borders and backgrounds of CSS links (a tags)
To add a border and background color to an a link, you need to set display: block and also float: left to prevent line breaks. Referring to the menubar, the height set on a and the menubar is there to avoid the bottom edge displaying out of line; if you do not set a height, you can insert a space into the menubar.

3. The hover style no longer appearing after a link has been visited
Visited hyperlinks lose their hover and active styles; many people must have run into this problem. The solution is to change the order of the CSS properties: L-V-H-A. Code:
<style type="text/css"> <!-- a:link {} a:visited {} a:hover {} a:active {} --> </style>

4. The cursor pointer
cursor: pointer shows the cursor as a pointing hand in both IE and FF; hand does so only in IE.

5. Padding and margin of UL
The ul tag has a default padding value in FF, while in IE only the margin has a default value, so defining ul{margin:0;padding:0;} up front solves most of the problems.

6. The FORM tag
In IE this tag automatically adds some margin, while in FF the margin is 0. So if you want consistent display, it is best to specify margin and padding explicitly in the css. For the two issues above, I generally start my css with a style like ul,form{margin:0;padding:0;}, defining it once and for all, so it causes no more headaches later.

7. The BOX model being interpreted inconsistently, differing by 2px
The BOX model is interpreted inconsistently in FF and IE, producing a 2px difference. Solution: div{margin:30px!important;margin:28px;}
Note that the order of these two margins must not be reversed. IE cannot recognize the !important property, but other browsers can. So in IE this is in fact interpreted as div{margin:30px;margin:28px}: with repeated definitions the last one wins, which is why you cannot write only margin:xx px!important;
#box{ width:600px; //for ie6.0- w\idth:500px; //for ff+ie6.0}
#box{ width:600px!important //for ff width:600px; //for ff+ie6.0 width :500px; //for ie6.0-}

8. Attribute selectors (not really a compatibility fix, more a trick for hiding css)
p[id]{}div[id]{}
This is hidden from IE6.0 and versions below; it takes effect in FF and Opera. Attribute selectors differ from child selectors: a child selector's scope is narrowed in form, while an attribute selector's scope is larger; in p[id], every p tag that has an id shares the same style.
9. The harshest weapon: !important
If some detail really cannot be solved any other way, you can use this technique. FF automatically gives "!important" priority when parsing, whereas IE ignores it, as below:
.tabd1{ background:url(http://www.php100.com/res/images/up/tab1.gif) no-repeat 0px 0px !important; background:url(http://www.php100.com /res/images/up/tab1.gif) no-repeat 1px 0px; }
Note that the line with xxxx !important must be placed above the other line.

10. The default values of IE and FF
Perhaps you have been complaining all along about having to write separate CSS for IE and FF, about how much of a headache IE is, cursing that damned M$ IE while you write css. Actually, as far as standards support for css goes, IE is not as detestable as we imagine; the key is simply that the default values of IE and FF differ. Master this technique and you will find that writing css compatible with both FF and IE is not so hard; for simple css you may be able to do entirely without "!important".
We all know that when a browser renders a page, it decides how to display it according to the page's css stylesheet. But we do not necessarily describe every element concretely in the stylesheet (and there is no need to), so for undescribed properties the browser uses its built-in defaults. For instance, if you do not specify a color for text in css, the browser displays it in black or the system color; if the background of a div or another element is not specified in css, the browser sets it to white or transparent; and so on for every other undefined style. So the root cause of many things displaying differently in FF and IE is that their defaults differ, and as far as I know the w3 has no standard regulating how those defaults should be displayed, so don't blame IE for that.

11. Why can't text expand the container's height in FF
In standards-compliant browsers, a container with a fixed height value is not stretched the way it is in IE6. So what if I want a fixed height that can nevertheless be stretched open? The way to do it is to remove the height setting and use min-height:200px; Here, to accommodate IE6, which does not recognize min-height, you can define it like this:
{ height:auto!important; height:200px; min-height:200px; }

12. How to make a long unbroken string wrap automatically in FireFox
As everyone knows, in IE you can simply use word-wrap:break-word; in FF we use a JS trick to insert break points to solve it:
<style type="text/css"> <!-- div { width:300px; word-wrap:break-word; border:1px solid red; } --> </style>
<div id="ff">aaaaaaaaaaaaaaaaaaaaaaaaaaaa</div>
<script type="text/javascript"> function toBreakWord(el, intLen){ var obj=document.getElementById(el); var strContent=obj.innerHTML; var strTemp=""; while(strContent.length>intLen){ strTemp+=strContent.substr(0,intLen)+" "; strContent=strContent.substr(intLen,strContent.length); } strTemp+=" "+strContent; obj.innerHTML=strTemp; } if(document.getElementById && !document.all) toBreakWord("ff", 37); </script>

13. Why does IE6 interpret a container's width differently from FF?
The difference lies in whether the container's overall width includes the border width. Here IE6 interprets it as 200PX while FF interprets it as 220PX. So what exactly causes the problem? Remove the xml declaration at the top of the document and you will find the cause: the declaration at the top triggers IE's quirks mode.
<?xml version="1.0" encoding="gb2312"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd>
<meta http-equiv="Content-Type" content="text/html; charset=gb2312" />
<style type="text/css"> <!-- div { cursor:pointer; width:200px; height:200px; border:10px solid red } --> </style>
<div onclick="alert(this.offsetWidth)">Making FireFox compatible with IE</div>

IE7.0 has new problems with its CSS support. Solutions follow.

First: the CSS HACK, e.g. _height:20px; Mind the order.
The following is also a div CSS HACK, though not as concise as the one above:
#example { color: #333; }
* html #example { color: #666; }
*+html #example { color: #999; }

Second: using IE-only conditional comments
<!-- other browsers -->
<link rel="stylesheet" type="text/css" href="css.css" />
<!--[if IE 7]> <!-- for IE7 -->
<link rel="stylesheet" type="text/css" href="ie7.css" />
<![endif]-->
<!--[if lte IE 6]> <!-- for IE6 and below -->
<link rel="stylesheet" type="text/css" href="ie.css" />
<![endif]-->

Third: the css filter approach, translated below from a foreign website.
Create a new css style as follows:
#item { width: 200px; height: 200px; background: red; }
Create a new div and apply the css style defined above:
<div id="item">some text here</div>
Add the lang attribute to the body; for Chinese it would be zh:
<body lang="en">
Now define another style for the div element:
*:lang(en) #item{ background:green !important; }

Author: Mr.linus. Title: DIV CSS cross-browser compatibility, fully sorted (IE6 IE7 IE8 IE9 Firefox Chrome). Permalink: http://www.90qj.com/151.html
Copyright: unless otherwise noted, this article is original content of 挨踢 Blog; please keep the source link when republishing.
Friday, July 23, 2010

Glazed Lists examples for Drools Live Queries

A while back I talked about the new features in Drools for live queries: http://blog.athico.com/2010/05/live-querries.html

There you could open a query in Drools and receive event notifications for added, deleted and updated rows. I mentioned this could be used with Glazed Lists for filtering, sorting and transformation. I just added a unit test to Drools, which people can use as a template for their own Drools integration with Glazed Lists. The test is based on the one in QueryTest.testOpenQuery():

DroolsEventList
DroolsEventListTest

The EventList implementation itself is very simple. At the moment it backs onto an ArrayList and uses linear searches for the updates and removes. Because Drools is likely to have a high volume of changes, it should probably be backed by a HashMap or something similar for constant-time performance on those lookups.

public class DroolsEventList extends AbstractEventList<Row> implements ViewChangedEventListener {
    List<Row> data = new ArrayList<Row>();

    public Row get(int index) {
        return this.data.get( index );
    }

    public int size() {
        return this.data.size();
    }

    public void rowAdded(Row row) {
        int index = size();
        updates.beginEvent();
        updates.elementInserted(index, row);
        boolean result = data.add(row);
        updates.commitEvent();
    }

    public void rowRemoved(Row row) {
        int index = this.data.indexOf( row );
        updates.beginEvent();
        Row removed = data.remove( index );
        updates.elementDeleted(index, removed);
        updates.commitEvent();
    }

    public void rowUpdated(Row row) {
        int index = this.data.indexOf( row );
        updates.beginEvent();
        updates.elementUpdated(index, row, row);
        updates.commitEvent();
    }
}

Creating and using the EventList is also trivial; here is a snippet from the test using the SortedList:

DroolsEventList list = new DroolsEventList();

// Open the LiveQuery
LiveQuery query = ksession.openLiveQuery( "cheeses", new Object[] { "cheddar", "stilton" }, list );

SortedList<Row> sorted = new SortedList<Row>( list, new Comparator<Row>() {
    public int compare(Row r1, Row r2) {
        Cheese c1 = ( Cheese ) r1.get( "stilton" );
        Cheese c2 = ( Cheese ) r2.get( "stilton" );
        return c1.getPrice() - c2.getPrice();
    }
});

assertEquals( 3, sorted.size() );
assertEquals( 1, ((Cheese)sorted.get( 0 ).get( "stilton" )).getPrice() );
assertEquals( 2, ((Cheese)sorted.get( 1 ).get( "stilton" )).getPrice() );
assertEquals( 3, ((Cheese)sorted.get( 2 ).get( "stilton" )).getPrice() );
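Since the post mentions filtering as well as sorting, here is a minimal sketch of feeding the same DroolsEventList through a Glazed Lists FilterList (ca.odell.glazedlists.FilterList with a Matcher); the price threshold and the Matcher body are my own illustration, not part of the original test:

FilterList<Row> cheap = new FilterList<Row>( list, new Matcher<Row>() {
    // Keep only rows whose bound Cheese costs less than 3.
    public boolean matches(Row row) {
        Cheese c = ( Cheese ) row.get( "stilton" );
        return c.getPrice() < 3;
    }
});

As rows are added, removed or updated by the live query, the FilterList re-evaluates the Matcher and stays in sync, just as the SortedList does.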
Inline IF and CASE statements in MySQL
Posted: 13th August 2009 by Tim in MySQL

There are times where running IF statements inside a query can be useful. MySQL provides a simple way to do this through the use of IF and CASE statements.

The IF statement takes three arguments: the conditional, the true value and the false value. False and true values may be static values or column values. For example:

SELECT IF(score > 100, 100, score) AS score FROM exam_results

This will return the value in the score column, limited to a maximum value of 100. IF statements can also be nested:

SELECT IF(score > 100, 100, IF(score < 0, 0, score)) FROM exam_results

CASE statements (switch statements, for the C programmers) are much like IF statements. For example:

SELECT CASE num_heads WHEN 0 THEN 'Zombie' WHEN 1 THEN 'Human' ELSE 'Alien' END AS race FROM user

This code checks the value in the num_heads column and deduces race from the values presented. CASE statements may also be nested in the same way as IF statements.

1. James says: Thanks Tim, this was just what I was looking for!
2. asdf says: that's awesome
3. PaNic says: Great Help! Thank you!
4. Sue says: Thanks, this really helps!
5. great knowledge…. tx tx tx very much 4 ur shared
6. Steven says: Loving it! Great stuff!
7. Juan Pablo says: Hi Tim, I'd like to ask whether it is optimal to nest 5 to 10 IF/ELSE expressions in the columns. I know I get the results when I do the joins and the WHERE filter, and afterwards I do the validation. But can these filters consume a lot of CPU? Thanks Tim, nice blog!
8. Thedath Oudarya says: Thanks bro.. It's simple and clear.. :)
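One form the post doesn't cover is the searched CASE, which tests arbitrary boolean conditions instead of comparing one column against fixed values; a small sketch reusing the exam_results table from the examples above (the grade boundaries are invented):

SELECT CASE
         WHEN score >= 85 THEN 'A'
         WHEN score >= 70 THEN 'B'
         ELSE 'C'
       END AS grade
FROM exam_results

The first WHEN whose condition is true wins, so the conditions may overlap as long as they are ordered from most to least specific.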
This is part two of my introduction to pivot tables. You can have a look at part one here.

A new feature in Excel 2007 is called the table. While it is not essential to convert your list to a table, doing so means that you can then work more efficiently with your data when you do convert it to a pivot table. The advantages of converting your list to a table are as follows:

1. Your list is instantly formatted (and you can easily change the formatting, as you will see in the video below)
2. If and when you add further rows/columns to your list, all you have to do is refresh your pivot table and the new data is automatically incorporated. (If you don't use a table, you have to re-select all the data again, which can be very tedious with large lists)
3. If you add a formula to your table, the calculations are automatically copied down; the work is done for you. (handy huh?)

OK, so have a look at the video and see how it's done. I'm using a large list which I created using the fake name generator, and let's have a look… I'm going to show you how to convert to a table and then take the first step in creating a pivot table from it. (And if you are wondering how I quickly generated the random salary values, here's the tutorial)

Here is the file to practise on: Pivot Table 02 – PREPARE USING A TABLE
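For what it's worth, one quick way to fill a salary column with random test values (my guess at an approach; the linked tutorial may do it differently) is Excel's RANDBETWEEN function:

=RANDBETWEEN(30000, 90000)

Copy it down the salary column, then use Paste Special → Values to freeze the numbers so they stop recalculating every time the sheet changes.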
Can we modify the order of items in a String Array in the Inspector?
Asked by stoff

Hey everybody! :slight_smile: Is it possible to modify the order of items in a String Array in the Inspector? I am currently using a custom Resource to hold dialog texts in a String Array:

extends Resource
class_name Dialog
export(Array, String, MULTILINE) var texts

It is a bit annoying to use, however, as I don't see how I can change the order of the dialogs, or even enter a new text in between two already existing ones. The only thing I seem to be able to do is extend the array length and copy/paste stuff around. Is there a way to handle this better?

Also, is there any documentation of which export hints exist (like "MULTILINE")? I couldn't find it myself.

Thanks for your help! :slight_smile:

Reply from: magicalogic
I don't know how to set the order, but here is the documentation for export hints.
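Not from the answer above, but since the Inspector itself offers no reordering, one workaround is to shuffle the entries from code; a sketch in Godot 3.x GDScript matching the export syntax in the question (move_text is a hypothetical helper, not an engine function):

# Move the dialog text at index `from` so it ends up at index `to`.
func move_text(dialog: Dialog, from: int, to: int) -> void:
    var entry: String = dialog.texts[from]
    dialog.texts.remove(from)       # Array.remove(position) in Godot 3.x
    dialog.texts.insert(to, entry)  # Array.insert(position, value)

Run it once, e.g. from an EditorScript, and save the resource to persist the new order.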
Answers

2014-10-26T13:14:33+00:00
If the number is not divisible? Just be very sure the number really isn't divisible by the denominator, or by some other numbers; at times you just can't tell. If it truly isn't, then use your calculator to express it as a decimal. Cheers.

2014-10-26T13:14:59+00:00
With fractions the whole number doesn't need to be divisible by the denominator. Fractions are just divisions. Say for example you had the fraction 25/15. Now obviously you can't divide 25 by 15 evenly, and that is why you would leave it as a fraction. What you can often do, and that is why I picked this example, is simplify fractions. What number goes into both 15 and 25? 5. So divide top and bottom by 5 to get 5/3. That will give you the fraction in its simplest form. I'm not entirely sure that this is exactly what you meant, as your query was rather vague, but I hope this is helpful.
/*
 * Copyright (c) 1986 Regents of the University of California.
 * All rights reserved. The Berkeley software License Agreement
 * specifies the terms and conditions for redistribution.
 *
 *	@(#)hk.c	2.3 (2.11BSD GTE) 1998/4/3
 */

/*
 * RK611/RK0[67] disk driver
 *
 * Heavily modified for disklabel support. Still only supports 1 controller
 * (but who'd have more than one of these on a system anyhow?) - 1997/11/11 sms
 *
 * This driver mimics the 4.1bsd rk driver.
 * It does overlapped seeks, ECC, and bad block handling.
 *	salkind@nyu
 *
 * dkunit() takes a 'dev_t' now instead of 'buf *'. 1995/04/13 - sms
 *
 * Removed ifdefs on both Q22 and UNIBUS_MAP, substituting a runtime
 * test for presence of a Unibus Map. Reworked the partition logic,
 * the 'e' partiton no longer overlaps the 'a'+'b' partitions - a separate
 * 'b' partition is now present. Old root filesystems can still be used
 * because the size is the same, but user data will have to be saved and
 * then reloaded. 12/28/92 -- [email protected]
 *
 * Modified to correctly handle 22 bit addressing available on DILOG
 * DQ615 controller. 05/31/90 -- [email protected]
 */

#include "hk.h"
#if NHK > 0
#include "param.h"
#include "systm.h"
#include "buf.h"
#include "machine/seg.h"
#include "conf.h"
#include "user.h"
#include "map.h"
#include "uba.h"
#include "hkreg.h"
#include "dkbad.h"
#include "dk.h"
#include "stat.h"
#include "file.h"
#include "disklabel.h"
#include "disk.h"
#include "syslog.h"

#define NHK7CYL  815
#define NHK6CYL  411
#define HK_NSECT 22
#define HK_NTRAC 3
#define HK_NSPC  (HK_NTRAC*HK_NSECT)

struct hkdevice *HKADDR;

daddr_t hksize();
void hkdfltlbl();
int hkstrategy();

/* Can be u_char because all are less than 0377 */
u_char hk_offset[] =
{
        HKAS_P400, HKAS_M400, HKAS_P400, HKAS_M400,
        HKAS_P800, HKAS_M800, HKAS_P800, HKAS_M800,
        HKAS_P1200, HKAS_M1200, HKAS_P1200, HKAS_M1200,
        0, 0, 0, 0,
};

int hk_type[NHK];
int hk_cyl[NHK];

struct hk_softc
{
        int sc_softas;
        int sc_recal;
} hk;

struct buf hktab;
struct buf hkutab[NHK];
struct dkdevice hk_dk[NHK];

#ifdef BADSECT
struct dkbad hkbad[NHK];
struct buf bhkbuf[NHK];
#endif

#ifdef UCB_METER
static int hk_dkn = -1;         /* number for iostat */
#endif

#define hkwait(hkaddr) while ((hkaddr->hkcs1 & HK_CRDY) == 0)
#define hkncyl(unit)   (hk_type[unit] ? NHK7CYL : NHK6CYL)

void
hkroot()
{
        hkattach((struct hkdevice *)0177440, 0);
}

hkattach(addr, unit)
        struct hkdevice *addr;
{
#ifdef UCB_METER
        if (hk_dkn < 0)
        {
                dk_alloc(&hk_dkn, NHK+1, "hk", 60L * (long)HK_NSECT * 256L);
                if (hk_dkn >= 0)
                        dk_wps[hk_dkn+NHK] = 0L;
        }
#endif
        if (unit != 0)
                return(0);
        HKADDR = addr;
        return(1);
}

hkopen(dev, flag, mode)
        dev_t dev;
        int flag;
        int mode;
{
        register int unit = dkunit(dev);
        register struct hkdevice *hkaddr = HKADDR;
        register struct dkdevice *disk;
        int i, mask;

        if (unit >= NHK || !HKADDR)
                return(ENXIO);
        disk = &hk_dk[unit];

        if ((disk->dk_flags & DKF_ALIVE) == 0)
        {
                hk_type[unit] = 0;
                hkaddr->hkcs1 = HK_CCLR;
                hkaddr->hkcs2 = unit;
                hkaddr->hkcs1 = HK_DCLR | HK_GO;
                hkwait(hkaddr);
                if (hkaddr->hkcs2&HKCS2_NED || !(hkaddr->hkds&HKDS_SVAL))
                {
                        hkaddr->hkcs1 = HK_CCLR;
                        hkwait(hkaddr);
                        return(ENXIO);
                }
                disk->dk_flags |= DKF_ALIVE;
                if ((hkaddr->hkcs1&HK_CERR) && (hkaddr->hker&HKER_DTYE))
                {
                        hk_type[unit] = HK_CDT;
                        hkaddr->hkcs1 = HK_CCLR;
                        hkwait(hkaddr);
                }
        }
        /*
         * The drive has responded to a probe (is alive). Now we read the
         * label. Allocate an external label structure if one has not already
         * been assigned to this drive. First wait for any pending opens/closes
         * to complete.
         */
        while (disk->dk_flags & (DKF_OPENING | DKF_CLOSING))
                sleep(disk, PRIBIO);

        /*
         * Next if an external label buffer has not already been allocated do so now.
         * This "can not fail" because if the initial pool of label buffers has
         * been exhausted the allocation takes place from main memory. The return
         * value is the 'click' address to be used when mapping in the label.
         */

        if (disk->dk_label == 0)
                disk->dk_label = disklabelalloc();

        /*
         * On first open get label and partition info. We may block reading the
         * label so be careful to stop any other opens.
         */
        if (disk->dk_openmask == 0)
        {
                disk->dk_flags |= DKF_OPENING;
                hkgetinfo(disk, dev);
                disk->dk_flags &= ~DKF_OPENING;
                wakeup(disk);
                hk_cyl[unit] = -1;
        }
        /*
         * Need to make sure the partition is not out of bounds. This requires
         * mapping in the external label. This only happens when a partition
         * is opened (at mount time) and isn't an efficiency problem.
         */
        mapseg5(disk->dk_label, LABELDESC);
        i = ((struct disklabel *)SEG5)->d_npartitions;
        normalseg5();
        if (dkpart(dev) >= i)
                return(ENXIO);
        mask = 1 << dkpart(dev);
        dkoverlapchk(disk->dk_openmask, dev, disk->dk_label, "hk");
        if (mode == S_IFCHR)
                disk->dk_copenmask |= mask;
        else if (mode == S_IFBLK)
                disk->dk_bopenmask |= mask;
        else
                return(EINVAL);
        disk->dk_openmask |= mask;
        return(0);
}

/*
 * Disk drivers now have to have close entry points in order to keep
 * track of what partitions are still active on a drive.
 */
hkclose(dev, flag, mode)
        register dev_t dev;
        int flag, mode;
{
        int s, drive = dkunit(dev);
        register int mask;
        register struct dkdevice *disk;

        disk = &hk_dk[drive];
        mask = 1 << dkpart(dev);
        if (mode == S_IFCHR)
                disk->dk_copenmask &= ~mask;
        else if (mode == S_IFBLK)
                disk->dk_bopenmask &= ~mask;
        else
                return(EINVAL);
        disk->dk_openmask = disk->dk_bopenmask | disk->dk_copenmask;
        if (disk->dk_openmask == 0)
        {
                disk->dk_flags |= DKF_CLOSING;
                s = splbio();
                while (hkutab[drive].b_actf)
                {
                        disk->dk_flags |= DKF_WANTED;
                        sleep(&hkutab[drive], PRIBIO);
                }
                splx(s);
                /*
                 * On last close of a drive we declare it not alive and offline to force a
                 * probe on the next open in order to handle diskpack changes.
                 */
                disk->dk_flags &=
                    ~(DKF_CLOSING | DKF_WANTED | DKF_ALIVE | DKF_ONLINE);
                wakeup(disk);
        }
        return(0);
}

/*
 * This code moved here from hkgetinfo() because it is fairly large and used
 * twice - once to initialize for reading the label and a second time if
 * there is no valid label present on the drive and the default one must be
 * used to span the drive.
 */

void
hkdfltlbl(disk, lp, dev)
        struct dkdevice *disk;
        register struct disklabel *lp;
        dev_t dev;
{
        register struct partition *pi = &lp->d_partitions[0];

        bzero(lp, sizeof (*lp));
        lp->d_type = DTYPE_DEC;
        lp->d_secsize = 512;            /* XXX */
        lp->d_nsectors = HK_NSECT;
        lp->d_ntracks = HK_NTRAC;
        lp->d_secpercyl = HK_NSPC;
        lp->d_npartitions = 1;          /* 'a' */
        lp->d_ncylinders = hkncyl(dkunit(dev));
        pi->p_size = lp->d_ncylinders * lp->d_secpercyl;        /* entire volume */
        pi->p_fstype = FS_V71K;
        pi->p_frag = 1;
        pi->p_fsize = 1024;
        /*
         * Put where hkstrategy() will look.
         */
        bcopy(pi, disk->dk_parts, sizeof (lp->d_partitions));
}

/*
 * Read disklabel. It is tempting to generalize this routine so that
 * all disk drivers could share it. However by the time all of the
 * necessary parameters are setup and passed the savings vanish. Also,
 * each driver has a different method of calculating the number of blocks
 * to use if one large partition must cover the disk.
 *
 * This routine used to always return success and callers carefully checked
 * the return status. Silly. This routine will fake a label (a single
 * partition spanning the drive) if necessary but will never return an error.
 *
 * It is the caller's responsibility to check the validity of partition
 * numbers, etc.
 */

void
hkgetinfo(disk, dev)
        register struct dkdevice *disk;
        dev_t dev;
{
        struct disklabel locallabel;
        char *msg;
        register struct disklabel *lp = &locallabel;
        /*
         * NOTE: partition 0 ('a') is used to read the label. Therefore 'a' must
         * start at the beginning of the disk!
If there is no label or the label 310: * is corrupted then 'a' will span the entire disk 311: */ 312: hkdfltlbl(disk, lp, dev); 313: msg = readdisklabel((dev & ~7) | 0, hkstrategy, lp); /* 'a' */ 314: if (msg != 0) 315: { 316: log(LOG_NOTICE, "hk%da is entire disk: %s\n", dkunit(dev), msg); 317: hkdfltlbl(disk, lp, dev); 318: } 319: mapseg5(disk->dk_label, LABELDESC) 320: bcopy(lp, (struct disklabel *)SEG5, sizeof (struct disklabel)); 321: normalseg5(); 322: bcopy(lp->d_partitions, disk->dk_parts, sizeof (lp->d_partitions)); 323: return; 324: } 325: 326: hkstrategy(bp) 327: register struct buf *bp; 328: { 329: register struct buf *dp; 330: int s, drive; 331: register struct dkdevice *disk; 332: 333: drive = dkunit(bp->b_dev); 334: disk = &hk_dk[drive]; 335: 336: if (drive >= NHK || !HKADDR || !(disk->dk_flags & DKF_ALIVE)) 337: { 338: bp->b_error = ENXIO; 339: goto bad; 340: } 341: s = partition_check(bp, disk); 342: if (s < 0) 343: goto bad; 344: if (s == 0) 345: goto done; 346: 347: bp->b_cylin = bp->b_blkno / HK_NSPC; 348: mapalloc(bp); 349: dp = &hkutab[drive]; 350: s = splbio(); 351: disksort(dp, bp); 352: if (dp->b_active == 0) 353: { 354: hkustart(drive); 355: if (hktab.b_active == 0) 356: hkstart(); 357: } 358: splx(s); 359: return; 360: bad: 361: bp->b_flags |= B_ERROR; 362: done: 363: iodone(bp); 364: } 365: 366: hkustart(unit) 367: int unit; 368: { 369: register struct hkdevice *hkaddr = HKADDR; 370: register struct buf *bp, *dp; 371: struct dkdevice *disk; 372: int didie = 0; 373: 374: #ifdef UCB_METER 375: if (hk_dkn >= 0) 376: dk_busy &= ~(1 << (hk_dkn + unit)); 377: #endif 378: if (hktab.b_active) { 379: hk.sc_softas |= (1 << unit); 380: return(0); 381: } 382: 383: hkaddr->hkcs1 = HK_CCLR; 384: hkaddr->hkcs2 = unit; 385: hkaddr->hkcs1 = hk_type[unit] | HK_DCLR | HK_GO; 386: hkwait(hkaddr); 387: 388: dp = &hkutab[unit]; 389: disk = &hk_dk[unit]; 390: if ((bp = dp->b_actf) == NULL) 391: return(0); 392: if (dp->b_active) 393: goto done; 394: dp->b_active = 1; 395: if (!(hkaddr->hkds & HKDS_VV) || !(disk->dk_flags & DKF_ONLINE)) 396: { 397: /* SHOULD WARN SYSTEM THAT THIS HAPPENED */ 398: #ifdef BADSECT 399: struct buf *bbp = &bhkbuf[unit]; 400: #endif 401: 402: hkaddr->hkcs1 = hk_type[unit]|HK_PACK|HK_GO; 403: disk->dk_flags |= DKF_ONLINE; 404: /* 405: * XXX - The 'c' partition is used below to access the bad block area. This 406: * XXX - is DIFFERENT than the XP driver (which should have used 'c' but could 407: * XXX - not due to historical reasons). The 'c' partition MUST span the entire 408: * XXX - disk including the bad sector track. The 'h' partition should be 409: * XXX - used for user data. 
410: */ 411: #ifdef BADSECT 412: bbp->b_flags = B_READ|B_BUSY|B_PHYS; 413: bbp->b_dev = (bp->b_dev & ~7) | ('c' - 'a'); 414: bbp->b_bcount = sizeof(struct dkbad); 415: bbp->b_un.b_addr = (caddr_t)&hkbad[unit]; 416: bbp->b_blkno = (long)hkncyl(unit)*HK_NSPC - HK_NSECT; 417: bbp->b_cylin = hkncyl(unit) - 1; 418: mapalloc(bbp); 419: dp->b_actf = bbp; 420: bbp->av_forw = bp; 421: bp = bbp; 422: #endif 423: hkwait(hkaddr); 424: } 425: if ((hkaddr->hkds & HKDS_DREADY) != HKDS_DREADY) 426: { 427: disk->dk_flags &= ~DKF_ONLINE; 428: goto done; 429: } 430: #ifdef NHK > 1 431: if (bp->b_cylin == hk_cyl[unit]) 432: goto done; 433: hkaddr->hkcyl = bp->b_cylin; 434: hk_cyl[unit] = bp->b_cylin; 435: hkaddr->hkcs1 = hk_type[unit] | HK_IE | HK_SEEK | HK_GO; 436: didie = 1; 437: #ifdef UCB_METER 438: if (hk_dkn >= 0) { 439: int dkn = hk_dkn + unit; 440: 441: dk_busy |= 1<<dkn; 442: dk_seek[dkn]++; 443: } 444: #endif 445: return (didie); 446: #endif NHK > 1 447: 448: done: 449: if (dp->b_active != 2) { 450: dp->b_forw = NULL; 451: if (hktab.b_actf == NULL) 452: hktab.b_actf = dp; 453: else 454: hktab.b_actl->b_forw = dp; 455: hktab.b_actl = dp; 456: dp->b_active = 2; 457: } 458: return (didie); 459: } 460: 461: hkstart() 462: { 463: register struct buf *bp, *dp; 464: register struct hkdevice *hkaddr = HKADDR; 465: register struct dkdevice *disk; 466: daddr_t bn; 467: int sn, tn, cmd, unit; 468: 469: loop: 470: if ((dp = hktab.b_actf) == NULL) 471: return(0); 472: if ((bp = dp->b_actf) == NULL) { 473: /* 474: * No more requests for this drive, remove from controller queue and 475: * look at next drive. We know we're at the head of the controller queue. 476: * The drive may not need anything, in which case it might be shutting 477: * down in hkclose() and a wakeup is done. 
478: */ 479: hktab.b_actf = dp->b_forw; 480: unit = dp - hkutab; 481: disk = &hk_dk[unit]; 482: if (disk->dk_flags & DKF_WANTED) 483: { 484: disk->dk_flags &= ~DKF_WANTED; 485: wakeup(dp); /* finish the close protocol */ 486: } 487: goto loop; 488: } 489: hktab.b_active++; 490: unit = dkunit(bp->b_dev); 491: disk = &hk_dk[unit]; 492: bn = bp->b_blkno; 493: 494: sn = bn % HK_NSPC; 495: tn = sn / HK_NSECT; 496: sn %= HK_NSECT; 497: retry: 498: hkaddr->hkcs1 = HK_CCLR; 499: hkaddr->hkcs2 = unit; 500: hkaddr->hkcs1 = hk_type[unit] | HK_DCLR | HK_GO; 501: hkwait(hkaddr); 502: 503: if ((hkaddr->hkds & HKDS_SVAL) == 0) 504: goto nosval; 505: if (hkaddr->hkds & HKDS_PIP) 506: goto retry; 507: if ((hkaddr->hkds&HKDS_DREADY) != HKDS_DREADY) { 508: disk->dk_flags &= ~DKF_ONLINE; 509: log(LOG_WARNING, "hk%d: !ready\n", unit); 510: if ((hkaddr->hkds&HKDS_DREADY) != HKDS_DREADY) { 511: hkaddr->hkcs1 = hk_type[unit] | HK_DCLR | HK_GO; 512: hkwait(hkaddr); 513: hkaddr->hkcs1 = HK_CCLR; 514: hkwait(hkaddr); 515: hktab.b_active = 0; 516: hktab.b_errcnt = 0; 517: dp->b_actf = bp->av_forw; 518: dp->b_active = 0; 519: bp->b_flags |= B_ERROR; 520: iodone(bp); 521: goto loop; 522: } 523: } 524: disk->dk_flags |= DKF_ONLINE; 525: nosval: 526: hkaddr->hkcyl = bp->b_cylin; 527: hk_cyl[unit] = bp->b_cylin; 528: hkaddr->hkda = (tn << 8) + sn; 529: hkaddr->hkwc = -(bp->b_bcount >> 1); 530: hkaddr->hkba = bp->b_un.b_addr; 531: if (!ubmap) 532: hkaddr->hkxmem=bp->b_xmem; 533: 534: cmd = hk_type[unit] | ((bp->b_xmem & 3) << 8) | HK_IE | HK_GO; 535: if (bp->b_flags & B_READ) 536: cmd |= HK_READ; 537: else 538: cmd |= HK_WRITE; 539: hkaddr->hkcs1 = cmd; 540: #ifdef UCB_METER 541: if (hk_dkn >= 0) { 542: int dkn = hk_dkn + NHK; 543: 544: dk_busy |= 1<<dkn; 545: dk_xfer[dkn]++; 546: dk_wds[dkn] += bp->b_bcount>>6; 547: } 548: #endif 549: return(1); 550: } 551: 552: hkintr() 553: { 554: register struct hkdevice *hkaddr = HKADDR; 555: register struct buf *bp, *dp; 556: int unit; 557: int as = (hkaddr->hkatt >> 8) | hk.sc_softas; 558: int needie = 1; 559: 560: hk.sc_softas = 0; 561: if (hktab.b_active) { 562: dp = hktab.b_actf; 563: bp = dp->b_actf; 564: unit = dkunit(bp->b_dev); 565: #ifdef UCB_METER 566: if (hk_dkn >= 0) 567: dk_busy &= ~(1 << (hk_dkn + NHK)); 568: #endif 569: #ifdef BADSECT 570: if (bp->b_flags&B_BAD) 571: if (hkecc(bp, CONT)) 572: return; 573: #endif 574: if (hkaddr->hkcs1 & HK_CERR) { 575: int recal; 576: u_short ds = hkaddr->hkds; 577: u_short cs2 = hkaddr->hkcs2; 578: u_short er = hkaddr->hker; 579: 580: if (er & HKER_WLE) { 581: log(LOG_WARNING, "hk%d: wrtlck\n", unit); 582: bp->b_flags |= B_ERROR; 583: } else if (++hktab.b_errcnt > 28 || 584: ds&HKDS_HARD || er&HKER_HARD || cs2&HKCS2_HARD) { 585: hard: 586: harderr(bp, "hk"); 587: log(LOG_WARNING, "cs2=%b ds=%b er=%b\n", 588: cs2, HKCS2_BITS, ds, 589: HKDS_BITS, er, HKER_BITS); 590: bp->b_flags |= B_ERROR; 591: hk.sc_recal = 0; 592: } else if (er & HKER_BSE) { 593: #ifdef BADSECT 594: if (hkecc(bp, BSE)) 595: return; 596: else 597: #endif 598: goto hard; 599: } else 600: hktab.b_active = 0; 601: if (cs2&HKCS2_MDS) { 602: hkaddr->hkcs2 = HKCS2_SCLR; 603: goto retry; 604: } 605: recal = 0; 606: if (ds&HKDS_DROT || er&(HKER_OPI|HKER_SKI|HKER_UNS) || 607: (hktab.b_errcnt&07) == 4) 608: recal = 1; 609: if ((er & (HKER_DCK|HKER_ECH)) == HKER_DCK) 610: if (hkecc(bp, ECC)) 611: return; 612: hkaddr->hkcs1 = HK_CCLR; 613: hkaddr->hkcs2 = unit; 614: hkaddr->hkcs1 = hk_type[unit]|HK_DCLR|HK_GO; 615: hkwait(hkaddr); 616: if (recal && hktab.b_active == 0) { 617: 
hkaddr->hkcs1 = hk_type[unit]|HK_IE|HK_RECAL|HK_GO; 618: hk_cyl[unit] = -1; 619: hk.sc_recal = 0; 620: goto nextrecal; 621: } 622: } 623: retry: 624: switch (hk.sc_recal) { 625: 626: case 1: 627: hkaddr->hkcyl = bp->b_cylin; 628: hk_cyl[unit] = bp->b_cylin; 629: hkaddr->hkcs1 = hk_type[unit]|HK_IE|HK_SEEK|HK_GO; 630: goto nextrecal; 631: case 2: 632: if (hktab.b_errcnt < 16 || 633: (bp->b_flags&B_READ) == 0) 634: goto donerecal; 635: hkaddr->hkatt = hk_offset[hktab.b_errcnt & 017]; 636: hkaddr->hkcs1 = hk_type[unit]|HK_IE|HK_OFFSET|HK_GO; 637: /* fall into ... */ 638: nextrecal: 639: hk.sc_recal++; 640: hkwait(hkaddr); 641: hktab.b_active = 1; 642: return; 643: donerecal: 644: case 3: 645: hk.sc_recal = 0; 646: hktab.b_active = 0; 647: break; 648: } 649: if (hktab.b_active) { 650: hktab.b_active = 0; 651: hktab.b_errcnt = 0; 652: hktab.b_actf = dp->b_forw; 653: dp->b_active = 0; 654: dp->b_errcnt = 0; 655: dp->b_actf = bp->av_forw; 656: bp->b_resid = -(hkaddr->hkwc << 1); 657: iodone(bp); 658: if (dp->b_actf) 659: if (hkustart(unit)) 660: needie = 0; 661: } 662: as &= ~(1<<unit); 663: } 664: for (unit = 0; as; as >>= 1, unit++) 665: if (as & 1) { 666: if (unit < NHK && (hk_dk[unit].dk_flags & DKF_ALIVE)) { 667: if (hkustart(unit)) 668: needie = 0; 669: } else { 670: hkaddr->hkcs1 = HK_CCLR; 671: hkaddr->hkcs2 = unit; 672: hkaddr->hkcs1 = HK_DCLR | HK_GO; 673: hkwait(hkaddr); 674: hkaddr->hkcs1 = HK_CCLR; 675: } 676: } 677: if (hktab.b_actf && hktab.b_active == 0) 678: if (hkstart()) 679: needie = 0; 680: if (needie) 681: hkaddr->hkcs1 = HK_IE; 682: } 683: 684: #ifdef HK_DUMP 685: /* 686: * Dump routine for RK06/07 687: * Dumps from dumplo to end of memory/end of disk section for minor(dev). 688: * It uses the UNIBUS map to dump all of memory if there is a UNIBUS map. 
689: */ 690: #define DBSIZE (UBPAGE/NBPG) /* unit of transfer, one UBPAGE */ 691: 692: hkdump(dev) 693: dev_t dev; 694: { 695: register struct hkdevice *hkaddr = HKADDR; 696: daddr_t bn, dumpsize; 697: long paddr; 698: register struct ubmap *ubp; 699: int count, memblks; 700: register struct partition *pi; 701: struct dkdevice *disk; 702: int com, cn, tn, sn, unit; 703: 704: unit = dkunit(dev); 705: if (unit >= NHK) 706: return(EINVAL); 707: 708: disk = &hk_dk[unit]; 709: if ((disk->dk_flags & DKF_ALIVE) == 0) 710: return(ENXIO); 711: pi = &disk->dk_parts[dkpart(dev)]; 712: if (pi->p_fstype != FS_SWAP) 713: return(EFTYPE); 714: 715: dumpsize = hksize(dev) - dumplo; 716: memblks = ctod(physmem); 717: if (dumplo < 0 || dumpsize <= 0) 718: return(EINVAL); 719: bn = dumplo + pi->p_offset; 720: 721: hkaddr->hkcs1 = HK_CCLR; 722: hkwait(hkaddr); 723: hkaddr->hkcs2 = unit; 724: hkaddr->hkcs1 = hk_type[unit] | HK_DCLR | HK_GO; 725: hkwait(hkaddr); 726: if ((hkaddr->hkds & HKDS_VV) == 0) 727: { 728: hkaddr->hkcs1 = hk_type[unit]|HK_IE|HK_PACK|HK_GO; 729: hkwait(hkaddr); 730: } 731: ubp = &UBMAP[0]; 732: for (paddr = 0L; memblks > 0; ) 733: { 734: count = MIN(memblks, DBSIZE); 735: cn = bn/HK_NSPC; 736: sn = bn%HK_NSPC; 737: tn = sn/HK_NSECT; 738: sn = sn%HK_NSECT; 739: hkaddr->hkcyl = cn; 740: hkaddr->hkda = (tn << 8) | sn; 741: hkaddr->hkwc = -(count << (PGSHIFT-1)); 742: com = hk_type[unit]|HK_GO|HK_WRITE; 743: if (ubmap) 744: { 745: ubp->ub_lo = loint(paddr); 746: ubp->ub_hi = hiint(paddr); 747: hkaddr->hkba = 0; 748: } 749: else 750: { 751: /* non UNIBUS map */ 752: hkaddr->hkba = loint(paddr); 753: hkaddr->hkxmem = hiint(paddr); 754: com |= ((paddr >> 8) & (03 << 8)); 755: } 756: hkaddr->hkcs2 = unit; 757: hkaddr->hkcs1 = com; 758: hkwait(hkaddr); 759: if (hkaddr->hkcs1 & HK_CERR) 760: { 761: if (hkaddr->hkcs2 & HKCS2_NEM) 762: return(0); /* made it to end of memory */ 763: return(EIO); 764: } 765: paddr += (count << PGSHIFT); 766: bn += count; 767: memblks -= count; 768: } 769: return(0); /* filled disk minor dev */ 770: } 771: #endif HK_DUMP 772: 773: #define exadr(x,y) (((long)(x) << 16) | (unsigned)(y)) 774: 775: /* 776: * Correct an ECC error and restart the i/o to complete 777: * the transfer if necessary. This is quite complicated because 778: * the transfer may be going to an odd memory address base 779: * and/or across a page boundary. 
780: */ 781: hkecc(bp, flag) 782: register struct buf *bp; 783: { 784: register struct hkdevice *hkaddr = HKADDR; 785: ubadr_t addr; 786: int npx, wc; 787: int cn, tn, sn; 788: daddr_t bn; 789: unsigned ndone; 790: int cmd; 791: int unit; 792: 793: #ifdef BADSECT 794: if (flag == CONT) { 795: npx = bp->b_error; 796: ndone = npx * NBPG; 797: wc = ((int)(ndone - bp->b_bcount)) / NBPW; 798: } else 799: #endif 800: { 801: wc = hkaddr->hkwc; 802: ndone = (wc * NBPW) + bp->b_bcount; 803: npx = ndone / NBPG; 804: } 805: unit = dkunit(bp->b_dev); 806: bn = bp->b_blkno; 807: cn = bp->b_cylin - bn / HK_NSPC; 808: bn += npx; 809: cn += bn / HK_NSPC; 810: sn = bn % HK_NSPC; 811: tn = sn / HK_NSECT; 812: sn %= HK_NSECT; 813: hktab.b_active++; 814: 815: switch (flag) { 816: case ECC: 817: { 818: register byte; 819: int bit; 820: long mask; 821: ubadr_t bb; 822: unsigned o; 823: struct ubmap *ubp; 824: 825: log(LOG_WARNING, "hk%d%c: soft ecc sn %D\n", 826: unit, 'a' + (bp->b_dev & 07), bp->b_blkno + npx - 1); 827: mask = hkaddr->hkecpt; 828: byte = hkaddr->hkecps - 1; 829: bit = byte & 07; 830: byte >>= 3; 831: mask <<= bit; 832: o = (ndone - NBPG) + byte; 833: bb = exadr(bp->b_xmem, bp->b_un.b_addr); 834: bb += o; 835: if (ubmap && (bp->b_flags & (B_MAP|B_UBAREMAP))) { 836: ubp = UBMAP + ((bb >> 13) & 037); 837: bb = exadr(ubp->ub_hi, ubp->ub_lo) + (bb & 017777); 838: } 839: /* 840: * Correct until mask is zero or until end of 841: * sector or transfer, whichever comes first. 842: */ 843: while (byte < NBPG && o < bp->b_bcount && mask != 0) { 844: putmemc(bb, getmemc(bb) ^ (int)mask); 845: byte++; 846: o++; 847: bb++; 848: mask >>= 8; 849: } 850: if (wc == 0) 851: return(0); 852: break; 853: } 854: 855: #ifdef BADSECT 856: case BSE: 857: if ((bn = isbad(&hkbad[unit], cn, tn, sn)) < 0) 858: return(0); 859: bp->b_flags |= B_BAD; 860: bp->b_error = npx + 1; 861: bn = (long)hkncyl(unit)*HK_NSPC - HK_NSECT - 1 - bn; 862: cn = bn/HK_NSPC; 863: sn = bn%HK_NSPC; 864: tn = sn/HK_NSECT; 865: sn %= HK_NSECT; 866: wc = -(NBPG / NBPW); 867: break; 868: 869: case CONT: 870: bp->b_flags &= ~B_BAD; 871: if (wc == 0) 872: return(0); 873: break; 874: #endif BADSECT 875: } 876: /* 877: * Have to continue the transfer. Clear the drive 878: * and compute the position where the transfer is to continue. 879: * We have completed npx sectors of the transfer already. 880: */ 881: hkaddr->hkcs1 = HK_CCLR; 882: hkwait(hkaddr); 883: hkaddr->hkcs2 = unit; 884: hkaddr->hkcs1 = hk_type[unit] | HK_DCLR | HK_GO; 885: hkwait(hkaddr); 886: 887: addr = exadr(bp->b_xmem, bp->b_un.b_addr); 888: addr += ndone; 889: hkaddr->hkcyl = cn; 890: hkaddr->hkda = (tn << 8) + sn; 891: hkaddr->hkwc = wc; 892: hkaddr->hkba = (caddr_t)addr; 893: 894: if (!ubmap) 895: hkaddr->hkxmem=hiint(addr); 896: cmd = hk_type[unit] | ((hiint(addr) & 3) << 8) | HK_IE | HK_GO; 897: if (bp->b_flags & B_READ) 898: cmd |= HK_READ; 899: else 900: cmd |= HK_WRITE; 901: hkaddr->hkcs1 = cmd; 902: hktab.b_errcnt = 0; /* error has been corrected */ 903: return (1); 904: } 905: 906: /* 907: * Return the number of blocks in a partition. Call hkopen() to read the 908: * label if necessary. If an open is necessary then a matching close 909: * will be done. 
910: */ 911: 912: daddr_t 913: hksize(dev) 914: register dev_t dev; 915: { 916: register struct dkdevice *disk; 917: daddr_t psize; 918: int didopen = 0; 919: 920: disk = &hk_dk[dkunit(dev)]; 921: /* 922: * This should never happen but if we get called early in the kernel's 923: * life (before opening the swap or root devices) then we have to do 924: * the open here. 925: */ 926: 927: if (disk->dk_openmask == 0) 928: { 929: if (hkopen(dev, FREAD|FWRITE, S_IFBLK)) 930: return(-1); 931: didopen = 1; 932: } 933: psize = disk->dk_parts[dkpart(dev)].p_size; 934: if (didopen) 935: hkclose(dev, FREAD|FWRITE, S_IFBLK); 936: return(psize); 937: } 938: #endif NHK > 0

Defined functions
hkattach defined in line 104; used 1 times
hkclose defined in line 213; used 1 times
hkdfltlbl defined in line 258; used 3 times
hkdump defined in line 692; never used
hkecc defined in line 781; used 3 times
hkgetinfo defined in line 299; used 1 times
hkintr defined in line 552; used 1 times
hkopen defined in line 121; used 1 times
hkroot defined in line 98; never used
hksize defined in line 912; used 2 times
hkstart defined in line 461; used 2 times
hkstrategy defined in line 326; used 2 times
hkustart defined in line 366; used 3 times

Defined variables
HKADDR defined in line 58; used 9 times
bhkbuf defined in line 88; used 1 times
hk defined in line 80; used 8 times
hk_cyl defined in line 74; used 6 times
hk_dk defined in line 84; used 9 times
hk_dkn defined in line 92; used 12 times
hk_offset defined in line 65; used 1 times
hk_type defined in line 73; used 18 times
hkbad defined in line 87; used 2 times
hktab defined in line 82; used 29 times
hkutab defined in line 83; used 5 times

Defined struct's
hk_softc defined in line 76; never used

Defined macros
DBSIZE defined in line 690; used 1 times
HK_NSECT defined in line 54; used 13 times
HK_NSPC defined in line 56; used 12 times
HK_NTRAC defined in line 55; used 2 times
NHK6CYL defined in line 53; used 1 times • in line 96
NHK7CYL defined in line 52; used 1 times • in line 96
exadr defined in line 773; used 3 times
hkncyl defined in line 96; used 4 times
hkwait defined in line 95; used 17 times

Last modified: 1998-04-04
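As an aside, the cylinder/track/sector arithmetic that recurs throughout the driver (in hkstrategy(), hkdump(), and the bad-sector lookup in hkecc()) is easy to sanity-check outside the kernel. A small Python sketch of the same computation, using the geometry constants from lines 52-56 (the function name is mine, not the driver's):

    HK_NSECT = 22                    # sectors per track (line 54)
    HK_NTRAC = 3                     # tracks per cylinder (line 55)
    HK_NSPC = HK_NTRAC * HK_NSECT    # sectors per cylinder (line 56)
    NHK7CYL = 815                    # cylinders on an RK07 (line 52)

    def block_to_chs(bn):
        """Mirror the driver's bn -> (cylinder, track, sector) arithmetic."""
        cn = bn // HK_NSPC
        sn = bn % HK_NSPC
        tn = sn // HK_NSECT
        sn = sn % HK_NSECT
        return cn, tn, sn

    # The bad-sector table lives at the start of the last track of the pack,
    # exactly as hkustart() computes it: hkncyl(unit)*HK_NSPC - HK_NSECT.
    print(block_to_chs(NHK7CYL * HK_NSPC - HK_NSECT))   # (814, 2, 0)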
Dissolve and concatenate text fields into one field with order and delimiter
1033 2 09-29-2015 12:08 AM Status: Open
MarcoSchwarzak1 New Contributor II

When dissolving features it is often useful to get a summary of the fields. While there is the option to apply mathematical operations to number fields (SUM, MIN, MAX, ...), there is no way to concatenate text fields. This would be helpful (e.g. when dissolving multiple points of soil information at the same location and different depths). Additionally, it would be great if it were possible to specify a sorting field for the attributes to concatenate. There is a workaround of spatially joining points to a polygon and joining the text attributes with a delimiter, but there is no way to spatially join points to points and no way to determine a joining order.

2 Comments

DuncanHornby
I like the idea of concatenating text fields; this would be useful. I guess the issue is that you could potentially create a text value that is very long, and what can be stored is dictated by the underlying format (e.g. shapefile vs. geodatabase).

NicolasSoenens
I'm looking into the same problem at the moment. I have different points with text that must be dissolved to a polygon. Did you find the solution?
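Until a built-in option exists, the requested behavior is straightforward to script. A minimal Python sketch of the group-sort-concatenate step (the field names location_id, depth, and soil_text are invented for illustration; in ArcGIS the rows would come from a search cursor):

    from collections import defaultdict

    def concat_by_group(rows, key="location_id", order="depth",
                        text="soil_text", delimiter="; "):
        """Group rows by `key`, sort each group by `order`,
        and join the `text` values with `delimiter`."""
        groups = defaultdict(list)
        for row in rows:
            groups[row[key]].append(row)
        return {
            k: delimiter.join(r[text] for r in sorted(g, key=lambda r: r[order]))
            for k, g in groups.items()
        }

    rows = [
        {"location_id": 7, "depth": 30, "soil_text": "clay"},
        {"location_id": 7, "depth": 10, "soil_text": "topsoil"},
        {"location_id": 7, "depth": 20, "soil_text": "silt"},
    ]
    print(concat_by_group(rows))   # {7: 'topsoil; silt; clay'}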
List a Project's Error Events

GET /api/0/projects/{organization_slug}/{project_slug}/events/

Return a list of error events bound to a project. (A project represents your service in Sentry and allows you to scope events to a distinct application.)

Path Parameters

organization_slug (string) REQUIRED: The slug of the organization the events belong to.
project_slug (string) REQUIRED: The slug of the project the events belong to.

Query Parameters

full (boolean): If this is set to true then the event payload will include the full event body, including the stacktrace. Set to true to enable.
cursor (string): A pointer to the last object fetched and its sort order; used to retrieve the next or previous results.

Scopes

<auth_token> requires one of the following scopes:
• project:read

curl https://sentry.io/api/0/projects/{organization_slug}/{project_slug}/events/ \
  -H 'Authorization: Bearer <auth_token>'

Response (example):

[
  {
    "eventID": "9fac2ceed9344f2bbfdd1fdacb0ed9b1",
    "tags": [
      { "key": "browser", "value": "Chrome 60.0" },
      { "key": "device", "value": "Other" },
      { "key": "environment", "value": "production" },
      { "key": "level", "value": "fatal" },
      { "key": "os", "value": "Mac OS X 10.12.6" },
      { "key": "runtime", "value": "CPython 2.7.16" },
      { "key": "release", "value": "17642328ead24b51867165985996d04b29310337" },
      { "key": "server_name", "value": "web1.example.com" }
    ],
    "dateCreated": "2020-09-11T17:46:36Z",
    "user": null,
    "message": "",
    "title": "This is an example Python exception",
    "id": "dfb1a2d057194e76a4186cc8a5271553",
    "platform": "python",
    "event.type": "error",
    "groupID": "1889724436"
  }
]
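For reference, the same call from Python, as a minimal sketch using the third-party requests library (the slugs and token are placeholders; the eventID and title keys come from the example response above):

    import requests

    ORG = "my-org"            # placeholder organization slug
    PROJECT = "my-project"    # placeholder project slug
    TOKEN = "<auth_token>"    # needs the project:read scope

    url = f"https://sentry.io/api/0/projects/{ORG}/{PROJECT}/events/"
    resp = requests.get(
        url,
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"full": "true"},  # include the full event body
    )
    resp.raise_for_status()
    for event in resp.json():
        print(event["eventID"], event["title"])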
How do I search for only episodes of "School"?

#1
Hi,
Trying to use the PVR features to download 'School'. However, if I apply a PVR search for this, it's also downloading 'School of Roars', 'Our School' etc. Is there a way to require the PVR search to exactly match the program title? Is there a list of directives for PVR search files somewhere?
Thanks
Andy

#2
Ah, think I've found it. Didn't realise you could use regular expressions in a search. Used "^School:" as the search criteria, seems to do the job. Any better way?
Andy

#3
I had the same problem. "^School:" does the job!

#4
I believe you can do exact matching with an additional anchor $ at the end of the string. So something like ^School$ would exactly match just that word 'School' only, nothing with a longer title or if school was just part of the title.

#5
If you tried that option (as I did) you would see that it doesn't work, as the title is "School: Series 1". Best to check before offering advice. The $ closes off the search.

#6
Quote: If you tried that option (as I did) you would see that it doesn't work, as the title is "School: Series 1". Best to check before offering advice. The $ closes off the search.
You don't say? The fact it matches just 'School' is literally my point. I'm not suggesting to use that particular search term to find any particular programme, I'm using it as an example to show how to use two anchors to match an exact string. My final sentence couldn't be clearer:
Quote: So something like ^School$ would exactly match just that word 'School' only, nothing with a longer title or if school was just part of the title.
It quite obviously states it won't match if 'school' is just part of a longer title. I should hope it doesn't take a giant leap of imagination to see how this could be useful? Perhaps by matching against the entire programme title you are after - thereby only returning exact matches for that exact title and not futzing around with sub-string matches? Best to read what's being said properly before getting snarky.
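The anchoring behavior the thread settles on is easy to verify with any regular-expression engine; a quick Python sketch using the titles mentioned above (whether the PVR search uses exactly this regex flavor is an assumption):

    import re

    titles = ["School: Series 1", "School of Roars", "Our School", "School"]

    for pattern in (r"School", r"^School:", r"^School$"):
        matched = [t for t in titles if re.search(pattern, t)]
        print(f"{pattern!r:12} -> {matched}")

    # 'School'     matches all four titles (substring match)
    # '^School:'   matches only 'School: Series 1'
    # '^School$'   matches only the bare title 'School'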
Java Program to Check if a Matrix is a Sparse Matrix

This is a Java program to determine if a given matrix is a sparse matrix. If the number of zero elements is greater than the number of non-zero elements, the matrix is known as a sparse matrix.

Enter the elements of the matrix as input. The program then uses loops and an if-else condition to check whether the matrix is sparse.

Here is the source code of the Java program to determine if a given matrix is a sparse matrix. The Java program is successfully compiled and run on a Windows system. The program output is also shown below.

    import java.util.Scanner;
    public class Sparse
    {
        public static void main(String args[])
        {
            int i, j, zero = 0, count = 0;
            int array[][] = new int[10][10];
            System.out.println("Enter total rows and columns: ");
            Scanner s = new Scanner(System.in);
            int row = s.nextInt();
            int column = s.nextInt();
            System.out.println("Enter matrix:");
            for(i = 0; i < row; i++)
            {
                for(j = 0; j < column; j++)
                {
                    array[i][j] = s.nextInt();
                    System.out.print(" ");
                }
            }
            for(i = 0; i < row; i++)
            {
                for(j = 0; j < column; j++)
                {
                    if(array[i][j] == 0)
                    {
                        zero++;
                    }
                    else
                    {
                        count++;
                    }
                }
            }
            if(zero > count)
            {
                System.out.println("the matrix is sparse matrix");
            }
            else
            {
                System.out.println("the matrix is not a sparse matrix");
            }
        }
    }

Output:

    $ javac Sparse.java
    $ java Sparse

    Enter total rows and columns:
    3 3
    Enter matrix:
    1 0 5
    0 0 8
    0 0 0
    the matrix is sparse matrix
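For quick experimentation outside of Java, the same zero-versus-non-zero count fits in a few lines of Python (a sketch, not part of the original tutorial):

    def is_sparse(matrix):
        # A matrix is sparse here when zero entries outnumber non-zero ones.
        flat = [x for row in matrix for x in row]
        zeros = flat.count(0)
        return zeros > len(flat) - zeros

    print(is_sparse([[1, 0, 5], [0, 0, 8], [0, 0, 0]]))  # True, matching the sample run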
4 $\begingroup$ I am trying to understand why every (quasi-projective) nonsingular complex algebraic variety is an analytic manifold. Consider a nonsingular affine algebraic variety $X\subset \mathbb{C}^n$ of dimension $n-k$. The idea, I think, is to write it as the level set of some holomorphic submersion. If $X$ is a complete intersection, i.e. the ideal $I(X) \subset \mathbb{C}[x_1,\dots,x_n]$ is generated by $k$ polynomials $f_1,\dots,f_k$, then using nonsingularity we have that these polynomials become the components of a submersion $f:\mathbb{C}^n\to\mathbb{C}^{k}$ and hence $X$ is the level set of a submersion, so an analytic manifold of dimension $n-k$. The problem is that general nonsingular affine algebraic varieties $X$ are not complete intersections. However there is a theorem in Hartshorne which says that they are "locally a complete intersection". Now I am only beginning to learn algebraic geometry and the definition of "locally a complete intersection" is in the language of schemes which I haven't learned yet. In particular I don't understand what it means geometrically. Can we use the "locally a complete intersection" property to write $X$ as locally the level set of a submersion? TLDR: Can we use the fact that nonsingular affine algebraic varieties are "locally a complete intersection" to write them as locally the level set of a submersion? $\endgroup$ 2 • $\begingroup$ Being an analytic manifold is a local condition, so restricting to an open set where it is a complete intersection should be fine. $\endgroup$ – Nate Mar 21 '14 at 22:44 • $\begingroup$ Right, this is what I am hoping we can do. But I don't really understand if that is actually what it means to be locally a complete intersection. $\endgroup$ – Seth Mar 21 '14 at 22:50 4 $\begingroup$ The general result is that a smooth complex algebraic variety naturally yields a complex manifold. In more detail, Serre constructed the analytification functor from complex algebraic varieties to complex analytic spaces (see his paper GAGA). Essentially, this functor converts the Zariski topology of a variety into a complex-analytic topology. Also, it sends smooth varieties to complex manifolds. This is the most rigorous and systematic approach to passing between algebraic and differential geometry of which I am aware. $\endgroup$ 1 • 1 $\begingroup$ Thanks, this is a good thing to know, and I've heard about this paper before. But I have a feeling it is a very sophisticated approach that is far more general than what I need right now. I think there should be some very elementary argument along the lines of what I suggested. $\endgroup$ – Seth Mar 22 '14 at 1:46 4 $\begingroup$ This follows from the complex-analytic version of the implicit function theorem. Nonsingularity of the variety is equivalent to nonvanishing of the Jacobian, which is precisely the condition required to apply the theorem. $\endgroup$ 3 • $\begingroup$ I edited the question to make it more clear what I am really asking. $\endgroup$ – Seth Mar 22 '14 at 1:55 • $\begingroup$ I understand the application of the implicit function theorem but what I don't understand is how we guarantee the existence of the submersion we are applying it to. $\endgroup$ – Seth Mar 22 '14 at 2:04 • $\begingroup$ @Seth: Perhaps you could look in Griffiths & Harris, where this is explained in great detail. 
$\endgroup$ – Bruno Joyal Mar 22 '14 at 16:14

1 $\begingroup$ This question wasn't stated clearly enough, so I asked another more precise question whose answer answers this question. See What does it mean geometrically for a variety to be locally a complete intersection? The answer is yes, we can use the property of being locally a complete intersection to write a variety as locally the level set of a submersion. $\endgroup$
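For completeness, here is the local implicit-function-theorem argument the answers refer to, written out as a sketch (using only the setup already in the question: $X$ has dimension $n-k$ and is locally cut out by $k$ holomorphic equations with full-rank Jacobian):

Suppose $f_1, \dots, f_k$ cut out $X$ near $p$ and the Jacobian $\left( \partial f_i / \partial z_j (p) \right)$ has rank $k$. After permuting coordinates we may assume $\det \left( \partial f_i / \partial z_j (p) \right)_{1 \le i, j \le k} \neq 0$. The holomorphic implicit function theorem then provides a neighborhood $U \ni p$ and holomorphic functions $g_1, \dots, g_k$ of $(z_{k+1}, \dots, z_n)$ such that
$$X \cap U = \{ z \in U : z_i = g_i(z_{k+1}, \dots, z_n), \ 1 \le i \le k \},$$
so $(z_{k+1}, \dots, z_n)$ restricts to a holomorphic chart on $X \cap U$, exhibiting $X$ near $p$ as a graph, hence as an analytic manifold of dimension $n-k$.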
Ask questions and get helpful answers.

A picture frame is 12 inches long and 9 inches wide. In inches, what is the diagonal length of the picture frame?
A) 12 inches
B) 15 inches***
C) 8 inches
D) 6 inches

Which number is a perfect cube?
A) 3
B) 15
C) 64***
D) 567

When adding square root 25 and -9, which type of number is the sum?
A) whole number
B) irrational number
C) integer
D) radical

33 answers

1. Here is the answers since im the only generous one here🤐. 3.)B 4.B 5.C 6.D 7.C 8.B 9.A 10.B 11.B 12.A 13.C 14.C 15. V=6x6x6=216 16&17 I don't gotch you this time,sorry😏. P.S PLEASE READ VERY IMPORTANT👇 *If your that person who just likes to give people a thumbs down and say, 'Oh YoU MaDe Me GeT ThE WrOnG AnSwEr, ThAnKs AlOt' First of all, you is the one thats cheating, Second,You should be greatful that you got the answer. Last Dont be tryna give me a thumbs down and LIE!! I took the test, at the end they give you the answer so I know Im right!! PERIODTTTTT!!!!!!🤡💁‍♀️💯

2. thx @I gotch you. you helped me out

3. Preach I got you dang

4. Hi... can someone tell me the answers so I can check my own? I not trying to copy or anything... it's just that I am really confused with what the answers are... Thanks! Anonymous

5. 6 is C tho

6. Your first two answers are right. What do you think the answer to the third one is?

7. 3. An irrational number 4. 15 inches 5. I got this one incorrect so you will just have to figure it out yourself 6. integer 7. 6/7 8. 4 feet 9. 5.3 10. Riley: "it should be between 36 squared and 49 squared." 11. 0.321321321… 12. I also got this one incorrect 13. 29 14. Every irrational number can be written as a fraction

8. I always look at the thumbs up and down and if the thumbs up over powers the thumbs down then its LIT Ima Female

9. @i goch you bro! Is Right!!!!!!! Smh doesnt know what he's talking about

10. @i goch you bro. You were correct on some of them but 3 is D and 6 is C. I got 7/12 but some of that was on me but I did use 3 and 6. Thank you for helping but be careful

11. So what are the right answers

12. Who says VASE and who says VOS just how YOU pronounce it?

13. so I got chu bro is correct, I've done some myself and it seems to be all correct for me. ;D But for once we should not need to cheat but if your here to check your answers then good for you ;D

14. Is lol right😅

15. LOL is wrong I got a 7/12 cuz of him I just want to say fork u lol stick to the first one

16. I'm pretty sure the test answers are different for everyone so you would have to put the word answers not the letter answers give me a second and I'll try to find the word answers for 8th grade connexus tho (not for 1,2,15,16, and 17 though sorry those are essay questions)

17. Before I took the test my teacher has switched the answers so be careful

18. Ayo I'm not capping but the I gotchu person gave me some of the wrong answers 🧍🏼‍♀️ Can someone at least say what the answer is instead of a b c d or e Cause like I just wanna check my answers man.

19. Upon this rock you will build your church and the gates of h*ll will not prevail.
20. Everyone is pointing fingers saying someone else is wrong, so who tf is actually right. its like a Xbox 360 COD gamechat in here
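The three questions at the top can be checked directly instead of guessed at; a quick Python sketch:

    import math

    # Q1: diagonal of a 12 x 9 frame (Pythagorean theorem)
    print(math.hypot(12, 9))       # 15.0 -> B) 15 inches

    # Q2: 64 is a perfect cube
    print(round(64 ** (1 / 3)))    # 4, and 4**3 == 64

    # Q3: sqrt(25) + (-9) = 5 - 9
    print(math.sqrt(25) + (-9))    # -4.0 -> an integer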
How to Conquer Tensorphobia

A professor at Stanford once said,

If you really want to impress your friends and confound your enemies, you can invoke tensor products… People run in terror from the \otimes symbol.

He was explaining some aspects of multidimensional Fourier transforms, but this comment is only half in jest; people get confused by tensor products. It's often for good reason. People who really understand tensors feel obligated to explain them using abstract language (specifically, universal properties). And the people who explain them in elementary terms don't really understand tensors.

This post is an attempt to bridge the gap between the elementary and advanced understandings of tensors. We'll start with the elementary (axiomatic) approach, just to get a good feel for the objects we're working with and their essential properties. Then we'll transition to the "universal" mode of thought, with the express purpose of enlightening us as to why the properties are both necessary and natural. But above all, we intend to be sufficiently friendly so as to not make anybody run in fear. This means lots of examples and preferring words over symbols. Unfortunately, we simply can't get by without the reader knowing the very basics of linear algebra (the content of our first two primers on linear algebra (1) (2), though the only important part of the second is the definition of an inner product). So let's begin.

Tensors as a Bunch of Axioms

Before we get into the thick of things I should clarify some basic terminology. Tensors are just vectors in a special vector space. We'll see that such a vector space comes about by combining two smaller vector spaces via a tensor product. So the tensor product is an operation combining vector spaces, and tensors are the elements of the resulting vector space.

Now the use of the word product is quite suggestive, and it may lead one to think that a tensor product is similar or related to the usual direct product of vector spaces. In fact they are related (in a very precise sense), but they are far from similar. If you were pressed, however, you could start with the direct product of two vector spaces and take a mathematical machete to it until it's so disfigured that you have to give it a new name (the tensor product). With that image in mind let's see how that is done.

For the sake of generality we'll talk about two arbitrary finite-dimensional vector spaces V, W of dimensions n, m. Recall that the direct product V \times W is the vector space of pairs (v,w) where v comes from V and w from W. Recall that addition in this vector space is defined componentwise ((v_1,w_1) + (v_2, w_2) = (v_1 + v_2, w_1 + w_2)) and scalar multiplication scales both components \lambda (v,w) = (\lambda v, \lambda w).

To get the tensor product space V \otimes W, we make the following modifications. First, we redefine what it means to do scalar multiplication. In this brave new tensor world, scalar multiplication of the whole vector-pair is declared to be the same as scalar multiplication of any component you want. In symbols,

\displaystyle \lambda (v, w) = (\lambda v, w) = (v, \lambda w)

for all choices of scalars \lambda and vectors v, w. Second, we change the addition operation so that it only works if one of the two components is the same. In symbols, we declare that (v, w) + (v', w) = (v + v', w) only works because w is the same in both pieces, and with the same rule applying if we switch the positions of v,w above. All other additions are simply declared to be new vectors. I.e.
(x,y) + (z,w) is simply itself. It's a valid addition — we need to be able to add stuff to be a vector space — but you just can't combine it any further unless you can use the scalar multiplication to factor out some things so that y=w or x=z. To say it still one more time, a general element of the tensor V \otimes W is a sum of these pairs that can or can't be combined by addition (in general things can't always be combined).

Finally, we rename the pair (v,w) to v \otimes w, to distinguish it from the old vector space V \times W that we've totally butchered and reanimated, and we call the tensor product space as a whole V \otimes W. Those familiar with this kind of abstract algebra will recognize quotient spaces at work here, but we won't use that language except to note that we cover quotients and free spaces elsewhere on this blog, and that's the formality we're ignoring.

As an example, say we're taking the tensor product of two copies of \mathbb{R}. This means that our space \mathbb{R} \otimes \mathbb{R} is comprised of vectors like 3 \otimes 5, and moreover that the following operations are completely legitimate.

3 \otimes 5 + 1 \otimes (-5) = 3 \otimes 5 + (-1) \otimes 5 = 2 \otimes 5

6 \otimes 1 + 3\pi \otimes \pi = 3 \otimes 2 + 3 \otimes \pi^2 = 3 \otimes (2 + \pi^2)

Cool. This seemingly innocuous change clearly has huge implications on the structure of the space. We'll get to specifics about how different tensors are from regular products later in this post, but for now we haven't even proved this thing is a vector space. It might not be obvious, but if you go and do the formalities and write the thing as a quotient of a free vector space (as we mentioned we wouldn't do) then you know that quotients of vector spaces are again vector spaces. So we get that one for free. But even without that it should be pretty obvious: we're essentially just declaring that all the axioms of a vector space hold when we want them to. So if you were wondering whether

\lambda (a \otimes b + c \otimes d) = \lambda(a \otimes b) + \lambda(c \otimes d)

The answer is yes, by force of will.

So just to recall, the axioms of a tensor space V \otimes W are

1. The "basic" vectors are v \otimes w for v \in V, w \in W, and they're used to build up all other vectors.
2. Addition is symbolic, unless one of the components is the same in both addends, in which case (v_1, w) + (v_2, w) = (v_1+ v_2, w) and (v, w_1) + (v,w_2) = (v, w_1 + w_2).
3. You can freely move scalar multiples around the components of v \otimes w.
4. The rest of the vector space axioms (distributivity, additive inverses, etc) are assumed with extreme prejudice.

Naturally, one can extend this definition to n-fold tensor products, like V_1 \otimes V_2 \otimes \dots \otimes V_d. Here we write the vectors as sums of things like v_1 \otimes \dots \otimes v_d, and we enforce that addends can only be combined if all but one coordinate is the same in each, and scalar multiples move around to all coordinates equally freely.

So where does it come from?!

By now we have this definition and we can play with tensors, but any sane mathematically minded person would protest, "What the hell would cause anyone to come up with such a definition? I thought mathematics was supposed to be elegant!" It's an understandable position, but let me now try to convince you that tensor products are very natural.
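Before we do, it may help to see the axioms concretely in coordinates: once bases are fixed (as we'll do later in this post), a basic tensor v \otimes w is represented by the outer product of its coordinate vectors, and the axioms become ordinary matrix identities. A quick Python sketch (numpy and the outer-product representation are illustration choices, not something the argument depends on):

    import numpy as np

    v, w = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0])
    lam = 7.0

    t = np.outer(v, w)   # coordinates of v ⊗ w in R^3 ⊗ R^2

    # axiom 3: the scalar slides freely between the factors
    assert np.allclose(np.outer(lam * v, w), np.outer(v, lam * w))
    assert np.allclose(np.outer(lam * v, w), lam * t)

    # axiom 2: addition combines when one component is shared
    v2 = np.array([-1.0, 0.0, 1.0])
    assert np.allclose(np.outer(v + v2, w), np.outer(v, w) + np.outer(v2, w))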
The main intrinsic motivation for the rest of this section will be this: We have all these interesting mathematical objects, but over the years we have discovered that the maps between objects are the truly interesting things. A fair warning: although we’ll maintain a gradual pace and informal language in what follows, by the end of this section you’ll be reading more or less mature 20th-century mathematics. It’s quite alright to stop with the elementary understanding (and skip to the last section for some cool notes about computing), but we trust that the intrepid readers will push on. So with that understanding we turn to multilinear maps. Of course, the first substantive thing we study in linear algebra is the notion of a linear map between vector spaces. That is, a map f: V \to W that factors through addition and scalar multiplication (i.e. f(v + v') = f(v) + f(v') and f(\lambda v) = \lambda f(v)). But it turns out that lots of maps we work with have much stronger properties worth studying. For example, if we think of matrix multiplication as an operation, call it m, then m takes in two matrices and spits out their product m(A,B) = AB Now what would be an appropriate notion of linearity for this map? Certainly it is linear in the first coordinate, because if we fix B then m(A+C, B) = (A+C)B = AB + CB = m(A,B) + m(C,B) And for the same reason it’s linear in the second coordinate. But it is most definitely not linear in both coordinates simultaneously. In other words, m(A+B, C+D) = (A+B)(C+D) = AC + AD + BC + BD \neq AC + BD = m(A,C) + m(B,D) In fact, there is only one function that satisfies both “linearity in its two coordinates separately” and also “linearity in both coordinates simultaneously,” and it’s the zero map! (Try to prove this as an exercise.) So the strongest kind of linearity we could reasonably impose is that m is linear in each coordinate when all else is fixed. Note that this property allows us to shift around scalar multiples, too. For example, \displaystyle m(\lambda A, B) = \lambda AB = A (\lambda B) = m(A, \lambda B) = \lambda m(A,B) Starting to see the wispy strands of a connection to tensors? Good, but hold it in for a bit longer. This single-coordinate-wise-linear property is called bilinearity when we only have two coordinates, and multilinearity when we have more. Here are some examples of nice multilinear maps that show up everywhere: • If V is an inner product space over \mathbb{R}, then the inner product is bilinear. • The determinant of a matrix is a multilinear map if we view the columns of the matrix as vector arguments. • The cross product of vectors in \mathbb{R}^3 is bilinear. There are many other examples, but you should have at least passing familiarity with these notions, and it’s enough to convince us that multilinearity is worth studying abstractly. And so what tensors do is give a sort of classification of multilinear maps. The idea is that every multilinear map f from a product vector space U_1 \times \dots \times U_d to any vector space Y can be written first as a multilinear map to the tensor space \displaystyle \alpha : U_1 \times \dots \times U_d \to U_1 \otimes \dots \otimes U_d Followed by a linear map to Y, \displaystyle \hat{f} : U_1 \otimes \dots \otimes U_d \to Y And the important part is that \alpha doesn’t depend on the original f (but \hat{f} does). 
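Concretely, the failure of joint linearity for matrix multiplication is easy to check numerically; a quick Python sketch (numpy is an illustration choice):

    import numpy as np

    rng = np.random.default_rng(0)
    A, B = rng.random((2, 2)), rng.random((2, 2))
    C, D = rng.random((2, 2)), rng.random((2, 2))

    m = lambda X, Y: X @ Y   # the bilinear map m(A, B) = AB

    # linear in the first slot (second slot fixed) ...
    assert np.allclose(m(A + B, C), m(A, C) + m(B, C))
    # ... and in the second slot (first slot fixed)
    assert np.allclose(m(A, C + D), m(A, C) + m(A, D))

    # but NOT linear in both slots simultaneously:
    lhs = m(A + B, C + D)        # AC + AD + BC + BD
    rhs = m(A, C) + m(B, D)      # AC + BD only
    print(np.allclose(lhs, rhs))  # False (generically)

With the right notion of linearity pinned down, we return to the factorization f = \hat{f} \alpha of a multilinear map through \alpha.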
One usually draws this as a single diagram:

[Figure: the commutative diagram formed by f, \alpha, and \hat{f}, with U_1 \times \dots \times U_d mapping to both U_1 \otimes \dots \otimes U_d and Y]

And to say this diagram commutes is to say that all possible ways to get from one point to another are equivalent (the compositions of the corresponding maps you follow are equal, i.e. f = \hat{f} \alpha).

In fuzzy words, the tensor product is like the gatekeeper of all multilinear maps, and \alpha is the gate. Yet another way to say this is that \alpha is the most general possible multilinear map that can be constructed from U_1 \times \dots \times U_d. Moreover, the tensor product itself is uniquely defined by having a "most-general" \alpha (up to isomorphism). This notion is often referred to by mathematicians as the "universal property" of the tensor product. And they might say something like "the tensor product is initial with respect to multilinear mappings from the standard product." We discuss language like this in detail in this blog's series on category theory, but it's essentially a super-compact (and almost too vague) way of saying what the diagram says.

Let's explore this definition when we specialize to a tensor of two vector spaces, and it will give us a good understanding of \alpha (which is really incredibly simple, but people like to muck it up with choices of coordinates and summations). So fix V, W as vector spaces and look at the diagram

[Figure: the same commutative diagram specialized to V \times W, V \otimes W, and Y]

What is \alpha in this case? Well it just sends (v,w) \mapsto v \otimes w. Is this map multilinear? Well if we fix w then

\displaystyle \alpha(v_1 + v_2, w) = (v_1 + v_2) \otimes w = v_1 \otimes w + v_2 \otimes w = \alpha(v_1, w) + \alpha (v_2, w)

and

\displaystyle \alpha(\lambda v, w) = (\lambda v) \otimes w = (\lambda) (v \otimes w) = \lambda \alpha(v,w)

And our familiarity with tensors now tells us that the other side holds too. Actually, rather than say this is a result of our "familiarity with tensors," the truth is that this is how we know that we need to define the properties of tensors as we did. It's all because we designed tensors to be the gatekeepers of multilinear maps!

So now let's prove that all multilinear maps f : V \times W \to Y can be decomposed into an \alpha part and a \hat{f} part. To do this we need to know what data uniquely defines a multilinear map. For usual linear maps, all we had to do was define the effect of the map on each element of a basis (the rest was uniquely determined by the linearity property). We know what a basis of V \times W is, it's just the union of the bases of the pieces. Say that V has a basis v_1, \dots, v_n and W has w_1, \dots, w_m, then a basis for the product is just ((v_1, 0), \dots, (v_n,0), (0,w_1), \dots, (0,w_m)).

But bilinear maps are more nuanced, because they have two arguments. In order to say "what they do on a basis" we really need to know how they act on all possible pairs of basis elements. For how else could we determine f(v_1 + v_2, w_1)? If there are n of the v_i's and m of the w_i's, then there are nm such pairs f(v_i, w_j).

Uncoincidentally, as V \otimes W is a vector space, its basis can also be constructed in terms of the bases of V and W. You simply take all possible tensors v_i \otimes w_j. Since every v \in V, w \in W can be written in terms of their bases, it's clear that any tensor \sum_{k} a_k \otimes b_k can also be written in terms of the basis tensors v_i \otimes w_j (by simply expanding each a_k, b_k in terms of their respective bases, and getting a larger sum of more basic tensors).
Just to drive this point home, if (e_1, e_2, e_3) is a basis for \mathbb{R}^3, and (g_1, g_2) a basis for \mathbb{R}^2, then the tensor space \mathbb{R}^3 \otimes \mathbb{R}^2 has basis (e_1 \otimes g_1, e_1 \otimes g_2, e_2 \otimes g_1, e_2 \otimes g_2, e_3 \otimes g_1, e_3 \otimes g_2) It’s a theorem that finite-dimensional vector spaces of equal dimension are isomorphic, so the length of this basis (6) tells us that \mathbb{R}^3 \otimes \mathbb{R}^2 \cong \mathbb{R}^6. So fine, back to decomposing f. All we have left to do is use the data given by f (the effect on pairs of basis elements) to define \hat{f} : V \otimes W \to Y. The definition is rather straightforward, as we have already made the suggestive move of showing that the basis for the tensor space (v_i \otimes w_j) and the definition of f(v_i, w_j) are essentially the same. That is, just take \hat{f}(v_i \otimes w_j) = f(v_i, w_j). Note that this is just defined on the basis elements, and so we extend to all other vectors in the tensor space by imposing linearity (defining \hat{f} to split across sums of tensors as needed). Is this well defined? Well, multilinearity of f forces it to be so. For if we had two equal tensors, say, \lambda v \otimes w = v \otimes \lambda w, then we know that f has to respect their equality, because f(\lambda v_i, w_j) = f(v_i, \lambda w_j), so \hat{f} will take the same value on equal tensors regardless of which representative we pick (where we decide to put the \lambda). The same idea works for sums, so everything checks out, and f(v,w) is equal to \hat{f} \alpha, as desired. Moreover, we didn’t make any choices in constructing \hat{f}. If you retrace our steps in the argument, you’ll see that everything was essentially decided for us once we fixed a choice of a basis (by our wise decisions in defining V \otimes W). Since the construction would be isomorphic if we changed the basis, our choice of \hat{f} is unique. There is a lot more to say about tensors, and indeed there are some other useful ways to think about tensors that we’ve completely ignored. But this discussion should make it clear why we define tensors the way we do. Hopefully it eliminates most of the mystery in tensors, although there is still a lot of mystery in trying to compute stuff using tensors. So we’ll wrap up this post with a short discussion about that. Computability and Stuff It should be clear by now that plain product spaces V \times W and tensor product spaces V \otimes W are extremely different. In fact, they’re only related in that their underlying sets of vectors are built from pairs of vectors in V and W. Avid readers of this blog will also know that operations involving matrices (like row reduction, eigenvalue computations, etc.) are generally efficient, or at least they run in polynomial time so they’re not crazy impractically slow for modest inputs. On the other hand, it turns out that almost every question you might want to ask about tensors is difficult to answer computationally. As with the definition of the tensor product, this is no mere coincidence. There is something deep going on with tensors, and it has serious implications regarding quantum computing. More on that in a future post, but for now let’s just focus on one hard problem to answer for tensors. As you know, the most general way to write an element of a tensor space U_1 \otimes \dots \otimes U_d is as a sum of the basic-looking tensors. 
\displaystyle \sum_{k} a_{1,k} \otimes a_{2,k} \otimes \dots \otimes a_{d,k}

where the a_{i,k} are linear combinations of basis vectors in the U_i. But as we saw with our examples over \mathbb{R}, there can be lots of different ways to write a tensor. If you're lucky, you can write the entire tensor as a one-term sum, that is just a tensor a_1 \otimes \dots \otimes a_d. If you can do this we call the tensor a pure tensor, or a rank 1 tensor. We then have the following natural definition and problem:

Definition: The rank of a tensor x \in U_1 \otimes \dots \otimes U_d is the minimum number of terms in any representation of x as a sum of pure tensors. The one exception is the zero element, which has rank zero by convention.

Problem: Given a tensor x \in k^{n_1} \otimes k^{n_2} \otimes k^{n_3} where k is a field, compute its rank.

Of course this isn't possible in standard computing models unless you can represent the elements of the field (and hence the elements of the vector space in question) in a computer program. So we restrict k to be either the rational numbers \mathbb{Q} or a finite field \mathbb{F}_{q}.

Even though the problem is simple to state, it was proved in 1990 (a result of Johan Håstad) that tensor rank is hard to compute. Specifically, the theorem is that

Theorem: Computing tensor rank is NP-hard when k = \mathbb{Q} and NP-complete when k is a finite field.

The details are given in Håstad's paper, but the important work that followed essentially showed that most problems involving tensors are hard to compute (many of them by reduction from computing rank). This is unfortunate, but also displays the power of tensors. In fact, tensors are so powerful that many believe understanding them better will lead to insight in some very important problems, like finding faster matrix multiplication algorithms or proving circuit lower bounds (which is closely related to P vs NP). Finding low-rank tensor approximations is also a key technique in a lot of recent machine learning and data mining algorithms. With this in mind, the enterprising reader will probably agree that understanding tensors is both valuable and useful.

In the future of this blog we'll hope to see some of these techniques, but at the very least we'll see the return of tensors when we delve into quantum computing. Until next time!

36 thoughts on "How to Conquer Tensorphobia"

1. Thank you for a great article, Jeremy! By the way, I have a little question about the notation for the basis of $V \times W$: shouldn't it be more correct to write it as $\lbrace (v_1, 0), \dots, (v_n, 0), (0, w_1), \dots, (0, w_m) \rbrace$?

2. I was going to ask if the Hastad result holds specifically for tensor products of finite-dimensional vector spaces, but then I realised that every finitely representable tensor (in the obvious representation) is embedded in a tensor product of finite-dimensional vector spaces by cutting the space off at the highest-numbered coordinate. For infinite families of vector spaces, there's this difference between direct sum (where all elements have only finitely many nonzero components) and direct product (where this is not the case). Do you know if there's a similar distinction for tensor products?

• Shoveling my driveway gave me the insight (obvious in hindsight) that you always need infinitely many nonzero vectors to make a nonzero element of the tensor product of infinitely many vector spaces, because even one zero component makes the tensor product zero.
So there is no distinction similar to the difference between direct sum and direct product.

3. Thanks for a wonderful post, as always! I'd like to add something, too: Recently in machine learning and statistics there's been a surge of interest in a certain class of tensors which CAN be decomposed efficiently (poly-time). These are orthogonally-decomposable tensors, and they have an amazing array of newly discovered applications in fitting complex statistical models. Examples include simple Gaussian Mixture Models, Hidden Markov Models, Latent Dirichlet Allocation, Independent Component Analysis and more.

• Thanks j2kun, urishin! I'm now reading "Nonnegative Tensor Factorization with Applications to Statistics and Computer Vision". I always considered tensors a bit like category theory, perhaps fun for a Sunday afternoon when it's raining and I want to hone my conceptual skills a bit with presheafs, fiber products, and what not, but this is actually quite useful! Nonnegative matrix factorization shows "ghost structures" compared to nonnegative tensor factorization. And their treatment of expectation-maximization as a nonnegative tensor factorization (and not the best one) is of course superb!

4. "But above all, we intend to be sufficiently friendly so as to not make anybody run in fear." That you accomplished, but you lost me a short distance in. I know a lot and can use all of it involving vectors, but you don't form bridges – there is no "extension" or superset. Just something not quite completely different, but using strange symbols – I'm not quite sure if the operator I can't find the unicode character for is a cross product, or means something identical, analogous, similar, or utterly and completely different for vectors and tensors. I won't run in fear. I will just shrug and walk away. You may understand something, but cannot reduce it and explain it to me.

• Personally I'm not really seeing where the "bridge" between elementary and advanced terms is (it seems more like a roundabout set firmly in advanced). I think the problem is that you're thinking of the axiomatic description as "elementary", but by then us laymen are starting to amble off …

• The Kronecker product is just a particular way to write linear maps between tensor spaces that arise as the tensor product of two linear *maps* on the pieces. (It should make sense that if you can tensor two vector spaces, then you can tensor two linear operators on those spaces. It's just (A \otimes B)(v \otimes w) = Av \otimes Bw.) The Kronecker product is just a way of "picking coordinates" or "fixing a basis" so that this "tensored" linear map can be represented as a single matrix. The basis is quite obvious; it's just the lexicographic ordering on the bases of the two pieces (i.e., v_1 \otimes w_1, v_1 \otimes w_2, \dots, v_1 \otimes w_m, v_2 \otimes w_1, \dots, v_n \otimes w_m). It's useful if you want to do things like row-reduction on maps between tensor spaces, because you don't need to change any of your existing algorithms.

5. As you know, the most general way to write an element of a tensor space $U_1 \otimes \dots \otimes U_d$ is as a sum of the basic-looking tensors. $\displaystyle \sum_{k} a_{1,k} \otimes a_{2,k} \otimes \dots \otimes a_{d,k}$ where the $a_{i,k}$ may be sums of vectors from $U_i$ themselves. But as we saw with our examples over $\mathbb{R}$, there can be lots of different ways to write a tensor.
If you’re lucky, you can write the entire tensor as a one-term sum, that is just a tensor $a \otimes b$. $b_1 \otimes \ldots \otimes b_d$ perhaps? Like 6. I was interested to find that your use of “rank” differs substantially from its usage as I have encountered it elsewhere. For example, see http://mathworld.wolfram.com/TensorRank.html To be fair, I have principally used tensors mainly for their applications in physics/smooth manifold theory where they are defined as multilinear functionals on a product of copies of a vector space and its dual — so, not very heavy on the algebraic way of thinking. Cool article! I’ve been enjoying these primers. Like 7. Thanks, great post! Minor suggestions for clarity: “So now let’s prove that all maps” => “… all multilinear maps” “But multilinear maps are more nuanced, because they have two arguments.” => “But bilinear maps…” Like 8. Nice article. One question, though: is it too obvious to require justification that all pairs of basis elements of the component vector spaces form a linearly independent set in the tensor product of two vector spaces? Like 9. Thanks a lot for this article! This site is super interesting and I just started browsing it recently, and I have heard of tensors and the tensor product before, but I had never properly understood it until reading this article. Like 10. You said: m(A+B, C+D) = (A+B)(C+D) = AC + AD + BC + BD m(A+B, C+D) = m(A,C) + m(B,D). I am totally confused as to why you would make that statement, specifically, why would we think that m(A+B, C+D) = m(A,C) + m(B,D) and what does that have to do with m(A+B, C+D) = m(A,C) + m(A,D) + m(B,C) + m(B,D)? It seems to me that: m(A+B, C+D) = (A+B)(C+D) = AC + AD + BC + BD = m(A, C+D) + m(B, C+D) = m(A,C) + m(A,D) + m(B,C) + m(B,D) = AC + AD + BC + BD I tried the above with some sample matrices and I did find that (A+B)(C+D) = AC + AD + BC + BD. Then you state that the one and only function that satisfies both linearity conditions is the zero map, but the derivation I gave above says otherwise. So, what am I not understanding? I understand that if the A, B, C, and D were some abstract thing like you talk about elsewhere that the properties of those things might be such as to make the derivation invalid, but m is matrix multiplication and A, B, C, and D are matrices, so I don’t see why m is not linear in both arguments simultaneously. What am I missing? Thanks. Like • The point of saying that is to say that the “m” operator is _not_ linear in both coordinates simultaneously. The part that “I said” has a “not equal” symbol in between the two statements. The question was “what could it mean to call m(A,B) linear?” because there are two coordinates. Saying it’s linear in just one coordinate is fine, buy saying it’s linear in both coordinates at the same time (m(A+B,C+D) = m(A,C) + m(B,D)) is too strong. It’s not a particularly interesting statement. Like 11. Thanks for your prompt response, I appreciate it very much. I tried to put a “not equal” symbol between the two statements, but something went wrong. It seems to me that you are defining linearity in (m(A+B,C+D) = m(A,C) + m(B,D)) as pairing A with C and B with D, whereas I’m defining it as breaking out the two arguments of the LHS into a fully distributed multiplication over the full addition: m(A+B, C+D) = m(A,C) + m(A,D) + m(B,C) + m(B,D). 
I think I understand what you are saying about the differences between the "direct product" vector space with its componentwise addition and scalar multiplication on both components versus the "tensor product" space with its different rules of addition & multiplication, but the example you used was the matrix multiplication of matrices. I'm assuming these are ordinary matrices from elementary linear algebra. From what I'm seeing in my example, they work by the fully distributed procedure I used. I guess that's what's confusing me. Unless I picked four matrices that just happened to work, it seems to me that matrices behave according to the full distribution rather than the partial, tensor-product-space-like rules.

Another way in which I may be confused is that perhaps you're saying that we just define "your" definition of linearity of matrix multiplication, as you said, "by force of will". Perhaps this is a way of defining a truly interesting map between objects. That is, just forget everything we've ever learned about elementary matrix and real number algebra and just define linearity "your" way for the purpose of providing some rationale and insight into why "your" definition of linearity is important in tensor product spaces, but don't worry that matrices don't really work this way. If that's the case, then I'll just accept what you said and see where it leads. If that's not the case, then I'm definitely confused.

So I guess in my mind, the question boils down to these points: 1. Am I correct that elementary matrix multiplication works by m(A+B, C+D) = m(A,C) + m(A,D) + m(B,C) + m(B,D), or not? 2. Is the definition m(A+B,C+D) = m(A,C) + m(B,D) something we've defined by force of will just for pedagogical purposes related to understanding tensor product spaces, but not intended to be how matrix multiplication really behaves?

If what I'm asking makes any sense to you and you have the time and feel inclined to respond again, I'll certainly appreciate it, but I know you're very busy and I don't want to waste your time & energy trying to train this particular chimpanzee. Either way, I'll keep reading and perhaps I'll stumble upon whatever it is that's confusing me so much at the moment. Thanks, Greg.

12. "The definition is rather straightforward, as we have already made the suggestive move of showing that the basis for the tensor space (v_i \otimes w_j) and the definition of f(v_i, w_j) are essentially the same." In the final f(v_i,w_j) above, should the f be an \alpha?
Count vertex array elements

I have a problem with counting the elements in a vertex array. I have a vertex array like this:

    Vertex temp = { { x, y, z, norm, u, v, color }, ... };

Here I can use sizeof to get the right number of elements to create my vertex buffer. But when I declare my vertex array like this:

    Vertex* temp;
    temp = new Vertex[count];
    // foreach ...

then I can't use sizeof? Any tips?

Reply (Tispe):

    size = count * sizeof(Vertex);
    for (int i = 0; i < count; i++)
    {
        temp[i].x = 1337;
    }

Reply:

You need to either manually keep track of the array size or use something like std::vector. Is there any reason why you aren't using the latter?

Reply (original poster):

Don't know how I can send data to my buffer otherwise? This is the geometry I want to calculate the count on.

    VERTEX OurVertices[] =
    {
        {-1.0f, -1.0f, 1.0f, D3DXVECTOR3(0.0f, 0.0f, 1.0f), 0.0f, 0.0f},    // side 1
        {1.0f, -1.0f, 1.0f, D3DXVECTOR3(0.0f, 0.0f, 1.0f), 0.0f, 1.0f},
        {-1.0f, 1.0f, 1.0f, D3DXVECTOR3(0.0f, 0.0f, 1.0f), 1.0f, 0.0f},
        {1.0f, 1.0f, 1.0f, D3DXVECTOR3(0.0f, 0.0f, 1.0f), 1.0f, 1.0f},
        {-1.0f, -1.0f, -1.0f, D3DXVECTOR3(0.0f, 0.0f, -1.0f), 0.0f, 0.0f},  // side 2
        {-1.0f, 1.0f, -1.0f, D3DXVECTOR3(0.0f, 0.0f, -1.0f), 0.0f, 1.0f},
        {1.0f, -1.0f, -1.0f, D3DXVECTOR3(0.0f, 0.0f, -1.0f), 1.0f, 0.0f},
        {1.0f, 1.0f, -1.0f, D3DXVECTOR3(0.0f, 0.0f, -1.0f), 1.0f, 1.0f},
        {-1.0f, 1.0f, -1.0f, D3DXVECTOR3(0.0f, 1.0f, 0.0f), 0.0f, 0.0f},    // side 3
        {-1.0f, 1.0f, 1.0f, D3DXVECTOR3(0.0f, 1.0f, 0.0f), 0.0f, 1.0f},
        {1.0f, 1.0f, -1.0f, D3DXVECTOR3(0.0f, 1.0f, 0.0f), 1.0f, 0.0f},
        {1.0f, 1.0f, 1.0f, D3DXVECTOR3(0.0f, 1.0f, 0.0f), 1.0f, 1.0f},
        {-1.0f, -1.0f, -1.0f, D3DXVECTOR3(0.0f, -1.0f, 0.0f), 0.0f, 0.0f},  // side 4
        {1.0f, -1.0f, -1.0f, D3DXVECTOR3(0.0f, -1.0f, 0.0f), 0.0f, 1.0f},
        {-1.0f, -1.0f, 1.0f, D3DXVECTOR3(0.0f, -1.0f, 0.0f), 1.0f, 0.0f},
        {1.0f, -1.0f, 1.0f, D3DXVECTOR3(0.0f, -1.0f, 0.0f), 1.0f, 1.0f},
        {1.0f, -1.0f, -1.0f, D3DXVECTOR3(1.0f, 0.0f, 0.0f), 0.0f, 0.0f},    // side 5
        {1.0f, 1.0f, -1.0f, D3DXVECTOR3(1.0f, 0.0f, 0.0f), 0.0f, 1.0f},
        {1.0f, -1.0f, 1.0f, D3DXVECTOR3(1.0f, 0.0f, 0.0f), 1.0f, 0.0f},
        {1.0f, 1.0f, 1.0f, D3DXVECTOR3(1.0f, 0.0f, 0.0f), 1.0f, 1.0f},
        {-1.0f, -1.0f, -1.0f, D3DXVECTOR3(-1.0f, 0.0f, 0.0f), 0.0f, 0.0f},  // side 6
        {-1.0f, -1.0f, 1.0f, D3DXVECTOR3(-1.0f, 0.0f, 0.0f), 0.0f, 1.0f},
        {-1.0f, 1.0f, -1.0f, D3DXVECTOR3(-1.0f, 0.0f, 0.0f), 1.0f, 0.0f},
        {-1.0f, 1.0f, 1.0f, D3DXVECTOR3(-1.0f, 0.0f, 0.0f), 1.0f, 1.0f},
    };

    DWORD OurIndices[] =
    {
        0, 1, 2,    // side 1
        2, 1, 3,
        4, 5, 6,    // side 2
        6, 5, 7,
        8, 9, 10,   // side 3
        10, 9, 11,
        12, 13, 14, // side 4
        14, 13, 15,
        16, 17, 18, // side 5
        18, 17, 19,
        20, 21, 22, // side 6
        22, 21, 23,
    };

Reply:

If you're hard-coding your geometry like that then you know how many vertices and indices you have, unless I'm missing something?
Reply:

I think that this would be a method:

    const ULONG_PTR ulSize = sizeof(OurVertices) / sizeof(VERTEX);

Reply (Brother Bob):

Consider replacing the calculation with the following function:

    template<typename T, size_t N>
    size_t number_of_array_elements(T (&)[N]) { return N; }

It will return the number of elements of an array, and will error out at compile time if you don't pass it an array. Your code, for example, will fail silently and calculate an incorrect size value if you try to use it with a pointer instead of an array.

Reply:

Could you explain, please, how your template works? (or add an example)

Reply (Brother Bob):

I can interpret your question in two ways: 1. you want to know how to use it, or 2. you want to know how it works and how it is able to determine the size of an array. I'll just go ahead and answer both, so pick your answer :)

For 1, it is just a function where you pass the array and it returns its size.

    int foo[42];
    int size = number_of_array_elements(foo);

The variable size is assigned the value 42, because the array foo contains 42 elements.

For 2, you need to know a bit about references to arrays. Normally when you pass an array to a function, you pass a pointer to the first value in the array along with the size of the array. This way, the function can take any array, but since the array is decayed into a pointer, the size information about the array is also lost, which is why you also have to pass its size to the function. Such a function prototype typically looks like this:

    void myFunction(int *p, int n);

where p is the pointer and n is the size. The function now knows how many elements are pointed to by p. You can pass any pointer to any array of any size to this function, but you have to manually track the size of the array.

Another way to pass an array to a function is by reference. The syntax is a little strange if you're not used to it:

    void myFunction(int (&a)[42]);

The function now takes a reference to an array of 42 elements. This way, you can only pass an array of 42 elements, but the type of the array (which includes its size) is preserved entirely. In the previous function, the type of the array is partially lost: the type of the elements is implied by the pointer's type, but the array's size is lost in the pointer decay.

Now, a template is introduced to allow the type to vary so that you can pass an array of any type. Furthermore, the size of the array can also be made a template parameter. So replace the type (int in this case) and the size (42 in this case) with generic template parameters.

    template<typename T, size_t N>
    void myFunction(T (&a)[N]);

Next step is to ask: what do we want the function to do with the array? In this case, we don't want to do anything with the array itself, we just want the function to return the size of the array.

    template<typename T, size_t N>
    size_t myFunction(T (&a)[N]) { return N; }

Last step to match the function I presented above is to remove the name of the parameter. Since we don't use the parameter a anywhere in the function, there is no need to give it a name, so we can leave it as an unnamed parameter.
    template<typename T, size_t N>
    size_t myFunction(T (&)[N]) { return N; }

And there it is, the function number_of_array_elements.
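To round the thread out, here is a self-contained usage sketch. Only number_of_array_elements is taken from the posts above; the simplified VERTEX struct and the rest are illustrative assumptions, not part of the original thread:

    #include <cstddef>

    // Returns the element count of a real array; refuses to compile for pointers.
    template<typename T, std::size_t N>
    std::size_t number_of_array_elements(T (&)[N]) { return N; }

    struct VERTEX { float x, y, z; };

    int main() {
        VERTEX OurVertices[24] = {};  // static array: size is part of the type
        std::size_t n = number_of_array_elements(OurVertices);  // n == 24

        VERTEX *heapVertices = new VERTEX[10];
        // number_of_array_elements(heapVertices);  // would not compile: pointer, not array
        delete[] heapVertices;
        return n == 24 ? 0 : 1;
    }

For a heap-allocated array there is no way to recover the count from the pointer alone, which is exactly why the earlier replies suggest tracking the count yourself or using std::vector.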
5.2. Setting Access ACLs There are two types of ACLs: access ACLs and default ACLs. An access ACL is the access control list for a specific file or directory. A default ACL can only be associated with a directory; if a file within the directory does not have an access ACL, it uses the rules of the default ACL for the directory. Default ACLs are optional. ACLs can be configured: 1. Per user 2. Per group 3. Via the effective rights mask 4. For users not in the user group for the file The setfacl utility sets ACLs for files and directories. Use the -m option to add or modify the ACL of a file or directory: # setfacl -m rules files Rules (rules) must be specified in the following formats. Multiple rules can be specified in the same command if they are separated by commas. u:uid:perms Sets the access ACL for a user. The user name or UID may be specified. The user may be any valid user on the system. g:gid:perms Sets the access ACL for a group. The group name or GID may be specified. The group may be any valid group on the system. m:perms Sets the effective rights mask. The mask is the union of all permissions of the owning group and all of the user and group entries. o:perms Sets the access ACL for users other than the ones in the group for the file. Permissions (perms) must be a combination of the characters r, w, and x for read, write, and execute. If a file or directory already has an ACL, and the setfacl command is used, the additional rules are added to the existing ACL or the existing rule is modified. Example 5.1. Give read and write permissions For example, to give read and write permissions to user andrius: # setfacl -m u:andrius:rw /project/somefile To remove all the permissions for a user, group, or others, use the -x option and do not specify any permissions: # setfacl -x rules files Example 5.2. Remove all permissions For example, to remove all permissions from the user with UID 500: # setfacl -x u:500 /project/somefile
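A default ACL, mentioned at the start of this section, is set on a directory rather than on a file. As a supplementary example (this follows standard setfacl usage and is not part of the original excerpt), the -d option applies the rule to the directory's default ACL, so files created inside the directory inherit the permissions:

# setfacl -d -m u:andrius:rw /project

Equivalently, the rule itself can carry the d: prefix:

# setfacl -m d:u:andrius:rw /project

The getfacl utility displays both the access ACL and any default ACL of a file or directory:

# getfacl /project/somefile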
Can I change the sitename?

Hi, can I change the site name? Thanks in advance.

1 answer

If by "site name" you mean the url for Cloud, then the answer is no, but yes. You can't change the url directly (yet), but you can do it by buying a new Cloud site with the new name, export the old site, import it into the new one and shut down the old.

If it's not the url you mean, then could you explain what you mean by site name?
//===-- sanitizer_quarantine.h ----------------------------------*- C++ -*-===//
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
// Memory quarantine for AddressSanitizer and potentially other tools.
// Quarantine caches some specified amount of memory in per-thread caches,
// then evicts to global FIFO queue. When the queue reaches specified threshold,
// oldest memory is recycled.
//
//===----------------------------------------------------------------------===//

#ifndef SANITIZER_QUARANTINE_H
#define SANITIZER_QUARANTINE_H

#include "sanitizer_internal_defs.h"
#include "sanitizer_mutex.h"
#include "sanitizer_list.h"

namespace __sanitizer {

template<typename Node> class QuarantineCache;

struct QuarantineBatch {
  static const uptr kSize = 1021;
  QuarantineBatch *next;
  uptr size;
  uptr count;
  void *batch[kSize];
};

COMPILER_CHECK(sizeof(QuarantineBatch) <= (1 << 13));  // 8Kb.

// The callback interface is:
// void Callback::Recycle(Node *ptr);
// void *cb.Allocate(uptr size);
// void cb.Deallocate(void *ptr);
template<typename Callback, typename Node>
class Quarantine {
 public:
  typedef QuarantineCache<Callback> Cache;

  explicit Quarantine(LinkerInitialized)
      : cache_(LINKER_INITIALIZED) {
  }

  void Init(uptr size, uptr cache_size) {
    max_size_ = size;
    min_size_ = size / 10 * 9;  // 90% of max size.
    max_cache_size_ = cache_size;
  }

  void Put(Cache *c, Callback cb, Node *ptr, uptr size) {
    c->Enqueue(cb, ptr, size);
    if (c->Size() > max_cache_size_)
      Drain(c, cb);
  }

  void NOINLINE Drain(Cache *c, Callback cb) {
    {
      SpinMutexLock l(&cache_mutex_);
      cache_.Transfer(c);
    }
    if (cache_.Size() > max_size_ && recycle_mutex_.TryLock())
      Recycle(cb);
  }

 private:
  // Read-only data.
  char pad0_[kCacheLineSize];
  uptr max_size_;
  uptr min_size_;
  uptr max_cache_size_;
  char pad1_[kCacheLineSize];
  SpinMutex cache_mutex_;
  SpinMutex recycle_mutex_;
  Cache cache_;
  char pad2_[kCacheLineSize];

  void NOINLINE Recycle(Callback cb) {
    Cache tmp;
    {
      SpinMutexLock l(&cache_mutex_);
      while (cache_.Size() > min_size_) {
        QuarantineBatch *b = cache_.DequeueBatch();
        tmp.EnqueueBatch(b);
      }
    }
    recycle_mutex_.Unlock();
    DoRecycle(&tmp, cb);
  }

  void NOINLINE DoRecycle(Cache *c, Callback cb) {
    while (QuarantineBatch *b = c->DequeueBatch()) {
      const uptr kPrefetch = 16;
      for (uptr i = 0; i < kPrefetch; i++)
        PREFETCH(b->batch[i]);
      for (uptr i = 0; i < b->count; i++) {
        PREFETCH(b->batch[i + kPrefetch]);
        cb.Recycle((Node*)b->batch[i]);
      }
      cb.Deallocate(b);
    }
  }
};

// Per-thread cache of memory blocks.
template<typename Callback>
class QuarantineCache {
 public:
  explicit QuarantineCache(LinkerInitialized) {
  }

  QuarantineCache()
      : size_() {
    list_.clear();
  }

  uptr Size() const {
    return atomic_load(&size_, memory_order_relaxed);
  }

  void Enqueue(Callback cb, void *ptr, uptr size) {
    if (list_.empty() || list_.back()->count == QuarantineBatch::kSize) {
      AllocBatch(cb);
      size += sizeof(QuarantineBatch);  // Count the batch in Quarantine size.
} QuarantineBatch *b = list_.back(); b->batch[b->count++] = ptr; b->size += size; SizeAdd(size); } void Transfer(QuarantineCache *c) { list_.append_back(&c->list_); SizeAdd(c->Size()); atomic_store(&c->size_, 0, memory_order_relaxed); } void EnqueueBatch(QuarantineBatch *b) { list_.push_back(b); SizeAdd(b->size); } QuarantineBatch *DequeueBatch() { if (list_.empty()) return 0; QuarantineBatch *b = list_.front(); list_.pop_front(); SizeSub(b->size); return b; } private: IntrusiveList<QuarantineBatch> list_; atomic_uintptr_t size_; void SizeAdd(uptr add) { atomic_store(&size_, Size() + add, memory_order_relaxed); } void SizeSub(uptr sub) { atomic_store(&size_, Size() - sub, memory_order_relaxed); } NOINLINE QuarantineBatch* AllocBatch(Callback cb) { QuarantineBatch *b = (QuarantineBatch *)cb.Allocate(sizeof(*b)); b->count = 0; b->size = 0; list_.push_back(b); return b; } }; } // namespace __sanitizer #endif // #ifndef SANITIZER_QUARANTINE_H
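The header never shows what a Callback looks like in practice. As a purely hypothetical sketch (not part of the sanitizer sources; a real tool such as ASan wires these into its own allocator internals), the three required members could be stubbed with malloc/free:

// Sketch of a Callback satisfying the interface documented at the top of
// the Quarantine class. Names here are illustrative assumptions.
struct ToyCallback {
  void Recycle(void *ptr) { free(ptr); }              // evicted quarantined block
  void *Allocate(uptr size) { return malloc(size); }  // storage for a QuarantineBatch
  void Deallocate(void *ptr) { free(ptr); }           // frees a drained QuarantineBatch
};

With Quarantine<ToyCallback, void> one would call Init(max_bytes, per_thread_cache_bytes) once, then Put(&cache, ToyCallback(), node, size) on each deallocation; blocks are only recycled once the global FIFO exceeds its limit, which is the delayed-reuse behavior the quarantine exists to provide.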
Google can provide plenty of useful information during the footprinting process. Google hacking has been around for a long time, although it is not widely known to the public. The process is simple but effective: we use fine-tuned operators to get precise information. Using Google hacking we can get information on:

Passwords
File types
Folders
Logon portals
Configuration data

Let's look at the operators that make this possible. Each operator is entered directly in the Google search window.

1. cache
Displays the version of a web page that Google holds in its cache, not the current version.
Example: cache:mysite.com

2. link
Shows websites with links to the specified site, in our case mysite.com.
Example: link:mysite.com

3. info
Shows information about the listed page.
Example: info:mysite.com

4. site
Restricts searches to the location specified. In our case we search mysite.com for myword.
Example: myword site:mysite.com

5. allintitle
Returns pages with the specified keywords in their title.
Example: allintitle:keywords

6. allinurl
Restricts searches to URLs containing the specified keywords.
Example: allinurl:keywords

7. related
Finds sites that are similar to a web address you already know.
Example: related:time.com

8. OR
Finds pages that might use one of several words.
Example: marathon OR race

Here is a very good source for reference: https://www.exploit-db.com/google-hacking-database/

Also be careful and don't run too many queries at once, as Google can shut them down.
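Operators can also be combined in a single query to narrow results further. A small illustrative query using only the operators listed above (mysite.com is a placeholder, as elsewhere in this list):

Example: password OR passwd site:mysite.com

This restricts results to mysite.com and matches pages containing either word, which is the typical pattern when hunting for exposed credential material on a single target.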
In most cases we would be reluctant to remove outliers from the dataset just to get a better fit. Robust estimators such as Least Trimmed Squares are sometimes recommended in order to fit a regression line without the influence of outliers (or at least down-weighting them). I see that we are keeping the full dataset, so the outlier points will be present in summary statistics, plots, etc. But besides that, is there any other substantial difference between the two approaches? The usual criticisms of not considering data points that might be legitimate and correctly reflect the population seem not to be addressed, just circumvented with a formalized method that automates the process.

Comments:

• Omitting outliers in effect gives those observations zero weight, regardless. If, with LTS or a similar method, some observations end up with in effect zero weight, that is a result contingent on the data and the method. It's not that different from, say, calculating the mean after omitting outliers versus calculating a median. The median is contingent on the values of all the data, even if in practice you would get the same result from many perturbed versions of the same dataset. – Nick Cox

• Yes, indeed I am aware that omitting them is equivalent to giving them zero weight. My question is then pointing more towards: why is omitting outliers a generally criticized practice, while an automated method that does in practice the same is seen as a valid approach? – Kuku

• You could say that taking a median is equivalent to omitting every value except one or two in the middle. So, why don't you? There is no deep problem in omitting outliers if they are all impossible values. But otherwise the motive for omitting outliers is often just that they are awkward for certain methods. It's better to change the methods than to change the data. – Nick Cox

• The median example is very telling. The way I see it, the main criticism of outlier removal in studies would then be a matter of disclosure? With adjusted methods such as LTS making explicit the decisions and thresholds being used. – Kuku

Answer:

The reason is largely cultural, in my opinion. Well-defined statistical methods are favored in science because they give a transparent analysis of the data. This is probably one of the reasons that p-values are so popular.

When an outlier is excluded by a practitioner manually, there may be many factors behind that judgement. A reader of the practitioner's research may need a detailed and non-leading explanation before they understand the justification for the exclusion of a data point. In contrast, a method like LTS excludes points based on a clear algorithm. Once the tuning parameters, like the alpha level, are set, it is generally transparent why points are excluded. Full disclosure: to some extent the can is being kicked here, since the selected values of the tuning parameters still need to be justified, similar to the way that the 5% p-value level should be justified.

Besides an algorithm that can be deconstructed to see why some points are excluded, there are some additional advantages to algorithms. Since substantial work has gone into the development of methods like LTS, some properties about it are already proven (like the breakdown value, etc).
There is no proof about the properties of a person's justification for removing points. In short, the substantial difference between algorithmic and manual outlier selection exists.

Another answer:

Let $(X_1,Y_1),\dots,(X_n,Y_n)$ be a sample and let $r_i^2(f)=(f(X_i)-Y_i)^2$. Least Trimmed Squares can be written as
$$\widehat f= \arg\min_{f \in \mathcal{F}} \sum_{i=1}^k r_{(i)}(f)^2$$
where the parentheses mean that we sorted the data: $r_{(1)}(f)\le \dots\le r_{(n)}(f)$.

It is adaptive to the data: we don't threshold at a given value, we use the data to decide which points are to be excluded, and this exclusion depends on $f$, which is not the case when you do outlier removal. Here the outlier removal procedure is, in a sense, embedded in the method, and you can't decompose the procedure into two parts, outlier removal and then estimation. In some non-complicated cases this would indeed give you the same value, but when $\mathcal{F}$ is complicated, when the data are high-dimensional, and so on, it is not obvious that you would get the same thing.

A more involved reason is that an outlier will not have the same influence (as in influence function; if you are interested you can search for this keyword). Suppose we are in a very simple case where $f(x)$ is a constant, and call $T(y_1,\dots,y_n)$ the value of $f(x)$ for a given sample $Y_i=y_i$; this means that in fact you are searching for the mean of the distribution of $Y$, and $T(Y_1,\dots,Y_n)$ is a (robust) estimator of the mean. Then, define for $y\in \mathbb{R}$
$$S(y)=|T(Y_1,\dots,Y_n)- T(Y_1,\dots,Y_{n-1},y)|$$
and call this the sensitivity of $T$; it corresponds to the change of value when changing $Y_n$ for an outlier situated at $y$. For the Least Trimmed Squares estimator, $S(\infty)$ is not zero if, say, $r_{n}(f)=r_{(i)}(f)$ for some $i\le k$. In a few words, an outlier placed at a very big value will pull the estimator $\widehat f$ towards infinity, not a lot but a little, and this means that the outlier has been taken into account; this is not true when using outlier removal techniques, in which case you ignore outliers.
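To make the trimmed objective above concrete, here is a small numerical sketch. It is my own illustration, not from the thread: a location-only version of LTS, iterated in the spirit of the concentration steps used by FAST-LTS rather than an implementation of the published algorithm:

import numpy as np

def lts_location(y, k):
    """Least-trimmed-squares location estimate: the mean of the k
    observations whose squared residuals are smallest, iterated to
    a fixed point."""
    y = np.asarray(y, dtype=float)
    est = np.median(y)                  # robust starting point
    for _ in range(100):
        r2 = (y - est) ** 2             # squared residuals r_i^2
        keep = np.argsort(r2)[:k]       # indices of the k smallest residuals
        new_est = y[keep].mean()        # refit on the kept subset only
        if np.isclose(new_est, est):
            break
        est = new_est
    return est

y = np.array([1.0, 1.2, 0.9, 1.1, 1.05, 50.0])  # one gross outlier
print(np.mean(y))            # about 9.21, dragged by the outlier
print(lts_location(y, k=5))  # 1.05, outlier effectively excluded

The point made in the answers is visible in the code: the set `keep` is recomputed from the data at every step, so which points end up "removed" depends on the current fit, not on a judgement made before fitting.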
*usr_12.txt* For Vim version 6.2. Last change: 2002 Jul 22

VIM USER MANUAL - by Bram Moolenaar

Clever tricks

By combining several commands you can make Vim do nearly everything. In this chapter a number of useful combinations will be presented. This uses the commands introduced in the previous chapters and a few more.

|12.1| Replace a word
|12.2| Change "Last, First" to "First Last"
|12.3| Sort a list
|12.4| Reverse line order
|12.5| Count words
|12.6| Find a man page
|12.7| Trim blanks
|12.8| Find where a word is used

Next chapter: |usr_20.txt| Typing command-line commands quickly
Previous chapter: |usr_11.txt| Recovering from a crash
Table of contents: |usr_toc.txt|

==============================================================================
*12.1* Replace a word

The substitute command can be used to replace all occurrences of a word with another word: >

	:%s/four/4/g

The "%" range means to replace in all lines. The "g" flag at the end causes all words in a line to be replaced.
This will not do the right thing if your file also contains "thirtyfour". It would be replaced with "thirty4". To avoid this, use the "\<" item to match the start of a word: >

	:%s/\<four/4/g

Obviously, this still goes wrong on "fourteen". Use "\>" to match the end of a word: >

	:%s/\<four\>/4/g

If you are programming, you might want to replace "four" in comments, but not in the code. Since this is difficult to specify, add the "c" flag to have the substitute command prompt you for each replacement: >

	:%s/\<four\>/4/gc

REPLACING IN SEVERAL FILES

Suppose you want to replace a word in more than one file. You could edit each file and type the command manually. It's a lot faster to use record and playback.
Let's assume you have a directory with C++ files, all ending in ".cpp". There is a function called "GetResp" that you want to rename to "GetAnswer".

	vim *.cpp	Start Vim, defining the argument list to contain all the C++ files. You are now in the first file.
	qq		Start recording into the q register.
	:%s/\<GetResp\>/GetAnswer/g
			Do the replacements in the first file.
	:wnext		Write this file and move to the next one.
	q		Stop recording.
	@q		Execute the q register. This will replay the substitution and ":wnext". You can verify that this doesn't produce an error message.
	999@q		Execute the q register on the remaining files.

At the last file you will get an error message, because ":wnext" cannot move to the next file. This stops the execution, and everything is done.

Note: When playing back a recorded sequence, an error stops the execution. Therefore, make sure you don't get an error message when recording.

There is one catch: If one of the .cpp files does not contain the word "GetResp", you will get an error and replacing will stop. To avoid this, add the "e" flag to the substitute command: >

	:%s/\<GetResp\>/GetAnswer/ge

The "e" flag tells ":substitute" that not finding a match is not an error.

==============================================================================
*12.2* Change "Last, First" to "First Last"

You have a list of names in this form:

	Doe, John ~
	Smith, Peter ~

You want to change that to:

	John Doe ~
	Peter Smith ~

This can be done with just one command: >

	:%s/\([^,]*\), \(.*\)/\2 \1/

Let's break this down in parts. Obviously it starts with a substitute command. The "%" is the line range, which stands for the whole file. Thus the substitution is done in every line in the file. The arguments for the substitute command are "/from/to/". The slashes separate the "from" pattern and the "to" string.
This is what the "from" pattern contains: \([^,]*\), \(.*\) ~ The first part between \( \) matches "Last" \( \) match anything but a comma [^,] any number of times * matches ", " literally , The second part between \( \) matches "First" \( \) any character . any number of times * In the "to" part we have "\2" and "\1". These are called backreferences. They refer to the text matched by the "\( \)" parts in the pattern. "\2" refers to the text matched by the second "\( \)", which is the "First" name. "\1" refers to the first "\( \)", which is the "Last" name. You can use up to nine backreferences in the "to" part of a substitute command. "\0" stands for the whole matched pattern. There are a few more special items in a substitute command, see |sub-replace-special|. ============================================================================== *12.3* Sort a list In a Makefile you often have a list of files. For example: OBJS = \ ~ version.o \ ~ pch.o \ ~ getopt.o \ ~ util.o \ ~ getopt1.o \ ~ inp.o \ ~ patch.o \ ~ backup.o ~ To sort this list, filter the text through the external sort command: > /^OBJS j :.,/^$/-1!sort This goes to the first line, where "OBJS" is the first thing in the line. Then it goes one line down and filters the lines until the next empty line. You could also select the lines in Visual mode and then use "!sort". That's easier to type, but more work when there are many lines. The result is this: OBJS = \ ~ backup.o ~ getopt.o \ ~ getopt1.o \ ~ inp.o \ ~ patch.o \ ~ pch.o \ ~ util.o \ ~ version.o \ ~ Notice that a backslash at the end of each line is used to indicate the line continues. After sorting, this is wrong! The "backup.o" line that was at the end didn't have a backslash. Now that it sorts to another place, it must have a backslash. The simplest solution is to add the backslash with "A \". You can keep the backslash in the last line, if you make sure an empty line comes after it. That way you don't have this problem again. ============================================================================== *12.4* Reverse line order The |:global| command can be combined with the |:move| command to move all the lines before the first line, resulting in a reversed file. The command is: > :global/^/m 0 Abbreviated: > :g/^/m 0 The "^" regular expression matches the beginning of the line (even if the line is blank). The |:move| command moves the matching line to after the mythical zeroeth line, so the current matching line becomes the first line of the file. As the |:global| command is not confused by the changing line numbering, |:global| proceeds to match all remaining lines of the file and puts each as the first. This also works on a range of lines. First move to above the first line and mark it with "mt". Then move the cursor to the last line in the range and type: > :'t+1,.g/^/m 't ============================================================================== *12.5* Count words Sometimes you have to write a text with a maximum number of words. Vim can count the words for you. When the whole file is what you want to count the words in, use this command: > g CTRL-G Do not type a space after the g, this is just used here to make the command easy to read. The output looks like this: Col 1 of 0; Line 141 of 157; Word 748 of 774; Byte 4489 of 4976 ~ You can see on which word you are (748), and the total number of words in the file (774). 
When the text is only part of a file, you could move to the start of the text, type "g CTRL-G", move to the end of the text, type "g CTRL-G" again, and then use your brain to compute the difference in the word position. That's a good exercise, but there is an easier way. With Visual mode, select the text you want to count words in. Then type g CTRL-G. The result:

	Selected 5 of 293 Lines; 70 of 1884 Words; 359 of 10928 Bytes ~

For other ways to count words, lines and other items, see |count-items|.

==============================================================================
*12.6* Find a man page					*find-manpage*

While editing a shell script or C program, you are using a command or function that you want to find the man page for (this is on Unix). Let's first use a simple way: Move the cursor to the word you want to find help on and press >

	K

Vim will run the external "man" program on the word. If the man page is found, it is displayed. This uses the normal pager to scroll through the text (mostly the "more" program). When you get to the end, pressing <Enter> will get you back into Vim.

A disadvantage is that you can't see the man page and the text you are working on at the same time. There is a trick to make the man page appear in a Vim window. First, load the man filetype plugin: >

	:source $VIMRUNTIME/ftplugin/man.vim

Put this command in your vimrc file if you intend to do this often. Now you can use the ":Man" command to open a window on a man page: >

	:Man csh

You can scroll around and the text is highlighted. This allows you to find the help you were looking for. Use CTRL-W w to jump to the window with the text you were working on.
To find a man page in a specific section, put the section number first. For example, to look in section 3 for "echo": >

	:Man 3 echo

To jump to another man page, which is in the text with the typical form "word(1)", press CTRL-] on it. Further ":Man" commands will use the same window.

To display a man page for the word under the cursor, use this: >

	\K

(If you redefined the <Leader>, use it instead of the backslash).
For example, you want to know the return value of "strstr()" while editing this line:

	if (strstr(input, "aap") == NULL) ~

Move the cursor to somewhere on "strstr" and type "\K". A window will open to display the man page for strstr().

==============================================================================
*12.7* Trim blanks

Some people find spaces and tabs at the end of a line useless, wasteful, and ugly. To remove whitespace at the end of every line, execute the following command: >

	:%s/\s\+$//

The line range "%" is used, thus this works on the whole file. The pattern that the ":substitute" command matches with is "\s\+$". This finds white space characters (\s), 1 or more of them (\+), before the end-of-line ($). Later will be explained how you write patterns like this |usr_27.txt|.
The "to" part of the substitute command is empty: "//". Thus it replaces with nothing, effectively deleting the matched white space.

Another wasteful use of spaces is placing them before a Tab. Often these can be deleted without changing the amount of white space. But not always! Therefore, you can best do this manually. Use this search command: >

	/ 	

You cannot see it, but there is a space before a tab in this command. Thus it's "/<Space><Tab>". Now use "x" to delete the space and check that the amount of white space doesn't change. You might have to insert a Tab if it does change. Type "n" to find the next match. Repeat this until no more matches can be found.
==============================================================================
*12.8* Find where a word is used

If you are a UNIX user, you can use a combination of Vim and the grep command to edit all the files that contain a given word. This is extremely useful if you are working on a program and want to view or edit all the files that contain a specific variable.
For example, suppose you want to edit all the C program files that contain the word "frame_counter". To do this you use the command: >

	vim `grep -l frame_counter *.c`

Let's look at this command in detail. The grep command searches through a set of files for a given word. Because the -l argument is specified, the command will only list the files containing the word and not print the matching lines. The word it is searching for is "frame_counter". Actually, this can be any regular expression. (Note: What grep uses for regular expressions is not exactly the same as what Vim uses.)
The entire command is enclosed in backticks (`). This tells the UNIX shell to run this command and pretend that the results were typed on the command line. So what happens is that the grep command is run and produces a list of files, and these files are put on the Vim command line. This results in Vim editing the file list that is the output of grep. You can then use commands like ":next" and ":first" to browse through the files.

FINDING EACH LINE

The above command only finds the files in which the word is found. You still have to find the word within the files.
Vim has a built-in command that you can use to search a set of files for a given string. If you want to find all occurrences of "error_string" in all C program files, for example, enter the following command: >

	:grep error_string *.c

This causes Vim to search for the string "error_string" in all the specified files (*.c). The editor will now open the first file where a match is found and position the cursor on the first matching line. To go to the next matching line (no matter in what file it is), use the ":cnext" command. To go to the previous match, use the ":cprev" command. Use ":clist" to see all the matches and where they are.
The ":grep" command uses the external commands grep (on Unix) or findstr (on Windows). You can change this by setting the option 'grepprg'.

==============================================================================

Next chapter: |usr_20.txt| Typing command-line commands quickly

Copyright: see |manual-copyright| vim:tw=78:ts=8:ft=help:norl:
The Importance of Performance in Modern Web Development

September 29, 2022 · 6 minute read

Contents:

When we are starting out in the wonderful world of web development, we almost never take into account that all the code we write ultimately has to be downloaded and interpreted by the user's browser. And it is not only the code: not knowing which formats or optimizations are recommended for images and media files in general is how, in many cases, a single image can make your website slow (using full HD or 4K images is a bad idea, almost always).

Tools and frameworks have recently appeared that focus primarily on building ultra-fast websites, making the excellent developer experience of SPA (Single Page Application) sites and performance work together.

SPAs are mostly built with libraries like React, which use JavaScript entirely to represent all the logic and structure of our website, from the layout down to features that could easily be implemented with HTML, CSS and vanilla JavaScript. While SPAs can feel fast when you navigate between different views, their biggest problem occurs on the initial load, where all the JavaScript needed for our project to work has to be downloaded and processed. For example, analyzing React with the tool https://bundlephobia.com/:

[Figure: total size of the React library]

We see that just by using React we would be loading at least 136 kB of JavaScript to build our website. That is why tools like Next.js were later created to solve this problem: they mostly use the technique called Server Side Rendering (SSR), which consists of rendering all the components we build in React on the server and sending their plain-HTML counterpart to the user's browser, which likewise makes it easier for search engines to analyze our website.

But a website's poor performance is not only tied to the use of libraries like React; it also comes from the bad practices we as developers adopt when building a website, for example: excessive use of JavaScript libraries for features we could easily program in vanilla JavaScript, pulling in unoptimized CSS libraries like Bootstrap, or using icon libraries like Font Awesome when in the end we will use no more than 5 icons, making users download an oversized CSS file and fonts, and so on.

Currently, though, there is a trend of very interesting tools emerging that focus on building ultra-fast websites, which we will explore next.

Qwik

The way Qwik works is really interesting. Qwik calls itself the HTML-first framework, meaning that its main focus is building websites with the minimum JavaScript required, no matter how complex the project is. It achieves this by implementing techniques called Resumability and Serialization.

What are Resumability and Serialization? Most frameworks today use SSR or Hydration, but Qwik goes further.
Resumability in Qwik consists of executing one part of our application on the server and resuming execution in the browser. This means that most of our application is delivered statically, and only the dynamic parts are initialized on the client, driven by the user's interactions.

In Qwik, serialization concerns how the application's state data is handled. The state is simply the initial representation of a component (or of our application in general), which changes as the user interacts with it. In today's frameworks, state is managed from a parent component, following the node tree down to the component that was interacted with; in Qwik, by contrast, state is tightly integrated with the component lifecycle.

What does Qwik's syntax look like? Qwik is heavily inspired by React's JSX syntax, so every concept you already know can be applied here.

Example of a component using Qwik:

import { component$ } from '@builder.io/qwik'

export const App = component$(() => {
  return (
    <>
      <h1>¡Hola Mundo!</h1>
      <p>
        Soy un componente estático, no hay razón para descargarme en el cliente.
      </p>
      <button onClick$={() => alert('Hola')}>saludar</button>
    </>
  )
})

At the time of publishing this post, Qwik is in beta development at version v0.9.0. If you want to learn more about Qwik, you can check its documentation at https://qwik.builder.io/.

Solid.js

Although this library is not that new, it is only now starting to gain much more relevance. Solid.js is likewise inspired by React's JSX syntax, but unlike React, Solid does not use a Virtual DOM. The Virtual DOM is what runs in the client's browser; what Solid does instead is compile our components, providing optimization for our website at almost the same level as vanilla JavaScript.

[Figure: performance comparison of Solid.js vs other frameworks and libraries]

Solid also supports Server Side Rendering (SSR) and Hydration, as well as Static Site Generation (SSG). The main reason Solid opted for JSX is the excellent developer experience it offers and its high compatibility with transpilers like Babel, which greatly eases its adoption, rather than creating its own template system.

What are its differences from React? Beyond not using the Virtual DOM, Solid revolves around 3 main concepts: Signals, Memos and Effects.

Signals: They hold a value and expose get and set functions so that we can intercept them when they are used; this is very similar to React's useState hook.

const [contador, setContador] = createSignal(0)

Effects: They are functions that wrap reads of our Signals and re-run when a Signal's value changes.

createEffect(() => console.log('La última cuenta es', count()))

Memos: They are cached derived values. They share the properties of both Signals and Effects. They track their own dependent Signals, re-running only when those change, and they are themselves trackable Signals.
const nombreCompleto = createMemo(() => `${nombre()} ${apellido()}`)

Below you can see a component created with Solid:

import { render } from 'solid-js/web'
import { onCleanup, createSignal } from 'solid-js'

const CountingComponent = () => {
  const [count, setCount] = createSignal(0)
  const interval = setInterval(() => setCount((count) => count + 1), 1000)
  onCleanup(() => clearInterval(interval))
  return <div>El valor del conteo es: {count()}</div>
}

render(() => <CountingComponent />, document.getElementById('app'))

While Solid has a somewhat steep learning curve in my opinion, it is interesting to consider that it could be the performance-focused alternative to React. If you want to learn more about Solid, take a look at its documentation: https://www.solidjs.com/.

Lit

Lit is a library for building Web Components. Lit is the successor of the old Polymer, which was abandoned in favor of Lit.

What is the advantage of using Web Components? Web Components are a combination of standard JavaScript technologies, and since they are native to the platform they have the advantage of being excellent for web performance, because they do not require any "special technique" to be displayed in the browser, in contrast with the rendering techniques the other frameworks use. Although Web Components are quite complex to use, Lit's main focus is to ease and extend their use, since at the end of the day they are a standard technology.

What does Lit's syntax look like? Just as we can create components in the other frameworks, Lit is no exception, but its syntax is a bit involved. Although Lit supports the use of TypeScript, let's look at an example component using JavaScript:

import { LitElement, css, html } from 'lit'

export class SimpleGreeting extends LitElement {
  static properties = { name: {} }

  // Define scoped styles right with your component, in plain CSS
  static styles = css`
    :host {
      color: blue;
    }
  `

  constructor() {
    super()
    // Declare reactive properties
    this.name = 'World'
  }

  // Render the UI as a function of component state
  render() {
    return html`<p>Hello, ${this.name}!</p>`
  }
}

customElements.define('simple-greeting', SimpleGreeting)

As you may notice, it is very common in Lit to use classes to create components. Although Lit is not very popular, the main point of using it is that it is a framework based on web standards, so a project built with Lit will have strong compatibility going forward. If you want to learn more about Lit, check its documentation: https://lit.dev/.

What is the secret to building fast websites? Many factors are involved in the good performance of a website; you can read my post on the Core Web Vitals metrics, where I explain in depth how they work and which aspects these metrics analyze. But the best option, without a doubt, is to create static websites, that is, to think HTML-first; unfortunately, JavaScript is very slow for a web browser to execute, and it normally runs on a single thread. It is usually very tempting to use libraries like React or Vue, whether out of fashion or convenience, but there are occasions when those libraries are not necessary, for example for portfolios, landing pages, blogs, documentation, etc.
When a website does need dynamism, these libraries are very useful, so consider using tools like Gatsby, Astro, Next.js, Nuxt.js, and so on, since they try to solve the performance and SEO problems that SPAs have.

Conclusion

Personally, I am excited to see the rise of tools focused on good website performance, because slow websites have been a problem for many years. Not having an optimized website carries many downsides, among them:

• A poor user experience.
• Loss of users.
• Higher costs: a slow website requires more resources.
• A negative impact on SEO.

The most interesting thing about this new generation of frameworks and tools is that they combine the best of both worlds: the excellent developer experience we get when building an SPA, since we can split our project into reusable components and utilities, and the performance of an MPA, since all the content ships as plain markup ready for the browser to interpret and present to the user.

It is genuinely exciting to think that this might be the first step toward the web as a whole becoming much faster.
Interpreting an element of CyclotomicField as an element of a polynomial ring

asked 2015-03-03 by BGS

I am trying to write a function that sums up numbers in different cyclotomic fields formally, i.e., as expressions in $z$ where $z$ is the corresponding primitive root of unity. I run into a type error, however, as Sage won't let me sum elements of two different cyclotomic fields. Is there a way to typecast an expression in one cyclotomic field into the other, or, better yet, to a polynomial ring?

2 answers

answered 2015-03-03 by tmonteil

EDIT: The comment of @BGS indicates that the question might be understood as follows: given an element of a cyclotomic field, how to recover it as a polynomial in the generator, so that we can "recast" the generator into a generator of another cyclotomic field. Here is a simple solution, involving the .polynomial() method of cyclotomic field elements.

Finding the polynomial:

sage: C7 = CyclotomicField(7)
sage: z7 = C7.gen()
sage: a7 = z7^3 + 3*z7^2 + 1
sage: a7.polynomial()
x^3 + 3*x^2 + 1
sage: a7.polynomial().parent()
Univariate Polynomial Ring in x over Rational Field

Recasting into another cyclotomic field:

sage: C5 = CyclotomicField(5)
sage: z5 = C5.gen()
sage: a5 = a7.polynomial()(z5)
sage: a5
zeta5^3 + 3*zeta5^2 + 1

That said, the polynomial is not uniquely determined (unless you need the minimal one) and the recasting will depend on the polynomial, since for example:

sage: z5^5 + z5 == z5 + 1
True
sage: z7^5 + z7 == z7 + 1
False

PREVIOUS ANSWER (answering a question about embedding cyclotomic field generators into the algebraic field)

The problem is that the cyclotomic field is not well embedded into the complex plane (more precisely, into the field of complex algebraic numbers); see my answer to ask question 25822 for more details about this.

sage: C3 = CyclotomicField(3)
sage: z3 = C3.gen()
sage: z3
zeta3
sage: CC(z3)
-0.500000000000000 + 0.866025403784439*I

but

sage: QQbar(z3)
TypeError: Illegal initializer for algebraic number

Which is a pity. Let me propose the following fix:

sage: def repair(z):
....:     F = z.parent()
....:     for f in F.embeddings(QQbar):
....:         if CLF(f.im_gens()[0]) - CLF(z) < 1e-14:
....:             return f(z)

The function repair puts a generator of some cyclotomic field into the algebraic field QQbar (if n is not too big, so that two generators are at distance larger than 2e-14, but you can adapt the bound if needed). So you can do:

sage: C3 = CyclotomicField(3)
sage: z3 = C3.gen()
sage: Z3 = repair(z3)
sage: Z3
-0.50000000000000000? - 0.866025403784439?*I
sage: Z3.parent()
Algebraic Field
sage: C4 = CyclotomicField(4)
sage: z4 = C4.gen()
sage: Z4 = repair(z4)
sage: Z4
-1*I
sage: Z4.parent()
Algebraic Field

Then you can do:

sage: Z3 + Z4
-0.50000000000000000? - 1.866025403784439?*I
sage: (Z3^2 + 2*Z3 + 1) + (Z4^3 + 7)
6.5000000000000000? + 0.1339745962155614?*I
sage: ((Z3^2 + 2*Z3 + 1) + (Z4^3 + 7)).minpoly()
x^4 - 26*x^3 + 257*x^2 - 1144*x + 1933

Comment (BGS): Thank you for your prompt reply. This is useful to know, but I had in mind something else. Specifically, what I was trying to do was to recast Z3 as the generator of Z4.
I am trying to compute a certain semiclassical approximation, i.e., I try to successively recast a given expression in z as expressions in various cyclotomic fields. — BGS (2015-03-03)

I do not understand: Z3 is different from Z4; it is not even an element of C4. Could you please give a concrete example of the behavior you would like to see, so that I can understand? I edited my answer according to how I understand the question now. — tmonteil (2015-03-04)

answered 2015-03-04 by vdelecroix

Hello,

In Sage there is the universal cyclotomic field implemented. It would be much faster than QQbar as proposed by @tmonteil:

sage: UCF = UniversalCyclotomicField()
sage: UCF.gen(3)
E(3)
sage: e3 = UCF.gen(3)
sage: e4 = UCF.gen(4)
sage: e3 + e4
E(12)^4 - E(12)^7 - E(12)^11

Vincent

Comments

Weird; in particular, in the special case of cyclotomic fields, there is a map from them to QQbar. Just do:

sage: C3 = CyclotomicField(3)
sage: z3 = C3.gen()
sage: Z3 = QQbar(UniversalCyclotomicField()(z3))
sage: Z3
-0.500000000000000? + 0.866025403784439?*I

There is definitely an issue with embedded number fields. — tmonteil (2015-03-04)

Indeed! There is a problem:

sage: C3 = CyclotomicField(3)
sage: UCF = UniversalCyclotomicField()
sage: UCF.has_coerce_map_from(C3)
True
sage: QQbar.has_coerce_map_from(UCF)
True
sage: QQbar.has_coerce_map_from(C3)
False

— vdelecroix (2015-03-04)

Stats: Asked 2015-03-03. Seen: 299 times. Last updated: Mar 04 '15.
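Pulling the thread together: if you want tmonteil's recast-via-polynomial trick as a reusable function, a minimal sketch looks like this (the name recast is ours, not a Sage built-in, and it inherits the non-uniqueness caveat above, since the result depends on the polynomial representative):

def recast(elem, target_field):
    # Rewrite elem as a polynomial in its own field's generator (a
    # representative in QQ[x]; not unique), then evaluate that
    # polynomial at the generator of the target cyclotomic field.
    return elem.polynomial()(target_field.gen())

C7 = CyclotomicField(7); C5 = CyclotomicField(5)
z7 = C7.gen()
recast(z7^3 + 3*z7^2 + 1, C5)   # zeta5^3 + 3*zeta5^2 + 1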
How to Add Authentication to Your Vue App Using Okta
By Brandon Parise

This article was originally published on the Okta developer blog. Thank you for supporting the partners who make SitePoint possible.

I’ve danced the JavaScript framework shuffle for years starting with jQuery, then on to Angular. After being frustrated with Angular’s complexity, I found React and thought I was in the clear. What seemed simple on the surface ended up being a frustrating mess. Then I found Vue.js. It just felt right. It worked as expected. It was fast. The documentation was incredible. Templating was eloquent. There was a unanimous consensus around how to handle state management, conditional rendering, two-way binding, routing, and more.

This tutorial will take you step by step through scaffolding a Vue.js project, offloading secure authentication to Okta’s OpenID Connect API (OIDC), locking down protected routes, and performing CRUD operations through a backend REST API server. This tutorial uses the following technologies but doesn’t require intimate knowledge to follow along:

About Vue.js

Vue.js is a robust but simple JavaScript framework. It has one of the lowest barriers to entry of any modern framework while providing all the required features for high performance web applications.

Vue.js Homepage

This tutorial covers two primary builds, a frontend web app and backend REST API server. The frontend will be a single page application (SPA) with a homepage, login and logout, and a posts manager.

Okta’s OpenID Connect (OIDC) will handle our web app’s authentication through the use of Okta’s Vue SDK. If an unauthenticated user navigates to the posts manager, the web app should attempt to authenticate the user.

The server will run Express with Sequelize and Epilogue. At a high level, with Sequelize and Epilogue you can quickly generate dynamic REST endpoints with just a few lines of code.

You will use JWT-based authentication when making requests from the web app and Okta’s JWT Verifier in an Express middleware to validate the token. Your app will expose the following endpoints, which all require requests to have a valid access token.

- GET /posts
- GET /posts/:id
- POST /posts
- PUT /posts/:id
- DELETE /posts/:id

Create Your Vue.js App

To get your project off the ground quickly you can leverage the scaffolding functionality from vue-cli. For this tutorial, you are going to use the progressive web app (PWA) template that includes a handful of features including webpack, hot reloading, CSS extraction, and unit testing.

If you’re not familiar with the tenets of PWA, check out our ultimate guide to progressive web applications.

To install vue-cli run:

npm install -g vue-cli

Next, you need to initialize your project. When you run the vue init command just accept all the default values.

vue init pwa my-vue-app
cd ./my-vue-app
npm install
npm run dev

Point your favorite browser to http://localhost:8080 and you should see the fruits of your labor:

Welcome to Your Vue.js PWA

Extra Credit: Check out the other templates available for vue-cli.

Install Bootstrap

Let’s install bootstrap-vue so you can take advantage of the various premade components (plus you can keep the focus on functionality and not on custom CSS):

npm i --save bootstrap-vue bootstrap

To complete the installation, modify ./src/main.js to include bootstrap-vue and import the required CSS files.
Your ./src/main.js file should look like this: // The Vue build version to load with the `import` command // (runtime-only or standalone) has been set in webpack.base.conf with an alias. import Vue from 'vue' import App from './App' import router from './router' import BootstrapVue from 'bootstrap-vue' import 'bootstrap/dist/css/bootstrap.css' import 'bootstrap-vue/dist/bootstrap-vue.css' Vue.use(BootstrapVue) Vue.config.productionTip = false /* eslint-disable no-new */ new Vue({ el: '#app', router, template: '<App/>', components: { App } }) Add Authentication with Okta Dealing with authentication in a web app is the bane of every developer’s existence. That’s where Okta comes in to secure your web applications with minimal code. To get started, you will need to create an OIDC application in Okta. Sign up for a forever-free developer account (or log in if you already have one). Okta Developer Sign Up Once logged in, create a new application by clicking “Add Application”. Add Application Select the “Single-Page App” platform option. New Application Options The default application settings should be the same as those pictured. Okta Application Settings To install the Okta Vue SDK, run the following command: npm i --save @okta/okta-vue Open ./src/router/index.js and replace the entire file with the following code. import Vue from 'vue' import Router from 'vue-router' import Hello from '@/components/Hello' import PostsManager from '@/components/PostsManager' import Auth from '@okta/okta-vue' Vue.use(Auth, { issuer: 'https://{yourOktaDomain}.com/oauth2/default', client_id: '{yourClientId}', redirect_uri: 'http://localhost:8080/implicit/callback', scope: 'openid profile email' }) Vue.use(Router) let router = new Router({ mode: 'history', routes: [ { path: '/', name: 'Hello', component: Hello }, { path: '/implicit/callback', component: Auth.handleCallback() }, { path: '/posts-manager', name: 'PostsManager', component: PostsManager, meta: { requiresAuth: true } } ] }) router.beforeEach(Vue.prototype.$auth.authRedirectGuard()) export default router You’ll need to replace {yourOktaDomain} and {yourClientId} which can be found on your application overview page in the Okta Developer Console. This will inject an authClient object into your Vue instance which can be accessed by calling this.$auth anywhere inside your Vue instance. Vue.use(Auth, { issuer: 'https://{yourOktaDomain}.com/oauth2/default', client_id: '{yourClientId}', redirect_uri: 'http://localhost:8080/implicit/callback', scope: 'openid profile email' }) The final step of Okta’s authentication flow is redirecting the user back to your app with the token values in the URL. The Auth.handleCallback() component included in the SDK handles the redirect and persists the tokens on the browser. { path: '/implicit/callback', component: Auth.handleCallback() } You also need to lock down protected routes from being accessed by unauthenticated users. This is accomplished by implementing a navigation guard. As the name suggests, navigation guards are primarily used to guard navigations either by redirecting or canceling. The SDK comes with the method auth.authRedirectGuard() that checks matched routes’ metadata for the key requiresAuth and redirects the user to the authentication flow if they are not authenticated. router.beforeEach(Vue.prototype.$auth.authRedirectGuard()) With this navigation guard installed, any route that has the following metadata will be protected. 
meta: { requiresAuth: true } Customize Your App Layout in Vue The web app’s layout is located in a component ./src/App.vue. You can use the router-view component to render the matched component for the given path. For the main menu, you’ll want to change the visibility of certain menu items based on the status of the activeUser: • Not Authenticated: Show only Login • Authenticated: Show only Logout You can toggle the visibility of these menu items using the v-if directive in Vue.js that checks the existence of activeUser on the component. When the component is loaded (which calls created()) or when a route changes we want to refresh the activeUser. Open ./src/App.vue and copy/paste the following code. <template> <div id="app"> <b-navbar toggleable="md" type="dark" variant="dark"> <b-navbar-toggle target="nav_collapse"></b-navbar-toggle> <b-navbar-brand to="/">My Vue App</b-navbar-brand> <b-collapse is-nav id="nav_collapse"> <b-navbar-nav> <b-nav-item to="/">Home</b-nav-item> <b-nav-item to="/posts-manager">Posts Manager</b-nav-item> <b-nav-item href="#" @click.prevent="login" v-if="!activeUser">Login</b-nav-item> <b-nav-item href="#" @click.prevent="logout" v-else>Logout</b-nav-item> </b-navbar-nav> </b-collapse> </b-navbar> <!-- routes will be rendered here --> <router-view /> </div> </template> <script> export default { name: 'app', data () { return { activeUser: null } }, async created () { await this.refreshActiveUser() }, watch: { // everytime a route is changed refresh the activeUser '$route': 'refreshActiveUser' }, methods: { login () { this.$auth.loginRedirect() }, async refreshActiveUser () { this.activeUser = await this.$auth.getUser() }, async logout () { await this.$auth.logout() await this.refreshActiveUser() this.$router.push('/') } } } </script> Every login must have a logout. The following snippet will logout your user, refresh the active user (which is now null), and then redirect the user to the homepage. This method is called when a user clicks on the logout link in the nav. async logout () { await this.$auth.logout() await this.refreshActiveUser() this.$router.push('/') } Components are the building blocks within Vue.js. Each of your pages will be defined in the app as a component. Since the vue-cli webpack template utilizes vue-loader, your component source files have a convention that separates template, script, and style (see here). Now that you’ve added vue-bootstrap, modify ./src/components/Hello.vue to remove the boilerplate links vue-cli generates. <template> <div class="hero"> <div> <h1 class="display-3">Hello World</h1> <p class="lead">This is the homepage of your vue app</p> </div> </div> </template> <style> .hero { height: 90vh; display: flex; align-items: center; justify-content: center; text-align: center; } .hero .lead { font-weight: 200; font-size: 1.5rem; } </style> At this point you can stub out the Post Manager page to test your authentication flow. Once you confirm authentication works, you’ll start to build out the API calls and components required to perform CRUD operations on your Posts model. Create a new file ./src/components/PostsManager.vue and paste the following code: <template> <div class="container-fluid mt-4"> <h1 class="h1">Posts Manager</h1> <p>Only authenticated users should see this page</p> </div> </template> Take Your Vue.js Frontend and Auth Flows for a Test Drive In your terminal run npm run dev (if it’s not already running). Navigate to http://localhost:8080 and you should see the new homepage. 
Hello World

If you click Posts Manager or Login you should be directed to Okta’s flow. Enter your Okta dev account credentials.

NOTE: If you are logged in to your Okta Developer Account you will be redirected automatically back to the app. You can test this by using incognito or private browsing mode.

Okta Sign-In

If successful, you should return to the homepage logged in.

Homepage after logging in

Clicking on the Posts Manager link should render the protected component.

Posts Manager

Add a Backend REST API Server

Now that users can securely authenticate, you can build the REST API server to perform CRUD operations on a post model. Add the following dependencies to your project:

npm i --save express cors body-parser @okta/jwt-verifier sequelize sqlite3 epilogue axios

Then, create the file ./src/server.js and paste the following code.

const express = require('express')
const cors = require('cors')
const bodyParser = require('body-parser')
const Sequelize = require('sequelize')
const epilogue = require('epilogue')
const OktaJwtVerifier = require('@okta/jwt-verifier')

const oktaJwtVerifier = new OktaJwtVerifier({
  clientId: '{yourClientId}',
  issuer: 'https://{yourOktaDomain}.com/oauth2/default'
})

let app = express()
app.use(cors())
app.use(bodyParser.json())

// verify JWT token middleware
app.use((req, res, next) => {
  // require every request to have an authorization header
  if (!req.headers.authorization) {
    return next(new Error('Authorization header is required'))
  }
  let parts = req.headers.authorization.trim().split(' ')
  let accessToken = parts.pop()
  oktaJwtVerifier.verifyAccessToken(accessToken)
    .then(jwt => {
      req.user = {
        uid: jwt.claims.uid,
        email: jwt.claims.sub
      }
      next()
    })
    .catch(next) // jwt did not verify!
})

// For ease of this tutorial, we are going to use SQLite to limit dependencies
let database = new Sequelize({
  dialect: 'sqlite',
  storage: './test.sqlite'
})

// Define our Post model
// id, createdAt, and updatedAt are added by sequelize automatically
let Post = database.define('posts', {
  title: Sequelize.STRING,
  body: Sequelize.TEXT
})

// Initialize epilogue
epilogue.initialize({
  app: app,
  sequelize: database
})

// Create the dynamic REST resource for our Post model
let userResource = epilogue.resource({
  model: Post,
  endpoints: ['/posts', '/posts/:id']
})

// Resets the database and launches the express app on :8081
database
  .sync({ force: true })
  .then(() => {
    app.listen(8081, () => {
      console.log('listening to port localhost:8081')
    })
  })

Make sure to replace the variables {yourOktaDomain} and {yourClientId} in the above code with values from your OIDC app in Okta.

Add Sequelize

Sequelize is a promise-based ORM for Node.js. It supports the dialects PostgreSQL, MySQL, SQLite, and MSSQL and features solid transaction support, relations, read replication, and more. For ease of this tutorial, you’re going to use SQLite to limit external dependencies. The following code initializes a Sequelize instance using SQLite as your driver.

let database = new Sequelize({
  dialect: 'sqlite',
  storage: './test.sqlite'
})

Each post has a title and body. (The fields createdAt, and updatedAt are added by Sequelize automatically). With Sequelize, you define models by calling define() on your instance.

let Post = database.define('posts', {
  title: Sequelize.STRING,
  body: Sequelize.TEXT
})

Add Epilogue

Epilogue creates flexible REST endpoints from Sequelize models within an Express app. If you’ve ever coded REST endpoints you know how much repetition there is. D.R.Y. FTW!
// Initialize epilogue epilogue.initialize({ app: app, sequelize: database }) // Create the dynamic REST resource for our Post model let userResource = epilogue.resource({ model: Post, endpoints: ['/posts', '/posts/:id'] }) Verify Your JWT This is the most crucial component of your REST API server. Without this middleware any user can perform CRUD operations on our database. If no authorization header is present, or the access token is invalid, the API call will fail and return an error. // verify JWT token middleware app.use((req, res, next) => { // require every request to have an authorization header if (!req.headers.authorization) { return next(new Error('Authorization header is required')) } let parts = req.headers.authorization.trim().split(' ') let accessToken = parts.pop() oktaJwtVerifier.verifyAccessToken(accessToken) .then(jwt => { req.user = { uid: jwt.claims.uid, email: jwt.claims.sub } next() }) .catch(next) // jwt did not verify! }) Run the Server Open a new terminal window and run the server with the command node ./src/server. You should see debug information from Sequelize and the app listening on port 8081. Complete the Posts Manager Component Now that the REST API server is complete, you can start wiring up your posts manager to fetch posts, create posts, edit posts, and delete posts. I always centralize my API integrations into a single helper module. This keeps the code in components much cleaner and provides single location in case you need to change anything with the API request. Create a file ./src/api.js and copy/paste the following code into it: import Vue from 'vue' import axios from 'axios' const client = axios.create({ baseURL: 'http://localhost:8081/', json: true }) export default { async execute (method, resource, data) { // inject the accessToken for each request let accessToken = await Vue.prototype.$auth.getAccessToken() return client({ method, url: resource, data, headers: { Authorization: `Bearer ${accessToken}` } }).then(req => { return req.data }) }, getPosts () { return this.execute('get', '/posts') }, getPost (id) { return this.execute('get', `/posts/${id}`) }, createPost (data) { return this.execute('post', '/posts', data) }, updatePost (id, data) { return this.execute('put', `/posts/${id}`, data) }, deletePost (id) { return this.execute('delete', `/posts/${id}`) } } When you authenticate with OIDC, an access token is persisted locally in the browser. Since each API request must have an access token, you can fetch it from the authentication client and set it in the request. let accessToken = await Vue.prototype.$auth.getAccessToken() return client({ method, url: resource, data, headers: { Authorization: `Bearer ${accessToken}` } }) By creating the following proxy methods inside your API helper, the code outside the helper module remains clean and semantic. getPosts () { return this.execute('get', '/posts') }, getPost (id) { return this.execute('get', `/posts/${id}`) }, createPost (data) { return this.execute('post', '/posts', data) }, updatePost (id, data) { return this.execute('put', `/posts/${id}`, data) }, deletePost (id) { return this.execute('delete', `/posts/${id}`) } You now have all the components required to wire up your posts manager component to make CRUD operations via the REST API. Open ./src/components/PostsManager.vue and copy/paste the following code. 
<template>
  <div class="container-fluid mt-4">
    <h1 class="h1">Posts Manager</h1>
    <b-alert :show="loading" variant="info">Loading...</b-alert>
    <b-row>
      <b-col>
        <table class="table table-striped">
          <thead>
            <tr>
              <th>ID</th>
              <th>Title</th>
              <th>Updated At</th>
              <th>&nbsp;</th>
            </tr>
          </thead>
          <tbody>
            <tr v-for="post in posts" :key="post.id">
              <td>{{ post.id }}</td>
              <td>{{ post.title }}</td>
              <td>{{ post.updatedAt }}</td>
              <td class="text-right">
                <a href="#" @click.prevent="populatePostToEdit(post)">Edit</a> -
                <a href="#" @click.prevent="deletePost(post.id)">Delete</a>
              </td>
            </tr>
          </tbody>
        </table>
      </b-col>
      <b-col lg="3">
        <b-card :title="(model.id ? 'Edit Post ID#' + model.id : 'New Post')">
          <form @submit.prevent="savePost">
            <b-form-group label="Title">
              <b-form-input type="text" v-model="model.title"></b-form-input>
            </b-form-group>
            <b-form-group label="Body">
              <b-form-textarea rows="4" v-model="model.body"></b-form-textarea>
            </b-form-group>
            <div>
              <b-btn type="submit" variant="success">Save Post</b-btn>
            </div>
          </form>
        </b-card>
      </b-col>
    </b-row>
  </div>
</template>

<script>
import api from '@/api'
export default {
  data () {
    return {
      loading: false,
      posts: [],
      model: {}
    }
  },
  async created () {
    this.refreshPosts()
  },
  methods: {
    async refreshPosts () {
      this.loading = true
      this.posts = await api.getPosts()
      this.loading = false
    },
    async populatePostToEdit (post) {
      this.model = Object.assign({}, post)
    },
    async savePost () {
      if (this.model.id) {
        await api.updatePost(this.model.id, this.model)
      } else {
        await api.createPost(this.model)
      }
      this.model = {} // reset form
      await this.refreshPosts()
    },
    async deletePost (id) {
      if (confirm('Are you sure you want to delete this post?')) {
        // if we are editing a post we deleted, remove it from the form
        if (this.model.id === id) {
          this.model = {}
        }
        await api.deletePost(id)
        await this.refreshPosts()
      }
    }
  }
}
</script>

Listing Posts

You’ll use api.getPosts() to fetch posts from your REST API server. You should refresh the list of posts when the component is loaded and after any mutating operation (create, update, or delete).

async refreshPosts () {
  this.loading = true
  this.posts = await api.getPosts()
  this.loading = false
}

The attribute this.loading is toggled so the UI can reflect the pending API call. You might not see the loading message since the API request is not going out to the internet.

Creating Posts

A form is included in the component to save a post. It’s wired up to call savePost() when the form is submitted, and its inputs are bound to the model object on the component.

When savePost() is called, it will perform either an update or create based on the existence of model.id. This is mostly a shortcut to not have to define two separate forms for creating and updating.

async savePost () {
  if (this.model.id) {
    await api.updatePost(this.model.id, this.model)
  } else {
    await api.createPost(this.model)
  }
  this.model = {} // reset form
  await this.refreshPosts()
}

Updating Posts

When updating a post, you first must load the post into the form. This sets model.id, which will then trigger an update in savePost().

async populatePostToEdit (post) {
  this.model = Object.assign({}, post)
}

Important: The Object.assign() call copies the value of the post argument rather than the reference. When dealing with mutation of objects in Vue, you should always set to the value, not reference.

Deleting Posts

To delete a post simply call api.deletePost(id). It’s always good to confirm before delete so let’s throw in a native confirmation alert box to make sure the click was intentional.
async deletePost (id) {
  if (confirm('Are you sure you want to delete this post?')) {
    await api.deletePost(id)
    await this.refreshPosts()
  }
}

Test Your Vue.js + Node CRUD App

Make sure both the server and frontend are running.

Terminal #1
node ./src/server

Terminal #2
npm run dev

Navigate to http://localhost:8080 and give it a whirl.

New Post

New Hello World Post

Delete Post

Do More With Vue!

As I said at the top of this post, I think Vue stands head and shoulders above other frameworks. Here are five quick reasons why:

I covered a lot of material in this tutorial but don’t feel bad if you didn’t grasp everything the first time. The more you work with these technologies, the more familiar they will become.

To learn more about Vue.js head over to https://vuejs.org or check out these other great resources from the @oktadev team:

You can find the source code for the application developed in this post at https://github.com/oktadeveloper/okta-vue-node-example.

As always, follow @oktadev on Twitter to see all the cool content our dev team is creating.
Notice reply alias

;====================================
; Script designed to use /r to reply to notices, based on individual networks.
; Copy and paste into the scripts editor remote tab, and use /r when you receive a notice to reply to it.
;====================================

; When a user notices you
on *:NOTICE:*:?: {
  ; If the last user to notice you is not from this connection
  if (%_aca.last_notice_from_cid_ [ $+ [ $cid ] ] != $nick) {
    ; Remember $nick as the last user to notice you on this connection
    set %_aca.last_notice_from_cid_ $+ $cid $nick
    ; Tell the user how to reply
    echo -atg ***Notice: Use /r <message> to reply to $nick on $scid($cid).network
  }
}

; The /r alias for replies
alias r {
  ; If there was a user who noticed you recently from this network
  if (%_aca.last_notice_from_cid_ [ $+ [ $cid ] ]) {
    ; Reply to him
    notice %_aca.last_notice_from_cid_ [ $+ [ $cid ] ] $1-
  }
  ; Otherwise
  else {
    ; Echo that there are no saved users
    echo -atg */r: No saved user has noticed you recently on this network.
  }
}

; When mIRC starts, clear any saved nicks from previous sessions
on *:START:unset %_aca.last_notice_from_cid*

; When a user changes his nick
on *:NICK: {
  ; If it was the one we are replying to
  if (%_aca.last_notice_from_cid_ [ $+ [ $cid ] ] == $nick) {
    ; Change the stored nick
    set %_aca.last_notice_from_cid_ $+ $cid $newnick
  }
}
Skip to main content Previous sectionNext section Introduction to Cube Elements Before you create your own cube, it is useful to examine a sample cube and see how you can use it. Accessing the Patients Cube 1. Access the Management Portal and go to the namespace into which you installed the samples, as described earlier. 2. Click Home,Analytics,Analyzer. 3. Click Patients. 4. Click OK. The Analyzer page includes three main areas: • The Model Contents area on the left lists the contents of the cube you selected. You can expand folders and drag and drop items into the Pivot Builder area. • The Pivot Builder area in the upper right provides options that you use to create pivot tables. This area consists of the Rows, Columns, Measures, and Filters boxes. • The Pivot Preview area in the bottom right displays the pivot table in almost the same way that it will be shown in dashboards. Orientation to the Model Contents Area The Model Contents area lists the contents of the cube that you are currently viewing. For this tutorial, select Dimensions from the drop-down list; this option displays the measures and dimensions in the given cube. The top section shows named sets, but this tutorial does not use these. Below that, this area includes the following sections: Measures The Measures section lists all measures in the cube. For example: generated description: modelcont measures You can have two types of measures, indicated by different icons: generated description: modelcont icon measure Standard measures generated description: modelcont icon measure calc Calculated measures, which are defined in terms of other measures Dimensions The Dimensions section lists the dimensions and the levels, members, and properties that they contain. (It also contains any non-measure calculated members, as well as any sets; this chapter does not discuss these items.) Click the triangle next to any dimension name to expand it. A dimension contains at least one level and may also include a special member known as the All member. In the following example, the AgeD dimension includes an All member named All Patients, as well as the levels Age Group, Age Bucket, and Age. generated description: modelcont level expanded If you expand a level, the system displays the members of that level. For example: generated description: modelcont level with members If a level also includes properties, the system shows those properties in blue font, at the start of the list, with a different icon. For example, the City level includes the Population and Principal Export properties: generated description: modelcont level with properties Creating a Simple Pivot Table In this section, you create a simple pivot table that uses levels and measures in a typical way. The goal of this section is to see how levels and measures work and to learn what a member is. The numbers you see will be different from what is shown here. 1. Expand the DiagD dimension in the Model Contents pane. 2. Drag and drop Diagnoses to Rows. Or double-click Diagnoses. The system displays the following: generated description: analyzer pivot1 3. Drag and drop Patient Count to Measures. Or double-click Patient Count. 4. Drag and drop Avg Age to Measures. Or double-click Avg Age. The system displays the following: generated description: analyzer pivot3 5. Click Save. The system displays a dialog box where you specify the pivot table name. 6. For Folder, type Test 7. For Pivot Name, type Patients by Diagnosis (Patients Cube) 8. Click OK. 
This saves the underlying query which retrieves the data and the display context, not the data itself.

It is worthwhile to develop a formal understanding of what we see. Note the following points:

• The base table is Patients, which means that all measures summarize data about patients.
• Apart from the header row, each row of this pivot table displays data for one member of the Diagnoses dimension. In all cases, a member corresponds to a set of records in the fact table. (In most cases, each record in the fact table corresponds to one record in the base table.) Therefore, each row in this pivot table displays data for a set of patients with a particular diagnosis. Other layouts are possible (as shown later in this book), but in all cases, any data cell in a pivot table is associated with a set of records in the fact table.
• In a typical pivot table, each data cell displays the aggregate value for a measure, aggregated across all records used by that data cell.
• To understand the contents of a given data cell, use the information given by the corresponding labels. For example, consider the cell in the asthma row, in the Patient Count column. This cell displays the total number of patients who have asthma. Similarly, consider the Avg Age column for this row. This cell displays the average age of patients who have asthma.
• For different measures, the aggregation can be performed in different ways. For Patient Count, the system sums the numbers. For Avg Age, the system averages the numbers. Other aggregations are possible.

Measures and Levels

In this section, we take a closer look at measures and levels.

1. Click New.
2. Drag and drop Patient Count and Avg Age to the Measures area. You now see something like this: generated description: view measures
This simple pivot table shows us the aggregate value for each of these measures, across all the records in the base class. There are 10000 patients and their average age (in this example) is 35.93 years.
3. Compare these values to the values obtained directly from the source table. To do so:
1. In a separate browser tab or window, access the Management Portal and go to the namespace into which you installed the samples, as described earlier.
2. Click System Explorer > SQL.
3. Click the Execute Query tab.
4. Execute the following query:
select count(*) as "count",avg(age) as avgage from bi_study.patient
You should see the same numbers. For example: generated description: view measures sql
Tip: Leave this browser tab or window open for later use.
4. In the Analyzer, modify the previous pivot table as follows:
1. Expand GenD on the left.
2. Drag and drop Gender to the Row area. Now you see something like the following: generated description: view measures redefined
5. Compare these values to the aggregate values obtained from the source table. To do so:
1. Access the Management Portal and go to the namespace into which you installed the samples, as described earlier.
2. Click System Explorer > SQL.
3. Click the Execute Query tab.
4. Click Show History.
5. Click the query you ran previously.
6. Add the following to the end of the query and then rerun the query:
group by gender
You should see the same numbers as shown in the pivot table. For example: generated description: view measures redefined sql
6. For a final example, make the following change in the Analyzer:
1. Click the X button in the Rows pane. This action clears the row definition.
2. Expand ProfD and Profession.
3.
Drag and drop Electrician to Rows. The system displays something like this: generated description: electrician
7. Compare these values to the values from the source table. To do so:
1. Access the Management Portal and go to the namespace into which you installed the samples, as described earlier.
2. Click System Explorer > SQL.
3. Click the Execute Query tab.
4. Execute the following query:
select count(*) as "count",avg(age) as avgage from bi_study.patient join bi_study.patientdetails on bi_study.patient.patientid = bi_study.patientdetails.patientid where bi_study.patientdetails.profession->profession='Electrician'
You should see the same numbers. For example: generated description: electrician sql

Dimensions and Levels

In many scenarios, you can use dimensions and levels interchangeably. In this section, we compare them and see the differences.

1. In the Analyzer, click New.
2. Drag and drop the GenD definition to the Rows area. You should see something like this: generated description: comparison dimension as rows
The measure shown is Count, which is a count of patients.
3. Click New.
4. Expand the GenD dimension. Drag and drop the Gender level to the Rows area. You should see something like this: generated description: comparison level as rows
In this case, we see the same results. In the Patients sample, the names of dimensions are short and end with D, and the name of a level is never identical to the name of the dimension that contains it. This naming convention is not required, and you can use the same name for a level and for the dimension that contains it.
5. Click New.
6. Expand the AgeD dimension. You will see the following in the left area: generated description: aged contents
This dimension is defined differently from the GenD dimension in two ways:
• AgeD defines a special member called All Patients, which is an All member. An All member refers to all records of the base class.
• AgeD defines multiple levels: Age Group, Age Bucket, and Age.
7. Drag and drop the AgeD dimension to the Rows area. You should see something like this: generated description: comparison2 dimension as rows
When you drag and drop a dimension for use as rows (or columns), the system displays all the members of the first level defined in that dimension. In this case, the first level is Age Group.

The All Members

An All member refers to all records of the base class. Each dimension can have an All member, but in the Patients cube, only one dimension has an All member. This part of the tutorial demonstrates how you can use an All member:

1. Click New.
2. Expand the AgeD dimension.
3. Drag and drop Age Group to Rows.
4. Drag and drop the measures Patient Count, Avg Age, and Avg Test Score to Measures. The system displays something like the following: generated description: all member demo age groups
5. Click the Pivot Options button.
6. In the Row Options area, click the Summary check box, leave Sum selected in the drop-down list, and then click OK. The system then displays a Total line, as follows: generated description: all member demo age groups w total
The Total value is appropriate for Patient Count but not for the other measures. For Avg Age and Avg Test Score, it would be more appropriate to display an average value rather than a sum.
7. Click the Pivot Options button again.
8. In the Row Options area, clear the Summary check box and then click OK.
9. Drag and drop All Patients to Rows, below Age Group.
The system then displays the All Patients after the members of the Age Group level: generated description: all member demo age groups w allmem The All Patients row is a more useful summary line than the Total line. It shows the Patient Count, Avg Age, and Avg Test Score measures, each aggregated across all patients. Note: For Avg Age and Avg Test Score, in some cases, you might prefer to have an average of the values shown in the pivot table. For example, for Avg Age, this summary line adds the ages of all patients and then divides by 10000. You might prefer to add the values of Avg Age for the three members shown here and then divide that by three. The All member does not help you do this; instead you would create a calculated member (discussed later in this tutorial). 10. Click the X button in the Rows pane. This action clears the row definition. 11. Expand the DiagD dimension. 12. Drag and drop Diagnoses to the Rows pane. 13. Drag and drop All Patients to Rows, below Diagnoses. You then see something like the following: generated description: all member demo diagnoses w allmem As you can see, you can use the generically named All Patient member with dimensions other than Age, the dimension in which it happens to be defined. Hierarchies A dimension contains one or more hierarchies, each of which can contain multiple levels. The Model Contents area lists the levels in the order specified by the hierarchy, but (to save space) does not display the hierarchy names for this cube. Users can take advantage of hierarchies to drill to lower levels. This part of the tutorial demonstrates how this works. 1. Click New. 2. Expand the BirthD dimension in the Model Contents pane. Drag and drop Decade to Rows. Or double-click Decade. The system displays something like the following: generated description: hierarchy demo decades The measure shown is Count, which is a count of patients. 3. Double-click the 1950s row (or any other row with a comparatively large number of patients). Click anywhere to the right of the << symbols. The system then displays the patients born in that decade, grouped by year (the next lowest level in the hierarchy), as follows: generated description: hierarchy demo years This double-click behavior is available within pivot tables displayed on dashboards (not just within the Analyzer). 4. Double-click a row again. The system displays the patients born in that year, grouped by year and quarter: generated description: hierarchy demo quarter years 5. Double-click a row again. The system displays the patients born in that year and quarter, grouped by year and month: generated description: hierarchy demo periods 6. Double-click a row again. The system displays the patients born in that year and month, grouped by actual date: generated description: hierarchy demo dates 7. Click the << symbols repeatedly to return to the original state of the pivot table. Properties A level can have properties, which you can display in pivot tables. 1. Click New. 2. Expand the HomeD dimension in the Model Contents pane. 3. Expand the City level. The system displays the following: generated description: modelcont level with properties 4. Drag and drop City to Rows. The system displays something like the following: generated description: properties demo cities The measure shown is Count, which is a count of patients. 5. Drag and drop Population to Columns. 6. Drag and drop Principal Export to Columns. The system displays the following: generated description: properties demo cities w props 7. 
Click the X button in the Rows pane. 8. Drag and drop ZIP to Rows. The system displays something like the following: generated description: properties demo zips w props These properties do not have values for this level. In pivot tables, properties are different from measures in several ways: • Properties can have string values. • Properties have values only for the level in which they are defined. Depending on how a cube is defined, properties can also affect the sorting and the member names of the level to which they belong. There are examples later in this tutorial. Listings This part of the tutorial demonstrates listings, which display selected records from the lowest-level data for the selected cell or cells. To see how these work, we will first create a pivot table that uses a very small number of records. Then when we display the listing, we will be able to compare it easily to the aggregate value of the cell from which we started. 1. Click New. 2. Drag and drop Patient Count and Avg Test Score to Measures. 3. Expand the AgeD dimension in the Model Contents pane. 4. Expand the Age level. 5. Drag and drop the member 0 to Columns. This member refers to all patients who are less than 1 year old. Note that you must click the member name rather than the icon to its left. The system displays something like the following: generated description: listing demo step1 6. Drag and drop the member 1 to Columns, below the member 0. The system displays something like the following: generated description: listing demo step2 7. Expand the BirthTD dimension. 8. Drag and drop the Birth Time level to Rows. The system displays something like the following: generated description: listing demo step3 9. Click a cell. For example, click the Patient Count cell in the 12am row, below 0. 10. Click the Display Listing button . The system considers the selected context, which in this case is patients under 1 year old, who were born between midnight and 1 am. The system then executes an SQL query against the source data. This query includes selected fields for these patients, as follows: generated description: listing demo step4 11. Count the number of rows displayed. This equals the Patient Count value in the row you started from. 12. Click the Display Table button to redisplay the pivot table in its original state. By default, the Patients cube uses a listing called Patient details, which includes the fields PatientID, Age, Gender, and others, as you just saw. You can display other listings as well. 13. Click the Pivot Options button to display options for this pivot table. The system displays a dialog box. 14. For the Listing drop-down list, click Doctor details and then click OK. The Doctor details listing displays information about the primary care physicians for the selected patients. 15. Click the same cell that you clicked earlier and then click the Display Listing button . Now the system displays something like the following: generated description: listing demo step5 Filters and Members In a typical pivot table, you use members as rows, as columns, or both, as seen earlier in this chapter. Another common use for members is to enable you to filter the data. 1. In the Analyzer, click New. 2. Expand ColorD and Favorite Color. 3. Drag and drop Favorite Color to Rows. The system displays something like the following: generated description: filter demo step1 This pivot table displays the members of the Favorite Color as rows. The measure shown is Count, which is a count of patients. 4. 
Drag and drop Red to Filters. The Analyzer now shows only one member of the Favorite Color level. You see something like this: generated description: filter demo step2 Make a note of the total number of patients. 5. Click the X button in the Rows box. 6. Expand AgeD. 7. Drag and drop Age Group to Rows. The Analyzer now displays something like this: generated description: filter demo step3 8. Click the Pivot Options button . 9. In the Row Options area, click the Summary check box, leave Sum selected in the drop-down list, and then click OK. The Analyzer now displays something like this: generated description: filter demo step4 The Total line displays the sum of the numbers in the column. Notice that the total here is the same as shown earlier. You can use any member as a filter for any pivot table, no matter what the pivot table uses for rows (or for columns). In all cases, the system retrieves only the records associated with the given member. You can use multiple members as filters, and you can combine filters. For details, see Using the Analyzer. Filters and Searchable Measures In InterSystems IRIS Business Intelligence, you can define searchable measures. With such a measure, you can apply a filter that considers the values at the level of the source record itself. 1. Click New. The system displays the count of all patients: generated description: searchable step1 2. Click the Advanced Options button in the Filters box. 3. Click Add Condition. Then you see this: generated description: advanced filter step1 4. Click Age Group, which enables you to edit this part of the expression. The dialog box now looks something like this: generated description: advanced filter step2 5. Click the drop-down list on the left, scroll down, and click Measures.Encounter Count. As soon as you do, the expression is updated. For example: generated description: advanced filter step3 6. Click the = operator, which enables you to edit this part of the expression. The dialog box now looks something like this: generated description: advanced filter step4 7. Click the >= operator. As soon as you do, the expression is updated. For example: generated description: advanced filter step5 8. Click 0, which enables you to edit this part of the expression. The dialog box now looks something like this: generated description: advanced filter step6 9. Type 10 into the field and click Apply. 10. Click OK. The system then displays the total count of all patients who have at least ten encounters: generated description: searchable step2 Now let us see the effect of adding a level to the pivot table. 11. Expand the AgeD dimension in the Model Contents pane. 12. Drag and drop Age Group to Rows. The system displays something like the following: generated description: searchable step3 13. Click the Pivot Options button . 14. In the Row Options area, click the Summary check box, leave Sum selected in the drop-down list, and then click OK. 15. Click OK. The Analyzer now displays something like this: generated description: searchable step4 The Total line displays the sum of the numbers in the column. Notice that the total here is the same as shown earlier.
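Following the same verify-against-SQL pattern used earlier in this tutorial, a query along these lines should reproduce the filtered total of patients with at least ten encounters. This is only a sketch under assumptions: the tutorial never shows the encounter source table, so the table name bi_study.patientencounter and its patient reference column are guesses you would need to adapt to your schema.

-- Hypothetical check (assumed table and column names):
-- count the patients that have ten or more encounter records
select count(*) as "count"
from bi_study.patient p
where (select count(*)
       from bi_study.patientencounter e
       where e.patient = p.patientid) >= 10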
Text3DApp renders a single, rotating Text3D Object : Text 3D « 3D « Java Text3DApp renders a single, rotating Text3D Object Text3DApp renders a single, rotating Text3D Object /* * @(#)Text3DApp.java 1.0 99/04/21 * * Copyright (c) 1996-1999 Sun Microsystems, Inc. All Rights Reserved. * * Sun grants you ("Licensee") a non-exclusive, royalty free, license to use, * modify and redistribute this software in source and binary code form, * provided that i) this copyright notice and license appear on all copies of * the software; and ii) Licensee does not utilize the software in a manner * which is disparaging to Sun. * * This software is provided "AS IS," without a warranty of any kind. ALL * EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY * IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR * NON-INFRINGEMENT, ARE HEREBY EXCLUDED. SUN AND ITS LICENSORS SHALL NOT BE * LIABLE FOR ANY DAMAGES SUFFERED BY LICENSEE AS A RESULT OF USING, MODIFYING * OR DISTRIBUTING THE SOFTWARE OR ITS DERIVATIVES. IN NO EVENT WILL SUN OR ITS * LICENSORS BE LIABLE FOR ANY LOST REVENUE, PROFIT OR DATA, OR FOR DIRECT, * INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL OR PUNITIVE DAMAGES, HOWEVER * CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF THE USE OF * OR INABILITY TO USE SOFTWARE, EVEN IF SUN HAS BEEN ADVISED OF THE POSSIBILITY * OF SUCH DAMAGES. * * This software is not designed or intended for use in on-line control of * aircraft, air traffic, aircraft navigation or aircraft communications; or in * the design, construction, operation or maintenance of any nuclear facility. * Licensee represents and warrants that it will not use or redistribute the * Software for such purposes. */ import java.applet.Applet; import java.awt.BorderLayout; import java.awt.Font; import java.awt.Frame; import javax.media.j3d.Alpha; import javax.media.j3d.AmbientLight; import javax.media.j3d.Appearance; import javax.media.j3d.BoundingSphere; import javax.media.j3d.BranchGroup; import javax.media.j3d.Canvas3D; import javax.media.j3d.ColoringAttributes; import javax.media.j3d.DirectionalLight; import javax.media.j3d.Font3D; import javax.media.j3d.FontExtrusion; import javax.media.j3d.Material; import javax.media.j3d.RotationInterpolator; import javax.media.j3d.Shape3D; import javax.media.j3d.Text3D; import javax.media.j3d.Transform3D; import javax.media.j3d.TransformGroup; import javax.vecmath.Color3f; import javax.vecmath.Vector3f; import com.sun.j3d.utils.applet.MainFrame; import com.sun.j3d.utils.universe.SimpleUniverse; /* * Text3DApp renders a single, rotating Text3D Object. The Text3D object has * material properties specified along with lights so that the Text3D object is * shaded. */ public class Text3DApp extends Applet { public BranchGroup createSceneGraph() { // Create the root of the branch graph BranchGroup objRoot = new BranchGroup(); Transform3D t3D = new Transform3D(); t3D.setTranslation(new Vector3f(0.0f, 0.0f, -3.0f)); TransformGroup objMove = new TransformGroup(t3D); objRoot.addChild(objMove); // Create the transform group node and initialize it to the // identity. Add it to the root of the subgraph. 
TransformGroup objSpin = new TransformGroup(); objSpin.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE); objMove.addChild(objSpin); Appearance textAppear = new Appearance(); ColoringAttributes textColor = new ColoringAttributes(); textColor.setColor(1.0f, 0.0f, 0.0f); textAppear.setColoringAttributes(textColor); textAppear.setMaterial(new Material()); // Create a simple shape leaf node, add it to the scene graph. Font3D font3D = new Font3D(new Font("Helvetica", Font.PLAIN, 1), new FontExtrusion()); Text3D textGeom = new Text3D(font3D, new String("3DText")); textGeom.setAlignment(Text3D.ALIGN_CENTER); Shape3D textShape = new Shape3D(); textShape.setGeometry(textGeom); textShape.setAppearance(textAppear); objSpin.addChild(textShape); // Create a new Behavior object that will perform the desired // operation on the specified transform object and add it into // the scene graph. Alpha rotationAlpha = new Alpha(-1, 10000); RotationInterpolator rotator = new RotationInterpolator(rotationAlpha, objSpin); // a bounding sphere specifies a region a behavior is active // create a sphere centered at the origin with radius of 100 BoundingSphere bounds = new BoundingSphere(); rotator.setSchedulingBounds(bounds); objSpin.addChild(rotator); DirectionalLight lightD = new DirectionalLight(); lightD.setInfluencingBounds(bounds); lightD.setDirection(new Vector3f(0.0f, 0.0f, -1.0f)); lightD.setColor(new Color3f(1.0f, 0.0f, 1.0f)); objMove.addChild(lightD); AmbientLight lightA = new AmbientLight(); lightA.setInfluencingBounds(bounds); objMove.addChild(lightA); return objRoot; } // end of CreateSceneGraph method public Text3DApp() { setLayout(new BorderLayout()); Canvas3D canvas3D = new Canvas3D(null); canvas3D.setStereoEnable(false); add("Center", canvas3D); BranchGroup scene = createSceneGraph(); // SimpleUniverse is a Convenience Utility class SimpleUniverse simpleU = new SimpleUniverse(canvas3D); // This will move the ViewPlatform back a bit so the // objects in the scene can be viewed. simpleU.getViewingPlatform().setNominalViewingTransform(); simpleU.addBranchGraph(scene); } // end of Text3DApp (constructor) // The following allows this to be run as an application // as well as an applet public static void main(String[] args) { System.out .println("Text3DApp.java - a demonstration of Text3D in Java 3D"); System.out.println("The scene is of a rotating Text3D object."); System.out.println("The Java 3D Tutorial is available on the web at:"); System.out.println("http://www.sun.com/desktop/java3d/collateral"); Frame frame = new MainFrame(new Text3DApp(), 256, 256); } // end of main (method of Text3DApp) } // end of class Text3DApp Related examples in the same category 1.ExText - illustrate use of 3D textExText - illustrate use of 3D text 2.Text 3D LoadText 3D Load 3.Text 3D GeometryText 3D Geometry 4.Text3D BoundsText3D Bounds 5.Renders a Java 3D 3D Text objects with a custom extrusionRenders a Java 3D 3D Text objects with a custom extrusion
Excel - Budget remaining

Asked By salman.wa on 06-Aug-12 12:50 AM

Suppose I have an expense head Internet and an Internet budget of, say, 10000 for a year. My sheet should work like:

Sno. | Voucher No. | Head | Amount | Budget remaining
1 | V74 | Internet | 3000 | 7000

Meaning whenever I enter an expense, the "Budget remaining" column should show me the remaining budget.

Auric__ replied to salman.wa on 06-Aug-12 08:05 AM

This is *very* basic stuff. Have you ever used a spreadsheet before? Are you perhaps taking a class?
I am assuming that the header row is row 1, the first data row is row 2, and the first column is column A. The first "Budget remaining" cell needs this formula:
=10000-D2
The second "Budget remaining" cell needs this formula:
=E2-D3
For all additional rows, just copy the bottom "Budget remaining" cell down. (Do *NOT* copy the top "Budget remaining" cell down.)
--
If there is a possibility of several things going wrong, they will all happen at once.

Spencer101 replied to salman.wa on 06-Aug-12 04:50 AM

Have a look at the very simple attachment. Is this what you mean?
Attachment: salman.way example.zip — http://www.excelbanter.com/attachment.php?attachmentid=524
--
Spencer101
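If you want to sanity-check the recurrence behind those formulas outside Excel, here is a throwaway Python sketch of the same running balance; the second voucher row is made up purely for illustration.

# remaining[n] = remaining[n-1] - amount[n], seeded with the 10000 budget
budget = 10000
expenses = [("V74", "Internet", 3000),   # from the question
            ("V75", "Internet", 1250)]   # hypothetical second voucher

remaining = budget
for sno, (voucher, head, amount) in enumerate(expenses, start=1):
    remaining -= amount
    print(sno, voucher, head, amount, remaining)
# 1 V74 Internet 3000 7000
# 2 V75 Internet 1250 5750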
Still more on: What is…Inversive Geometry? Now for the final post on inversive geometry!  I’ve been generating some fascinating images, and I’d like to share a bit about how I make them.

[Figure: 2017-07-19Inversion3v2 — parabolas inverted about points on a line segment]

In order to create such images in Mathematica, you need to go beyond the geometrical definition of inversion and use coordinate geometry.  Let’s take a moment to see how to do this.

Recall that P′, the inverse point of P, is that point on a ray drawn from the origin through P such that [OP]\cdot[OP']=1, where [AB] denotes the distance from A to B.  Feel free to reread the previous two posts on inversive geometry for a refresher (here are links to the first post and the second post).

Now suppose that the point P has Cartesian coordinates (x,y).  Points on a ray drawn from the origin through P will then have coordinates (kx, ky), where k>0.  Thus, we just need to find the right k so that the point P'=(kx,ky) satisfies the definition of an inverse point.

This is just a matter of a little algebra: since [OP]=\sqrt{x^2+y^2} and [OP']=k\sqrt{x^2+y^2}, the condition [OP]\cdot[OP']=1 forces k(x^2+y^2)=1, so that k=\dfrac1{x^2+y^2}.  The result is P'=\left(\dfrac{x}{x^2+y^2},\dfrac{y}{x^2+y^2}\right).

What this means is that if you have an equation of a curve in terms of x and y, if you substitute x/(x^2+y^2) everywhere you see x, and substitute y/(x^2+y^2) everywhere you see y, you’ll get an equation for the inverse curve.

Let’s illustrate with a simple example — in general, the computer will be doing all the work, so we won’t need to actually do the algebra in practice.  We’ll look at the line x=1.  From our previous work, we know that the inverse curve must be a circle going through the origin.

Making the substitution just discussed, we get the equation \dfrac x{x^2+y^2}=1, which may be written (after completing the square) in the form \left(x-\dfrac12\right)^2+y^2=\dfrac14.

It is not hard to see that this is in fact a circle which passes through the point (0,0).

Now we need to add one more step.  In the definition of an inverse point, we had the point O being the origin with coordinates (0,0).  What if O were some other point, say with coordinates (a,b)?

Let’s proceed incrementally.  Beginning with a point (x,y), translate to the point (x-a,y-b) so that the point (a,b) now acts like the origin.  Now use the previous formula to invert: \left(\dfrac{x-a}{(x-a)^2+(y-b)^2},\dfrac{y-b}{(x-a)^2+(y-b)^2}\right).

Finally, translate back: \left(a+\dfrac{x-a}{(x-a)^2+(y-b)^2},b+\dfrac{y-b}{(x-a)^2+(y-b)^2}\right).

This is now the inverse of the point (x,y) about the point (a,b).

So what you see in the above image is several copies of the parabola y=x^2 inverted about a series of equally spaced points along the line segment with endpoints (1/2,-1/2) and (3/2,1/2).  This might seem a little arbitrary, but it takes quite a bit of experimentation to find a set of points to invert about in order to create an aesthetically pleasing image.

Of course there is another perspective on accomplishing the same task — just shift the parabolas first, invert about the origin, and then shift back.  This is geometrically equivalent (and the algebra is the same); it just depends on how you want to look at it.

Here is another image created by inverting the parabola y=x^2 about points which lie on a circle.

[Figure: 2017-07-18Inversion1v2 — parabolas inverted about points on a circle]

And while we’re on the subject of inverting parabolas, let’s take a moment to discuss the cardioid example we looked at in our last conversation about inversion:

[Figure: Cardioid2 — circles enveloping a cardioid]

To prove that this construction of circles actually yields a cardioid, the trick is to take the inverse of a parabola about its focus.
If you do this, the tangent lines of the parabola will then invert to circles tangent to a cardioid.  I won't go into all the details here, but I'll outline how the proof goes using the following diagram.

[Diagram: a parabola, the tangent line at its vertex, lines through the focus, and their inverse circles.]

Draw a line (shown in black) tangent to the blue parabola at its vertex; the inverse curves are shown in the same color, but dashed.  Note that the black circle must be tangent to the blue cardioid, since the inverse of the black line is tangent to the inverse of the parabola.

The small red disk is the focus of the parabola.  Key to the proof is the property of the parabola that if you draw a line from the focus to a point on the black line and then bounce off at a right angle (the red lines), the resulting line is tangent to the parabola.  So the inverse of this line (the red dashed circle) must be tangent to the cardioid.

Since perpendicularity is preserved, and the line from the focus inverts to itself (since we're inverting about the focus), the red circle must be perpendicular to this line.  This means that the line from the focus in fact contains a diameter, and hence the center, of the red circle.  Then, using properties of circles, you can show that all centers of circles formed in this way lie on a circle (shown dotted in purple) which is half the size of the black circle.  I'll leave the details to you….

Finally, I'd like to show a few examples of using the other conic sections.  Here is an image with 80 inversions of an ellipse around centers which lie on a line segment.

[Image: 80 inversions of an ellipse about centers on a line segment.]

And here is an example of 100 hyperbolas inverted around centers which lie on a line segment.  Since the tails of the branches of a hyperbola all go off to infinity, they all meet at the same point (the center of inversion) when inverted.

[Image: 100 inversions of a hyperbola about centers on a line segment.]

So now you know how to work with geometrical inversion from an algebraic standpoint.  I hope seeing some of the fascinating images you can create will inspire you to try creating some yourself!
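If you'd like to experiment, here is a minimal Mathematica sketch of the inversion map derived above.  This is illustrative code, not the notebook used for the images in this post: the parabola, the centers of inversion, and the plot range are all just example choices.

    (* inverse of the point p with respect to the unit circle centered at c *)
    invert[p_, c_] := c + (p - c)/((p - c).(p - c))

    (* invert the parabola y = x^2 about eleven equally spaced points
       on the segment from (1/2, -1/2) to (3/2, 1/2) *)
    curves = Table[invert[{t, t^2}, {1/2, -1/2} + (k/10) {1, 1}], {k, 0, 10}];

    ParametricPlot[curves, {t, -4, 4}, PlotRange -> All, Axes -> False]

Each entry of curves is the image of the parabola under inversion about one center; plotting them all together produces images like the first one above.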
How do I configure a One-Time Token or One-Time Link?

For background information on using one-time tokens or links, see About One-Time Tokens or Links.

To configure a one-time token or one-time link as a temporary authentication method:

1. Go to Administration > Authentication > Authentication Settings.
2. In Temporary Authentication, choose one of the following options:
   • One-Time Token: Sends the user a unique one-time password.
   • One-Time Link: Sends the user a unique link.
3. Click Save.
4. In Temporary Authentication, click Send Authentication E-mail, and do the following:
   • Send To: Choose whether the service sends the email to users or groups.
   • Users: Select the user(s) to which the service sends the authentication email.
   • Groups: Select the group(s) to which the service sends the authentication email.
5. Click Send Credentials.
6. Click Save and activate the change.

After your users receive the email, they can log in to the service with the temporary authentication method:

• One-Time Token: The Zscaler service sends an email with the one-time temporary password. The user can then use the temporary password to log in to the service. After the user enters the temporary password, the service displays a form requiring the user to enter a new password.
• One-Time Link: The Zscaler service sends an email with the one-time link. When the user clicks the link, the service displays a message indicating that the user is logged in to the service.
Won't play in car any more

Odellsloans1960
Local Performer

My Pandora won't play in my car any more; I get a message on the car screen stating "Pandora error." What can I do to correct this error so I can play it in the car again?

mod edit: changing title for clarity

TannerPandora
Moderator

Hi @Odellsloans1960,

Thanks for posting to the community! Could you provide the following information so I can investigate?

1. What is the make, model and year of your vehicle?
2. If you have an aftermarket stereo, what is the make and model of that device?
3. What is the make and model of your phone? Also, can you tell me the Pandora app version (let me know if you need help finding that)?
4. How is your smartphone connected to your car? (Bluetooth, USB, Aux)
5. Is your vehicle equipped with Android Auto or Apple CarPlay?

Let me know and I'll be happy to follow up.

Use your mobile device only when conditions allow and as permitted by applicable law.
Tanner | Community Moderator

JDGreen
Local Performer

Hello Tanner,

I have connected Pandora to my 2015 Lexus RX450h audio system using Bluetooth. The songs play but do not advance; I have to manually advance to the next song. Can you help?

Thank you,
JDGreen

TannerPandora
Moderator

Hi @JDGreen, thanks for posting! Since you're connected via Bluetooth, have you noticed this happening with any other Bluetooth devices?

In the meantime, I'd first recommend re-pairing the Bluetooth connection between your car and phone. To remove the pairing, please go into the Bluetooth settings menu on your phone, find the appropriate connection, and select "Forget this device." Once you've done so, refer to your car's manual for the steps to reconnect the car system to your phone. Using the voice commands for your unit (if available) is often the easiest way to do this.

If you've already tried this or if it didn't work, I would then recommend uninstalling and then reinstalling Pandora. To do that:

1. Hold down the Pandora icon on your Home screen until all the icons start "shaking."
2. Tap the tiny "x" that appears in the upper left of the Pandora icon, and confirm that you want to delete the app.
3. Re-install Pandora via the App Store on your device.

*Please note: Pandora Premium and Premium Family listeners may need to re-download some of their offline content after reinstalling the app. If you have any trouble with your offline stations after reinstalling, please let me know.

Let me know how it goes.

esbernard
Local Performer

Kia Soul 2019... iPhone 8... Bluetooth
How to Improve BSAM Data Transfer Rates

To optimize BSAM sequential data set transfer rates, take one or more of the following approaches:

• If MAXSTGIO is currently defined in the initialization parameter file, review the setting and consider setting it to the 1 MB default or greater. To fine-tune the Number of Channel Programs (NCP) set in the DCB parameter of the COPY statement, include the second positional parameter as well.

• Increase the block size when it is advantageous to do so. Make the block size of a disk data set close to (but not more than) half-track blocking (27998 for non-extended 3390 disk data sets, or 27966 for extended 3390 data sets), so that two blocks fit on each track. This improves performance by increasing the number of bytes transferred per I/O. For example, when transferring a data set with an LRECL of 80, it takes much longer to transfer 27920 bytes in 349 blocks (BLKSIZE=80) than in 1 block (BLKSIZE=27920).

Note: Exceeding half-track blocking on disk wastes a significant amount of storage capacity without improving the transfer rate.

For tape-to-tape transfers, a larger block size improves both performance and capacity. For disk-to-tape transfers, the I/O performance benefit of reblocking to an LBI block size (greater than 32760) may be outweighed by the CPU performance cost of transferring the data set in "record mode."
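To make the I/O arithmetic concrete, the number of blocks (and therefore the number of I/O operations) needed is roughly the data set size divided by the block size. For a hypothetical 100 MB (104,857,600-byte) sequential data set (illustrative numbers, not from a measured transfer):

• BLKSIZE=80: 104,857,600 / 80 = 1,310,720 blocks
• BLKSIZE=27920: 104,857,600 / 27,920 ≈ 3,756 blocks

That is roughly 349 times fewer I/O operations for the same amount of data, which is where the transfer-rate improvement comes from.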
XMetaL Community Forum > General XMetaL Discussion

How to set a Range to a Selection

jsmart

The documentation mentions that you can have a Range that never changes. What is the macro sequence to set a Range to the Selection, move around the document, then return to the Range and reselect it, given that a Range does not move until Range.Delete? Example:

Set Rng to Sel.
Move to the bottom of the text and select some text.
Return to the Range and reselect.

Derek posted some clues; can he amplify? I'm looking for a method to navigate inside a document which is invisible to the user.

Derek Read

Reply to: How to set a Range to a Selection

You have several options for this. The easiest, if you don't want to deal with (invisible) Range objects, is to do the following:

[code]//XMetaL Script Language JScript:
ActiveDocument.FormattingUpdating = false;
//...call various Selection methods here to move around and manipulate the document...
ActiveDocument.FormattingUpdating = true;[/code]

This lets you implement everything visibly using Selection methods and then simply wrap all of that in the FormattingUpdating property to turn screen updating off and back on. If, however, your script fails at any point and never turns formatting updating back on, updates to the document will remain invisible to the author. Watch out for that and code appropriately to catch errors (JScript try…catch is always good to use in any script).

When you move a Range object around, the user does not see that, but the Range itself can be moved. Here's a simple example that should work with the Journalist demo (File > New > Journalist tab > Article template):

[code]//XMetaL Script Language JScript:
//create a range object
var rng = ActiveDocument.Range;
//find the next Para element
rng.MoveToElement("Para");
//select everything inside the Para
rng.SelectContainerContents();
//insert some text
rng.TypeText("hello world");[/code]

There is a limitation to this, though: if you in any way change what was originally selected, that may destroy the original selection, and you will usually not be able to return to it, because the original Range will often no longer exist. This is particularly true if the original Range is a selection as opposed to an insertion point (i.e., a bunch of selected text vs. your cursor just sitting there flashing). In this case there is a workaround, which is to store two copies of the Range as separate variables, one collapsed with Range.Collapse(sqCollapseStart) and one with Range.Collapse(sqCollapseEnd), and then perform all your manipulations. If those two points still exist, you can extend one of the Ranges up to and including the second one using Range.ExtendTo() and then make that newly extended Range visible using Range.Select() (see the sketch at the end of this reply).

In some cases you may find that manipulating the document using Range still causes things to occur visibly to the end user. In this case you may need to resort to using the FormattingUpdating property.

A third method for manipulating documents is to use DOM calls. The standard DOM manipulations are implemented, much as in most other tools, along with some additional extensions.
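Here is a rough JScript sketch of that collapse-and-extend workaround. It is untested and uses only the API names mentioned above; it assumes the start and end points of the original selection survive your manipulations:

[code]//XMetaL Script Language JScript:
//remember the two ends of the current selection as collapsed Ranges
var rngStart = ActiveDocument.Range;
rngStart.Collapse(sqCollapseStart);
var rngEnd = ActiveDocument.Range;
rngEnd.Collapse(sqCollapseEnd);

//...perform manipulations elsewhere in the document...

//rebuild the original selection from the two saved points
rngStart.ExtendTo(rngEnd);
rngStart.Select();[/code]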
Abrupt completion and exceptions in Java

2019-05-07

Here are some alternative titles I could've used:

Today I'm going to briefly discuss something that tripped me up in my day job the other day. This is perhaps something that every Java developer is taught during the first day on the job or in class, but which no one ever remembers until the day it bites them in the behind. Anyway, it's something I didn't know, and it's really scary.

I point you first at the Java Language Specification (JLS), specifically section 14.20.2. Execution of try-finally and try-catch-finally. I will caution that everything in this section of the document seems completely obvious at first glance. However, I would like to bring your attention to two specific sentences:

"If the finally block completes abruptly for reason S, then the try statement completes abruptly for reason S (and reason R is discarded)."

and

"If the finally block completes abruptly for reason S, then the try statement completes abruptly for reason S (and the throw of value V is discarded and forgotten)."

Note that these two sentences are very similar and have similar implications. In the version of the JLS I'm looking at, the second sentence is specifically rendered in bold. I would argue that the first sentence should be similarly rendered in bold for the same reason: these are both counterintuitive behaviours that can have real-world consequences.

Consider the following pseudo-Java (the helper names here are only illustrative):

    boolean doSomething() {
        try {
            return doRealWork();
        } catch (Exception e) {
            // log or otherwise handle the failure, then rethrow it
            throw e;
        } finally {
            cleanUp();
            return true;
        }
    }

This would seem to be a reasonable way to achieve the following:

• do some real work and return its result,
• if the work throws an exception, handle it and rethrow it to the caller, and
• always run the clean-up code, whether or not an exception was thrown.

However, the Java specification tells us that if the try block throws an exception that is assignment-compatible with Exception (in the declaration of the catch block), the catch block will run, followed by the finally block, and, most importantly for our purposes here, the throw at the end of the catch block will be discarded and replaced by the return true at the end of the finally block.

Let that sink in for a minute. Go back and re-read that. This may not ring alarm bells in your head immediately, but it freaked me out. You may encounter code like this yourself: code that assumes that the exception will be rethrown. This is what a cursory glance at the code would suggest.

The return true is an example of what I'm going to call "action at a distance": in order to fully understand any given instance of the try-catch-finally construct in Java, you have to consider how the finally block interacts with both your try and catch blocks. In summary, neither try nor catch blocks are fully self-describing.

The key to interpreting the JLS description of finally blocks is to understand what it means for a statement to complete abruptly. This is described in detail in section 14.1. Normal and Abrupt Completion of Statements in the JLS. This section enumerates all of the associated reasons for an abrupt completion in Java, three of which are relevant to our example:

• A return with no value
• A return with a given value
• A throw with a given value, including exceptions thrown by the Java Virtual Machine

Thus, in terms of the terminology used in the JLS:

• when doRealWork() throws, the catch block completes abruptly for reason R, namely the throw of e, and
• the finally block completes abruptly for reason S, namely the return true.

The overall result, therefore, is that the whole try statement completes abruptly with reason S, i.e. return true, with reason R, i.e. throw e, being ignored.
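Incidentally, return is not special here: any abrupt completion of the finally block wins. As a minimal sketch (the method name is hypothetical, but the behaviour follows directly from the rule quoted above), a break in a finally block discards a pending exception in exactly the same way:

    static int breakDiscardsException() {
        for (int i = 0; i < 1; i++) {
            try {
                throw new RuntimeException("you will never see me");
            } finally {
                // abrupt completion for reason "break":
                // the pending exception is discarded
                break;
            }
        }
        // reached normally: the exception has vanished
        return 42;
    }

Calling breakDiscardsException() quietly returns 42; the RuntimeException is never observed.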
The "obvious" behaviour (i.e. the exception is caught and rethrown, and the clean-up code in the finally block is run) can be achieved by rewriting the code as follows (again, the helper names are only illustrative):

    boolean doSomething() {
        try {
            return doRealWork();
        } catch (Exception e) {
            // log or otherwise handle the failure, then rethrow it
            throw e;
        } finally {
            // no return here: this block completes normally
            cleanUp();
        }
    }

Critically, the finally block in this example does not complete abruptly and, consequently, the try statement completes abruptly with reason R (i.e. throw e) and the overall statement throws the exception.

The following program systematically demonstrates these behaviours (a minimal self-contained sketch along the lines described above):

    public class FinallyDemo {
        // the finally block's return hijacks the pending exception
        static boolean hijacked() {
            try {
                throw new RuntimeException("from try");
            } finally {
                return true;
            }
        }

        // the finally block completes normally, so the exception propagates
        static boolean wellBehaved() {
            try {
                throw new RuntimeException("from try");
            } finally {
                System.out.println("clean-up runs either way");
            }
        }

        public static void main(String[] args) {
            System.out.println("hijacked() = " + hijacked());
            try {
                wellBehaved();
            } catch (RuntimeException e) {
                System.out.println("caught: " + e.getMessage());
            }
        }
    }

This program will generate the following output:

    hijacked() = true
    clean-up runs either way
    caught: from try

So, beware.

Reference: the Java Language Specification (JLS), sections 14.1 and 14.20.2.

Tags: Java, Exceptions

Content © 2010–2021 Richard Cook. All rights reserved.